JP2015176526A - Head-mounted type display device and control method of the same - Google Patents

Head-mounted type display device and control method of the same

Info

Publication number
JP2015176526A
Authority
JP
Japan
Prior art date
Legal status
Granted
Application number
JP2014054395A
Other languages
Japanese (ja)
Other versions
JP6287399B2 (en)
Inventor
健郎 矢島 (Takeo Yajima)
小林 伸一 (Shinichi Kobayashi)
直人 有賀 (Naoto Ariga)
Original Assignee
セイコーエプソン株式会社
Seiko Epson Corp
Priority date
Filing date
Publication date
Application filed by セイコーエプソン株式会社 (Seiko Epson Corp)
Priority to JP2014054395A
Priority claimed from US 14/626,103 (US9715113B2)
Publication of JP2015176526A
Application granted
Publication of JP6287399B2
Application status: Active
Anticipated expiration


Abstract

In a head-mounted display device, the position of a real-world object is to be grasped using a single outside scene information acquisition means. A head-mounted display device that allows a user to visually recognize a virtual image and an outside scene includes: an image display unit that allows the user to visually recognize the virtual image; an outside scene acquisition unit that acquires outside scene information including at least features of the outside scene in the user's visual field direction; a position estimation unit that estimates the position of an arbitrary target object existing in the real world based on at least two pieces of outside scene information acquired over time by the outside scene acquisition unit; and an augmented reality processing unit that provides the user with augmented reality by causing the image display unit to form a virtual image representing a virtual object for augmenting the target object, based on the estimated position of the target object. [Selection] FIG. 4

Description

  The present invention relates to a head-mounted display device.

  A head-mounted display device is known that is worn on an observer's head and forms a virtual image in the observer's visual field. Such a head-mounted display device is also called a head mounted display (HMD). Head-mounted display devices include non-transmissive head-mounted display devices, which block the user's view when worn, and transmissive head-mounted display devices, which do not block the user's view when worn.

  Meanwhile, a technique called augmented reality (AR) is known in which a computer is used to additionally present information to a real environment. To realize augmented reality in a transmissive head-mounted display device, only the additional presentation information (for example, characters and images) for decorating a real-world object is displayed on a liquid crystal display. The user experiences augmented reality by visually recognizing both the additional presentation information displayed as a virtual image via the liquid crystal display and the real-world outside scene seen through the lens in front of the eyes.

  When realizing augmented reality with a transmissive head-mounted display device in this way, the user feels a sense of incongruity if the deviation between the position where the additional presentation information is displayed as a virtual image and the real-world object becomes large. For this reason, when realizing augmented reality, there has been a demand to grasp the position of the real-world object. Patent Document 1 describes a technique for grasping the position of a real-world object using a stereo camera composed of two or more cameras.

JP 2003-316510 A, JP 2011-259341 A, JP 2005-122100 A

  The technique described in Patent Document 1 has the problem that two or more cameras are required to grasp the position of a real-world object. Further, the techniques described in Patent Documents 2 and 3 do not consider grasping the position of a real-world object in a head-mounted display device. Such problems are not limited to the case where the position of a real-world object is grasped using an outside scene image acquired by a camera; they are common to cases where the position of a real-world object is grasped using information on the outside scene ("outside scene information") acquired by other means (for example, an infrared sensor).

  Therefore, a head-mounted display device that can grasp the position of a real-world object using a single outside scene information acquisition unit is desired. In addition, reductions in size and cost, resource saving, ease of manufacture, improved usability, and the like are desired for such a head-mounted display device.

  The invention has been made to solve at least a part of the problems described above, and can be implemented as the following forms.

(1) According to one aspect of the invention, a head-mounted display device that allows a user to visually recognize a virtual image and an outside scene is provided. The head-mounted display device includes: an image display unit that allows the user to visually recognize the virtual image; an outside scene acquisition unit that acquires outside scene information including at least features of the outside scene in the user's visual field direction; a position estimation unit that estimates the position of an arbitrary target object existing in the real world based on at least two pieces of the outside scene information acquired over time by the outside scene acquisition unit; and an augmented reality processing unit that provides the user with augmented reality by causing the image display unit to form the virtual image representing a virtual object for augmenting the target object, based on the estimated position of the target object. According to this form of the head-mounted display device, the position estimation unit estimates the position of the target object with respect to the outside scene acquisition unit based on at least two pieces of outside scene information acquired over time by the outside scene acquisition unit. Therefore, it is possible to provide a head-mounted display device that can grasp the position of an arbitrary target object existing in the real world using a single outside scene information acquisition unit (for example, a monocular camera). Further, the augmented reality processing unit causes the image display unit to form a virtual image representing a virtual object for augmenting the target object, based on the estimated position of the target object. This makes it possible to reduce the deviation between the target object, which is a real-world object, and the virtual object displayed as a virtual image.

(2) In the head-mounted display device of the above aspect, the position estimation unit may cause the outside scene acquisition unit to acquire second outside scene information when the movement amount of the outside scene acquisition unit after the outside scene acquisition unit acquires first outside scene information exceeds a predetermined threshold value, and may estimate the position of the target object using the first outside scene information and the second outside scene information. According to this form of the head-mounted display device, the position estimation unit causes the outside scene acquisition unit to acquire the second outside scene information when the movement amount of the outside scene acquisition unit after acquiring the first outside scene information exceeds the predetermined threshold value. By appropriately designing the predetermined threshold value, the position estimation unit can acquire second outside scene information suitable for estimating the position of the target object.

(3) The head-mounted display device of the above aspect may further include a motion detection unit that detects a motion of the user's head, and the position estimation unit may estimate the movement amount of the outside scene acquisition unit from the head motion detected by the motion detection unit. According to this form of the head-mounted display device, the position estimation unit estimates the movement amount of the outside scene acquisition unit from the head motion detected by the motion detection unit. The position estimation unit can therefore estimate the movement amount of the outside scene acquisition unit using the configuration for detecting the motion of the user's head, that is, the detection values of the motion detection unit.

(4) In the head-mounted display device of the above aspect, the outside scene acquisition unit may be a camera that acquires, as the outside scene information, an outside scene image representing the outside scene in the user's visual field direction, and the position estimation unit may obtain the parallax between the first outside scene information and the second outside scene information and estimate the position of the target object using the obtained parallax, the movement amount of the outside scene acquisition unit, and the focal length of the outside scene acquisition unit. According to this form of the head-mounted display device, the position estimation unit can estimate the position of the target object using the parallax between the first outside scene information and the second outside scene information, the movement amount of the outside scene acquisition unit, and the focal length of the outside scene acquisition unit.

(5) In the head-mounted display device of the above aspect, the position estimation unit may obtain the parallax with reference to an edge of the target object included in the first outside scene information and an edge of the target object included in the second outside scene information. According to this form of the head-mounted display device, the position estimation unit obtains the parallax with reference to the edges of the target object included in the first and second outside scene information, and can therefore obtain the parallax between the first and second outside scene information with high accuracy.

(6) In the head-mounted display device of the above aspect, the image display unit may include an optical image display unit that forms the virtual image in front of the user's eyes, the position estimation unit may further calculate the position on the optical image display unit corresponding to an extension line connecting the estimated position of the target object and the position of the user's eye, and the augmented reality processing unit may determine the position of the virtual object based on the calculated position on the optical image display unit. According to this form of the head-mounted display device, the augmented reality processing unit determines the position of the virtual object based on the position on the optical image display unit corresponding to the extension line connecting the position of the target object and the position of the user's eye. That is, the augmented reality processing unit can determine the position of the virtual object based on the position of the target object as visually recognized by the user through the optical image display unit. As a result, the augmented reality processing unit can display the virtual object for augmenting the target object at a position that does not give the user a sense of incongruity.

(7) The head-mounted display device of the above aspect may further include an eye image acquisition unit that acquires an image of the user's eye, and the position estimation unit may further perform image analysis on the eye image acquired by the eye image acquisition unit to obtain the size of the user's eye, and estimate the position of the user's eye based on the obtained eye size. According to this form of the head-mounted display device, the position estimation unit can estimate the position of the user's eye based on the eye image acquired by the eye image acquisition unit.

(8) In the head-mounted display device of the above aspect, the eye image acquisition unit may be disposed in the vicinity of the outside scene acquisition unit. According to this form of the head-mounted display device, since the eye image acquisition unit is disposed in the vicinity of the outside scene acquisition unit, the accuracy with which the position estimation unit estimates the position of the user's eye can be improved.

  Not all of the plurality of constituent elements of each aspect of the invention described above are essential. In order to solve part or all of the problems described above, or to achieve part or all of the effects described in this specification, some of the plurality of constituent elements may be changed, deleted, replaced with new constituent elements, or have part of their limiting content deleted, as appropriate. Further, in order to solve part or all of the problems described above, or to achieve part or all of the effects described in this specification, part or all of the technical features included in one aspect of the invention described above may be combined with part or all of the technical features included in another aspect of the invention described above to form an independent aspect of the invention.

  For example, one aspect of the invention can be implemented as a device including some or all of the four elements: the image display unit, the outside scene acquisition unit, the augmented reality processing unit, and the position estimation unit. That is, this device may or may not include the image display unit. The device may or may not include the outside scene acquisition unit. The device may or may not include the augmented reality processing unit. The device may or may not include the position estimation unit. Such a device can be implemented, for example, as a head-mounted display device, but can also be implemented as a device other than a head-mounted display device. Some or all of the technical features of each form of the head-mounted display device described above can be applied to this device.

  The invention can be implemented in various modes. For example, it can be implemented in the form of a head-mounted display device, a control method for a head-mounted display device, a head-mounted display system, a computer program for implementing the functions of these methods, devices, or systems, a recording medium on which the computer program is recorded, and the like.

FIG. 1 is an explanatory diagram showing the schematic configuration of a head-mounted display device according to an embodiment of the invention.
FIG. 2 is a block diagram functionally showing the configuration of the HMD.
FIG. 3 is an explanatory diagram showing an example of a virtual image visually recognized by the user.
FIG. 4 is a flowchart showing the procedure of the augmented reality processing.
FIG. 5 is an explanatory diagram for explaining step S102 of the augmented reality processing.
FIG. 6 is an explanatory diagram for explaining step S104 of the augmented reality processing.
FIG. 7 is an explanatory diagram for explaining step S106 of the augmented reality processing.
FIG. 8 is an explanatory diagram for explaining step S108 of the augmented reality processing.
FIG. 9 shows an example of the image 1 and the image 2.
FIG. 10 is an explanatory diagram for explaining step S110 of the augmented reality processing.
FIG. 11 is an explanatory diagram for explaining step S112 of the augmented reality processing.
FIG. 12 is an explanatory diagram showing the external configuration of an HMD in a modification.

A. Embodiment:
A-1. Configuration of head mounted display device:
FIG. 1 is an explanatory diagram showing a schematic configuration of a head-mounted display device according to an embodiment of the present invention. The head-mounted display device 100 is a display device mounted on the head, and is also called a head mounted display (HMD). The HMD 100 of the present embodiment is an optically transmissive head-mounted display device that allows a user to visually recognize a virtual image and at the same time directly view an outside scene. In the present embodiment, a virtual image visually recognized by the user with the HMD 100 is also referred to as a “display image” for convenience. Moreover, emitting image light generated based on image data is also referred to as “displaying an image”.

  The HMD 100 includes an image display unit 20 that allows a user to visually recognize a virtual image when attached to the user's head, and a control unit (controller) 10 that controls the image display unit 20.

  The image display unit 20 is a mounted body worn on the user's head, and has a glasses shape in the present embodiment. The image display unit 20 includes a right holding unit 21, a right display driving unit 22, a left holding unit 23, a left display driving unit 24, a right optical image display unit 26, a left optical image display unit 28, an outer camera 61, an inner camera 62, and a 9-axis sensor 66. The right optical image display unit 26 and the left optical image display unit 28 are arranged so as to be positioned in front of the user's right and left eyes, respectively, when the user wears the image display unit 20. One end of the right optical image display unit 26 and one end of the left optical image display unit 28 are connected to each other at a position corresponding to the area between the user's eyebrows when the user wears the image display unit 20.

  The right holding unit 21 is a member provided so as to extend from the end ER, which is the other end of the right optical image display unit 26, to a position corresponding to the user's temporal region when the user wears the image display unit 20. Similarly, the left holding unit 23 is a member provided so as to extend from the end EL, which is the other end of the left optical image display unit 28, to a position corresponding to the user's temporal region when the user wears the image display unit 20. The right holding unit 21 and the left holding unit 23 hold the image display unit 20 on the user's head in the manner of the temples of a pair of glasses.

  The right display drive unit 22 is disposed inside the right holding unit 21, in other words, on the side facing the user's head when the user wears the image display unit 20. Further, the left display driving unit 24 is disposed inside the left holding unit 23. Hereinafter, the right holding unit 21 and the left holding unit 23 are collectively referred to simply as “holding unit”, and the right display driving unit 22 and the left display driving unit 24 are collectively referred to simply as “display driving unit”. The right optical image display unit 26 and the left optical image display unit 28 are collectively referred to simply as “optical image display unit”.

  The display driving units include liquid crystal displays (hereinafter "LCDs") 241 and 242, projection optical systems 251 and 252, and the like (see FIG. 2). Details of the configuration of the display driving units are described later. The optical image display units as optical members include light guide plates 261 and 262 (see FIG. 2) and a light control plate. The light guide plates 261 and 262 are formed of a light-transmissive resin material or the like and guide the image light output from the display driving units to the user's eyes. The light control plate is a thin plate-like optical element and is disposed so as to cover the front side of the image display unit 20 (the side opposite to the user's eyes). The light control plate protects the light guide plates 261 and 262, suppressing damage to them and adhesion of dirt. Further, by adjusting the light transmittance of the light control plate, the amount of external light entering the user's eyes can be adjusted, and thus the ease with which the virtual image can be visually recognized can be adjusted. The light control plate can be omitted.

  The outer camera 61 is disposed at a position corresponding to the user's temple on the right side when the user wears the image display unit 20. The outer camera 61 captures the outside scene (outside scenery) in the front direction of the image display unit 20, in other words, in the user's visual field direction when the HMD 100 is worn, and acquires an outside scene image. The outer camera 61 is a so-called visible light camera, and the outside scene image acquired by the outer camera 61 is an image representing the shape of an object from the visible light emitted by the object. The outer camera 61 functions as the "outside scene acquisition unit" and the "outside scene information acquisition unit", and the outside scene image functions as the "outside scene information".

  The outside scene acquisition unit can adopt any configuration as long as it can acquire “outside scene information” including at least features of the outside scene in the direction of the user's field of view. For example, the outside scene acquisition unit may be configured by an infrared sensor, an ultrasonic sensor, a radar, or the like instead of the visible light camera. In this case, the detection value by the sensor or radar functions as “outside scene feature”.

  The inner camera 62 is arranged at a position corresponding to the temple on the right side when the user wears the image display unit 20. The inner camera 62 images the left and right eyes of the user with the HMD 100 mounted, in other words, the back side direction of the image display unit 20, and acquires an image of the user's eyes. The inner camera 62 is a so-called visible light camera similar to the outer camera 61. The inner camera 62 functions as an “eye image acquisition unit”. The inner camera 62 is used to estimate the distance between the outer camera 61 and the user's eyes in augmented reality processing. For this reason, the inner camera 62 is preferably disposed in the vicinity of the outer camera 61.

  The 9-axis sensor 66 is disposed at a position corresponding to the area between the user's eyebrows when the user wears the image display unit 20. The 9-axis sensor 66 is a motion sensor that detects acceleration (3 axes), angular velocity (3 axes), and geomagnetism (3 axes). Since the 9-axis sensor 66 is provided in the image display unit 20, it functions as a "motion detection unit" that detects the motion of the user's head when the image display unit 20 is worn on the user's head. Here, the motion of the head includes changes in the velocity, acceleration, angular velocity, orientation, and direction of the head.

  The image display unit 20 further includes a connection unit 40 for connecting the image display unit 20 to the control unit 10. The connection unit 40 includes a main body cord 48 connected to the control unit 10, a right cord 42 and a left cord 44 into which the main body cord 48 branches, and a connecting member 46 provided at the branch point. The right cord 42 is inserted into the housing of the right holding unit 21 from the distal end AP in the extending direction of the right holding unit 21 and is connected to the right display driving unit 22. Similarly, the left cord 44 is inserted into the housing of the left holding unit 23 from the distal end AP in the extending direction of the left holding unit 23 and is connected to the left display driving unit 24. The connecting member 46 is provided with a jack for connecting the earphone plug 30, from which a right earphone 32 and a left earphone 34 extend.

  The image display unit 20 and the control unit 10 transmit various signals via the connection unit 40. A connector (not shown) that fits each other is provided at each of the end of the main body cord 48 opposite to the connecting member 46 and the control unit 10. By fitting / releasing the connector of the main body cord 48 and the connector of the control unit 10, the control unit 10 and the image display unit 20 are connected or disconnected. For the right cord 42, the left cord 44, and the main body cord 48, for example, a metal cable or an optical fiber can be adopted.

  The control unit 10 is a device for controlling the HMD 100. The control unit 10 includes a determination key 11, a lighting unit 12, a display switching key 13, a track pad 14, a luminance switching key 15, a direction key 16, a menu key 17, and a power switch 18. The determination key 11 detects a pressing operation and outputs a signal that determines the content of the operation performed on the control unit 10. The lighting unit 12 indicates the operation state of the HMD 100 (for example, power ON/OFF) by its light emission state; an LED (Light Emitting Diode), for example, is used as the lighting unit 12. The display switching key 13 detects a pressing operation and outputs a signal for switching the display mode of content video, for example between 3D and 2D.

  The track pad 14 detects the operation of the user's finger on the operation surface of the track pad 14 and outputs a signal corresponding to the detected content. As the track pad 14, various methods such as an electrostatic method, a pressure detection method, and an optical method can be adopted. The luminance switching key 15 detects a pressing operation and outputs a signal for increasing or decreasing the luminance of the image display unit 20. The direction key 16 detects a pressing operation on a key corresponding to the up / down / left / right direction, and outputs a signal corresponding to the detected content. The power switch 18 switches the power-on state of the HMD 100 by detecting a slide operation of the switch.

  FIG. 2 is a block diagram functionally showing the configuration of the HMD 100. The control unit 10 includes an input information acquisition unit 110, a storage unit 120, a power supply 130, a wireless communication unit 132, a CPU 140, an interface 180, and transmission units (Tx) 51 and 52, and these units are connected to one another by a bus (not shown).

  The input information acquisition unit 110 acquires signals corresponding to operation inputs to the determination key 11, the display switching key 13, the track pad 14, the luminance switching key 15, the direction key 16, the menu key 17, and the power switch 18. The input information acquisition unit 110 can also acquire operation inputs by various methods other than those described above. For example, it may acquire an operation input from a foot switch (a switch operated by the user's foot). Further, for example, a line-of-sight detection unit such as an infrared sensor may be provided in the image display unit 20, the user's line of sight may be detected, and an operation input may be acquired from a command associated with the movement of the line of sight. Further, for example, the user's gesture may be detected using the outer camera 61, and an operation input may be acquired from a command associated with the gesture. When detecting a gesture, the user's fingertip, a ring worn on the user's hand, a medical instrument held in the user's hand, or the like can be used as a marker for motion detection. If operation inputs can be acquired from a foot switch or from the line of sight, the input information acquisition unit 110 can acquire operation inputs from the user even in tasks in which it is difficult for the user to free his or her hands.

  The storage unit 120 includes a ROM, a RAM, a DRAM, a hard disk, and the like. The storage unit 120 stores various computer programs including an operating system (OS). The storage unit 120 includes a focal length 122 and a movement amount threshold value 124.

  The focal length 122 is a storage area for storing the focal length of the outer camera 61 in advance. A predetermined value is stored as a default value for the focal length of the outer camera 61 stored in the focal length 122. The predetermined value stored in the focal length 122 may be changed by the user. The movement amount threshold value 124 stores a threshold value indicating “movement amount of the outer camera 61” for performing the second imaging by the outer camera 61 in the augmented reality processing described later. The threshold value stored in the movement amount threshold value 124 may be changed by the user.

  The power supply 130 supplies power to each part of the HMD 100. As the power supply 130, for example, a secondary battery can be used. The wireless communication unit 132 performs wireless communication with other devices in accordance with a predetermined wireless communication standard such as a wireless LAN or Bluetooth (registered trademark).

  The CPU 140 reads out and executes computer programs stored in the storage unit 120 to function as an OS 150, an image processing unit 160, an audio processing unit 170, a display control unit 190, a position estimation unit 142, and an AR (Augmented Reality) processing unit 144.

  The position estimation unit 142 estimates, in the augmented reality processing described later, the position of an object existing in the real world with respect to the outer camera 61. The position estimation unit 142 also estimates, in the augmented reality processing described later, the positions of the user's right eye RE and left eye LE with respect to the outer camera 61. The AR processing unit 144 implements the augmented reality processing in cooperation with the position estimation unit 142. The augmented reality processing is processing for realizing augmented reality, in other words, processing for displaying an image representing additional presentation information (for example, characters and images) for augmenting an object existing in the real world. The AR processing unit 144 corresponds to the "augmented reality processing unit".
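  As a rough, purely illustrative sketch of this division of roles (Python is used here only for illustration; all class and method names are hypothetical and not part of the embodiment), the two units might be organized as follows, with the concrete calculations filled in by the step-by-step sketches later in this description:

```python
class PositionEstimationUnit:
    """Hypothetical counterpart of the position estimation unit 142."""

    def estimate_target_position(self, image1, image2, movement_m2, focal_len):
        # Parallax-based estimation of the target object position
        # relative to the outer camera (see the step S108 sketch).
        raise NotImplementedError

    def estimate_eye_position(self, eye_image):
        # Estimation of the eye position relative to the outer camera
        # from an inner-camera image (see the step S110 sketch).
        raise NotImplementedError


class ARProcessingUnit:
    """Hypothetical counterpart of the AR processing unit 144."""

    def __init__(self, estimator: PositionEstimationUnit):
        self.estimator = estimator  # the two units cooperate

    def place_virtual_object(self, target_pos, eye_pos):
        # Compute the display coordinate on the extension line between the
        # eye and the target object, then render the virtual object there
        # (see the step S112/S114 sketch).
        raise NotImplementedError
```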

  The image processing unit 160 generates signals based on content (video) input via the interface 180 or the wireless communication unit 132, and supplies the generated signals to the image display unit 20 via the connection unit 40. The signals supplied to the image display unit 20 differ between the analog format and the digital format. In the analog format, the image processing unit 160 acquires the image signal included in the content, separates synchronization signals such as the vertical synchronization signal VSync and the horizontal synchronization signal HSync from the acquired image signal, and generates a clock signal PCLK with a PLL circuit or the like in accordance with their periods. The image processing unit 160 converts the analog image signal from which the synchronization signals have been separated into a digital image signal using an A/D conversion circuit or the like, and stores the converted digital image signal in the DRAM in the storage unit 120 frame by frame as image data Data in RGB format. In the digital format, on the other hand, the image processing unit 160 generates and transmits the clock signal PCLK and the image data Data; since the clock signal PCLK is output in synchronization with the image signal, generation of the vertical synchronization signal VSync and the horizontal synchronization signal HSync and A/D conversion of an analog image signal are unnecessary. The image processing unit 160 may also execute image processing such as resolution conversion, various tone corrections such as adjustment of luminance and saturation, and keystone correction on the image data Data stored in the storage unit 120.

  The image processing unit 160 transmits the generated clock signal PCLK, vertical synchronization signal VSync, horizontal synchronization signal HSync, and the image data Data stored in the DRAM in the storage unit 120 via the transmission units 51 and 52. The image data Data transmitted via the transmission unit 51 is also referred to as "right-eye image data Data1", and the image data Data transmitted via the transmission unit 52 is also referred to as "left-eye image data Data2". The transmission units 51 and 52 function as transceivers for serial transmission between the control unit 10 and the image display unit 20.

  The display control unit 190 generates control signals for controlling the right display drive unit 22 and the left display drive unit 24. Specifically, the display control unit 190 controls the generation and emission of image light by the right display drive unit 22 and the left display drive unit 24 by individually controlling, with the control signals, the drive ON/OFF of the right and left LCDs 241 and 242 by the right and left LCD control units 211 and 212, and the drive ON/OFF of the right and left backlights 221 and 222 by the right and left backlight control units 201 and 202. The display control unit 190 transmits the control signals for the right LCD control unit 211 and the left LCD control unit 212 via the transmission units 51 and 52, respectively. The display control unit 190 likewise transmits the control signals for the right backlight control unit 201 and the left backlight control unit 202.

  The audio processing unit 170 acquires an audio signal included in the content, amplifies the acquired audio signal, and supplies it to a speaker (not shown) in the right earphone 32 and a speaker (not shown) in the left earphone 34 connected to the connecting member 46. For example, when a Dolby (registered trademark) system is adopted, the audio signal is processed and different sounds, for example with different frequencies, are output from the right earphone 32 and the left earphone 34.

  The interface 180 is an interface for connecting various external devices OA that are content supply sources to the control unit 10. Examples of the external device OA include a personal computer PC, a mobile phone terminal, and a game terminal. As the interface 180, for example, a USB interface, a micro USB interface, a memory card interface, or the like can be used.

  The image display unit 20 includes the right display drive unit 22, the left display drive unit 24, the right light guide plate 261 as the right optical image display unit 26, the left light guide plate 262 as the left optical image display unit 28, the outer camera 61, the inner camera 62, and the 9-axis sensor 66.

  The right display drive unit 22 includes a receiving unit (Rx) 53, a right backlight (BL) control unit 201 and a right backlight (BL) 221 that function as a light source, a right LCD control unit 211 and a right LCD 241 that function as a display element, and a right projection optical system 251. The right backlight control unit 201, the right LCD control unit 211, the right backlight 221, and the right LCD 241 are also collectively referred to as the "image light generation unit".

  The receiving unit 53 functions as a receiver for serial transmission between the control unit 10 and the image display unit 20. The right backlight control unit 201 drives the right backlight 221 based on the input control signal. The right backlight 221 is a light emitter such as an LED or electroluminescence (EL). The right LCD control unit 211 drives the right LCD 241 based on the clock signal PCLK, the vertical synchronization signal VSync, the horizontal synchronization signal HSync, and the right eye image data Data1 input via the reception unit 53. The right LCD 241 is a transmissive liquid crystal panel in which a plurality of pixels are arranged in a matrix.

  The right projection optical system 251 is configured by a collimator lens that converts the image light emitted from the right LCD 241 into parallel light beams. The right light guide plate 261 as the right optical image display unit 26 guides the image light output from the right projection optical system 251 to the user's right eye RE while reflecting it along a predetermined optical path. The optical image display unit may use any method as long as it forms a virtual image in front of the user's eyes using image light; for example, a diffraction grating or a transflective film may be used.

  The left display drive unit 24 has the same configuration as the right display drive unit 22. That is, the left display drive unit 24 includes a receiving unit (Rx) 54, a left backlight (BL) control unit 202 and a left backlight (BL) 222 that function as a light source, a left LCD control unit 212 and a left LCD 242 that function as a display element, and a left projection optical system 252. Detailed description is omitted.

  FIG. 3 is an explanatory diagram showing an example of a virtual image visually recognized by the user. FIG. 3A illustrates the user's visual field VR during normal display processing. As described above, the image light guided to both eyes of the user of the HMD 100 forms an image on the user's retinas, so that the user visually recognizes the virtual image VI. In the example of FIG. 3A, the virtual image VI is a standby screen of the OS of the HMD 100. The user also views the outside scene SC through the right optical image display unit 26 and the left optical image display unit 28. In this way, in the portion of the visual field VR where the virtual image VI is displayed, the user of the HMD 100 of the present embodiment can see the virtual image VI and the outside scene SC behind the virtual image VI. In the portion of the visual field VR where the virtual image VI is not displayed, the user can see the outside scene SC directly through the optical image display units.

  FIG. 3B illustrates the user's visual field VR during the augmented reality processing. In the augmented reality processing, the AR processing unit 144 generates image data representing additional presentation information (for example, characters and images) for augmenting an object existing in the real world, and transmits the generated image data to the image display unit 20. Here, an "object existing in the real world" means an arbitrary object included in the real environment surrounding the user, specifically in the outside scene SC visually recognized by the user. Hereinafter, an object that exists in the real world and is the subject of the augmented reality processing is also referred to as a "target object". Further, "augmenting an object" means adding information to, deleting information from, emphasizing, or attenuating the target object. Hereinafter, the information added to, deleted from, emphasized on, or attenuated on the target object (the additional presentation information) is also referred to as a "virtual object". In the example of FIG. 3B, an image VOB (virtual object VOB) representing an apple is displayed as the virtual image VI so as to overlap an actual road (target object) included in the outside scene SC. This allows the user to feel as if an apple has fallen onto the empty road.

A-2. Augmented reality processing:
FIG. 4 is a flowchart showing the procedure of augmented reality processing. Augmented reality processing is triggered by a processing start request from the OS 150 or any application.

  FIG. 5 is an explanatory diagram for explaining step S102 of the augmented reality process. In step S102 of FIG. 4, the position estimation unit 142 instructs the outer camera 61 to capture an image, and acquires an outside scene image in the visual field direction of the user of the HMD 100 including the target object TOB (FIG. 5) of augmented reality processing. The position estimation unit 142 stores the acquired outside scene image in the storage unit 120. For convenience of explanation, the outside scene image acquired in step S102 is also referred to as “image 1”. The image 1 functions as “first outside scene information”. For convenience of illustration, the outer camera 61 is emphasized in FIGS. 5 to 7, 10, and 11.

  FIG. 6 is an explanatory diagram for explaining step S104 of the augmented reality processing. In step S104 of FIG. 4, the position estimation unit 142 determines whether or not the movement amount M2 (FIG. 6) of the outer camera 61, measured from the time when the image 1 was acquired in step S102, is greater than or equal to the threshold value stored in the movement amount threshold value 124. Specifically, the position estimation unit 142 repeatedly acquires the motion of the head of the user of the HMD 100 (changes in velocity, acceleration, angular velocity, orientation, and direction) from the 9-axis sensor 66. The position estimation unit 142 then estimates, from the acquired rotation amount M1 of the head, the movement amount M2 of the outer camera 61 from the time when the image 1 was acquired. In this way, the position estimation unit 142 of the present embodiment estimates the movement amount of the outer camera 61 using the motion of the head of the user wearing the HMD 100.

  Thereafter, in step S104 of FIG. 4, when the estimated movement amount M2 of the outer camera 61 is less than the threshold value of the movement amount threshold value 124, the position estimation unit 142 returns the process to step S104 and continues monitoring the movement amount M2. On the other hand, when the estimated movement amount M2 of the outer camera 61 is greater than or equal to the threshold value of the movement amount threshold value 124, the position estimation unit 142 advances the process to step S106.
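  A minimal sketch of the capture trigger in steps S102 to S106, assuming (as an illustration only) that the camera translation M2 is approximated from the head rotation amount M1 via an assumed head-axis-to-camera radius; the camera and sensor objects, the radius value, and all names are hypothetical:

```python
import time

HEAD_TO_CAMERA_RADIUS_M = 0.10  # assumed radius from the head rotation axis to the outer camera

def estimate_camera_movement(rotation_m1_rad: float) -> float:
    """Approximate the outer-camera movement amount M2 (meters) from the head
    rotation amount M1 (radians). The circular-arc relationship used here is
    an assumption; the embodiment only states that M2 is estimated from M1."""
    return HEAD_TO_CAMERA_RADIUS_M * abs(rotation_m1_rad)

def acquire_image_pair(camera, motion_sensor, movement_threshold_m: float):
    """Steps S102-S106: capture image 1, keep monitoring the estimated camera
    movement M2 until it reaches the stored threshold, then capture image 2."""
    image1 = camera.capture()                        # step S102
    rotation_m1 = 0.0
    while True:                                      # step S104
        rotation_m1 += motion_sensor.read_rotation_delta()   # from the 9-axis sensor
        movement_m2 = estimate_camera_movement(rotation_m1)
        if movement_m2 >= movement_threshold_m:
            break
        time.sleep(0.01)
    image2 = camera.capture()                        # step S106
    return image1, image2, movement_m2
```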

  FIG. 7 is an explanatory diagram for explaining step S106 of the augmented reality process. In step S106 of FIG. 4, the position estimation unit 142 instructs the outer camera 61 to capture an image, and acquires an outside scene image in the visual field direction of the user of the HMD 100 including the target object TOB (FIG. 7) of augmented reality processing. The position estimation unit 142 stores the acquired outside scene image in the storage unit 120 in a manner that can be distinguished from the image 1 in step S102. For convenience of explanation, the outside scene image acquired in step S106 is also referred to as “image 2”. The image 2 functions as “second outside scene information”.

  In step S108 in FIG. 4, the position estimation unit 142 estimates the position of the target object with respect to the outer camera 61 from the parallax between the image 1 and the image 2 using a stereo image processing technique. Specifically, the position estimation unit 142 can estimate the position of the target object TOB with respect to the outer camera 61 as follows.

  FIG. 8 is an explanatory diagram for explaining step S108 of the augmented reality processing. FIG. 9 shows an example of the image 1 and the image 2. In FIG. 8, the focal point of the image 1 is P1, and the focal point of the image 2 is P2. The projection point of the target object TOB on the imaging plane PP1 of the image 1 is m (x1, y1), and the projection point of the target object TOB on the imaging plane PP2 of the image 2 is m (x2, y2). The point of the target object TOB in real space is denoted TOB (X, Y, Z). OA1 is the optical axis of the outer camera 61 in step S102, and OA2 is the optical axis of the outer camera 61 in step S106; the two optical axes are parallel.

The movement of the outer camera 61 accompanying the rotation of the user's head is a horizontal movement; for this reason, y1 = y2 in the above. At this time, the relationship among the point TOB (X, Y, Z) of the target object TOB in real space, the projection point m (x1, y1) of the target object TOB in the image 1, and the projection point m (x2, y2) of the target object TOB in the image 2 can be expressed by the following Formulas 1 to 3.
Z = (M2 × f) / (x1-x2) (Formula 1)
X = (Z / f) × x2 (Formula 2)
Y = (Z / f) × y2 (Formula 3)
Here, the distance between the focal point P1 and the focal point P2 can be regarded as the movement amount M2 of the outer camera 61. The distance f between the focal point P1 and the imaging surface PP1 and the distance f between the focal point P2 and the imaging surface PP2 are focal lengths of the outer camera 61 stored in advance in the focal length 122.

  Therefore, the position estimation unit 142 first measures the parallax PA (= x1 − x2) (FIG. 9) between the image 1 and the image 2. The position estimation unit 142 then calculates the point TOB (X, Y, Z) of the target object TOB in real space using the measured parallax PA, the above Formulas 1 to 3, the movement amount M2, and the focal length f stored in the focal length 122. The reference point for measuring the parallax PA can be determined arbitrarily. For example, the position estimation unit 142 can use an edge EG (FIG. 9) of the target object as the reference for measuring the parallax PA. Edges can easily be obtained by a generally known edge detection algorithm (an algorithm that identifies portions where the brightness of an image changes sharply). Since an edge is usually detected as a set (line) of continuous points, the position estimation unit 142 can, in this way, determine the parallax between the image 1 and the image 2 more accurately than when a single point is used as the reference.
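  A minimal sketch of step S108, assuming grayscale images supplied as NumPy arrays, a simple gradient peak used as the edge reference point in place of a full edge detection algorithm, purely horizontal camera motion (y1 = y2), and pixel coordinates measured from the image center; all names and units are illustrative:

```python
import numpy as np

def edge_column(gray: np.ndarray, row: int) -> int:
    """Column of the strongest horizontal brightness change in one image row,
    used as the edge reference point EG (a simplified stand-in for a real
    edge detection algorithm)."""
    grad = np.abs(np.diff(gray[row].astype(float)))
    return int(np.argmax(grad))

def estimate_target_position(gray1: np.ndarray, gray2: np.ndarray,
                             row: int, m2: float, f: float):
    """Step S108: measure the parallax PA from the edge positions in image 1
    and image 2, then apply Formulas 1 to 3 to obtain TOB (X, Y, Z) relative
    to the outer camera. m2 is the movement amount and f the focal length,
    both assumed to be expressed in pixel-compatible units."""
    h, w = gray1.shape
    cx, cy = w / 2.0, h / 2.0                 # principal point assumed at the image center
    x1 = edge_column(gray1, row) - cx         # projection point in image 1
    x2 = edge_column(gray2, row) - cx         # projection point in image 2
    y2 = row - cy                             # y1 = y2 for horizontal movement
    parallax = x1 - x2                        # PA = x1 - x2
    if parallax == 0:
        raise ValueError("no parallax; the camera movement was too small")
    z = (m2 * f) / parallax                   # Formula 1
    x = (z / f) * x2                          # Formula 2
    y = (z / f) * y2                          # Formula 3
    return x, y, z
```

  Because the parallax is measured in pixels, the movement amount M2 and the focal length stored in the focal length 122 would in practice have to be converted into consistent units (for example, expressing f in pixels via the sensor pixel pitch) before Formula 1 yields Z in meters.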

  FIG. 10 is an explanatory diagram for explaining step S110 of the augmented reality processing. In step S110 of FIG. 4, the position estimation unit 142 estimates the position of the user's right eye RE (FIG. 10) with respect to the outer camera 61. Specifically, the position estimation unit 142 instructs the inner camera 62 to capture an image and acquires an image of the user's eyes. The position estimation unit 142 then estimates the position RE (x, y, z) of the right eye RE with respect to the outer camera 61 of the HMD 100 based on the size of the user's right eye RE obtained by image analysis of the acquired eye image.
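  One way the measured eye size could be converted into a position, sketched under the assumptions of a pinhole model for the inner camera 62, a fixed nominal eye width, and a known offset between the inner camera and the outer camera; every constant and name here is illustrative and not taken from the embodiment:

```python
import numpy as np

NOMINAL_EYE_WIDTH_M = 0.03      # assumed characteristic width of the visible eye (illustrative value)
INNER_CAM_FOCAL_PX = 500.0      # assumed inner-camera focal length in pixels
INNER_TO_OUTER_OFFSET = np.array([0.02, 0.0, 0.0])  # assumed offset between the two cameras (m)

def estimate_eye_position(eye_center_px, eye_width_px: float) -> np.ndarray:
    """Step S110 (sketch): infer the eye's distance from its apparent width in
    the inner-camera image using the pinhole model, then express the eye
    position RE (x, y, z) in the outer-camera frame via the fixed offset.
    eye_center_px is the eye center relative to the image center, in pixels."""
    cx_px, cy_px = eye_center_px
    z = INNER_CAM_FOCAL_PX * NOMINAL_EYE_WIDTH_M / eye_width_px   # larger image -> closer eye
    x = (z / INNER_CAM_FOCAL_PX) * cx_px
    y = (z / INNER_CAM_FOCAL_PX) * cy_px
    return np.array([x, y, z]) + INNER_TO_OUTER_OFFSET
```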

  FIG. 11 is an explanatory diagram for explaining step S112 of the augmented reality processing. In step S112 of FIG. 4, the position estimation unit 142 estimates the display position of the virtual object on the right screen from the position of the target object TOB and the position of the right eye RE. Specifically, the position estimation unit 142 calculates the coordinate CO on the right optical image display unit 26 corresponding to the extension line connecting the position TOB (X, Y, Z) of the target object TOB estimated in step S108 and the position RE (x, y, z) of the user's right eye RE estimated in step S110.

  In step S114 of FIG. 4, the position estimation unit 142 transmits the coordinates CO calculated in step S112 to the AR processing unit 144. Thereafter, the AR processing unit 144 converts the coordinate CO of the right optical image display unit 26 into the coordinate COx of the right LCD 241. Thereafter, the AR processing unit 144 generates right-eye image data Data1 in which a virtual object is arranged at the coordinates COx, and transmits the right-eye image data Data1 to the image processing unit 160. The AR processing unit 144 only needs to arrange virtual objects based on the coordinates COx. For this reason, the AR processing unit 144 can place the virtual object in an arbitrary place determined based on the coordinates COx (for example, a place away from the coordinates COx by a predetermined distance).
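  A minimal sketch of steps S112 and S114, assuming all positions are expressed in the outer-camera frame, the right optical image display unit 26 is modeled as the plane z = DISPLAY_PLANE_Z, and the mapping from the display coordinate CO to the LCD coordinate COx is a simple scale-and-offset; the constants and names are illustrative only:

```python
import numpy as np

DISPLAY_PLANE_Z = 0.03                       # assumed z of the display plane in the camera frame (m)
PX_PER_METER = 20000.0                       # assumed scale from the display plane to LCD pixels
LCD_CENTER_PX = np.array([240.0, 135.0])     # assumed LCD center in pixels

def display_coordinate(target_tob: np.ndarray, eye_re: np.ndarray) -> np.ndarray:
    """Step S112 (sketch): intersect the extension line through the eye RE and
    the target object TOB with the display plane, giving the coordinate CO on
    the right optical image display unit 26."""
    direction = target_tob - eye_re
    t = (DISPLAY_PLANE_Z - eye_re[2]) / direction[2]
    co = eye_re + t * direction
    return co[:2]

def to_lcd_coordinate(co_xy: np.ndarray) -> np.ndarray:
    """Step S114 (sketch): convert the display coordinate CO into the LCD
    coordinate COx at (or around) which the virtual object is arranged."""
    return LCD_CENTER_PX + PX_PER_METER * co_xy
```

  For steps S120 to S124, the same two functions would be applied with the left eye LE and the corresponding parameters of the left optical image display unit 28 and the left LCD 242.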

  In steps S120 to S124 of FIG. 4, the position estimation unit 142 performs, for the user's left eye LE, the same processing as described in steps S110 to S114. That is, in step S120, the position estimation unit 142 estimates the position of the user's left eye LE with respect to the outer camera 61. In step S122, the position estimation unit 142 estimates the display position of the virtual object on the left screen (the left optical image display unit 28) from the position of the target object TOB and the position of the left eye LE. In step S124, the AR processing unit 144 converts the coordinate on the left optical image display unit 28 into the coordinate on the left LCD 242, generates left-eye image data Data2 in which the virtual object is arranged at the converted coordinate, and transmits it to the image processing unit 160.

  The image processing unit 160 transmits the received right-eye image data Data1 and left-eye image data Data2 to the image display unit 20. Thereafter, the display processing described with reference to FIG. 2 is executed, so that the user of the HMD 100 can visually recognize the stereoscopic virtual object VOB in the visual field VR as described with reference to FIG. 3.

  In the above embodiment, the position estimation unit 142 estimates the position of the target object with respect to the outer camera 61 using the two images 1 and 2. However, the position estimation unit 142 may estimate the position of the target object with respect to the outer camera 61 by using three or more images. If three or more images are used, the accuracy of estimation of the position of the target object can be improved.
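  One possible way to use three or more images, given only as an illustrative assumption: estimate the target position from each pair of consecutive images with the hypothetical estimate_target_position() from the step S108 sketch and average the results.

```python
import numpy as np

def estimate_from_image_sequence(grays, movements, row, f):
    """Average the pairwise estimates obtained from N >= 2 consecutive images.
    'movements' holds the camera movement amount between consecutive captures;
    averaging is one simple way to exploit the extra images."""
    estimates = [estimate_target_position(grays[i], grays[i + 1], row, movements[i], f)
                 for i in range(len(grays) - 1)]
    return tuple(np.mean(estimates, axis=0))   # averaged (X, Y, Z)
```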

  As described above, according to the augmented reality processing, the position estimation unit 142 estimates the position of the target object TOB with respect to the outside scene acquisition unit (outer camera 61) of the head-mounted display device (HMD 100) based on at least two pieces of outside scene information acquired over time by the outside scene acquisition unit (the image 1 functioning as the first outside scene information and the image 2 functioning as the second outside scene information). Therefore, it is possible to provide a head-mounted display device that can grasp the position of an arbitrary target object TOB existing in the real world using a single outside scene information acquisition unit (for example, a monocular camera). Further, the augmented reality processing unit (AR processing unit 144) causes the image display unit 20 to form the virtual image VI representing the virtual object VOB for augmenting the target object TOB, based on the estimated position of the target object TOB. This makes it possible to reduce the deviation between the target object TOB, which is a real-world object, and the virtual object VOB displayed as a virtual image.

  Furthermore, according to the augmented reality processing, when the movement amount (M2) of the outside scene acquisition unit (outer camera 61) after it acquires the first outside scene information (image 1) exceeds the predetermined threshold value stored in the movement amount threshold value 124, the position estimation unit 142 causes the outside scene acquisition unit to acquire the second outside scene information (image 2). By appropriately designing the predetermined threshold value stored in the movement amount threshold value 124, the position estimation unit 142 can therefore acquire second outside scene information suitable for estimating the position of the target object TOB. Further, the position estimation unit 142 estimates the movement amount (M2) of the outside scene acquisition unit from the motion (rotation amount M1) of the user's head detected by the motion detection unit (9-axis sensor 66). The position estimation unit 142 can therefore estimate the movement amount of the outside scene acquisition unit using the configuration for detecting the motion of the user's head, that is, the detection values of the motion detection unit.

  Further, according to the augmented reality processing, the augmented reality processing unit 144 determines the position of the virtual object VOB based on the position on the optical image display units (the right optical image display unit 26 and the left optical image display unit 28) corresponding to the extension lines connecting the position TOB (X, Y, Z) of the target object TOB and the positions of the user's eyes (RE (x, y, z) and LE (x, y, z)). That is, the augmented reality processing unit 144 can determine the position of the virtual object VOB based on the position of the target object TOB as visually recognized by the user through the optical image display units. As a result, the augmented reality processing unit 144 can display the virtual object VOB for augmenting the target object TOB at a position that does not give the user a sense of incongruity.

  Further, according to the augmented reality processing, the position estimation unit 142 can estimate the positions of the user's eyes (RE (x, y, z) and LE (x, y, z)) with respect to the outside scene acquisition unit (outer camera 61) of the head-mounted display device (HMD 100) based on the eye image acquired by the eye image acquisition unit (inner camera 62). In addition, since the eye image acquisition unit is disposed in the vicinity of the outside scene acquisition unit, the accuracy with which the position estimation unit 142 estimates the position of the user's eyes can be improved.

B. Variation:
In the above embodiment, a part of the configuration realized by hardware may be replaced with software, and conversely, a part of the configuration realized by software may be replaced with hardware. In addition, the following modifications are possible.

・ Modification 1:
In the above embodiment, the configuration of the HMD has been illustrated. However, the configuration of the HMD can be determined arbitrarily without departing from the gist of the invention; for example, constituent elements can be added, deleted, converted, and so on.

  The allocation of constituent elements to the control unit and the image display unit in the above embodiment is merely an example, and various aspects can be adopted. For example, the following aspects may be adopted: (i) an aspect in which processing functions such as a CPU and memory are mounted in the control unit and only a display function is mounted in the image display unit; (ii) an aspect in which processing functions such as a CPU and memory are mounted in both the control unit and the image display unit; (iii) an aspect in which the control unit and the image display unit are integrated (for example, an aspect in which the control unit is included in the image display unit and functions as a glasses-type wearable computer); (iv) an aspect in which a smartphone or a portable game machine is used instead of the control unit; and (v) an aspect in which the control unit and the image display unit are connected via a wireless signal transmission path such as a wireless LAN, infrared communication, or Bluetooth (registered trademark) and the connection unit (cord) is eliminated. In this case, power may also be supplied to the control unit or the image display unit wirelessly.

  For example, the configurations of the control unit and the image display unit exemplified in the above embodiment can be changed arbitrarily. Specifically, for example, although in the above embodiment the control unit includes the transmission units and the image display unit includes the receiving units, the transmission units and the receiving units may each have a bidirectional communication function and function as transmission/reception units. Further, for example, some of the operation interfaces (the various keys, the track pad, and the like) provided in the control unit may be omitted, or another operation interface such as an operation stick may be provided in the control unit. The control unit may also be configured so that devices such as a keyboard and a mouse can be connected to it, and may receive input from the keyboard or the mouse. Further, for example, although a secondary battery is used as the power supply, the power supply is not limited to a secondary battery, and various batteries can be used; for example, a primary battery, a fuel cell, a solar cell, or a thermal cell may be used.

  FIG. 12 is an explanatory diagram showing the external configuration of the HMD in a modification. In the example of FIG. 12A, the image display unit 20x includes a right optical image display unit 26x instead of the right optical image display unit 26, and a left optical image display unit 28x instead of the left optical image display unit 28. The right optical image display unit 26x and the left optical image display unit 28x are formed smaller than the optical members of the above embodiment, and are disposed obliquely above the user's right eye and left eye, respectively, when the HMD is worn. In the example of FIG. 12B, the image display unit 20y includes a right optical image display unit 26y instead of the right optical image display unit 26, and a left optical image display unit 28y instead of the left optical image display unit 28. The right optical image display unit 26y and the left optical image display unit 28y are formed smaller than the optical members of the above embodiment, and are disposed obliquely below the user's right eye and left eye, respectively, when the HMD is worn. Thus, it is sufficient that the optical image display unit is disposed in the vicinity of the user's eyes. The size of the optical member forming the optical image display unit is also arbitrary, and the HMD can be realized in a form in which the optical image display unit covers only a part of the user's eyes, in other words, in a form in which the optical image display unit does not completely cover the user's eyes.

  For example, each processing unit provided in the control unit (for example, the image processing unit and the display control unit) is described as being realized by the CPU loading a computer program stored in the ROM or the hard disk onto the RAM and executing it. However, these functional units may be configured using an ASIC (Application Specific Integrated Circuit) designed to realize the corresponding functions.

  For example, the HMD of the embodiment is a binocular transmissive HMD, but it may be a monocular HMD. It may also be configured as a non-transmissive HMD in which transmission of the outside scene is blocked while the user wears the HMD. In addition, instead of an image display unit worn like glasses, an ordinary display device (a liquid crystal display device, a plasma display device, an organic EL display device, a beam-scanning display, or the like) may be employed as the image display unit. In this case as well, the connection between the control unit and the image display unit may be made via a wired signal transmission path or via a wireless signal transmission path; the control unit can then be used as a remote controller for an ordinary display device. Instead of an image display unit worn like glasses, an image display unit of another shape, for example one worn like a hat, may be adopted. The earphones may be of an ear-hook type or a headband type, or may be omitted. Further, the device may be configured as a head-up display (HUD) mounted on a vehicle such as an automobile or an airplane, as an HMD incorporated in a body protective device such as a helmet, or as a hand-held display (HHD). Further, a video see-through HMD may be configured by combining a non-transmissive HMD that blocks transmission of the outside scene with the outer camera.

  For example, in the above embodiment, the image light generation unit is configured using a backlight, a backlight control unit, an LCD, and an LCD control unit. However, this aspect is merely an example; the image light generation unit may include components for realizing another method together with or instead of these components. For example, the image light generation unit may include an organic EL (Organic Electro-Luminescence) display and an organic EL control unit, or a digital micromirror device may be used instead of the LCD. Further, the present invention can also be applied to a laser-retinal-projection type head-mounted display device.

  For example, in the above embodiment, the configuration in which the outside scene acquisition unit (outer camera) is built into the image display unit is illustrated. However, the outside scene acquisition unit may be configured to be removable from the image display unit. Specifically, a WEB camera that can be attached to and detached from the image display unit with a clip or an attachment may be employed as the outside scene acquisition unit. Even in this case, the position of the target object relative to the outside scene acquisition unit can be estimated based on at least two pieces of outside scene information acquired over time by the outside scene acquisition unit. When estimating the position of the target object, the relative positions of the image display unit and the outside scene acquisition unit may be taken into account; these relative positions can be detected by providing a displacement sensor in each of the image display unit and the outside scene acquisition unit.
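
  As a purely illustrative sketch of taking the relative positions into account, the offset reported by such displacement sensors could be folded into the estimate as a coordinate translation. The simplification of ignoring any relative rotation, and all names and values, are assumptions made for this example.

```python
import numpy as np

def target_in_display_frame(target_in_camera, camera_offset_from_display):
    """Convert a target position estimated in the detachable camera's frame
    into the image display unit's frame, assuming the displacement sensors
    report only a translation (no relative rotation)."""
    return np.asarray(target_in_camera, dtype=float) + np.asarray(camera_offset_from_display, dtype=float)

# Example: camera clipped 4 cm to the right of and 1 cm above the display reference point.
offset = (0.04, 0.01, 0.0)
print(target_in_display_frame((0.20, -0.05, 1.50), offset))
```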

  For example, in the above embodiment, an example of the arrangement of the outside scene acquisition unit (outer camera) is shown. However, the arrangement of the outer camera can be changed arbitrarily. For example, the outer camera may be disposed at a position corresponding to the area between the user's eyebrows, or at a position corresponding to the user's left temple. The angle of view of the outer camera can also be determined arbitrarily. When the angle of view of the outer camera is wide (for example, 360 degrees), a step of extracting the part including the target object from the outside scene image obtained by the outer camera may be added to the augmented reality processing, as sketched below.
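
  The extraction step mentioned above can be as simple as cropping a window around the target object before the rest of the augmented reality processing. The following is a minimal sketch under that assumption; the detection step that supplies the bounding box is assumed to exist elsewhere and is not part of the embodiment's description.

```python
def crop_around_target(outside_scene_image, bbox, margin=20):
    """Crop the region containing the target object from a wide-angle image.

    outside_scene_image: H x W (x C) array, e.g. a NumPy image.
    bbox: (x_min, y_min, x_max, y_max) of the detected target object, in pixels.
    margin: extra pixels kept around the box.
    """
    h, w = outside_scene_image.shape[:2]
    x0, y0, x1, y1 = bbox
    x0, y0 = max(0, x0 - margin), max(0, y0 - margin)
    x1, y1 = min(w, x1 + margin), min(h, y1 + margin)
    return outside_scene_image[y0:y1, x0:x1]
```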

Modification 2:
In the above embodiment, an example of the augmented reality processing was described. However, the procedure shown in the above embodiment is merely an example, and various modifications are possible; for example, some steps may be omitted, other steps may be added, and the order of the executed steps may be changed.

  For example, in step S102, the position estimation unit estimates the movement amount of the outer camera from the movement of the user's head; however, a motion sensor may be added near the outer camera, and the movement amount of the outer camera may be acquired directly from the detection values of the motion sensor.
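
  As a rough illustration of this alternative, the camera's movement amount could be obtained by integrating the readings of a motion sensor placed near the outer camera. The sketch below double-integrates accelerometer samples; it is a simplification (a real implementation would have to handle gravity compensation and drift), and the sample format is an assumption.

```python
def movement_from_accelerometer(samples, dt):
    """Estimate displacement of the outer camera along one axis.

    samples: iterable of accelerations (m/s^2), already gravity-compensated.
    dt: sampling interval in seconds.
    Returns the displacement in metres (drift-prone; shown only to illustrate
    directly acquiring the movement amount M2 from a motion sensor).
    """
    velocity = 0.0
    displacement = 0.0
    for a in samples:
        velocity += a * dt
        displacement += velocity * dt
    return displacement

# e.g. 0.2 s of roughly constant 0.5 m/s^2 acceleration sampled at 100 Hz
print(movement_from_accelerometer([0.5] * 20, dt=0.01))  # about 0.01 m
```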

  For example, in step S104, the position estimation unit determines whether to acquire the image 2 based on whether the movement amount of the outer camera is equal to or greater than the stored movement amount threshold value. However, the position estimation unit may determine whether to acquire the image 2 using other conditions. For example, the position estimation unit may determine whether the position of the outer camera is preferable for acquiring the image 2. "A position of the outer camera preferable for acquiring the image 2" means, for example, that the movement amount of the outer camera is a predetermined amount or more and that the position (height) of the outer camera on the y-axis has not changed significantly from the time when the image 1 was acquired. If this condition on the y-axis position (height) of the outer camera is added, the premise (y1 = y2) of the above equations 1 to 3 can be ensured.
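
  The acquisition decision of step S104, extended with the height condition just described, might look like the following sketch. The threshold values and the tolerance on the y-axis change are illustrative assumptions, not values from the embodiment.

```python
def should_acquire_image2(camera_movement, y1, y2,
                          movement_threshold=0.05, y_tolerance=0.005):
    """Decide whether to capture image 2.

    camera_movement: estimated movement amount of the outer camera (m).
    y1, y2: outer-camera height (y-axis position) at image-1 time and now (m).
    The y-axis condition keeps the premise y1 == y2 of equations 1 to 3
    approximately valid.
    """
    moved_enough = camera_movement >= movement_threshold
    height_unchanged = abs(y2 - y1) <= y_tolerance
    return moved_enough and height_unchanged
```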

  For example, in step S108, the position estimation unit may estimate the position of the target object with respect to the outer camera from the parallax between the image 1 and the image 2 using a known technique other than stereo image processing.
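
  In the simplest rectified case, the stereo relation used in step S108 reduces to triangulation from the parallax, the camera movement amount (acting as the baseline), and the focal length. The following is a minimal sketch of that relation; the numeric values are illustrative, and a real implementation would also rectify the images and match features between them.

```python
def depth_from_parallax(parallax_px, camera_movement_m, focal_length_px):
    """Distance from the outer camera to the target object.

    parallax_px: horizontal shift of the target object's edge between
                 image 1 and image 2, in pixels.
    camera_movement_m: movement amount of the outer camera between the two
                       acquisitions (the stereo baseline), in metres.
    focal_length_px: focal length of the outer camera, in pixels.
    """
    if parallax_px <= 0:
        raise ValueError("parallax must be positive")
    return focal_length_px * camera_movement_m / parallax_px

# e.g. 12 px of parallax, 5 cm of camera movement, 800 px focal length
print(depth_from_parallax(12, 0.05, 800))  # about 3.3 m
```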

  For example, in step S108, the position estimation unit estimates the position of the target object with respect to the outer camera from a plurality of still images. However, the position estimation unit may estimate the position of the target object with respect to the outer camera using a moving image (a set of still images acquired over time).

  For example, in steps S110 and S120, the position estimation unit estimates the positions of the user's right eye and left eye with respect to the outer camera. However, the positions of the user's right and left eyes with respect to the outer camera may be stored in the storage unit in advance, and steps S110 and S120 may be omitted. Further, the position estimation unit may estimate the positions of the user's right and left eyes with respect to the outer camera using ultrasonic waves or infrared rays instead of acquiring the eye image with the inner camera.
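
  For the variant that keeps the inner camera, the idea of claim 7 (estimating the eye position from the apparent size of the eye in the eye image) can be sketched with a pinhole-camera relation. The assumed average eye width, the focal length, and the function name below are illustrative assumptions rather than values from the embodiment.

```python
def eye_distance_from_image(eye_width_px, focal_length_px, real_eye_width_m=0.024):
    """Estimate the distance from the inner camera to the user's eye.

    eye_width_px: apparent width of the eye found by image analysis, in pixels.
    focal_length_px: focal length of the inner camera, in pixels.
    real_eye_width_m: assumed physical eye width (illustrative average).
    """
    return focal_length_px * real_eye_width_m / eye_width_px

# e.g. the eye spans 300 px in an inner camera with a 500 px focal length
print(eye_distance_from_image(300, 500))  # about 0.04 m
```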

  For example, a process described in the above embodiment as being executed by the position estimation unit may be executed by the augmented reality processing unit, and a process described as being executed by the augmented reality processing unit may be executed by the position estimation unit.

Modification 3:
The present invention is not limited to the above-described embodiments, examples, and modifications, and can be realized with various configurations without departing from the gist thereof. For example, the technical features of the embodiments, examples, and modifications corresponding to the technical features of the aspects described in the summary of the invention may be replaced or combined as appropriate in order to solve some or all of the above-described problems or to achieve some or all of the above-described effects. Unless a technical feature is described as essential in the present specification, it may be deleted as appropriate.

Description of Symbols
10 ... Control unit (controller)
11 ... Decision key
12 ... Lighting unit
13 ... Display switching key
14 ... Trackpad
15 ... Luminance switching key
16 ... Direction key
17 ... Menu key
18 ... Power switch
20 ... Image display unit
21 ... Right holding part
22 ... Right display drive part
23 ... Left holding part
24 ... Left display drive part
26 ... Right optical image display unit
28 ... Left optical image display unit
30 ... Earphone plug
32 ... Right earphone
34 ... Left earphone
40 ... Connection unit
42 ... Right cord
44 ... Left cord
46 ... Connecting member
48 ... Main body cord
51 ... Transmission unit
52 ... Transmission unit
53 ... Reception unit
54 ... Reception unit
61 ... Outer camera (outside scene acquisition unit)
62 ... Inner camera (eye image acquisition unit)
66 ... 9-axis sensor (motion detection unit)
100 ... HMD (head-mounted display device)
110 ... Input information acquisition unit
120 ... Storage unit
122 ... Focal length
124 ... Movement amount threshold value
130 ... Power supply
140 ... CPU
142 ... Position estimation unit
144 ... AR processing unit (augmented reality processing unit)
160 ... Image processing unit
170 ... Audio processing unit
180 ... Interface
190 ... Display control unit
201 ... Right backlight control unit
202 ... Left backlight control unit
211 ... Right LCD control unit
212 ... Left LCD control unit
221 ... Right backlight
222 ... Left backlight
241 ... Right LCD
242 ... Left LCD
251 ... Right projection optical system
252 ... Left projection optical system
261 ... Right light guide plate
262 ... Left light guide plate
PCLK ... Clock signal
VSync ... Vertical synchronization signal
HSync ... Horizontal synchronization signal
Data ... Image data
Data1 ... Right-eye image data
Data2 ... Left-eye image data
OA ... External device
PC ... Personal computer
SC ... Outside scene
VI ... Virtual image
VR ... Visual field
RE ... Right eye
LE ... Left eye
ER ... End
EL ... End
AP ... End
VOB ... Virtual object
TOB ... Target object
M1 ... Head rotation amount
M2 ... Movement amount of outer camera
CO ... Coordinate

Claims (9)

  1. A head-mounted display device that allows a user to visually recognize a virtual image and an outside scene, the head-mounted display device comprising:
    An image display unit for allowing the user to visually recognize the virtual image;
    An outside scene acquisition unit that acquires outside scene information including at least a feature of the outside scene in the user's visual field direction;
    A position estimation unit that estimates the position of an arbitrary target object existing in the real world based on at least two pieces of the outside scene information acquired over time by the outside scene acquisition unit; and
    An augmented reality processing unit that provides the user with augmented reality by causing the image display unit to form the virtual image representing a virtual object for augmenting the target object, based on the estimated position of the target object.
  2. The head-mounted display device according to claim 1,
    wherein the position estimation unit
    causes the outside scene acquisition unit to acquire second outside scene information when a movement amount of the outside scene acquisition unit after the outside scene acquisition unit acquired first outside scene information exceeds a predetermined threshold, and
    estimates the position of the target object using the first outside scene information and the second outside scene information.
  3. The head-mounted display device according to claim 2, further comprising:
    a motion detection unit that detects movement of the user's head,
    wherein the position estimation unit estimates the movement amount of the outside scene acquisition unit from the movement of the head detected by the motion detection unit.
  4. The head-mounted display device according to claim 2 or claim 3,
    wherein the outside scene acquisition unit is a camera that acquires, as the outside scene information, an outside scene image representing the outside scene in the user's visual field direction, and
    the position estimation unit
    obtains a parallax between the first outside scene information and the second outside scene information, and
    estimates the position of the target object using the obtained parallax, the movement amount of the outside scene acquisition unit, and the focal length of the outside scene acquisition unit.
  5. The head-mounted display device according to claim 4,
    wherein the position estimation unit obtains the parallax with reference to an edge of the target object included in the first outside scene information and an edge of the target object included in the second outside scene information.
  6. The head-mounted display device according to any one of claims 1 to 5,
    wherein the image display unit includes an optical image display unit that forms the virtual image in front of the user's eyes,
    the position estimation unit further calculates the position of the optical image display unit corresponding to an extension line between the estimated position of the target object and the position of the user's eye, and
    the augmented reality processing unit determines the position of the virtual object based on the calculated position of the optical image display unit.
  7. The head-mounted display device according to claim 6, further comprising:
    an eye image acquisition unit that acquires an image of the user's eye,
    wherein the position estimation unit further performs image analysis of the eye image acquired by the eye image acquisition unit to obtain the size of the user's eye, and estimates the position of the user's eye based on the obtained size of the eye.
  8. The head-mounted display device according to claim 7,
    wherein the eye image acquisition unit is disposed in the vicinity of the outside scene acquisition unit.
  9. A method of controlling a head-mounted display device, the method comprising:
    (a) allowing a user of the head-mounted display device to visually recognize a virtual image;
    (b) acquiring outside scene information including at least a feature of the outside scene in the user's visual field direction;
    (c) estimating a position of an arbitrary target object existing in the real world based on at least two pieces of the outside scene information acquired over time; and
    (d) providing the user with augmented reality by causing the virtual image representing a virtual object for augmenting the target object to be formed in the step (a), based on the estimated position of the target object.
JP2014054395A 2014-03-18 2014-03-18 Head-mounted display device and method for controlling head-mounted display device Active JP6287399B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014054395A JP6287399B2 (en) 2014-03-18 2014-03-18 Head-mounted display device and method for controlling head-mounted display device

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2014054395A JP6287399B2 (en) 2014-03-18 2014-03-18 Head-mounted display device and method for controlling head-mounted display device
US14/626,103 US9715113B2 (en) 2014-03-18 2015-02-19 Head-mounted display device, control method for head-mounted display device, and computer program
US15/629,352 US10297062B2 (en) 2014-03-18 2017-06-21 Head-mounted display device, control method for head-mounted display device, and computer program

Publications (2)

Publication Number Publication Date
JP2015176526A true JP2015176526A (en) 2015-10-05
JP6287399B2 JP6287399B2 (en) 2018-03-07

Family

ID=54255624

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2014054395A Active JP6287399B2 (en) 2014-03-18 2014-03-18 Head-mounted display device and method for controlling head-mounted display device

Country Status (1)

Country Link
JP (1) JP6287399B2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004198211A (en) * 2002-12-18 2004-07-15 Aisin Seiki Co Ltd Apparatus for monitoring vicinity of mobile object
JP2011203824A (en) * 2010-03-24 2011-10-13 Sony Corp Image processing device, image processing method and program
JP2013120133A (en) * 2011-12-07 2013-06-17 Fujitsu Ltd Three-dimensional coordinate measuring instrument, three-dimensional coordinate measurement method, and program
JP2014504413A (en) * 2010-12-16 2014-02-20 マイクロソフト コーポレーション Augmented reality display content based on understanding and intention

Also Published As

Publication number Publication date
JP6287399B2 (en) 2018-03-07

Similar Documents

Publication Publication Date Title
EP3029550B1 (en) Virtual reality system
US9202443B2 (en) Improving display performance with iris scan profiling
CN104076512B (en) The control method of head-mount type display unit and head-mount type display unit
US9122321B2 (en) Collaboration environment using see through displays
US9588345B2 (en) Head-mounted display device and control method for the head-mounted display device
JP6060512B2 (en) Head-mounted display device
US9720231B2 (en) Display, imaging system and controller for eyewear display device
CN105045375B (en) Head-mounted display device, control method therefor, control system, and computer program
US9213185B1 (en) Display scaling based on movement of a head-mounted display
US10162412B2 (en) Display, control method of display, and program
KR101773892B1 (en) Display device, head mounted display, display system, and control method for display device
JP6160154B2 (en) Information display system using head-mounted display device, information display method using head-mounted display device, and head-mounted display device
JP5884576B2 (en) Head-mounted display device and method for controlling head-mounted display device
US9959591B2 (en) Display apparatus, method for controlling display apparatus, and program
US20150262424A1 (en) Depth and Focus Discrimination for a Head-mountable device using a Light-Field Display System
KR101845350B1 (en) Head-mounted display device, control method of head-mounted display device, and display system
US9454006B2 (en) Head mounted display and image display system
JP6387825B2 (en) Display system and information display method
CN104615237B (en) Image display system, the method and head-mount type display unit for controlling it
WO2013006518A2 (en) Multi-visor: managing applications in head mounted displays
US10445579B2 (en) Head mounted display device, image display system, and method of controlling head mounted display device
JP6299067B2 (en) Head-mounted display device and method for controlling head-mounted display device
US9557566B2 (en) Head-mounted display device and control method for the head-mounted display device
US9411160B2 (en) Head mounted display, control method for head mounted display, and image display system
JP6209906B2 (en) Head-mounted display device, method for controlling head-mounted display device, and image display system

Legal Events

Date        Code  Title                                                Description
2016-05-30  RD04  Notification of resignation of power of attorney    Free format text: JAPANESE INTERMEDIATE CODE: A7424
2017-01-06  A621  Written request for application examination         Free format text: JAPANESE INTERMEDIATE CODE: A621
2017-10-03  A131  Notification of reasons for refusal                  Free format text: JAPANESE INTERMEDIATE CODE: A131
2017-09-29  A977  Report on retrieval                                  Free format text: JAPANESE INTERMEDIATE CODE: A971007
2017-11-27  A521  Written amendment                                    Free format text: JAPANESE INTERMEDIATE CODE: A523
            TRDD  Decision of grant or rejection written
2018-01-09  A01   Written decision to grant a patent or to grant a registration (utility model)  Free format text: JAPANESE INTERMEDIATE CODE: A01
2018-01-22  A61   First payment of annual fees (during grant procedure)  Free format text: JAPANESE INTERMEDIATE CODE: A61
            R150  Certificate of patent or registration of utility model  Ref document number: 6287399; Country of ref document: JP; Free format text: JAPANESE INTERMEDIATE CODE: R150