WO2010064361A1 - Head-mounted display - Google Patents
Head-mounted display
- Publication number
- WO2010064361A1 (PCT/JP2009/006012)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- state
- user
- mounted display
- face
- specific part
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0187—Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
Definitions
- the present disclosure relates to a head-mounted display that presents a content image indicated by content data to a user's eyes so that the user can recognize the content image.
- a technology related to a head-mounted display has been proposed in which a content image represented by content data is presented to a user's eyes so that the user can recognize it. For example, to improve the operability of the head-mounted display, a sensor detects the gaze direction, focal length, pupil state, fundus pattern, and eyelid movement of the user wearing the head-mounted display; the user's intention or situation is determined from the information acquired by the sensor, and based on the determination result, control is performed on the display operation of display means that displays an image (content image) (for example, see Patent Document 1).
- a technique that allows the user to issue operation instructions for the head-mounted display based on the state of the pupil, the fundus pattern, or eyelid movement, rather than through a user interface operated directly by hand, is useful because it can reduce the user's operation burden.
- for example, at a product production site, a work instruction is presented via the head-mounted display to the user wearing it; while that user assembles the product, both hands are occupied with the assembly work, and it is difficult to operate the head-mounted display.
- in such a situation, the above technique can reduce the burden of operating the head-mounted display on the user wearing it.
- it is important that an operation that matches the user's intention is accurately executed in the head mounted display.
- maintaining the line of sight deliberately fixed in one direction is a painful operation for humans, and the user's gaze direction normally changes constantly.
- furthermore, various external stimuli may shift the line of sight without any intention to operate. A method that detects the gaze direction therefore has difficulty distinguishing intended from unintended eye movement, and erroneous detection occurs.
- the same applies to eyelid movement: people usually blink unconsciously, and may close their eyelids without intending to operate, for example due to fatigue, so erroneous detection is likely. Moreover, the state of the pupil and the fundus pattern are difficult to change intentionally.
- this disclosure aims to provide a highly reliable head-mounted display with excellent operability for operations that instruct the head-mounted display's actions.
- the head-mounted display according to the present disclosure detects the state of a specific part of the face of the user wearing it, a part that shows almost no unintentional movement and enters a specific state only when the user moves it consciously, and controls the head-mounted display so that the operation associated with the detected state is executed, on condition that the specific part is in that specific state.
- according to one aspect of the present disclosure, a head-mounted display that presents a content image represented by content data to a user's eye so that the user can recognize it comprises control means for controlling the operation of the head-mounted display, and detection means for detecting that the state of a specific part of the user's face, one that does not move unless the user consciously moves it, is a predetermined state; the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the specific part of the user's face is the predetermined state.
- with such a head-mounted display, erroneous operations caused by unconscious movements are unlikely for operations that instruct the head-mounted display's actions, and reliable operation matching the user's intention can be performed without using the hands.
- in another aspect, the detection means detects, as the state of the specific part of the user's face, that the state of the user's eyebrows is the predetermined state, and the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the user's eyebrows is the predetermined state.
- in still another aspect, the detection means detects, as the state of the specific part of the user's face, that the state of the user's cheek is the predetermined state, and the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the user's cheek is the predetermined state.
- in still another aspect, the detection means detects, as the state of the specific part of the user's face, that the state of the user's mouth is the predetermined state, and the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the user's mouth is the predetermined state.
- the “operation of the head-mounted display” includes, for example, selection of content data; playback and stopping of content data executed when presenting it to the user's eyes; processing for presenting the content image to the user's eyes; supplying or shutting off power to the head-mounted display; and other various operations realized on the head-mounted display.
- the head-mounted display is not necessarily a single device, and may be configured by two devices, for example.
- the control means may be configured as another device and connected to the head mounted display with a predetermined signal cable.
- the detection means may include imaging means capable of capturing an image of the specific part of the user's face and analysis means for analyzing the image captured by the imaging means; in accordance with the analysis result of the analysis means, it detects that the state of the specific part of the user's face is the predetermined state.
- alternatively, the detection means may include a light-emitting element capable of irradiating the specific part of the user's face with light and a light-detecting element that detects the light reflected from that part; when the intensity of the detected reflected light deviates from a predetermined reference value, the detection means detects that the state of the specific part is the predetermined state.
- the control means may control a first operation and a second operation as operations of the head-mounted display. When the state of the specific part of the user's face is a first state, the detection means detects the first state as the predetermined state; when it is a second state, the detection means detects the second state as the predetermined state. The control means controls the first operation associated with the first state on condition that the detection means detects the first state, and controls the second operation associated with the second state on condition that the detection means detects the second state.
- in this way, a plurality of operations can be assigned to the states of a specific part of the user's face and controlled.
- FIGS. 1(a) to 1(c) show the external appearance of the head-mounted display body.
- FIG. 2 shows the head-mounted display.
- FIG. 3 shows the functional blocks of the control box of the head-mounted display.
- FIGS. 4(a) to 4(d) conceptually show the registration state of a table.
- FIG. 5 shows the flow of the main process.
- FIG. 6 shows the flow of the state determination process.
- FIG. 7 conceptually shows the registration state of another table (registration relating to the state of the eyebrows).
- in the following, a head-mounted display configured by connecting a head-mounted display body and a control box that provides content images to the body is described as an example; these devices can also be configured as a single integrated device.
- the head-mounted display body is simply referred to as a head-mounted display (hereinafter referred to as “HMD”).
- the HMD 100 includes temples 104A and 104B, end pieces 106A and 106B, and a front frame 108.
- ear pieces (temple tips) 102A and 102B that rest on the user's ears are attached to one end of the temples 104A and 104B.
- hinges 112A and 112B are provided at the other ends of the temples 104A and 104B.
- the temples 104A and 104B and the end pieces 106A and 106B are connected via these hinges 112A and 112B.
- the front frame 108 connects the end pieces 106A and 106B.
- a nose pad 110 that contacts the user's nose is attached to the center of the front frame 108.
- the temples 104A and 104B, the end pieces 106A and 106B, the front frame 108, and the nose pad 110 form the skeleton of the HMD 100.
- the temples 104A and 104B can be folded at the hinges 112A and 112B formed on the end pieces 106A and 106B.
- the structure of the skeleton of the HMD 100 is, for example, the same as that of ordinary eyeglasses.
- when worn, the HMD 100 is supported on the user's face by the ear pieces 102A and 102B and the nose pad 110.
- in FIG. 1(b), the ear pieces 102A and 102B and the temples 104A and 104B are omitted from the drawing.
- the image presentation device 114 is attached to the skeleton of the HMD 100 via an attachment part 122 provided near the end piece 106A.
- attached near the end piece 106A via the attachment part 122, the image presentation device 114 is positioned at approximately the same height as the left eye 118 of the user wearing the HMD 100.
- the image presentation device 114 is connected to the control box 200 via a predetermined signal cable 250. Although the details are described later, the control box 200 executes a rendering process on content data stored in a predetermined area.
- by controlling the input/output interface (hereinafter “I/F”) of its own device, the control box 200 outputs the content image signal containing the content image obtained by the rendering (playback) process to the image presentation device 114 via the signal cable 250.
- the image presentation device 114 acquires the content image signal output by the control box 200 via an input/output I/F not drawn in FIGS. 1 and 2, and optically emits a content image based on the content image signal toward the half mirror 116.
- reference numeral 120a denotes the light beam of the content image emitted from the image presentation device 114, and reference numeral 120b denotes the light beam reflected by the half mirror 116 and entering the user's left eye 118.
- the image presentation device 114 can be configured as a retinal-scanning display that scans the light beams 120a and 120b corresponding to the acquired content image signal in two dimensions, guides the scanned beams to the user's left eye 118, and forms the content image on the retina; alternatively, a liquid crystal display, an organic EL (organic electroluminescence) display, or another device may be used.
- the eyebrow sensor 214 is attached to the top surface of the image presentation device 114, and the cheek sensor 216 to the bottom surface. As shown in FIG. 2, a stay 124 (omitted from FIG. 1) with a mouth sensor 218 attached to its tip is attached to the temple 104A.
- (Control box configuration) The control box 200 is worn, for example, on the user's waist. As shown in FIG. 3, the control box 200 includes a CPU 202 that controls the device itself; a ROM 204 that stores various programs; a RAM 206 serving as a work area; a storage unit 208 that stores content data 2082 and a table 2084; an input/output I/F 210 that exchanges various signals with the HMD 100; and an operation unit 212 that is operated by the user and accepts instructions from the user.
- the control box 200 is also connected to the eyebrow sensor 214 attached to the top surface of the image presentation device 114, the cheek sensor 216 attached to the bottom surface, and the mouth sensor 218 attached to the tip of the stay 124 (see FIG. 2).
- the storage unit 208 is constituted by, for example, a hard disk.
- the content data 2082 stored in the storage unit 208 is, for example, a work instruction sheet describing how to assemble a given product (the following description assumes that the content data 2082 is such a work instruction sheet).
- the table 2084 registers associations between the eyebrow, cheek, and mouth states detected by the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218 attached to the HMD 100 and the operations realized on the HMD 100 and the control box 200.
- the operation unit 212 includes, for example, keys, and receives instructions to start and end (stop) playback of the content data 2082.
- the eyebrow sensor 214 detects the state (movement) of the user's left eyebrow
- the cheek sensor 216 detects the state (movement) of the user's cheek
- the mouth sensor 218 detects the state (shape) of the user's mouth, more specifically, the user's lips.
- for the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218, for example, an image sensor based on a CCD (charge-coupled device) or a photoelectric sensor composed of a light-emitting element and a light-detecting element can be used.
- Detection signals including states detected by the sensors 214, 216, and 218 are input to the control box 200 and stored on the RAM 206.
- the state detected by each sensor 214, 216, 218 is represented by an image showing the part (such as the eyebrows) in the case of an image sensor, and by the intensity of the reflected light in the case of a photoelectric sensor.
- the CPU 202 obtains a content image by executing on the RAM 206 a program, stored in the ROM 204, for playing back (rendering) the content data 2082. It then executes on the RAM 206 a program, stored in the ROM 204, for controlling the input/output I/F 210, and outputs a content image signal containing the content image from the input/output I/F 210 to the HMD 100.
- the CPU 202 also analyzes the states of the user's eyebrows, cheeks, and mouth by executing on the RAM 206 an analysis program (for example, a pattern-matching program) stored in the ROM 204, using the detection signals captured by the sensors 214, 216, and 218 and stored in the RAM 206, together with the table 2084.
- the CPU 202 further executes on the RAM 206 a program, stored in the ROM 204, for controlling the HMD 100, thereby controlling both the operations of the HMD 100 instructed via the operation unit 212 and the operations of the HMD 100 based on the analysis results. Thus, by using various data such as the content data 2082, the table 2084, and the detection signals and executing the various programs stored in the ROM 204 on the RAM 206, the CPU 202 constitutes various functional means (for example, control means and analysis means).
- a standard face image which is a face image of the user in the standard state, is registered in the table 2084.
- the operation of the HMD 100 is not associated with the standard face image.
- the standard face image is associated with continuing the operation in progress on the HMD 100, for example, continuously executing the reproduction operation.
- the face image indicating the “eyebrows raised” state is associated with the operation “next”. For example, while a given content image is presented to the user's left eye 118, raising the eyebrows causes the next content image after the one being presented to be presented. More specifically, while a content image containing the work instruction for the first step is presented, if the user who has finished the work of the first step raises the eyebrows, a content image containing the work instruction for the second step is presented.
- the face image indicating the “eyebrows knitted” state is associated with the operation “return to the previous”. Specifically, while a content image containing the work instruction for the second step is presented, if the user knits the eyebrows, a content image containing the work instruction for the first step is presented.
- the eyebrow sensor 214 is disposed facing the user's left eyebrow, so whether the eyebrows are raised or otherwise changed is judged from the state of the left eyebrow.
- the action “decision” is associated with the face image indicating the state of “raised cheek”.
- the operation “cancel” is associated with the face image indicating the “hollowed cheek” state.
- for example, content images containing the work instructions for the first, second, and third steps are displayed in sequence, switching at predetermined intervals. In this state, if the user raises the cheek while the content image containing the work instruction for the second step is presented, that content image is decided on and continues to be presented. If the user then hollows the cheek after this decision, the decision is canceled.
- like the eyebrow sensor 214, the cheek sensor 216 is disposed facing the user's left cheek, so whether the cheek is raised or not is judged from the state of the left cheek.
- the “brightness adjustment” of the presented content image is associated with the face image indicating the state of the mouth when “e” is uttered.
- “brightness up” is associated with an image indicating the state of the mouth when “a” is uttered.
- “brightness down” is associated with the mouth state when “o” is uttered. More specifically, if a content image containing the work instruction for the first step is being presented to the user's left eye 118 and the user makes the mouth shape for “e”, a setting screen for brightness adjustment is presented.
- if the user makes the mouth shape for “a” while the setting screen is presented, the brightness increases by one level; if the user makes the mouth shape for “o”, the brightness decreases by one level. If the user makes the mouth shape for “a” or “o” twice in succession, the brightness increases or decreases by two levels.
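The state-to-operation associations above can be made concrete in software. The following is a minimal Python sketch of how a table like table 2084 might be modeled; the patent registers reference face images (or reflected-light intensities) rather than string labels, so the keys, labels, and the lookup_operation helper are illustrative assumptions only.

```python
# Hypothetical model of table 2084: (face part, detected state) -> operation.
# The real table holds reference face images or light intensities; the plain
# string labels below are assumptions made purely for illustration.
TABLE_2084 = {
    ("eyebrow", "raised"):   "next",                  # FIG. 4(b)
    ("eyebrow", "knitted"):  "return_to_previous",    # FIG. 4(b)
    ("cheek",   "raised"):   "decision",              # FIG. 4(c)
    ("cheek",   "hollowed"): "cancel",                # FIG. 4(c)
    ("mouth",   "e"):        "brightness_adjustment", # FIG. 4(d)
    ("mouth",   "a"):        "brightness_up",         # FIG. 4(d)
    ("mouth",   "o"):        "brightness_down",       # FIG. 4(d)
}

def lookup_operation(part, state):
    """Return the operation associated with a detected state, or None for the
    standard face (i.e., continue the operation currently in progress)."""
    return TABLE_2084.get((part, state))
```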
- the main processing shown in FIG. 5 is started when the CPU 202 executes a program stored in the ROM 204 on the RAM 206 on condition that the HMD 100 and the control box 200 are turned on.
- the detection signals from the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218 input to the control box 200, the content data 2082 stored in the storage unit 208, and the table 2084 are used at predetermined timings.
- the CPU 202 that has started the process first initializes the components of the control box 200 (S100), initializes the sensors 214, 216, and 218 (S102), and moves the process to S104.
- in step S104, the CPU 202 determines whether the user has input an instruction to start playback of the content data 2082 via the operation unit 212. If no playback start instruction has been input (S104: No), the CPU 202 waits until one is input. When an instruction to start playback is input (S104: Yes), the CPU 202 starts the content image signal output process (S106).
- the content image signal output process reads the content data 2082 from the storage unit 208 into the RAM 206, renders it, and outputs the content image signal containing the content image obtained by the rendering to the HMD 100 by controlling the input/output I/F 210.
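As a rough sketch only, the output process of S106 might take the following shape in software; the storage and I/F objects and the render_frames helper are assumed placeholders, since the patent does not specify any software interfaces.

```python
# Hedged sketch of the content image signal output process (S106).
def content_image_output_process(storage, io_f210):
    content_data = storage.read("content_data_2082")  # storage unit 208 -> RAM 206
    for image in render_frames(content_data):         # rendering (playback) process
        io_f210.output(image)                         # I/F 210, cable 250 -> HMD 100

def render_frames(content_data):
    """Hypothetical generator yielding one content image per work step."""
    for step in content_data:
        yield step
```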
- having started the content image signal output process in S106, the CPU 202 determines whether a face motion trigger has been detected from any of the sensors 214, 216, and 218 (S108). In detail, the CPU 202 executes a pattern-matching process between the state of the user's eyebrow (left eyebrow) contained in the detection signal input from the eyebrow sensor 214 to the control box 200 and the state of the left eyebrow in the standard face image (see FIG. 4(a)), and determines whether the two match. The CPU 202 likewise executes a pattern-matching process between the state of the user's cheek (left cheek) contained in the detection signal from the cheek sensor 216 and the state of the left cheek in the standard face image, and between the state (shape) of the user's mouth contained in the detection signal from the mouth sensor 218 and the state (shape) of the mouth in the standard face image, determining in each case whether the two match.
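A minimal sketch of this trigger check (S108) follows; matches stands in for the pattern-matching routine, whose implementation the patent leaves unspecified.

```python
# Sketch of the face motion trigger check (S108): a trigger is raised when any
# monitored part no longer matches the registered standard face image.
def detect_face_motion_trigger(readings, standard_face, matches):
    """readings and standard_face are dicts keyed by 'eyebrow', 'cheek', 'mouth'.
    Returns the parts that no longer match; an empty list means S108: No."""
    return [part for part in ("eyebrow", "cheek", "mouth")
            if not matches(readings[part], standard_face[part])]
```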
- the CPU 202 determines whether the face motion trigger detected in S108 is based on a change in the eyebrow state (S200). If the state of the user's left eyebrow matched the state of the left eyebrow in the standard image in S108, the CPU 202 determines that the trigger is not based on a change in the eyebrow state (S200: No) and moves the process to S204. If the two did not match, the CPU 202 determines that the face motion trigger is based on a change in the eyebrow state (S200: Yes) and moves the process to S202.
- when the trigger is based on a change in the eyebrow state (S200: Yes), the CPU 202 determines again by pattern matching what the state of the user's left eyebrow contained in the detection signal input to the control box 200 from the eyebrow sensor 214 is (S202).
- the determination in S202 will be specifically described.
- the CPU 202 determines which of the eyebrow states registered in the table 2084 (see FIG. 4(b)) the user's eyebrow state contained in the detection signal from the eyebrow sensor 214 corresponds to.
- if it matches the image indicating the raised state, the CPU 202 specifies the operation “next”.
- if it matches the image indicating the knitted state, the CPU 202 specifies the operation “return to the previous”.
- after executing S202, the CPU 202 shifts the process to S210.
- the CPU 202 determines whether the face motion trigger detected in S108 is based on a change in the cheek state (S204). If the state of the user's left cheek matched the state of the left cheek in the standard image in S108, the CPU 202 determines that the trigger is not based on a change in the cheek state (S204: No) and moves the process to S208. If the two did not match, the CPU 202 determines that the face motion trigger is based on a change in the cheek state (S204: Yes) and proceeds to S206.
- when the trigger is based on a change in the cheek state (S204: Yes), the CPU 202 executes a pattern-matching process to determine what the state of the user's left cheek contained in the detection signal input to the control box 200 from the cheek sensor 216 is (S206).
- the determination in S206 will be described in detail.
- the CPU 202 determines which of the cheek states registered in the table 2084 (see FIG. 4(c)) the user's cheek state contained in the detection signal from the cheek sensor 216 corresponds to.
- if it matches the image indicating the raised state, the CPU 202 specifies the operation “decision”.
- if it matches the image indicating the hollowed state, the CPU 202 specifies the operation “cancel”.
- after executing S206, the CPU 202 shifts the process to S210.
- in S208, the CPU 202 determines that the face motion trigger detected in S108 is based on a change in the mouth state (shape), and executes a pattern-matching process to determine whether the state of the user's mouth (lips) contained in the detection signal input to the control box 200 from the mouth sensor 218 is the shape for “e”, “a”, or “o”. Specifically, the CPU 202 determines which of the mouth states registered in the table 2084 (see FIG. 4(d)) it corresponds to.
- if it matches the image for “e”, the CPU 202 specifies the operation “brightness adjustment”.
- if it matches the image for “a”, the CPU 202 specifies the operation “brightness up”; if it matches the image for “o”, the CPU 202 specifies the operation “brightness down”. After executing S208, the CPU 202 shifts the process to S210.
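Taken together, S200 through S208 amount to a three-way dispatch, sketched below under stated assumptions: classify is a hypothetical pattern-matching classifier returning a state label such as "raised", and lookup_operation is reused from the table sketch given earlier.

```python
# Sketch of the state determination process (FIG. 6, S200-S208).
def state_determination(readings, standard_face, matches, classify):
    # S200: is the trigger based on a change in the eyebrow state?
    if not matches(readings["eyebrow"], standard_face["eyebrow"]):
        return lookup_operation("eyebrow", classify("eyebrow", readings["eyebrow"]))  # S202
    # S204: is it based on a change in the cheek state?
    if not matches(readings["cheek"], standard_face["cheek"]):
        return lookup_operation("cheek", classify("cheek", readings["cheek"]))        # S206
    # S208: otherwise it is based on the mouth shape ("e", "a", or "o").
    return lookup_operation("mouth", classify("mouth", readings["mouth"]))            # then S210
```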
- in S210, the CPU 202 controls the content image signal output process in progress so that the operation specified in S202, S206, or S208 is executed. For example, while a content image containing the work instruction for the first step is presented, if the operation “next” is specified in S202, the CPU 202 executes the rendering process so that a content image containing the work instruction for the second step is presented, and outputs the content image signal containing that content image from the input/output I/F 210 to the HMD 100. A content image based on the content image signal containing the work instruction for the second step is then optically emitted toward the half mirror 116. After executing S210, the CPU 202 ends the state determination process and proceeds to S112 (see FIG. 5).
- the CPU 202 determines whether or not the reproduction of the content data 2082 has ended, in other words, whether or not the content data has been processed to the end. If the result of determination is that playback has not ended (S112: No), the process proceeds to S108. On the other hand, when the reproduction is completed (S112: Yes), the CPU 202 executes a content image signal output process termination process (S114), and terminates the main process. Note that the reproduction is also terminated based on a reproduction termination (stop) instruction input by the user via the operation unit 212.
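The overall flow of FIG. 5 can then be hedged into a short loop; every method on the hmd object below is an assumed placeholder for the corresponding step in the text, and the two helper functions are the sketches given earlier.

```python
# Hedged sketch of the main process (FIG. 5).
def main_process(hmd):
    hmd.initialize_components()                # S100
    hmd.initialize_sensors()                   # S102
    hmd.wait_for_playback_start_instruction()  # S104 (operation unit 212)
    hmd.start_content_image_output()           # S106
    while not hmd.playback_finished():         # S112
        readings = hmd.read_sensors()
        if detect_face_motion_trigger(readings, hmd.standard_face, hmd.matches):  # S108
            operation = state_determination(readings, hmd.standard_face,
                                            hmd.matches, hmd.classify)            # S110
            hmd.execute_operation(operation)   # S210
    hmd.end_content_image_output()             # S114
```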
- each of the sensors 214, 216, and 218 can be a photoelectric sensor.
- the intensity of the reflected light is registered in the table 2084 in association with each operation of the HMD 100.
- the CPU 202 determines the states of the user's eyebrows, cheeks, and mouth based on the intensities registered in the table 2084 and the reflected-light intensities input from the sensors 214, 216, and 218 to the control box 200.
- in FIG. 7, the hatched rectangle indicates the user's eyebrow, and the circular mark indicates the “light spot” from the light-emitting element.
- the intensities (reference values) of the reflected light when the user's face is in the standard state are registered in the table 2084 as “eyebrow: 80±15 (65 to 95)” (see FIG. 7), “cheek: 120±15 (105 to 135)”, and “mouth: 100±15 (85 to 115)”.
- for example, if detection signals with reflected-light intensities of “eyebrow: 82”, “cheek: 118”, and “mouth: 100” are input to the control box 200, the CPU 202 determines in S108 (see FIG. 5) that no face motion trigger has been detected (S108: No).
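With photoelectric sensors, the trigger check of S108 reduces to a range comparison against the registered reference values; the sketch below uses the concrete ranges quoted above and is otherwise an assumption about how such a check might be coded.

```python
# Reference ranges registered in the table (reference value +/- 15).
REFERENCE_RANGES = {
    "eyebrow": (65, 95),    # 80 +/- 15
    "cheek":   (105, 135),  # 120 +/- 15
    "mouth":   (85, 115),   # 100 +/- 15
}

def photoelectric_trigger(intensities):
    """Return the parts whose reflected-light intensity deviates from the
    standard range; an empty list means S108: No.
    Example: {"eyebrow": 82, "cheek": 118, "mouth": 100} -> []."""
    return [part for part, value in intensities.items()
            if not (REFERENCE_RANGES[part][0] <= value <= REFERENCE_RANGES[part][1])]
```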
- in the above description, an operation of the HMD 100 is associated with the state of a single specific part of the user's face contained in the detection signal input to the control box 200 from each of the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218.
- however, the following configurations may also be adopted.
- for example, an operation of the HMD 100 such as “fast forward” can be associated with a combination of a state in which the eyebrows are raised and a state in which the cheek is raised, in other words, the state of raising the cheek while raising the eyebrows.
- in this case, the CPU 202 determines that a face motion trigger has been input based on the detection signals input from the eyebrow sensor 214 and the cheek sensor 216 (see S108 in FIG. 5: Yes) and executes the state determination process (S110 in FIG. 5; see FIG. 6 for details).
- the CPU 202 makes the determination in S200 of FIG. 6 and, after affirming it (see S200: Yes in FIG. 6), also makes the determination of S204. If both the determinations in S200 and S204 are affirmative (see S200 and S204 in FIG. 6: Yes), the CPU 202 determines that the state is “eyebrows raised and cheek raised” and specifies the operation “fast forward”.
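Combined states can be handled by keying the table on a set of simultaneous part states, as in this sketch; the frozenset key and the label fast_forward are implementation assumptions mirroring the example above.

```python
# Sketch of associating one operation with a combination of part states, as in
# the "raise the cheek while raising the eyebrows" -> "fast forward" example.
COMBINATION_TABLE = {
    frozenset({("eyebrow", "raised"), ("cheek", "raised")}): "fast_forward",
}

def lookup_combination(detected_states):
    """detected_states: iterable of (part, state) pairs found in S200/S204."""
    return COMBINATION_TABLE.get(frozenset(detected_states))

# If both S200 and S204 are affirmative:
# lookup_combination([("eyebrow", "raised"), ("cheek", "raised")])
# -> "fast_forward"
```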
- in the above description, the eyebrow sensor 214 detects the state (movement) of one eyebrow, specifically the left eyebrow.
- alternatively, eyebrow sensors may be provided for both the right and left eyebrows, and a configuration can be adopted in which, for example, the operation “reduction” is associated with the state of raising the left eyebrow while lowering the right eyebrow.
- in this case, the CPU 202 determines that a face motion trigger has been input based on the detection signals input from the right and left eyebrow sensors 214 (see S108 in FIG. 5: Yes) and executes the state determination process (S110 in FIG. 5; see FIG. 6 for details).
- the CPU 202 performs the determination in S200 of FIG. 6 for each eyebrow. If the determinations regarding the states of the right and left eyebrows are both affirmative (see S200 in FIG. 6: Yes), the CPU 202 determines that the left eyebrow is raised and the right eyebrow is lowered, and specifies the operation “reduction”.
- as described above, the HMD 100 adopts a configuration in which the states of the eyebrows, cheeks, and mouth, which do not change unless the user wearing the HMD 100 moves them consciously, are registered in association with the operations realized on the HMD 100 and the control box 200; the state of each of these parts is sensed, and when the eyebrows, cheeks, or mouth is in a registered state, the operation associated with that state is executed.
- this realizes hands-free operation while preventing operation instructions for operations not intended by the user from being input to the control box 200.
- for example, the user can, without using his or her hands to operate the HMD 100 or the control box 200 while working, visually recognize the work instruction for the second step simply by raising the eyebrows.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Optics & Photonics (AREA)
- User Interface Of Digital Computer (AREA)
- Controls And Circuits For Display Device (AREA)
- Control Of Indicators Other Than Cathode Ray Tubes (AREA)
Abstract
Disclosed is a head-mounted display that visibly presents a content image, represented by content data, to a user's eyes. The head-mounted display comprises control means for controlling the operation of the head-mounted display, and detection means for detecting that the state of a specific part of the user's face, one that does not move unless the user wills the movement, is a predetermined state. The control means controls the operations associated with the predetermined state (step S210) on condition that the detection means detects that the state of the specific part of the user's face is the predetermined state (steps S202, S206, and S208).
Description
The present disclosure relates to a head-mounted display that presents a content image indicated by content data to a user's eyes so that the user can recognize the content image.
A technology has been proposed for a head-mounted display that presents a content image represented by content data to a user's eyes so that the user can recognize it. For example, to improve the operability of the head-mounted display, a sensor detects the gaze direction, focal length, pupil state, fundus pattern, and eyelid movement of the user wearing the head-mounted display; the user's intention or situation is determined from the information acquired by the sensor, and based on the determination result, control is performed on the display operation of display means that displays an image (content image) (for example, see Patent Document 1).
A technique that allows the user to issue operation instructions for the head-mounted display based on the state of the pupil, the fundus pattern, or eyelid movement, rather than through a user interface operated directly by hand, is useful because it can reduce the user's operation burden. For example, at a product production site, a work instruction is presented via the head-mounted display to the user wearing it; while that user assembles the product, both hands are occupied with the assembly work, and it is difficult to operate the head-mounted display.
In such a situation, the above technique can reduce the burden of operating the head-mounted display on the user wearing it. In this case, however, it is important that an operation matching the user's intention is executed accurately on the head-mounted display. For example, deliberately holding the line of sight fixed in one direction is a painful operation for humans, and the user's gaze direction normally changes constantly. Furthermore, various external stimuli may shift the line of sight without any intention to operate. A method that detects the gaze direction therefore has difficulty distinguishing intended from unintended eye movement, and erroneous detection occurs. The same applies to eyelid movement: people usually blink unconsciously, and may close their eyelids without intending to operate, for example due to fatigue, so erroneous detection is likely. Moreover, the state of the pupil and the fundus pattern are difficult to change intentionally.
This disclosure aims to provide a highly reliable head-mounted display with excellent operability for operations that instruct the head-mounted display's actions.
The head-mounted display according to the present disclosure detects the state of a specific part of the face of the user wearing it, a part that shows almost no unintentional movement and enters a specific state only when the user moves it consciously, and controls the head-mounted display so that the operation associated with the detected state is executed, on condition that the specific part is in that specific state.
According to an aspect of the present disclosure, there is provided a head-mounted display that presents a content image represented by content data to a user's eye so that the user can recognize it, comprising: control means for controlling the operation of the head-mounted display; and detection means for detecting that the state of a specific part of the user's face, one that does not move unless the user consciously moves it, is a predetermined state. The control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the specific part of the user's face is the predetermined state.
With such a head-mounted display, erroneous operations caused by unconscious movements are unlikely for operations that instruct the head-mounted display's actions, and reliable operation matching the user's intention can be performed without using the hands.
According to another aspect of the present disclosure, the detection means detects, as the state of the specific part of the user's face, that the state of the user's eyebrows is the predetermined state, and the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the user's eyebrows is the predetermined state.
According to still another aspect of the present disclosure, the detection means detects, as the state of the specific part of the user's face, that the state of the user's cheek is the predetermined state, and the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the user's cheek is the predetermined state.
According to still another aspect of the present disclosure, the detection means detects, as the state of the specific part of the user's face, that the state of the user's mouth is the predetermined state, and the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the user's mouth is the predetermined state.
With such a head-mounted display, the operation of the display can be controlled based on the state of the eyebrows, cheeks, or mouth, parts that rarely undergo large state changes unless the user wearing the display moves them deliberately; the operability of the head-mounted display can therefore be improved while suppressing malfunctions.
The “operation of the head-mounted display” includes, for example, selection of content data; playback and stopping of content data executed when presenting it to the user's eyes; processing for presenting the content image to the user's eyes; supplying or shutting off power to the head-mounted display; and other various operations realized on the head-mounted display. Further, the head-mounted display need not be a single device; it may be configured, for example, from two devices. For example, the control means may be configured as a separate device connected to the head-mounted display by a predetermined signal cable.
According to still another aspect of the present disclosure, the detection means includes imaging means capable of capturing an image of the specific part of the user's face and analysis means for analyzing the image captured by the imaging means, and detects, in accordance with the analysis result of the analysis means, that the state of the specific part of the user's face is the predetermined state. Such a head-mounted display can appropriately detect the state of the specific part of the user's face.
According to still another aspect of the present disclosure, the detection means includes a light-emitting element capable of irradiating the specific part of the user's face with light and a light-detecting element that detects the light reflected from the specific part; when the intensity of the reflected light detected by the light-detecting element deviates from a predetermined reference value, the detection means detects that the state of the specific part of the user's face is the predetermined state. Such a head-mounted display can appropriately detect the state of the specific part of the user's face.
According to still another aspect of the present disclosure, the control means controls a first operation and a second operation as operations of the head-mounted display. When the state of the specific part of the user's face is a first state, the detection means detects the first state as the predetermined state; when it is a second state, the detection means detects the second state as the predetermined state. The control means controls the first operation associated with the first state on condition that the detection means detects the first state, and controls the second operation associated with the second state on condition that the detection means detects the second state. With such a head-mounted display, a plurality of operations can be assigned to the states of a specific part of the user's face and controlled.
Embodiments reflecting the present disclosure will be described in detail below with reference to the drawings. The present disclosure is not limited to the configurations described below, and various configurations can be adopted within the same technical idea. For example, the following description takes as an example a head-mounted display configured by connecting a head-mounted display body and a control box that provides content images to the body; these devices can also be configured as a single integrated device. In the following description, the head-mounted display body is simply referred to as the head-mounted display (hereinafter “HMD”).
(Overview of the head-mounted display)
As shown in FIG. 1, the HMD 100 includes temples 104A and 104B, end pieces 106A and 106B, and a front frame 108. Ear pieces (temple tips) 102A and 102B that rest on the user's ears are attached to one end of the temples 104A and 104B. Hinges 112A and 112B are provided at the other ends of the temples 104A and 104B. The temples 104A and 104B and the end pieces 106A and 106B are connected via these hinges 112A and 112B. The front frame 108 connects the end pieces 106A and 106B. A nose pad 110 that contacts the user's nose is attached to the center of the front frame 108. The temples 104A and 104B, the end pieces 106A and 106B, the front frame 108, and the nose pad 110 form the skeleton of the HMD 100. The temples 104A and 104B can be folded at the hinges 112A and 112B formed on the end pieces 106A and 106B. The structure of the skeleton of the HMD 100 is, for example, the same as that of ordinary eyeglasses. As shown in FIG. 2, when worn, the HMD 100 is supported on the user's face by the ear pieces 102A and 102B and the nose pad 110. In FIG. 1(b), the ear pieces 102A and 102B and the temples 104A and 104B are omitted from the drawing.
The image presentation device 114 is attached to the skeleton of the HMD 100 via an attachment part 122 provided near the end piece 106A. Attached near the end piece 106A via the attachment part 122, the image presentation device 114 is positioned at approximately the same height as the left eye 118 of the user wearing the HMD 100. As shown in FIG. 2, the image presentation device 114 is connected to the control box 200 via a predetermined signal cable 250. Although the details are described later, the control box 200 executes a rendering process on content data stored in a predetermined area. By controlling the input/output interface (hereinafter “I/F”) of its own device, the control box 200 outputs the content image signal containing the content image obtained by the rendering (playback) process to the image presentation device 114 via the signal cable 250. The image presentation device 114 acquires the content image signal output by the control box 200 via an input/output I/F not drawn in FIGS. 1 and 2, and optically emits a content image based on the content image signal toward the half mirror 116.
The content image (light beam) emitted from the image presentation device 114 is reflected by the half mirror 116 and enters, in other words, is visibly presented (projected) to, the user's left eye 118. The user thereby recognizes the content image. In FIG. 1(a), reference numeral 120a denotes the light beam of the content image emitted from the image presentation device 114, and reference numeral 120b denotes the light beam reflected by the half mirror 116 and entering the user's left eye 118. The image presentation device 114 can be configured as a retinal-scanning display that scans the light beams 120a and 120b corresponding to the acquired content image signal in two dimensions, guides the scanned beams to the user's left eye 118, and forms the content image on the retina; alternatively, a liquid crystal display, an organic EL (organic electroluminescence) display, or another device may be used.
The eyebrow sensor 214 is attached to the top surface of the image presentation device 114, and the cheek sensor 216 to the bottom surface. As shown in FIG. 2, a stay 124 (omitted from FIG. 1) with a mouth sensor 218 attached to its tip is attached to the temple 104A.
(Control box configuration)
For example, thecontrol box 200 is attached to a user's waist or the like. As shown in FIG. 3, the control box 200 includes a CPU 202 that controls the apparatus itself, a ROM 204 that stores various programs, a RAM 206 as a work area, a storage unit 208 that stores content data 2082 and a table 2084. , An input / output I / F 210 that transmits / receives various signals to / from the HMD 100, and an operation unit 212 that is operated by the user and receives an instruction from the user. Further, the control box 200 includes an eyebrow sensor 214 attached to the upper surface of the image presentation device 114, a cheek sensor 216 attached to the lower surface, and a mouth sensor 218 attached to the tip of the stay 124 (see FIG. 2). , Is connected.
Here, the storage unit 208 is implemented by, for example, a hard disk. The content data 2082 stored in the storage unit 208 is, for example, a work instruction sheet describing how to assemble a given product (the following description assumes that the content data 2082 is such a work instruction sheet). The table 2084 registers associations between the eyebrow, cheek, and mouth states detected by the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218 attached to the HMD 100 and the operations realized on the HMD 100 and the control box 200. The operation unit 212 consists of, for example, keys, and accepts instructions to start and end (stop) playback of the content data 2082.
The eyebrow sensor 214 detects the state (movement) of the user's left eyebrow, and the cheek sensor 216 detects the state (movement) of the user's cheek. The mouth sensor 218 detects the state (shape) of the user's mouth, more specifically the user's lips. For the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218, for example, an image sensor based on a CCD (charge-coupled device) or a photoelectric sensor composed of a light-emitting element and a light-detecting element can be used. The detection signals containing the states detected by the sensors 214, 216, and 218 are input to the control box 200 and stored in the RAM 206. The state detected by each sensor 214, 216, 218 is represented by an image showing the part (such as the eyebrows) in the case of an image sensor, and by the intensity of the reflected light in the case of a photoelectric sensor.
The CPU 202 obtains a content image by executing, on the RAM 206, a program stored in the ROM 204 for playing back (rendering) the content data 2082. It then executes, on the RAM 206, a program stored in the ROM 204 for controlling the input/output I/F 210, and outputs a content image signal containing the content image from the input/output I/F 210 to the HMD 100. The CPU 202 also executes, on the RAM 206, an analysis program stored in the ROM 204 (for example, a pattern-matching program), using the detection signals sensed by the sensors 214, 216, and 218 and stored in the RAM 206 together with the table 2084, and thereby analyzes the states of the user's eyebrow, cheek, and mouth. Further, the CPU 202 executes, on the RAM 206, a program stored in the ROM 204 for controlling the HMD 100, and thereby controls both the operations of the HMD 100 instructed through the operation unit 212 and the operations of the HMD 100 based on the analysis results. In this way, the CPU 202 uses the content data 2082, the table 2084, the detection signals, and other data to execute the programs stored in the ROM 204 on the RAM 206, which constitutes the various functional means (for example, the control means and the analysis means).
As shown in FIG. 4(a), a standard face image, which is an image of the user's face in the standard state, is registered in the table 2084. No operation of the HMD 100 is associated with the standard face image. In other words, the standard face image corresponds to continuing whatever operation is in progress on the HMD 100, for example, continuing playback.
As shown in FIG. 4(b), the face image showing the "eyebrow raised" state is associated with the operation "next." For example, while a given content image is being presented to the user's left eye 118, raising the eyebrow causes the content image following the one being presented to be presented. More concretely, while a content image containing the work instruction for the first process step is presented, a user who has finished that step can raise an eyebrow to bring up the content image containing the work instruction for the second step.
Similarly, the face image showing the "eyebrows drawn together" state is associated with the operation "back." Specifically, while a content image containing the work instruction for the second step is presented, drawing the eyebrows together causes the content image containing the work instruction for the first step to be presented. In the HMD 100 shown in FIGS. 1 and 2, the eyebrow sensor 214 faces the user's left eyebrow, so whether the eyebrow is raised or otherwise changed is judged from the state of the left eyebrow.
As shown in FIG. 4(c), the face image showing the "cheek raised" state is associated with the operation "confirm," and the face image showing the "cheek dented" state with the operation "cancel." Suppose, for example, that the content images containing the work instructions for the first, second, and third steps are displayed in turn, switching at a fixed interval. In this state, if the user raises a cheek while the content image for the second step is presented, that image is confirmed as the one to keep presenting. If the user then dents the cheek, the confirmation is cancelled. Like the eyebrow sensor 214, the cheek sensor 216 faces the user's left cheek, so whether the cheek is raised or otherwise changed is judged from the state of the left cheek.
As shown in FIG. 4(d), the face image showing the mouth shape used to voice "e" is associated with "brightness adjustment" of the presented content image, the image showing the mouth shape for "a" with "brightness up," and the mouth shape for "o" with "brightness down." More concretely, while a content image containing the work instruction for the first step is being presented to the user's left eye 118, forming the mouth as if saying "e" brings up a brightness-adjustment settings screen. With the settings screen displayed, forming the mouth as if saying "a" raises the brightness one level, while forming it as if saying "o" lowers it one level. Forming the "a" or "o" mouth shape twice in succession, for example, raises or lowers the brightness two levels.
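As a rough illustration of this mouth-shape control flow, the sketch below models the settings screen and the one-level steps as a small state machine; the shape labels reuse the hypothetical ones from the table sketch above.

```python
# Sketch of the FIG. 4(d) brightness control: "e" opens the settings
# screen, "a" raises brightness one level, "o" lowers it one level.
class BrightnessControl:
    def __init__(self, level: int = 5):
        self.level = level
        self.menu_open = False

    def on_mouth_shape(self, shape: str) -> None:
        if shape == "e_shape":            # mouth shape for voicing "e"
            self.menu_open = True         # present the settings screen
        elif self.menu_open and shape == "a_shape":
            self.level += 1               # one level up per detection
        elif self.menu_open and shape == "o_shape":
            self.level -= 1               # one level down per detection
```

Two consecutive "a" detections would then raise the brightness two levels, matching the behavior described above.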
(Processing executed by the control box)
The main processing shown in FIG. 5 starts when the CPU 202 executes a program stored in the ROM 204 on the RAM 206, on the condition that the HMD 100 and the control box 200 are powered on. During this processing, the detection signals sensed by the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218 and input to the control box 200, together with the content data 2082 and the table 2084 stored in the storage unit 208, are used at the appropriate times.
Having started the processing, the CPU 202 first initializes the components of the control box 200 (S100), initializes the sensors 214, 216, and 218 (S102), and proceeds to S104. In S104, the CPU 202 determines whether the user has input an instruction to start playback of the content data 2082 via the operation unit 212. If no playback-start instruction has been input (S104: No), the CPU 202 waits until one is. If a playback-start instruction has been input (S104: Yes), the CPU 202 starts the content image signal output processing (S106). In this processing, the content data 2082 is read from the storage unit 208 into the RAM 206 and rendered, and the content image signal containing the rendered content image is output to the HMD 100 by controlling the input/output I/F 210.
Having started the content image signal output processing in S106, the CPU 202 determines whether a face motion trigger has been detected from any of the sensors 214, 216, and 218 (S108). In more detail: the CPU 202 runs pattern matching between the state of the user's eyebrow (left eyebrow) contained in the detection signal input to the control box 200 from the eyebrow sensor 214 and the state of the left eyebrow in the standard face image (see FIG. 4(a)), and determines whether the two match. Likewise, it runs pattern matching between the state of the user's cheek (left cheek) contained in the detection signal from the cheek sensor 216 and the state of the left cheek in the standard face image, and between the state (shape) of the user's mouth contained in the detection signal from the mouth sensor 218 and the state (shape) of the mouth in the standard face image, determining in each case whether the two match.
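Abstracting the image comparison away, the S108 check reduces to asking whether any sensed part deviates from its standard-state reading. A minimal sketch, with the pattern matching collapsed to label equality and all names hypothetical:

```python
# Sketch of the S108 face-motion-trigger check. STANDARD stands in for
# the standard face image of FIG. 4(a); real matching would compare
# sensor images (or reflected-light intensities) rather than labels.
STANDARD = {"eyebrow": "neutral", "cheek": "neutral", "mouth": "closed"}

def face_motion_trigger(readings: dict[str, str]) -> bool:
    """True when any facial part deviates from the standard state."""
    return any(readings[part] != STANDARD[part] for part in STANDARD)
```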
If matching succeeds in S108, in other words, if the states of the user's eyebrow, cheek, and mouth all agree with the corresponding states shown in the standard image, the CPU 202 judges that no face motion trigger has been input (S108: No) and proceeds to S112. If even one of the eyebrow, cheek, or mouth states does not agree with the state shown in the standard image, the CPU 202 judges that a face motion trigger has been input (S108: Yes) and executes the state determination process (S110).
Having started the state determination process shown in FIG. 6, the CPU 202 determines whether the face motion trigger detected in S108 was caused by a change in the eyebrow state (S200). If the state of the user's left eyebrow agreed with the state of the left eyebrow shown in the standard image in S108, the CPU 202 judges that the trigger is not based on an eyebrow change (S200: No) and proceeds to S204. If the two did not agree, the CPU 202 judges that the trigger is based on an eyebrow change (S200: Yes) and proceeds to S202.
In S202, the CPU 202 again runs pattern matching to determine what state the user's left eyebrow, contained in the detection signal input to the control box 200 from the eyebrow sensor 214, is in. Concretely, the CPU 202 determines which of the eyebrow states registered in the table 2084 (see FIG. 4(b)) the eyebrow state in the detection signal corresponds to. If, for example, the eyebrow state in the detection signal matches the image showing the raised state, the CPU 202 identifies the operation "next." Likewise, if it matches the image showing the drawn-together state, the CPU 202 identifies the operation "back." After executing S202, the CPU 202 proceeds to S210.
In S204, the CPU 202 determines whether the face motion trigger detected in S108 was caused by a change in the cheek state. If the state of the user's left cheek agreed with the state of the left cheek shown in the standard image in S108, the CPU 202 judges that the trigger is not based on a cheek change (S204: No) and proceeds to S208. If the two did not agree, the CPU 202 judges that the trigger is based on a cheek change (S204: Yes) and proceeds to S206.
In S206, the CPU 202 runs pattern matching to determine what state the user's left cheek, contained in the detection signal input to the control box 200 from the cheek sensor 216, is in. Concretely, the CPU 202 determines which of the cheek states registered in the table 2084 (see FIG. 4(c)) the cheek state in the detection signal corresponds to. If, for example, the cheek state in the detection signal matches the image showing the raised state, the CPU 202 identifies the operation "confirm." Likewise, if it matches the image showing the dented state, the CPU 202 identifies the operation "cancel." After executing S206, the CPU 202 proceeds to S210.
In S208, the CPU 202 concludes that the face motion trigger detected in S108 was caused by a change in the mouth state (shape), and runs pattern matching to determine whether the state of the user's mouth (lips), contained in the detection signal input to the control box 200 from the mouth sensor 218, corresponds to voicing "e," "a," or "o." Concretely, the CPU 202 determines which of the mouth states registered in the table 2084 (see FIG. 4(d)) the mouth state (shape) in the detection signal corresponds to. If, for example, the mouth state in the detection signal matches the image for voicing "e," the CPU 202 identifies the operation "brightness adjustment." Likewise, if it matches the image for voicing "a," the CPU 202 identifies "brightness up," and if it matches the image for voicing "o," it identifies "brightness down." After executing S208, the CPU 202 proceeds to S210.
In S210, the CPU 202 controls the content image signal output processing in progress so that the operation identified in S202, S206, or S208 is carried out. For example, if the operation "next" is identified in S202 while a content image containing the work instruction for the first step is presented, the CPU 202 executes rendering so that the content image containing the work instruction for the second step is presented, and outputs the content image signal containing that image from the input/output I/F 210 to the HMD 100; the image presentation device 114 then optically emits, toward the half mirror 116, the content image based on that content image signal. After executing S210, the CPU 202 ends the state determination process and proceeds to S112 (see FIG. 5).
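Taken together, S200 through S210 amount to finding which part deviated and dispatching on its registered state. A minimal sketch reusing the hypothetical STANDARD and lookup_action helpers from the sketches above:

```python
# Sketch of the FIG. 6 state determination process: check eyebrow (S200),
# then cheek (S204), then mouth (S208), and identify the operation.
def determine_state(readings: dict[str, str]) -> str | None:
    for part in ("eyebrow", "cheek", "mouth"):
        if readings[part] != STANDARD[part]:
            # S202 / S206 / S208: match against the states registered
            # in table 2084 and identify the associated operation.
            return lookup_action(part, readings[part])
    return None  # nothing deviated; continue the current operation
```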
Returning to FIG. 5: in S112, the CPU 202 determines whether playback of the content data 2082 has finished, in other words, whether the content data has been processed to the end. If playback has not finished (S112: No), the processing returns to S108. If playback has finished (S112: Yes), the CPU 202 executes the termination processing for the content image signal output processing (S114) and ends the main processing. Playback is also terminated in response to a playback-end (stop) instruction that the user inputs via the operation unit 212.
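The FIG. 5 main processing then reduces to initialization followed by a detect-and-dispatch loop. A structural sketch reusing face_motion_trigger() and determine_state() from above; the callables passed in are hypothetical stand-ins for the sensor interface, S210, and the S112 end-of-playback test:

```python
# Sketch of the FIG. 5 loop (S108 -> S110 -> S210, repeated until S112).
from typing import Callable

def run_main_loop(read_sensors: Callable[[], dict[str, str]],
                  execute: Callable[[str], None],
                  playback_finished: Callable[[], bool]) -> None:
    while not playback_finished():              # S112
        readings = read_sensors()               # detection signals (RAM 206)
        if face_motion_trigger(readings):       # S108
            action = determine_state(readings)  # S110
            if action is not None:
                execute(action)                 # S210
```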
The description above assumed that the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218 are image sensors. As noted earlier, however, each of the sensors 214, 216, and 218 can instead be a photoelectric sensor. In that case, as shown in FIG. 7, the intensity of the reflected light is registered in the table 2084 in association with each operation of the HMD 100. The CPU 202 judges the states of the user's eyebrow, cheek, and mouth from the intensities registered in the table 2084 and the intensities of the reflected light input from the sensors 214, 216, and 218. In FIG. 7, the hatched rectangle represents the user's eyebrow, and "○" represents the light spot from the light-emitting element.
For example, suppose the table 2084 registers the reflected-light intensities (reference values) for the user's face in the standard state as "eyebrow: 80±15 (65 to 95)" (see FIG. 7), "cheek: 120±15 (105 to 135)," and "mouth: 100±15 (85 to 115)." If the detection signals input to the control box 200 from the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218 report intensities of "eyebrow: 82," "cheek: 118," and "mouth: 100," the CPU 202 judges in S108 (see FIG. 5) that no face motion trigger has been detected (S108: No).
Suppose further that the table 2084 registers, as the change for the "eyebrow raised" state, an intensity range of "110±14 (96 ≤ intensity ≤ 124)," which lies outside the reference range "80±15 (65 to 95)" (see FIG. 7). If a detection signal from the eyebrow sensor 214 reports an intensity of "eyebrow: 115," the CPU 202 answers Yes in S108 (see FIG. 5) and S200 (see FIG. 6) (S108, S200: Yes) and identifies the operation "next" in S202.
Similarly, suppose the table 2084 registers, as the change for the "eyebrows drawn together" state, an intensity range of "50±14 (36 ≤ intensity ≤ 64)," outside the reference range (see FIG. 7). If a detection signal from the eyebrow sensor 214 reports an intensity of "eyebrow: 45," the CPU 202 answers Yes in S108 (see FIG. 5) and S200 (see FIG. 6) (S108, S200: Yes) and identifies the operation "back" in S202.
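In the photoelectric variant, the pattern matching reduces to checking which registered intensity band the reading falls into. A minimal sketch using the example eyebrow bands above (the band values come from FIG. 7; the function and label names are hypothetical):

```python
# Sketch of intensity-band classification for the photoelectric variant.
# Bands follow the FIG. 7 eyebrow example: 80±15 standard, 110±14 raised,
# 50±14 drawn together.
EYEBROW_BANDS = {
    "standard": (65, 95),    # no trigger (S108: No)
    "raised":   (96, 124),   # -> operation "next"
    "drawn":    (36, 64),    # -> operation "back"
}

def classify_eyebrow(intensity: float) -> str | None:
    for state, (lo, hi) in EYEBROW_BANDS.items():
        if lo <= intensity <= hi:
            return state
    return None  # outside every registered band: no state identified

# e.g. classify_eyebrow(115) -> "raised"; classify_eyebrow(82) -> "standard"
```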
The description above also assumed a configuration in which the state of a single specific part of the user's face, contained in the detection signal input to the control box 200 from each of the eyebrow sensor 214, cheek sensor 216, and mouth sensor 218, is associated with an operation of the HMD 100. Other configurations are also possible. For example, a combination of the "eyebrow raised" state and the "cheek raised" state, in other words a "cheek raised while eyebrow raised" state, can be associated with an operation of the HMD 100 such as "fast forward." In this case, the CPU 202 judges from the detection signals input from the eyebrow sensor 214 and cheek sensor 216 that a face motion trigger has been input (see S108: Yes in FIG. 5) and executes the state determination process (S110 in FIG. 5; see FIG. 6 for details). In the state determination process, the CPU 202 makes the determination of S200 in FIG. 6 and, after answering it in the affirmative (see S200: Yes in FIG. 6), additionally makes the same determination as S204 in FIG. 6. If both the S200 and S204 determinations are affirmative (see S200, S204: Yes in FIG. 6), the CPU 202 judges that the state is "cheek raised while eyebrow raised" and identifies the operation "fast forward."
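A combined state can be handled by extending the lookup key from one part to the set of simultaneously deviating parts. A minimal sketch, reusing the hypothetical STANDARD mapping from above (labels again hypothetical); the same structure extends to combinations across both eyebrows, such as the "left eyebrow raised, right eyebrow lowered" state described next:

```python
# Sketch of a combined-state mapping: a pair of simultaneous deviations
# triggers one operation, e.g. "cheek raised while eyebrow raised".
COMBO_TABLE = {
    (("cheek", "raised"), ("eyebrow", "raised")): "fast_forward",
}

def lookup_combo(readings: dict[str, str]) -> str | None:
    """Match the sorted set of deviating parts against registered combos."""
    deviating = tuple(sorted(
        (part, state) for part, state in readings.items()
        if state != STANDARD.get(part)
    ))
    return COMBO_TABLE.get(deviating)
```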
Further, the description above configured the eyebrow sensor 214 to detect the state (movement) of one eyebrow, specifically the left eyebrow. Alternatively, the states of both eyebrows can be detected, and a combination of left and right eyebrow states, for example a "left eyebrow raised, right eyebrow lowered" state, can be associated with an operation of the HMD 100 such as "reduction." In this case, the CPU 202 judges from the detection signals input from the left and right eyebrow sensors 214 that a face motion trigger has been input (see S108: Yes in FIG. 5) and executes the state determination process (S110 in FIG. 5; see FIG. 6 for details). In the state determination process, the CPU 202 makes the S200 determination of FIG. 6 for the states of both eyebrows. If it answers in the affirmative for both (see S200: Yes in FIG. 6), it judges that the state is "left eyebrow raised, right eyebrow lowered" and identifies the operation "reduction."
In the description above, among the parts making up the face of the user wearing the HMD 100, the specific parts that reach a specific state only when the user consciously moves them were, in particular, the eyebrow, cheek, and mouth. Alternatively, the state of, for example, the forehead or the jaw can be detected.
Further, although "next," "back," "confirm," "cancel," "brightness adjustment," "brightness up," and "brightness down" were described as examples of the operations realized by the HMD 100 and the control box 200, other operations can also be registered in the table 2084. For example, when multiple pieces of content data 2082 are stored in the storage unit 208, an operation for "selecting and confirming the content data 2082 to play," or for "starting or ending (stopping) playback" of the content data 2082 (input via the operation unit 212 in the description above; see S104 in FIG. 5 for playback start), may be registered in the table 2084 in association with an eyebrow, cheek, or mouth state. An operation for "supplying power to, or cutting off the power supplied to," the HMD 100 or the control box 200 may likewise be registered in association with a predetermined state of the eyebrow or another part.
(Advantageous effects based on the configuration of the embodiment)
According to the configuration with the HMD 100 and the control box 200 in the embodiment above, the states of the eyebrow, cheek, and mouth, which do not change unless the user wearing the HMD 100 consciously moves them, are registered in association with the operations realized by the HMD 100 and the control box 200; the states of these parts are sensed, and when the state of the eyebrow, cheek, or mouth is a registered state, the operation associated with that state is executed.
This prevents operation instructions for actions the user does not intend from being input to the control box 200, while realizing hands-free operation. Concretely, while viewing a content image containing the work instruction for the first step of assembling a given product, the user can view and take in the work instruction for the second step simply by, for example, raising an eyebrow, without operating the HMD 100 or the control box 200 with hands that are busy working.
Claims (7)

- A head-mounted display that presents a content image indicated by content data to a user's eyes so that the user can recognize the content image, the head-mounted display comprising:
control means for controlling an operation of the head-mounted display; and
detection means for detecting that a state of a specific part of the user's face, which does not move unless the user consciously moves it, is a predetermined state,
wherein the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the specific part of the user's face is the predetermined state.
- The head-mounted display according to claim 1, wherein the detection means detects that a state of the user's eyebrow, as the state of the specific part of the user's face, is the predetermined state, and
the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the user's eyebrow is the predetermined state.
- The head-mounted display according to claim 1, wherein the detection means detects that a state of the user's cheek, as the state of the specific part of the user's face, is the predetermined state, and
the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the user's cheek is the predetermined state.
- The head-mounted display according to claim 1, wherein the detection means detects that a state of the user's mouth, as the state of the specific part of the user's face, is the predetermined state, and
the control means controls the operation associated with the predetermined state on condition that the detection means detects that the state of the user's mouth is the predetermined state.
- The head-mounted display according to any one of claims 1 to 4, wherein the detection means includes:
imaging means capable of capturing a specific image of the specific part of the user's face; and
analysis means for analyzing the specific image captured by the imaging means,
and detects that the state of the specific part of the user's face is the predetermined state according to an analysis result of the analysis means.
- The head-mounted display according to any one of claims 1 to 4, wherein the detection means includes:
a light-emitting element capable of irradiating the specific part of the user's face with light; and
a light-detecting element that detects the reflected light of the light irradiated onto the specific part of the user's face,
and detects that the state of the specific part of the user's face is the predetermined state when the intensity of the reflected light detected by the light-detecting element deviates from a predetermined reference value.
- The head-mounted display according to any one of claims 1 to 4, wherein the control means controls a first operation and a second operation as the operation of the head-mounted display,
the detection means detects the first state as the predetermined state when the state of the specific part of the user's face is a first state, and detects the second state as the predetermined state when the state of the specific part of the user's face is a second state, and
the control means controls the first operation associated with the first state on condition that the detection means detects the first state, and controls the second operation associated with the second state on condition that the detection means detects the second state.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008-308045 | 2008-12-02 | ||
JP2008308045A JP2010134057A (en) | 2008-12-02 | 2008-12-02 | Head-mounted display |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010064361A1 (en) | 2010-06-10 |
Family
ID=42233020
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/006012 WO2010064361A1 (en) | Head-mounted display | 2008-12-02 | 2009-11-11 |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2010134057A (en) |
WO (1) | WO2010064361A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015004948A (en) * | 2013-05-23 | 2015-01-08 | 独立行政法人理化学研究所 | Head mounted video display system and method, and head mounted video display program |
WO2016182504A1 (en) * | 2015-05-08 | 2016-11-17 | Chow Bryan Shwo-Kang | A virtual reality headset |
WO2018049747A1 (en) * | 2016-09-14 | 2018-03-22 | 歌尔科技有限公司 | Focus position determination method and device for virtual reality apparatus, and virtual reality apparatus |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2896986B1 (en) * | 2012-09-12 | 2021-02-24 | Sony Corporation | Image display device, image display method, and recording medium |
US10234938B2 (en) * | 2015-01-31 | 2019-03-19 | Brian Lee Moffat | Control of a computer via distortions of facial geometry |
US10775880B2 (en) * | 2016-11-30 | 2020-09-15 | Universal City Studios Llc | Animated character head systems and methods |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS61199178A (en) * | 1985-03-01 | 1986-09-03 | Nippon Telegr & Teleph Corp <Ntt> | Information input system |
JPH10301675A (en) * | 1997-02-28 | 1998-11-13 | Toshiba Corp | Multimodal interface device and multimodal interface method |
JP2003050663A (en) * | 2001-08-06 | 2003-02-21 | Hitachi Ltd | Sign language sentence recognizing device and user interface |
JP2004086364A (en) * | 2002-08-23 | 2004-03-18 | Sony Corp | Real world indicating device |
JP2004314855A (en) * | 2003-04-17 | 2004-11-11 | Sumitomo Electric Ind Ltd | Apparatus operation control method and apparatus operation control system |
JP2005293061A (en) * | 2004-03-31 | 2005-10-20 | Advanced Telecommunication Research Institute International | User interface device and user interface program |
JP2007220010A (en) * | 2006-02-20 | 2007-08-30 | Canon Inc | Electronic appliances |
JP2008065169A (en) * | 2006-09-08 | 2008-03-21 | Sony Corp | Display device and display method |
- 2008-12-02 JP JP2008308045A patent/JP2010134057A/en active Pending
- 2009-11-11 WO PCT/JP2009/006012 patent/WO2010064361A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP2010134057A (en) | 2010-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10133344B2 (en) | Head mounted display apparatus | |
CN106471419B (en) | Management information is shown | |
US10031576B2 (en) | Speech generation device with a head mounted display unit | |
US9959591B2 (en) | Display apparatus, method for controlling display apparatus, and program | |
WO2010064361A1 (en) | Head-mounted display | |
EP2751609B1 (en) | Head mounted display with iris scan profiling | |
US9081416B2 (en) | Device, head mounted display, control method of device and control method of head mounted display | |
US10140768B2 (en) | Head mounted display, method of controlling head mounted display, and computer program | |
JP6089705B2 (en) | Display device and control method of display device | |
JP5272827B2 (en) | Head mounted display | |
JP6492531B2 (en) | Display device and control method of display device | |
US20090115968A1 (en) | Display apparatus, display method, display program, integrated circuit, goggle-type head-mounted display, vehicle, monocle, and stationary display | |
US9261959B1 (en) | Input detection | |
WO2017053871A2 (en) | Methods and devices for providing enhanced visual acuity | |
WO2010073879A1 (en) | Head-mounted display | |
JP5953714B2 (en) | Device, head-mounted display device, device control method, and head-mounted display device control method | |
JP6459380B2 (en) | Head-mounted display device, head-mounted display device control method, and computer program | |
JP2016224086A (en) | Display device, control method of display device and program | |
JP2021119431A (en) | Display system, controller, control method of display system and program | |
JP2007232753A (en) | Spectacles specifications setting device and visual field detecting device | |
JP2014130204A (en) | Display device, display system, and control method of display device | |
JP2010085786A (en) | Head-mounted display device | |
JP4022921B2 (en) | Retina scanning display device | |
JP2008212718A (en) | Visual field detection system | |
JP2015177404A (en) | Head-mounted display device and control method therefor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 09830132; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 09830132; Country of ref document: EP; Kind code of ref document: A1 |