WO2022070415A1 - Head-mounted display device - Google Patents

Head-mounted display device

Info

Publication number
WO2022070415A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
head
signal
display device
mounted display
Prior art date
Application number
PCT/JP2020/037601
Other languages
French (fr)
Japanese (ja)
Inventor
康宣 橋本
ニコラス サイモン ウォーカー
ジャンジャスパー ヴァンデンバーグ
治 川前
Original Assignee
マクセル株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by マクセル株式会社
Priority to PCT/JP2020/037601
Publication of WO2022070415A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present invention relates to a technique of a head-mounted display device (HMD).
  • HMD: head-mounted display device
  • Eyeglass-type and goggle-type HMDs worn on the user's head have been put into practical use.
  • the HMD displays an image on a display surface of a display method such as a transparent type or a non-transparent type.
  • the image can be displayed in a form such as augmented reality (AR) or virtual reality (VR).
  • AR: augmented reality
  • VR: virtual reality
  • the HMD can superimpose and display a virtual image on a real image viewed by a user through a display surface based on an OS, an application program, or the like.
  • Patent Document 1 describes, as an "information processing method" or the like, enabling the user "to intuitively grasp what kind of state the user's own psychological state is," along with the following.
  • The eyewear shown in FIG. 1 of that document has bioelectrodes mounted on the nose pads and the bridge portion, and acquires an electro-oculography signal from those bioelectrodes.
  • The processing device of this eyewear transmits a sensor signal, the electro-oculography signal, and the like to an external device or a server.
  • the external device displays a predetermined figure formed from a plurality of objects representing the psychological state of the user based on the sensor signal, the electrooculogram signal, and the like.
  • Patent Document 2 describes, as a "mental state estimation system" or the like, that "not only can the mental state be estimated quickly and accurately, but switching or an instantaneous change of the mental state can also be detected," along with the following.
  • This system estimates mental state using features related to muscle activity in facial muscles.
  • This system uses surface electrodes to acquire myoelectric signals and analyze time-series data of myoelectric signals to acquire features.
  • Patent Document 1 describes that a system including the eyeglasses calculates psychological parameters from the user's electro-oculography signal, measures the user's blinks, eye movements, body movements, and the like, and acquires a psychological state such as vitality, concentration, or calmness.
  • Output content for the user includes images such as a graphical user interface (GUI).
  • GUI: graphical user interface
  • One conceivable technique detects a biological signal, such as an electro-oculography signal, from the facial muscles to estimate the user's psychological/emotional state.
  • Another conceivable technique estimates, from a biological signal, the influence that an image of the OS or an application has on the user's psychological/emotional state, and adjusts the output content based on that estimate.
  • An object of the present invention is to provide, with respect to HMD technology, a technique for quickly and deeply estimating and grasping the user's psychological/emotional state, and a technique for appropriately adjusting the output content based on that estimate.
  • a typical embodiment of the present invention has the following configuration.
  • the head-mounted display device of the embodiment includes a device that acquires an electromyographic signal from a user's facial muscle, and uses the electromyographic signal as input information. Further, the head-mounted display device of the embodiment adjusts the output content according to the psychological state estimated based on the electromyographic signal. Further, the head-mounted display device of the embodiment adjusts the type or amount of information of the GUI image to be displayed as the adjustment of the output content.
  • According to the typical embodiment of the present invention, the user's psychological/emotional state can be quickly and deeply estimated and grasped with respect to HMD technology, and the output content can be appropriately adjusted based on that estimate. Issues, configurations, effects, and the like other than those described above will be described in the embodiments for carrying out the invention.
  • An outline of the configuration of the HMD of Embodiment 1 of the present invention is shown.
  • the configuration of the HMD of the first embodiment is shown.
  • A front view of the HMD of Embodiment 1 is shown.
  • An example of the configuration of the biological signal acquisition unit and the electrode of the HMD of the first embodiment is shown.
  • An example of the functional block configuration of the HMD of the first embodiment is shown.
  • the processing flow at the time of a test is shown.
  • an output example at the time of a test is shown.
  • an output example at the time of evaluation input is shown.
  • the processing flow at the time of user processing at the time of using an application is shown.
  • In the first embodiment, an example of the correspondence between the output content and the EMG signal is shown.
  • In the first embodiment, an example of control information is shown.
  • an example of a target facial muscle is shown.
  • the configuration of the HMD of the modification of the first embodiment is shown.
  • the processing flow in the HMD of Embodiment 2 of this invention is shown.
  • an example of user setting information regarding a voluntary input operation is shown.
  • an example of a voluntary input operation and a display image is shown.
  • the processing flow in the HMD of Embodiment 3 of this invention is shown.
  • a control example is shown in the HMD of the third embodiment.
  • the same parts are designated by the same reference numerals in principle, and repeated description thereof will be omitted.
  • the hardware main body that executes them is a processor, or a controller or the like composed of the processor.
  • the computer executes processing according to the program read out on the memory by the processor while appropriately using resources such as the memory and the communication interface. As a result, a predetermined function, a processing unit, and the like are realized.
  • the processor is composed of, for example, a semiconductor device such as a CPU or a GPU.
  • a processor is composed of a device or a circuit capable of performing a predetermined operation.
  • the processing is not limited to software program processing, but can be implemented by a dedicated circuit. FPGA, ASIC, etc. can be applied to the dedicated circuit.
  • the program may be installed in the target computer as data in advance, or may be distributed and installed as data from the program source to the target computer.
  • the program source may be a program distribution server on a communication network or a non-transitory computer-readable storage medium.
  • the program may be composed of a plurality of program modules.
  • various data and information may be described by expressions such as tables and lists, but the structure and format are not limited to these.
  • data and information for identifying various elements may be described by expressions such as identification information, an identifier, an ID, a name, and a number, but these expressions can be replaced.
  • the HMD 1, which is the head-mounted display device of Embodiment 1 of the present invention, will be described with reference to FIGS. 1 to 13.
  • the HMD 1 of the first embodiment has a function of acquiring and inputting information representing the user's unconscious psychological/emotional state based on detection of an electromyographic signal from a facial muscle, and a function of adjusting output content, such as GUI image display, based on that information.
  • the HMD 1 of the first embodiment shown in FIG. 1 and the like includes a function and devices 2 and 4 for acquiring an EMG signal from a user's facial muscle, and uses the acquired EMG signal as input information.
  • the HMD 1 of the first embodiment adjusts the output content, such as a GUI image, according to the psychological state estimated from the electromyographic signal. Further, the HMD 1 of the first embodiment adjusts the type of GUI image to be displayed, the amount of information, and the like as the adjustment of the output content.
  • the estimation and grasp of the psychological state are realized by controlling the correspondence between the EMG signal and the output content (for example, the image displayed on the display surface 11), so it is not necessary to handle information such as a parameter value representing the psychological state. Note that the HMD of a modified example may control the correspondence among three elements (the EMG signal, the psychological state, and the output content), reading and writing information such as a parameter value indicating the psychological state.
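As an illustrative sketch only (the patent text does not specify an algorithm), the direct correspondence above can be modeled as a function that maps a measured EMG level straight to an output-content setting, with no explicit psychological-state parameter stored; the threshold values and level names are assumptions.

```python
def gui_information_amount(emg_level: float) -> str:
    """Map a normalized EMG intensity (0.0-1.0) from the facial muscles
    directly to the amount of GUI information to display. The mapping
    and thresholds are hypothetical, purely for illustration."""
    if emg_level < 0.3:    # little facial-muscle activity: relaxed state
        return "full"      # display the full GUI
    elif emg_level < 0.7:  # moderate activity
        return "reduced"   # hide secondary GUI elements
    else:                  # strong activity: stress or intense concentration
        return "minimal"   # display only essential information
```

Note that no psychological-state value is read or written here; the EMG level selects the output content directly, which is the correspondence the first embodiment describes.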
  • the user's psychological / emotional state includes states such as comfort / discomfort, fatigue, stress, vitality, relaxation, concentration, arousal, and anger.
  • the electromyogram signal may be described as a muscle potential signal.
  • the electromyogram may be described as EMG.
  • an electromyogram is a record of the weak potential changes generated in a muscle, plotted on axes of electric potential and time. In medicine, EMG is used for auxiliary diagnosis of involuntary movements.
  • the electro-oculography signal may be described as an eye potential signal.
  • the electrooculogram may be described as EOG.
  • the electro-oculography signal is related to the movement of the eye and records the potential due to the movement of the eyeball.
  • the electro-oculography signal also appears as a signal change during blinking or eye movement.
  • the electro-oculography signal is also mixed with the influence of biological phenomena other than eye movement (for example, smiling).
  • FIG. 1 shows a schematic view of how the HMD 1 of the first embodiment is attached to the user's head as an outline of the configuration.
  • (A) is a case of a glasses-type HMD
  • (B) is a case of a goggle-type HMD.
  • the HMD 1 is not limited to a single configuration, and may be provided with a remote controller, or may be connected to an external device such as a smartphone, a PC, or a server by communication.
  • the remote controller is an operation device that can input the operation by the user's hand.
  • the HMD 1 performs, for example, short-range wireless communication with the remote controller. By operating the remote controller, the user can input instructions to the HMD 1 and move a cursor on the display surface 11.
  • An external device such as a smartphone has, for example, an application program or data. Those programs and data may be provided to the HMD 1 from the external device. Data from the HMD 1 may also be stored in and displayed on an external device.
  • the application may be, for example, an application corresponding to AR or VR functions.
  • Examples of the application include those that guide the user in the space and those that provide work support.
  • the HMD 1 generates a virtual image for guidance and work support based on the program processing of the OS or the application, and displays it on the display surface 11.
  • the images also include various GUI images such as cursors, buttons, commands, icons, menus, windows and the like.
  • the HMD1 of the first embodiment includes unique devices 2 and 4 for handling an electromyographic signal.
  • the devices 2 and 4 are an electromyogram signal detection device, in other words, an electrode mounting portion, which is arranged so as to be in contact with a portion of facial facial muscles.
  • the device 2 is arranged so as to be in contact with the vicinity of the cheek in particular, and detects an electromyographic signal from the facial muscles near the cheek.
  • the device 4 is arranged so as to be in contact with the vicinity of the eyebrows in particular, and detects an electromyographic signal from the facial muscles near the eyebrows.
  • the controller of HMD1 performs control processing using those EMG signals.
  • the HMD 1 of the first embodiment has, as a unique function, an output content adjusting function using an EMG signal. Based on the EMG signals acquired using the devices 2 and 4, the HMD 1 automatically adjusts the output content, such as GUI images produced by the OS, applications, and user interface functions, according to the estimated unconscious psychological state.
  • FIG. 2 shows the detailed configuration of the HMD1 of FIG. 1, particularly the spectacle-shaped HMD1 of (A), as a perspective view.
  • X, Y, and Z are shown as directions.
  • the X direction is the front-back direction
  • the Y direction is the left-right direction
  • the Z direction is the up-down direction.
  • the HMD 1 includes a spectacle-shaped housing 10, a display device including a display surface 11, a device 2 related to an EMG signal, a device 4, and the like.
  • a controller, a display device, a camera 12, a distance measuring sensor 13, a sensor unit 14, a biological signal acquisition unit 20, a voice output device 19, and the like are mounted on the housing 10.
  • the housing 10 has, as its parts, a binocular portion 10a provided with the display surface 11, a bridge portion 10b connecting the left-eye portion and the right-eye portion of the binocular portion 10a, temple portions 10c on the left and right outside the binocular portion 10a, and the like.
  • the binocular portion 10a is provided with a display surface 11, a camera 12, a distance measuring sensor 13, a device 4, an electrode 5, and the like.
  • An electrode 5 for detecting an electro-oculography signal is provided in the vicinity of the nose pad of the bridge portion 10b.
  • a controller, a biological signal acquisition unit 20, a sensor unit 14, an audio output device 19 including a speaker and an earphone terminal, and the like are mounted on the temple unit 10c.
  • the display device including the display surface 11 uses a transparent display method, but the display device is not limited to this; a non-transparent (in other words, VR-type) display method may also be used.
  • the camera 12 has, for example, two cameras arranged on the left and right sides of the housing 10, and captures a range including the front of the HMD 1 to acquire an image.
  • the distance measuring sensor 13 is a sensor that measures the distance between the HMD 1 and an object in the outside world. As the distance measuring sensor 13, a TOF (Time Of Flight) type sensor may be used, or a stereo camera or another type may be used.
  • the sensor unit 14 is a portion provided with various sensors other than the camera 12 and the distance measuring sensor 13.
  • the sensor unit 14 includes a group of sensors for detecting the position and orientation of the HMD1. The position where the camera 12, the distance measuring sensor 13, and the sensor unit 14 are arranged is not limited to the position shown in the figure.
  • Device 2 is located near the cheek.
  • the device 4 is arranged near the eyebrows.
  • the device 2 is configured as one set on the left and right of the device 2R on the right eye side and the device 2L on the left eye side, and has a symmetrical shape with respect to the XZ plane.
  • the device 4 is configured as one set on the left and right of the device 4R on the right eye side and the device 4L on the left eye side, and has a symmetrical shape with respect to the XZ plane.
  • the devices 2 and 4 are provided as an extension to the spectacle-shaped housing 10.
  • Devices 2 (2R, 2L) are connected to the left and right temple portions 10c so as to extend forward (in the V direction in the figure) on the lower side.
  • the housing 25 of the device 2 has a shape extending in the V direction, for example, substantially like an arm, but is not limited thereto.
  • the left and right devices 4 (4R, 4L) are connected to the upper side of the binocular portion 10a so as to be exposed to the eyebrow side.
  • the device 4 is arranged in the goggle type housing.
  • the devices 2 and 4 may each be detachably connected to the housing 10, or may be fixedly provided as a part of the housing 10 (the temple portion 10c or the binocular portion 10a) so as to extend continuously from it.
  • Each of the device 2R and the device 2L is provided with an electrode 3 on the surface of the housing 25 facing inward in the Y direction.
  • Each of the device 4R and the device 4L is provided with an electrode 3 on a surface facing the eyebrow side in the X direction.
  • the electrode 3 is a sensor electrode for detecting an electromyographic signal.
  • in the device 2, an electrode 3 and a voice input device 18 including a microphone are arranged along the V direction, which is the longitudinal direction.
  • the devices 2 and 4 also include a circuit and the like connected to the electrode 3 and the biological signal acquisition unit 20 of the temple unit 10c.
  • the biological signal acquisition unit 20 is built in the left and right temple units 10c.
  • the biological signal acquisition unit 20 includes an electromyogram signal sensor 201 and an electro-oculography signal sensor 202, as shown in FIG. 4 described later.
  • the electromyogram signal sensor 201 of the biological signal acquisition unit 20 detects the electromyographic signals from the electrodes 3 of the device 2 and the electrodes 3 of the device 4, and the electro-oculography signal sensor 202 detects the electro-oculography signal from the electrodes 5.
  • the biological signal acquisition unit 20 acquires these electromyographic signals and electro-oculography signals as biological signals, and performs predetermined processing.
  • the biological signal acquisition unit 20 cooperates with the controller 101 (FIG. 4) of the HMD1.
  • the biological signal acquisition unit 20 may be mounted as a part of the controller 101.
  • the body parts from which the electromyographic signal is detected, that is, the portions of the facial muscles contacted by the electrodes 3 serving as sensor electrodes, include in particular the vicinity of the cheeks and the vicinity of the eyebrows.
  • the target is not limited to this, and at least one facial muscle may be targeted.
  • the HMD 1 arbitrarily uses the electro-oculography signal from the electrode 5 as information independent of the electromyogram signal from the electrode 3.
  • the HMD 1 may determine a state such as blinking or eye movement from the electro-oculography signal.
  • the HMD of the modified example may be configured not to include the electrode 5 for the electro-oculography signal.
  • FIG. 3 shows a configuration in which the front surface of the HMD 1 in FIG. 2 is viewed, and in particular, shows a configuration example of the electrodes 3 of the devices 2 and 4 for the electromyographic signal, the electrodes 5 for the electrooculogram signal, and the like. Electrodes 5a and 5b are arranged as electrodes 5 in the vicinity of the nose pad. The electrode 5 comes into contact with the vicinity of the user's nose (FIG. 12).
  • in the device 2R, two electrodes 3, the electrode 3a and the electrode 3b, are arranged as one pair p1.
  • in the device 2L, two electrodes 3, the electrode 3c and the electrode 3d, are arranged as one pair p2.
  • the housing 25 of the device 2 (2R, 2L) is arranged so as to be inclined to the left and right outward in the Z direction, for example.
  • the electrode 3 of the device 2 protrudes slightly outward from the inner surface in the Y direction of the housing 25, and easily comes into contact with the skin near the cheek.
  • two electrodes 3 are arranged on the left and right, respectively.
  • in the device 4R, two electrodes 3, the electrode 3e and the electrode 3f, are arranged as one pair p3.
  • in the device 4L, two electrodes 3, the electrode 3g and the electrode 3h, are arranged as one pair p4.
  • the electrode 3 of the device 4 protrudes slightly from the back surface to facilitate contact with the skin near the eyebrows.
  • the electrode 3 group (3a to 3d) of the device 2 provided on the buccal side is referred to as the electrode 3A
  • the electrode 3 group (3e to 3h) of the device 4 provided on the eyebrow side is referred to as the electrode 3B.
  • the electrode 3A is a sensor electrode for detecting an electromyographic signal from the facial muscle near the cheek.
  • the electrode 3B is a sensor electrode for detecting an electromyographic signal from the facial muscle near the eyebrows.
  • the electrode 3A contacts the skin near the cheek.
  • the electrode 3B contacts the skin near the eyebrows.
  • the shape of the electrode 3 may be a disk or the like.
  • the two electrodes 3 are used as one pair.
  • the electromyographic signal is detected as a potential difference between the pair of electrodes 3.
  • the potential difference between the two electrodes 3 is used as a 1-channel EMG signal.
  • the electro-oculography signal is detected for each electrode 5.
  • the potential difference between one electrode 5 and the ground electrode 21 is used as a one-channel ocular potential signal.
  • the ground electrode 21 is a ground electrode for the electrode 3 and the electrode 5 of the biological signal acquisition unit 20, and is common to a plurality of sensor electrodes.
  • the ground electrode 21 is arranged on a part of the housing 10, for example, as a surface facing the head side, on a surface near the ear hook at the rear of the temple portion 10c, and comes into contact with the vicinity of the ear.
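The channel derivation described above can be sketched as follows: each EMG channel is the potential difference between the two electrodes 3 of a pair, while each eye potential channel references a single electrode 5 against the common ground electrode 21. This is a minimal illustration; the sample values and function names are assumptions, not part of the patent.

```python
def emg_channel(v_pair_a, v_pair_b):
    """One-channel EMG signal: sample-wise potential difference between
    the two electrodes 3 of a pair (e.g. pair p1 = electrodes 3a, 3b)."""
    return [a - b for a, b in zip(v_pair_a, v_pair_b)]

def eog_channel(v_electrode, v_ground):
    """One-channel eye potential signal: difference between a single
    electrode 5 and the common ground electrode 21."""
    return [e - g for e, g in zip(v_electrode, v_ground)]

# Example: signal sg1 (right-cheek EMG) from hypothetical samples in mV
v_3a = [0.12, 0.15, 0.11]   # electrode 3a referenced to ground
v_3b = [0.10, 0.10, 0.10]   # electrode 3b referenced to ground
sg1 = emg_channel(v_3a, v_3b)
```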
  • the type, number, position, shape, and the like of the electrodes and devices for acquiring the biological signal mounted on the HMD are not limited to the example of the first embodiment.
  • the HMD may be configured to be provided with a device including electrodes so that an electromyographic signal of at least one facial muscle can be acquired.
  • the band 26 of FIG. 2 connects between the rear ends of the ear hooks of the left and right temple portions 10c of the housing 10. As a result, the housing 10 is stably fixed to the user's head, and the contact between the electrodes 3 and 5 and the skin is more reliable.
  • when the user wears the HMD 1 on the head, the devices 2 (2R, 2L) are arranged so as to contact the vicinity of the cheekbones, and the devices 4 (4R, 4L) are arranged so as to contact the vicinity of the eyebrows.
  • the device 2 (2R, 2L) is arranged in such a state that a part of the portion including the electrode 3 comes into contact with and rides on the skin of the facial muscle near the cheekbone.
  • the device 2 (particularly the electrode 3) becomes a fulcrum that supports a part of the load of the HMD 1.
  • the load of the entire HMD is supported by the nose and ears where the housing mainly contacts.
  • the load of the entire HMD 1 is supported not only at the nose and the ear, but also at the device 2, and the load is distributed at the plurality of the devices.
  • the shape and the like of the device 2 are designed so that the contact between the electrode 3 and the portion of the facial muscle and the state of the load on the skin are suitable.
  • the HMD 1 of the first embodiment can reduce the weight of the housing 10 or the like applied to the user's ears or nose as compared with the conventional HMD without the device 2 or the like.
  • the user's feeling of wearing the HMD can be improved, and the discomfort due to the weight of the HMD can be prevented.
  • the contact between the electrode 3 and the portion of the facial muscle becomes better, and the detection of the EMG signal becomes more stable.
  • the inner surface of the housing 25 of the device 2 may be arranged diagonally with respect to the XZ surface or may be formed as a curved surface so as to fit well with the cheek surface, for example.
  • the device 2 may have an elliptical shape or another shape in a cross section perpendicular to the V direction.
  • the device 2 may be bent in the V direction.
  • the device 2 is not limited to rigidity and may be composed of an elastic body.
  • the HMD 1 has a mechanism that can adjust the position of the device 2 from the temple portion 10c of the housing 10 by sliding in each direction, bending, stretching, or the like.
  • the device 2 may be expanded and contracted in the V direction, for example.
  • the tip of the device 2 may be movable inward or outward in the Y direction or in another direction with the connection point with the temple portion 10c as a fulcrum.
  • the device 2 may be pressed inward against the face.
  • the device 2 is stably held, and the electrode 3 of the device 2 and the portion of the facial muscle are in constant contact with each other.
  • the device 4 has a mechanism capable of adjusting the state such as arrangement and shape.
  • a voice input device 18 including a microphone is also mounted near the tip of the device 2. Since this microphone is placed near the mouth, it is easy to collect sound. Not limited to this, other components may be mounted on the devices 2 and 4.
  • the HMD 1 may be provided with a mechanism that can be adjusted by the user so that the devices 2 and 4 do not come into contact with the face when the biological signal is not used, that is, when the function is off.
  • the mechanism may be such that the devices 2 and 4 can be attached to and detached from the housing 10.
  • a mechanism or the like that allows the devices 2 and 4 to be spread outward with respect to the face may be used.
  • FIG. 4 shows a configuration example of the biological signal acquisition unit 20 and the sensor electrode.
  • the biological signal acquisition unit 20 includes an electromyogram signal sensor 201 and an electro-oculography signal sensor 202.
  • the EMG signal sensor 201 acquires two channels of EMG signals (signals sg1, sg2) from the electrodes 3A (3a to 3d) of the device 2, and two channels of EMG signals (signals sg3, sg4) from the electrodes 3B (3e to 3h) of the device 4.
  • the electro-oculography signal sensor 202 acquires a 2-channel electro-oculography signal (signals sg5, sg6) from the electrodes 5 (5a, 5b).
  • the biological signal acquisition unit 20 acquires biological signals from the user's head and face by using these sensors.
  • the electromyographic signal from the pair of two electrodes 3 is used as a one-channel signal.
  • the potential difference between the detection signal of the electrode 3a (the difference from the ground electrode 21) and the detection signal of the electrode 3b (the difference from the ground electrode 21) is the signal sg1, one electromyographic signal.
  • This signal sg1 is an electromyographic signal of the facial muscle near the right cheek.
  • the signal sg2 from the electrodes 3c and 3d of the device 2L is an electromyographic signal of the facial muscle near the left cheek.
  • the signal sg3 from the electrodes 3e and 3f of the device 4R is an electromyographic signal of the facial muscle near the right eyebrow.
  • the signal sg4 from the electrodes 3g and 3h of the device 4L is an electromyographic signal of the facial muscle near the left eyebrow.
  • the EMG signal sensor 201 and the electrooculogram signal sensor 202 are provided with an amplifier circuit, a noise removal circuit, a rectifier circuit, and the like, and the detected signal is processed by these circuits.
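A digital counterpart of this conditioning chain might look like the following sketch: DC-offset removal standing in for the analog amplification and noise-removal stages, full-wave rectification, and a moving-average envelope. This is a generic EMG-processing illustration under those assumptions, not the circuit the patent describes.

```python
def condition_emg(raw, window=5):
    """Sketch of the conditioning chain: remove the DC offset, full-wave
    rectify, then smooth with a moving average to obtain an envelope."""
    mean = sum(raw) / len(raw)
    rectified = [abs(x - mean) for x in raw]       # full-wave rectification
    half = window // 2
    envelope = []
    for i in range(len(rectified)):
        seg = rectified[max(0, i - half):i + half + 1]
        envelope.append(sum(seg) / len(seg))       # moving-average smoothing
    return envelope
```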
  • the biological signal acquisition unit 20 acquires four EMG signals (4 channels; signals sg1 to sg4) from the EMG signal sensor 201 and two electro-oculography signals (2 channels; signals sg5, sg6) from the electro-oculography signal sensor 202.
  • the biological signal acquisition unit 20 processes those biological signals and stores them in a memory as data.
  • the controller 101 performs a predetermined control process using the data of those biological signals.
  • the processing performed by the biological signal acquisition unit 20 may include calculating a predetermined feature amount from the electromyogram signal.
  • as the feature amount of the EMG signal, the period, frequency, and the like may be used in addition to a level such as intensity or amplitude.
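As one possible realization of these feature amounts (an assumption, since the patent does not fix a formula), the intensity/amplitude can be taken as the RMS value, and the frequency and period can be estimated from zero crossings of the mean-removed signal:

```python
import math

def emg_features(signal, fs):
    """Compute illustrative feature amounts of an EMG channel:
    RMS amplitude, and frequency/period from zero-crossing counts."""
    mean = sum(signal) / len(signal)
    x = [s - mean for s in signal]                 # remove DC offset
    rms = math.sqrt(sum(v * v for v in x) / len(x))
    crossings = sum(1 for a, b in zip(x, x[1:]) if a * b < 0)
    duration = len(x) / fs                         # seconds of data
    freq = crossings / (2.0 * duration)            # two crossings per cycle
    period = 1.0 / freq if freq > 0 else float("inf")
    return {"rms": rms, "frequency": freq, "period": period}
```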
  • FIG. 5 shows an example of a functional block configuration of HMD1.
  • the HMD 1 includes a processor 101, a memory 102, the camera 12, the distance measuring sensor 13, the sensor unit 14, a display device 103 including the display surface 11, a communication device 104, a voice input device 18 including a microphone, a voice output device 19 including a speaker, an operation input unit 107, a battery 108, and the like.
  • the processor 101 is composed of a CPU, ROM, RAM, and the like, and constitutes the controller of the HMD 1.
  • the processor 101 realizes functions such as an OS, middleware, and applications and other functions by executing processing according to the control program 31 and the application program 32 of the memory 102.
  • the memory 102 is composed of a non-volatile storage device or the like, and stores various data and information handled by the processor 101 and the like.
  • the memory 102 also stores images, detection information, and the like acquired by the camera 12 and the like as temporary information.
  • the camera 12 acquires an image by converting the light incident from the lens into an electric signal by an image sensor.
  • the distance measuring sensor 13 calculates the distance to the object from the time until the light emitted to the outside hits the object and returns.
  • the sensor unit 14 includes, for example, an acceleration sensor 141, a gyro sensor (in other words, an angular velocity sensor) 142, a geomagnetic sensor 143, and a GPS receiver 144.
  • the sensor unit 14 uses the detection information of these sensors to detect states such as the position, orientation, and movement of the HMD1.
  • the HMD is not limited to this, and may include an illuminance sensor, a proximity sensor, a barometric pressure sensor, and the like.
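The detection of orientation from this sensor group can be illustrated with one step of a generic complementary filter that fuses the gyro sensor 142 (short-term rotation) with the gravity direction from the acceleration sensor 141 (long-term reference); this sketch and its parameter values are assumptions, not the patent's method.

```python
import math

def complementary_pitch(prev_angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One update of a complementary filter for pitch (radians):
    integrate the gyro rate, then pull gently toward the tilt angle
    implied by the accelerometer's measurement of gravity."""
    accel_angle = math.atan2(accel_x, accel_z)   # tilt from gravity direction
    gyro_angle = prev_angle + gyro_rate * dt     # short-term gyro integration
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

Called at each sampling step, such a filter tracks fast head motion through the gyro while the accelerometer corrects long-term drift.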
  • the display device 103 includes a display drive circuit and a display surface 11, and displays an image on the display surface 11 based on the display information 34.
  • the communication device 104 includes a communication processing circuit, an antenna, and the like corresponding to various predetermined communication interfaces. Examples of communication interfaces include mobile networks, Wi-Fi (registered trademark), Bluetooth (registered trademark), infrared rays and the like.
  • the communication device 104 performs wireless communication processing and the like with an external device.
  • the communication device 104 also performs short-range communication processing with the actuator.
  • the voice input device 18 converts the input voice from the microphone into voice data.
  • the voice output device 19 outputs voice from a speaker or the like based on the voice data.
  • the voice input device may include a voice recognition function.
  • the voice output device may include a voice synthesis function.
  • the operation input unit 107 is a part that receives operation inputs to the HMD 1, such as power on / off and volume adjustment, and is composed of a hardware button, a touch sensor, and the like.
  • the battery 108 supplies electric power to each part.
  • the memory 102 stores a control program 31, an application program 32, setting information 33, display information 34, biological signal data 35, control information 36, and the like.
  • the control program 31 is a program for realizing the unique function in the first embodiment.
  • the application program 32 is a program that realizes predetermined functions such as user guidance and work support by, for example, AR.
  • the setting information 33 includes system setting information and user setting information related to each function.
  • the display information 34 is information or data for displaying an image on the display surface 11.
  • the biological signal data 35 is the data of the biological signal acquired by the biological signal acquisition unit 20 and the processed data thereof.
  • the control information 36 is information for management / control related to a function peculiar to the first embodiment (output content adjusting function using an EMG signal).
  • the control information 36, which will be described later, is information/data for controlling the output contents based on tests, evaluations, learning, and the like, and on the estimation of a psychological state from biological signals.
  • the controller by the processor 101 has a communication control unit 101A, a display control unit 101B, a data processing unit 101C, and a data acquisition unit 101D as a configuration example of a functional block realized by processing.
  • the communication control unit 101A controls communication processing using the communication device 104 when communicating with an external device or the like.
  • the display control unit 101B controls the image display on the display surface 11 of the display device 103 by using the display information 34.
  • the data processing unit 101C performs processing related to a unique function while reading and writing the biological signal data 35 and the control information 36.
  • the data acquisition unit 101D acquires various information / data necessary for processing.
  • the data acquisition unit 101D acquires biological signal data from the biological signal acquisition unit 20, and acquires various data from the camera 12, the distance measuring sensor 13, the sensor unit 14, and the like.
  • FIG. 12 schematically shows an example of the facial muscles.
  • Examples of facial muscles from which an electromyographic signal can be acquired by the electrodes 3 provided in the HMD 1 of the first embodiment include the following. That is, the facial muscles include the frontalis muscle, the corrugator supercilii muscle, the orbicularis oculi muscle, the levator labii superioris muscle, the risorius muscle, the zygomaticus minor muscle, and the zygomaticus major muscle as shown in the figures.
  • electrodes 3 are provided for detecting signals from at least one of the frontalis muscle, the corrugator supercilii muscle, the orbicularis oculi muscle, and the levator labii muscle.
  • the electrode 3B is arranged on the device 4 arranged on the upper side of the binocular portion 10a in the vicinity of the binocular portion 10a. With this electrode 3B, it is possible to efficiently detect the EMG signal from those facial muscles, particularly the corrugator supercilii muscle.
  • an electrode 3 is arranged for detecting a signal from at least one of these facial muscles.
  • the electrode 3A is arranged on the device 2 arranged so as to extend from the temple portion 10c along the cheek. With this electrode 3A, it is possible to efficiently detect the electromyographic signal from those facial muscles, particularly the portion from the zygomaticus minor muscle to the zygomaticus major muscle.
  • Electrodes 5 for acquiring electrooculogram signals are arranged near the nasal muscles and the orbicularis oculi muscles. Further, in the HMD1 of the first embodiment, as described above, a part of the load of the HMD1 is supported by the electrodes 3A arranged for the facial muscles near the cheekbones located below the wearer's eyes.
  • the target facial muscles are, for example, the orbicularis oculi muscle, the levator labii superior muscle, the risorius muscle, the zygomaticus minor muscle, or the zygomaticus major muscle.
  • the psychological state can be inferred from the EMG signal of the facial muscles.
  • a known technique may be applied as the method of estimating and evaluating a psychological state from an electromyographic signal of a facial muscle.
  • the psychological states that can be inferred from the EMG signal based on the known technique include states such as comfort / discomfort, relaxation, concentration, stress, frustration, fatigue, arousal, and anger.
  • the degree of these various psychological and emotional states can also be inferred. For example, the degree of comfort may be quantified in multiple stages.
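As a minimal sketch of such a multi-stage quantification (assuming a comfort score normalized to [0, 1], which the embodiment does not specify; the function name and stage count are likewise illustrative):

```python
def comfort_stage(comfort_score, num_stages=5):
    """Quantize a continuous comfort score in [0.0, 1.0] into stages 1..num_stages.

    Stage 1 is least comfortable, num_stages is most comfortable; the score
    range and the number of stages are illustrative assumptions.
    """
    score = min(max(comfort_score, 0.0), 1.0)  # clamp to the assumed range
    return min(int(score * num_stages) + 1, num_stages)
```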
  • the psychological state estimated from the EMG signal is associated with the state of the user's evaluation of the GUI image which is the output content; simply put, it corresponds to how the user feels about the output content.
  • the HMD 1 displays an image on the display surface 11 and acquires an electromyogram signal through the device 2 or the like, and adjusts a GUI image or the like as an output content according to a psychological state estimated by the electromyogram signal.
  • the HMD 1 of the first embodiment adjusts the output content to a more suitable one before the physical movement of the user is affected, for example, before the effect appears as a large stress.
  • This makes it possible to provide the user with a more suitable HMD usage environment.
  • when the HMD 1 displays a GUI image as the output of a guidance or work-support application, the GUI image can be adjusted while observing the degree of comfort or discomfort of the user from the EMG signal.
  • the HMD1 can adjust, for example, the type, amount, size, fineness, speed, and the like of GUI images for work support instructions and explanations.
  • the HMD1 of the first embodiment outputs the output content as a test in advance in order to estimate and evaluate the psychological state of the user.
  • the HMD 1 acquires the user's EMG signal and the user's subjective evaluation value.
  • the HMD1 learns about the correspondence between the output content, the psychological state, and the EMG signal.
  • HMD1 creates and updates control information (control information 36 in FIG. 5) for controlling adjustment of output contents.
  • This process such as test output also corresponds to calibration for adapting to individual user differences.
  • the HMD1 creates and holds a dictionary for each user as control information based on the processing such as this test, evaluation, and learning.
  • FIG. 6 shows a flow of processing such as test output by HMD1 and has steps S601 to S605. The flow of FIG. 6 is similarly repeated for each test unit.
  • the user wears the HMD1 on the head.
  • the on / off state of the function related to the biological signal can be set by the user.
  • the HMD1 performs the following processing when the function is on.
  • in step S601, the HMD 1 produces an output for initializing the psychological state of the user.
  • this initialization aims to make the psychological state as neutral and relaxed as possible.
  • the output of this initialization includes, for example, a playback display of video or music that allows the user to relax, an output of a message to relax, and the like.
  • This output may be performed by the HMD1 or an external device (display or the like) different from the HMD1 may be used.
  • in step S602, the HMD 1 produces a test output that causes the user to experience a psychological state.
  • the HMD 1 acquires the user's EMG signal through the device 2 and the like.
  • This test output includes playback display of various images and music for causing various psychology and emotions such as pleasure and discomfort, and work instructions through GUI images for work using HMD1.
  • the HMD 1 performs this test output as a repetition of a plurality of test units while changing the type of video and music and the type of work each time.
  • the HMD 1 changes the image including the GUI displayed on the display surface 11 at the time of the work instruction.
  • the HMD1 stores the acquired EMG signal data together with the output content information for each test.
  • in step S603, the HMD 1 asks the user to input, as a subjective evaluation, the degree of psychology/emotion such as comfort/discomfort for the above test.
  • the HMD 1 displays a GUI image for subjective evaluation input on the display surface 11, and asks the user to select and input an evaluation value for each test unit.
  • the degree of comfort / discomfort can be selectively input for the GUI image of the work instruction in a certain test unit.
  • This subjective evaluation corresponds to the user's psychological state with respect to the output during the test.
  • in step S604, the HMD 1 stores the test output content information, the EMG signal, and the subjective evaluation input information in association with each other as data.
  • in step S605, the HMD 1 learns the relationship between the output content (for example, a GUI image), the psychological state, and the EMG signal using the data accumulated through the above test processing, and records the learning result. Machine learning using a neural network or the like may be applied to this learning. Then, based on the learning result, the HMD 1 sets and customizes the correspondence between the EMG signal and the output content as control information (control information 36 in FIG. 5).
  • this control information defines and controls what output content is to be produced for what state of the EMG signal. It may be, for example, a table that defines the correspondence. It may directly associate the EMG signal with the output content, omitting any intermediate value representing a psychological state. This control information is updated according to the tests.
  • the HMD1 can control the adjustment of the output content according to the state of the biological signal based on the above control information.
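The accumulation and learning of steps S604 and S605 can be sketched as building a per-user lookup from EMG-feature bins to the best-rated output content. The record field names, the binary binning, and the rating scale below are illustrative assumptions, not part of the embodiment:

```python
def learn_control_info(records):
    """Build control information from test records, each pairing the output
    content shown, an EMG feature observed during it, and the user's
    subjective rating. For each EMG-feature bin, keep the best-rated content."""
    control_info = {}
    for rec in records:
        emg_bin = "high" if rec["emg_feature"] >= 0.5 else "low"  # coarse bin
        best = control_info.get(emg_bin)
        if best is None or rec["rating"] > best["rating"]:
            control_info[emg_bin] = {"content": rec["content"],
                                     "rating": rec["rating"]}
    return control_info
```

A real implementation might instead train a classifier (the embodiment mentions machine learning with a neural network) and would continue refining this per-user dictionary as new evaluations arrive.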
  • FIG. 7 shows an example of the test output, and shows an example in which a GUI image for a test is displayed on the display surface 11 of the HMD1.
  • the work task for the user is displayed as a work instruction or explanation by a GUI image.
  • the three examples (A), (B), and (C) in FIG. 7 are examples of GUI images in which the amount of explanation at that time is changed.
  • the amount of explanation is large in the order of the images (A), (B), and (C).
  • an image 710 of a virtual object (indicated by a square or a circle) is displayed on the right side, in an area of a predetermined size corresponding to the size of the display surface 11.
  • the object 710 can be operated by the user for the work.
  • the operation uses functions of the main body of the HMD 1; examples based on known techniques include operation by an operating device, recognition of hand gestures, voice recognition, and operation by line-of-sight detection.
  • the image 720 of the work explanation is displayed on the left side.
  • an image (image 710 or image 720) as shown in FIG. 7 is superimposed and displayed on the actual object on the display surface 11.
  • the image 720 of the work explanation is image 720A with a relatively large amount of explanation in the image 701 of (A), image 720B with a medium amount of explanation in the image 702 of (B), and image 720C with a small amount of explanation in the image 703 of (C).
  • the user inputs a subjective evaluation as to whether the amount of explanation given by the image 720 or the like was appropriate, comfortable, easy to understand, and so on (step S603). What amount of explanation is suitable as the output content depends on the individual user: in some cases an image with a large amount of explanation is more suitable, and in other cases an image with a small amount of explanation is more suitable.
  • the amount of information of suitable instructions / explanations required by the user changes depending on the skill level of the user for the work of a certain application.
  • FIG. 8 shows an example of the output at the time of the subjective evaluation input (step S603).
  • the image 801 of FIG. 8A is an example in which the HMD 1 displays a GUI image for subjective evaluation input on the display surface 11.
  • the image 801 includes a message prompting a subjective evaluation input, such as "How was the amount of explanation?", and an image 810 of a GUI component such as a scale for inputting the subjective evaluation value.
  • the scale of the image 810 indicates the degree of appropriateness or satisfaction regarding how the amount of explanation feels, such as insufficient, appropriate, or excessive.
  • the user can input the value selected on the scale by the operation as the subjective evaluation value.
  • Image 802 of FIG. 8B shows another display example of the evaluation input.
  • the image 820 for evaluation input is, as in (A), a scale for inputting an evaluation value regarding how the amount of explanation feels, but in a format in which the degree of comfort or discomfort, from "comfortable" to "unpleasant", can be input as the evaluation value by operating a bar.
  • the HMD1 sets the control information for adjusting the output contents for each user and each application in steps S604 and S605 of FIG. 6 based on the learning of the test and the evaluation input as described above.
  • the HMD1 reflects the user's evaluation value in the user-specific dictionary of control information.
  • the HMD 1 determines GUI images and the like as candidates for output content adjustment according to the classification of the correspondence between the output content and the user's psychological state (the corresponding biological signal state), and stores them in the control information. After that, when the user actually uses an application of the HMD 1, the HMD 1 appropriately adjusts the output contents such as the GUI image based on the control information.
  • as the dictionary is updated for each user, the sensitivity of the output content adjustment function gradually increases, and more suitable adjustment becomes possible.
  • the output of the GUI image or the like when actually using an application and the output at the time of the test output as shown in FIG. 7 may basically be in the same format, or the test output may be simplified compared with the actual output.
  • the image 803 of FIG. 8C shows another display example regarding the subjective evaluation input in the modified example of the first embodiment.
  • while displaying the GUI image on the display surface 11 during user processing when actually using an application, the HMD 1 displays an image 830 for evaluation input so that the user can input a subjective evaluation value for the output content at that time. In this example, it is assumed that the image of the test output and the image when actually using the application are similar.
  • the HMD 1 displays, for example, an image 803 similar to the image 702 of FIG. 7 (B) on the display surface 11 during user processing when using the application.
  • the image 803 includes the image 710 of the object and the image 720 (720B) of the description.
  • the HMD 1 displays the image 830 for evaluation input in a part of the area of the display surface 11 together with those images.
  • the HMD 1 may display the image 830 at the edge of the region or the like so as not to interfere with the main image.
  • the user can immediately input a subjective evaluation value for the output content at that time (for example, image 720B) into the image 830 and have it reflected in the evaluation.
  • the HMD1 can use such an evaluation input at the time of actual use as a complement or an alternative to the preliminary test for collecting data related to the output content adjustment control.
  • even when the image 830 for evaluation input is displayed at the time of actual use as shown in FIG. 8C, the user may ignore the image 830 and need only input an evaluation at whatever time he or she chooses.
  • the image 830 for evaluation input may be displayed according to a predetermined user operation, or may be displayed at regular time intervals.
  • the evaluation target image (for example, image 720) may be displayed for each test unit, with the evaluation input image 830 displayed as shown in FIG. 8(C).
  • in this case, the evaluation target becomes clearer and the efficiency of the evaluation can be improved.
  • the adjustment of the output content according to the psychological state of the user is not limited to the change of the information amount of the GUI image, and various elements such as the following are possible.
  • the target of the output content adjustment is not limited to the amount and fineness of the explanatory text of the image 720 of FIG. 7; it is also possible to change the type, size, color, brightness, character type (font), and the like of the GUI image.
  • Examples of the type of GUI image include various GUI parts (also called Widgets and controls).
  • GUI components include windows, text boxes, dialog boxes, buttons, icons, bars, lists, menus, and the like.
  • the effect on psychology differs depending on what GUI parts, colors, character types, etc. are used for presentation.
  • the effect is different depending on the size (area, etc.) and display position of the explanatory image. For example, adjustment may be made to increase or decrease the ratio of the explanatory image to the area of the display surface 11.
  • the effect differs depending on whether the description is placed in a fixed position or near the object, and on whether the explanation is given in text or in pictures, figures, or icons.
  • the output of HMD1 is not limited to image display, but it is also possible to output voice, vibration, light emission, etc. according to the function of HMD1.
  • the HMD 1 includes a vibrator and a light emitting device in the housing 10 and the like.
  • in the control of the adjustment of the output contents, it is possible to control these various outputs and their combinations.
  • in voice, light-emission, and vibration control, it is possible to adjust their presence/absence, type, amount, timing, and the like.
  • using sound, light emission, or vibration together with or instead of images also has different psychological effects. The effect differs depending on whether the explanation is read aloud, whether BGM is played, and on the type of voice or BGM. The effect also differs depending on the type of viewing content, such as the output video or BGM. For example, if the user is presumed to be bored, the type of viewing content may be changed.
  • the output content may be adjusted as the output speed. For example, if there are a plurality of sentences or a plurality of images for work explanation and they are automatically sent, the speed can be adjusted.
  • the difficulty level of the game can be set as an adjustment target.
  • the HMD 1 adjusts the difficulty level in real time from the moment-to-moment EMG signals during the game so that it is appropriate for the user.
  • the adjustment target may be the story based on the branch.
  • the HMD1 changes the story by selecting a branch from the EMG signal so that the user feels an appropriate thrill in the progress of the story.
  • FIG. 9 shows a processing flow in the HMD 1 of the first embodiment for user processing when the user actually uses an application and the output contents are adjusted in real time.
  • This process is a process after the correspondence between the EMG signal and the output content is once set as control information based on the test and evaluation as shown in FIG.
  • FIG. 9 has steps S901 to S906.
  • a case will be described in which an application such as work support provided in the HMD 1 gives an instruction or explanation regarding work to the user by displaying a GUI image on the display surface 11.
  • in step S901, the HMD 1 executes the processing of the application used by the user, determines, for example, a work instruction/explanation, and first determines and produces the output content based on the above-mentioned control information related to output content adjustment or on initial setting information.
  • the HMD 1 displays a GUI image similar to that shown in FIG. 7 (B).
  • in step S902, the HMD 1 acquires the electromyographic signal as time-series data through the devices 2 and 4 and the biological signal acquisition unit 20, together with the above output.
  • the HMD 1 may also have acquired the EMG signal before the output.
  • the HMD 1 estimates whether or not the current output content is appropriate, for example, whether the amount of explanation in the GUI image is too large or too small, based on the psychological state associated with the user's acquired EMG signal.
  • for example, when the feature amount of the EMG signal at that time indicates discomfort, high stress, or the like, the HMD 1 can infer that the user feels the amount of explanation is too much or too little; when it indicates comfort, low stress, or the like, it can infer that the user feels the amount of explanation is appropriate.
  • the HMD1 determines whether the state of comfort / discomfort, stress, satisfaction, etc. as the psychological state of the user is within a predetermined allowable range based on the control information.
  • the permissible range can be defined by a threshold value or the like.
  • for example, when the feature amount of a certain EMG signal represents the degree of comfort and that degree is equal to or higher than the threshold value, it can be inferred that the user feels comfortable with, or satisfied by, the output content at that time.
  • conversely, when the feature amount of a certain EMG signal represents the degree of discomfort and that degree is equal to or higher than the threshold value, it can be inferred that the user feels discomfort with, or dissatisfaction toward, the output content at that time.
  • if the state is within the permissible range (Y), step S905 is skipped and the process proceeds to step S906; if it is out of the permissible range (N), the process proceeds to step S905.
  • the HMD1 adjusts the output content based on the control information.
  • the HMD1 determines, for example, to adjust the output content so as to reduce the amount of explanation when it is estimated that the user feels that the comfort is low and the amount of explanation is too large.
  • the HMD 1 changes, for example, the image 720B as shown in FIG. 7 (B) to the image 720C as shown in FIG. 7 (C) so as to provide a more concise explanation.
  • conversely, when it is estimated that the user feels the amount of explanation is too small, the HMD 1 determines the adjustment of the output content so as to increase the amount of explanation.
  • HMD1 changes, for example, the image 720B as shown in FIG. 7 (B) to the image 720A as shown in FIG. 7 (A) so as to include more information.
  • when it is inferred from the EMG signal that the user's satisfaction with the current output content is within the permissible range, the HMD 1 keeps the output content (e.g., image 720) unchanged. This corresponds to the case of proceeding from step S904 to step S906.
  • in step S906, the HMD 1 confirms whether the user processing is to be ended or continued; if it is to be ended (Y), the flow ends, and if it is to be continued (N), the process returns to step S901 and the same control is repeated from moment to moment.
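One step of the adjustment decision around steps S904 and S905 might be sketched as follows; the single comfort threshold, the discrete detail levels, and the choice to reduce detail on discomfort are all illustrative assumptions (in the embodiment, the direction of adjustment comes from the learned control information):

```python
def adjust_detail(detail_level, comfort, threshold=0.5):
    """If the estimated comfort is within the permissible range (here, at or
    above a single threshold), keep the current output; otherwise reduce the
    explanation amount by one level, e.g. image 720B -> image 720C."""
    if comfort >= threshold:         # S904: within permissible range
        return detail_level          # keep the current GUI image (to S906)
    return max(detail_level - 1, 0)  # S905: adjust toward a simpler explanation
```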
  • the corrugator supercilii muscle and the zygomaticus major muscle as shown in FIG. 12 will be described as an example.
  • the corrugator supercilii is a muscle that wrinkles the eyebrows, and the tension of this muscle means generally negative emotions such as frustration, stress, concentration, tension, and discomfort.
  • the zygomaticus major muscle is a muscle that makes a smile, and the tension of this muscle means generally positive emotions such as a sense of security and pleasure.
  • the zygomaticus minor muscle is also a muscle that makes a smile, but the zygomaticus major muscle is described here as representative of both.
  • the electromyographic signal that can be detected from the zygomaticus major muscle corresponds to the signals sg1 and sg2 that can be detected from the electrode 3A of the device 2 of FIG.
  • the electromyographic signal that can be detected from the corrugator supercilii corresponds to the signals sg3 and sg4 that can be detected from the electrode 3B of the device 4 in FIG.
  • the EMG signal from the zygomaticus major muscle is used as an index of a positive state, the EMG signal from the corrugator supercilii muscle is used as an index of a negative state, and the psychological state is estimated from the combination of the two.
  • FIG. 10 is a table summarizing an example of the correspondence between the output content and the level of the EMG signal as described above.
  • the combination of the levels of the EMG signals of the two types of facial muscles generally appears as, for example, three types of patterns, depending on the amount of explanation in the image 720 which is the output content.
  • pattern 1 corresponds to the example of image 701 of (A) above, pattern 2 corresponds to the example of image 702 of (B) above, and pattern 3 corresponds to the example of image 703 of (C) above.
  • the state of the EMG signals from the facial muscles, in particular the level (such as the strength) of each signal in the combination of the plural EMG signals from the plural facial muscles, reflects the psychological state of the user.
  • by grasping the state of the EMG signals from the facial muscles, it is possible to infer how the user feels about the output content.
  • the HMD1 collects, learns, and grasps data on the pattern of the correspondence between such output contents and EMG signals based on the above-mentioned tests and evaluations.
  • the HMD1 can control the adjustment of the output content according to the EMG signal by using the control information based on such a correspondence. For example, when the levels of the two types of EMG signals are both weak, it can be inferred that the user feels that the amount of explanatory text is large for the GUI image at that time based on the pattern 1. Therefore, in this case, the HMD 1 may attempt to adjust the output content by using the various means described above so as to reduce the amount of information in the GUI image.
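A coarse version of this two-muscle reading could be sketched as below; the 0.5 threshold and the three labels are illustrative assumptions and do not reproduce the actual three patterns of FIG. 10:

```python
def classify_two_muscle_state(corrugator_level, zygomaticus_level, threshold=0.5):
    """Map the levels of the corrugator supercilii (negative-state index) and
    zygomaticus major (positive-state index) EMG signals to a coarse label."""
    corr_strong = corrugator_level >= threshold
    zygo_strong = zygomaticus_level >= threshold
    if zygo_strong and not corr_strong:
        return "positive"   # e.g. the current GUI image feels suitable
    if corr_strong and not zygo_strong:
        return "negative"   # e.g. frustration with the amount of explanation
    return "indeterminate"  # both weak or both strong: no clear reading
```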
  • FIG. 11 shows a configuration example of control information (control information 36 in FIG. 5) related to the function of adjusting the output content according to the EMG signal.
  • This control information is created as a setting for each user and each application.
  • This table of control information has an electromyographic signal, a psychological state, and output contents as columns.
  • the "electromyographic signal” column stores, for example, a pattern or classification of a feature amount calculated from the electromyographic signal, for example, a range of intensity.
  • the "psychological state” column can be omitted, a value representing the user's psychological state associated with the feature amount of the EMG signal, for example, the degree of comfort / discomfort or satisfaction, etc. is stored.
  • the "output content” column stores information that defines output content that is a candidate for output when adjusting the output content, for example, information such as the type of GUI image to be displayed and the amount of explanation.
  • according to the first embodiment, the psychological/emotional state of the user can be estimated quickly and deeply by using the electromyographic signal acquired through the device 2 and the like, and the output content can be suitably adjusted based on that estimation.
  • the GUI image or the like of each application can be adjusted to a suitable state according to the state of the EMG signal of the individual user. According to the first embodiment, the output content can be quickly adjusted to a suitable one before the physical movement of the user is affected, for example, before the effect appears as a large stress.
  • the mode is not limited to one in which the processor of the HMD 1 performs the control processing such as the output content adjustment described above.
  • the HMD may communicate with an external device such as a server, and the external device may perform that control processing.
  • FIG. 13 shows the configuration of HMD1 of the first modification of the first embodiment. Not limited to the example of FIG. 2 of the first embodiment, various arrangements and shapes of the above-mentioned devices 2 and 4 and the sensor electrodes are possible.
  • FIG. 13 is a modified example of the attachment structure of the device 2 and the electrode 3 to the housing 10, and other structural parts are the same as those in FIG.
  • the HMD 1 of the first modification is provided with the device 2 so as to be extended and connected to the lower side of the binocular portion 10a which is the front surface portion of the housing 10. Similar to the first embodiment, the device 2 is provided as left and right devices 2R and 2L in a symmetrical shape.
  • the housing 28 of the device 2 is provided with electrodes 3i to 3l as the electrodes 3A for detecting an electromyographic signal from the facial muscles near the cheeks (the portion from the zygomaticus major muscle to the zygomaticus minor muscle).
  • the device 2R on the right eye side is provided with electrodes 3i and 3j on the lower surface side.
  • the device 2L on the left eye side is provided with electrodes 3k and 3l on the lower surface side.
  • a microphone is provided near the tip of the device 2 as in the first embodiment.
  • This device 2 functions as a cheekbone pad, and is arranged so that a part of the device 2 including the electrode 3 comes into contact with the skin near the cheekbone and rides on the skin.
  • the binocular portion 10a is placed on the upper side of the cheekbone via the device 2.
  • the device 2 supports a part of the load of the HMD 1 and distributes the load.
  • the load of the HMD 1 applied to the vicinity of the nose pad of the nose can be dispersed by the device 2, and the user's wearing sensation can be improved.
  • alternatively, the electrode 3 may be partially provided on a portion of the housing that comes into contact with the skin, without providing an extension portion such as the device 2.
  • the HMD of the second embodiment will be described with reference to FIG. 14 and the like.
  • the basic configuration of the second embodiment and the like is the same as that of the first embodiment, and the components of the second embodiment and the like that differ from the first embodiment will be mainly described below.
  • the configuration of the electrode 3 and the like of the device 2 in the HMD 1 of the second embodiment is the same as that of FIG.
  • the HMD of the second embodiment also includes the electrode 3 for detecting the electromyogram signal for reading the unconscious psychological state of the user described in the first embodiment.
  • the electrode 3 is also used to detect a user's conscious operation of facial muscle movement (also referred to as voluntary input).
  • the HMD 1 detects the user's conscious operation of the facial muscle movement as a voluntary input from the electromyographic signal of the electrode 3.
  • the user makes a voluntary input such as consciously raising the right cheek.
  • the HMD1 uses the detected information of the voluntary input as one of the inputs to the HMD1, for example, the input information for the GUI image of the application.
  • The HMD 1 distinguishes and switches between the electromyographic signal resulting from facial muscle movement according to the user's unconscious psychological state described in the first embodiment, and the electromyographic signal resulting from the user's conscious facial muscle operation described in the second embodiment, according to the strength level of the signal or the like.
  • the HMD1 uses the EMG signal after rectifying it.
  • The HMD1 compares the level of the rectified EMG signal with a predetermined threshold value, and treats the signal at times when the level is at or above the threshold as a conscious voluntary input.
  • The HMD1 treats the signal as representing an unconscious psychological state at times when the level is below the threshold.
  • The HMD1 stops estimating the unconscious psychological state while the EMG signal is being treated as a conscious voluntary input.
  • the HMD1 can set a threshold value according to the user for the threshold value of the level of the EMG signal.
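The threshold-based switching described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the function names, the use of a mean rectified level as "the level", and the sample values are assumptions.

```python
def rectify(samples):
    """Full-wave rectification of a raw EMG sample sequence."""
    return [abs(s) for s in samples]

def classify_emg(samples, threshold):
    """Treat the signal as a conscious voluntary input when the mean
    rectified level reaches the per-user threshold; otherwise treat it
    as reflecting the unconscious psychological state."""
    level = sum(rectify(samples)) / len(samples)
    return "voluntary" if level >= threshold else "psychological"
```

With a per-user threshold of 0.5, a strong burst such as `[0.9, -1.1, 1.0]` would be classified as a voluntary input, while weak background activity would continue to feed the psychological-state estimation.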
  • the HMD 1 of the second embodiment includes a circuit or the like that rectifies the electromyographic signal acquired from the electrode 3.
  • the electromyogram signal sensor 201 of the biological signal acquisition unit 20 of FIG. 4 is provided with a circuit or the like for rectifying the signal sg1 or the like.
  • The circuit of the EMG signal sensor 201 cuts off a predetermined high-frequency band component from the EMG signal in order to remove noise, and also cuts off a predetermined low-frequency band component in order to remove the DC component due to the polarization potential generated between the electrode 3 and the skin.
  • the EMG signal sensor 201 is provided with a filter circuit or the like for such processing.
  • For example, the HMD1 of Embodiment 2 uses the 1 Hz to 500 Hz band of the electromyographic signal.
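The filtering described above (cutting a low-frequency band to remove the polarization-potential DC component and a high-frequency band to remove noise) could be approximated in software, for example, with a crude first-order band-pass. The one-pole filter design, sampling rate, and function names here are illustrative assumptions of this sketch, not the actual circuit of the EMG signal sensor 201.

```python
import math

def bandpass(samples, fs, f_lo=1.0, f_hi=500.0):
    """Crude first-order band-pass: a one-pole high-pass at f_lo removes
    the DC polarization-potential drift, and a one-pole low-pass at f_hi
    removes high-frequency noise, approximating the 1-500 Hz band."""
    a_hp = math.exp(-2 * math.pi * f_lo / fs)
    a_lp = math.exp(-2 * math.pi * f_hi / fs)
    out, hp_prev_in, hp_prev_out, lp_prev = [], 0.0, 0.0, 0.0
    for x in samples:
        hp = a_hp * (hp_prev_out + x - hp_prev_in)   # high-pass stage
        hp_prev_in, hp_prev_out = x, hp
        lp = (1 - a_lp) * hp + a_lp * lp_prev        # low-pass stage
        lp_prev = lp
        out.append(lp)
    return out
```

A constant (DC) input decays toward zero, while a mid-band component such as a 50 Hz sine passes through with little attenuation.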
  • FIG. 14 shows the processing flow of HMD1 of the second embodiment.
  • The flow of FIG. 14 is similar to the flow of FIG. 9 of the first embodiment in many steps; the main differences are steps S1403 and S1408.
  • In step S1403, the HMD1 compares the strength level of the rectified EMG signal acquired during user processing with the threshold value, and determines whether the EMG signal is at or above the threshold. If it is at or above the threshold (Y), the process proceeds to step S1408; if it is below the threshold (N), the process proceeds to step S1404.
  • the processing of steps S1404 to S1407 is the same as that of the first embodiment.
  • In step S1408, the HMD1 determines that the signal is a voluntary input and processes the voluntary input.
  • The HMD1 determines what kind of operation is intended as the voluntary input from the levels of the plurality of electromyographic signals (signals sg1 to sg4) at that time. For example, when the level of the signal sg1 corresponding to the facial muscle of the right cheek is the highest, the HMD1 can determine that it is the voluntary input associated with the right cheek, and identify the associated operation (for example, an affirmative button operation) based on the setting information.
  • The HMD1 passes the information of the operation associated with the determined voluntary input to the OS or the application.
  • the OS or the application executes a predetermined process (for example, affirmative button input process) according to the voluntary input operation.
  • After that, the process proceeds to step S1407.
  • the above voluntary input is realized as an operation in which the user consciously moves, for example, the facial muscles near the cheeks and the facial muscles near the eyebrows in FIG.
  • This voluntary input operation can be used as a predetermined input associated with, for example, pressing a button of hardware or software for the HMD 1, cursor movement control, or the like.
  • The HMD 1 of the second embodiment allows the user to set, for example per user and per application, what kind of input operation is assigned to a voluntary input operation using the facial muscles.
  • FIG. 15 shows an example of user setting information regarding voluntary input.
  • the HMD 1 displays user setting information on the display surface 11, for example, and enables the user to set the information.
  • the user setting information regarding this voluntary input has "voluntary input” and "operation” as columns.
  • a plurality of voluntary inputs can be set by combining the electromyographic signals of each facial muscle.
  • "voluntary input 1" corresponds to an operation of moving the facial muscle of the right cheek, and corresponds to the level of the signal sg1 being equal to or higher than the threshold value.
  • An operation selected by the user from the candidates, for example, an "affirmative button" can be assigned to this "voluntary input 1".
  • the "voluntary input 2" corresponds to the operation of moving the facial muscle of the left cheek and the signal sg2, and is assigned, for example, the operation of the "negative button".
  • The user can, for example, give an operation such as the "affirmative button" by "voluntary input 1" to the image 710 or the like of an object of this application.
  • the HMD1 application receives such a voluntary input operation through the OS and performs the prescribed processing.
  • A voluntary input operation may also be set by combining the movements of a plurality of facial muscles. For example, when both "voluntary input 3" of the right eyebrow and "voluntary input 4" of the left eyebrow are turned on (at or above the threshold value) at the same time, this is accepted as a predetermined operation.
  • As a design matter of the HMD 1, the association is not limited to user settings; a predetermined voluntary input may be fixedly associated in advance with a predetermined input operation.
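The association of voluntary inputs with operations, as in the user setting information of FIG. 15, might be represented as a small table. The data structure, the operation names, and the precedence rule giving combined (multi-channel) inputs priority are assumptions of this sketch, not contents prescribed by the patent.

```python
# Hypothetical representation of the FIG. 15 setting table: each
# voluntary input is a condition on one or more EMG channels (sg1-sg4
# follow the text) and maps to a user-chosen operation.
VOLUNTARY_INPUT_TABLE = {
    "voluntary input 1": {"channels": ("sg1",), "operation": "affirmative button"},          # right cheek
    "voluntary input 2": {"channels": ("sg2",), "operation": "negative button"},             # left cheek
    "voluntary input 3+4": {"channels": ("sg3", "sg4"), "operation": "combined operation"},  # both eyebrows
}

def match_operation(levels, threshold):
    """Return the operation whose required channels are all at or above
    the threshold; combined (multi-channel) inputs take precedence."""
    best_op, best_n = None, 0
    for entry in VOLUNTARY_INPUT_TABLE.values():
        chans = entry["channels"]
        if all(levels.get(c, 0.0) >= threshold for c in chans) and len(chans) > best_n:
            best_op, best_n = entry["operation"], len(chans)
    return best_op
```

For instance, a supra-threshold level on sg1 alone selects the affirmative button, while simultaneous activity on sg3 and sg4 selects the combined eyebrow operation.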
  • FIG. 16 shows a case where the cursor movement control by voluntary input is accepted in the image 1601 displayed on the display surface 11 by the HMD1.
  • the left-right direction in the display surface 11 is shown as the x direction
  • the up-down direction is shown as the y direction.
  • The HMD1 displays a GUI image 1602 such as an object by the OS or an application, and also displays a cursor 1603 for indicating a position.
  • the cursor 1603 is, for example, a cross-shaped GUI image.
  • the user can operate the cursor 1603 by using the function of the main body of the HMD1 or the voluntary input of the facial muscles.
  • The user, for example, moves the cursor 1603 to position it on the GUI image 1602 and then presses the affirmative or negative button.
  • The user can perform operations such as this cursor movement and affirmative-button press by the above-mentioned voluntary input.
  • HMD1 acquires two channels of electromyographic signals (for example, signals sg1 and sg2 in FIG. 4) from two facial muscle locations, for example, the right cheek and the left cheek.
  • the HMD1 controls to change the movement position (direction, movement amount, etc.) of the cursor 1603 according to the state of the combination of the acquired two-channel EMG signal levels.
  • the left and right signals sg1 and sg2 are used to control the position coordinates of the cursor 1603 in the left-right direction (x direction) in the display surface 11.
  • For the y direction, two more channels of EMG signals (for example, the right-eyebrow and left-eyebrow signals sg3 and sg4) are added, and the cursor can be controlled in the same way by combining the resulting four channels of EMG signals as shown in the figure.
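The cursor movement control from the combination of channel levels could be sketched as below. The step size, the sign conventions (which cheek or eyebrow maps to which direction), and the simple on/off thresholding are illustrative assumptions of this sketch.

```python
def cursor_delta(levels, threshold, step=10):
    """Map the four EMG channels to a cursor movement: right/left cheek
    (sg1/sg2) drive the x direction and right/left eyebrow (sg3/sg4)
    drive the y direction. A channel contributes only at or above the
    threshold, and opposing channels cancel each other."""
    dx = step * ((levels.get("sg1", 0.0) >= threshold) - (levels.get("sg2", 0.0) >= threshold))
    dy = step * ((levels.get("sg3", 0.0) >= threshold) - (levels.get("sg4", 0.0) >= threshold))
    return dx, dy
```

For example, raising only the right cheek above threshold would move the cursor in the positive x direction; activity on the left cheek and left eyebrow would move it in negative x and negative y.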
  • The above-mentioned voluntary input may also be used for the test output and the evaluation input as shown in FIG.
  • the user can easily input the subjective evaluation value by using the voluntary input.
  • the voluntary input of the signal sg1 on the right cheek is an operation representing an evaluation of affirmation or comfort
  • the voluntary input of the signal sg2 on the left cheek is an operation representing an evaluation of negation or discomfort.
  • the HMD1 processes in response to an operation such as "affirmation" / "denial” by voluntary input so as to reflect it as a user's evaluation of the output content at that time.
  • the HMD of the third embodiment will be described with reference to FIG. 17 and the like.
  • the electrooculogram signal that can be detected from the electrode 5 of FIG. 2 is used in combination with the electromyogram signal that can be detected from the electrode 3.
  • The electro-oculography signals from the electrodes 5 (signals sg5 and sg6 in FIG. 4) are also used for control.
  • The user can set what kind of voluntary input operation each of the EMG signals and the electro-oculography signals is assigned to.
  • For example, it is possible to control cursor movement with the electromyogram signal and press buttons with the electro-oculography signal, or conversely to control cursor movement with the electro-oculography signal and press buttons with the electromyogram signal.
  • During voluntary input, the HMD of the third embodiment may stop the estimation/evaluation of the unconscious psychological state.
  • For example, at times when the level of the electro-oculography signal acquired in response to a voluntary input operation using the user's conscious eye movement is at or above the threshold value, the HMD may suspend the psychological-state estimation using the EMG signal and the adjustment of the output content. That is, the HMD performs control while switching between voluntary input and psychological-state estimation according to the levels of the electro-oculography signal and the electromyogram signal on the time axis.
  • FIG. 17 shows a flow at the time of user processing of HMD1 in the third embodiment.
  • the flow of FIG. 17 has steps S1702 and S1703 as main differences from the flow of FIG. 14 of the second embodiment.
  • step S1702 the HMD1 acquires both the electromyographic signal and the electrooculographic signal.
  • the HMD 1 acquires an electromyographic signal (signals sg1 to sg4) from the electrodes 3 of the devices 2 and 4 and an electro-oculography signal (signals sg5, sg6) from the electrodes 5.
  • In step S1703, the HMD1 determines whether the level of the EMG signal is at or above a predetermined threshold value (referred to as threshold value A for the EMG signal), or the level of the electro-oculography signal is at or above a predetermined threshold value (referred to as threshold value B for the electro-oculography signal). Each of these thresholds is user-configurable. If both signals are below their thresholds (N), the process proceeds to step S1704. If either signal is at or above its threshold (Y), the process proceeds to step S1708. In steps S1704 to S1706, the HMD 1 adjusts the output content according to the EMG signal, as in the second embodiment. In step S1708, the HMD1 performs voluntary input processing using at least one of the electromyogram signal and the electro-oculography signal that was at or above the threshold in step S1703.
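The branch of step S1703 between psychological-state estimation and voluntary input processing, on the two thresholds A and B, can be sketched as follows; the function name and the returned labels are illustrative assumptions of this sketch.

```python
def dispatch(emg_level, eog_level, threshold_a, threshold_b):
    """Step S1703 decision sketch: if either the EMG level reaches
    threshold A or the EOG level reaches threshold B, process a
    voluntary input with the supra-threshold signal(s) (S1708);
    otherwise continue psychological-state estimation (S1704-S1706)."""
    emg_on = emg_level >= threshold_a
    eog_on = eog_level >= threshold_b
    if not (emg_on or eog_on):
        return ("adjust_output_by_psychological_state", [])
    return ("voluntary_input", [s for s, on in (("emg", emg_on), ("eog", eog_on)) if on])
```

When both signals are supra-threshold, both sources are reported, which corresponds to the simultaneous-input handling discussed below in the text.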
  • FIG. 18 shows an example of a voluntary input operation when both an electromyogram signal and an electrooculogram signal are used in the third embodiment.
  • The horizontal axis of FIG. 18 is time. Shown are the above-mentioned four channels of electromyographic signals (signals sg1 to sg4) and two channels of electro-oculography signals (signals sg5 and sg6).
  • The HMD1 performs voluntary input processing according to the state of the combination of these signals.
  • An example is shown in which the four-channel EMG signals are used for cursor movement control (referred to as voluntary input A), and the two-channel electro-oculography signals are used for button pressing (referred to as voluntary input B).
  • During the first period shown, the levels of all signals are below the threshold values.
  • the output content is adjusted by guessing the psychological state using the EMG signal as needed. For example, as in FIG. 7, the amount of description of image 720 is adjusted.
  • the level of the electromyographic signal (at least one of the signals sg1 to sg4) is equal to or higher than the threshold value.
  • the cursor movement control is performed as the voluntary input A according to the states of the signals sg1 to sg4. For example, as in FIG. 16, the cursor 1603 is moved.
  • the level of the electro-oculography signal is equal to or higher than the threshold value.
  • the button is pressed as the voluntary input B according to the states of the signals sg5 and sg6.
  • the affirmative button or the negative button is pressed while the cursor 1603 is on the image 1602 of the object.
  • the adjustment of the output content by the psychological state estimation is temporarily stopped.
  • When the HMD1 simultaneously receives both a voluntary input by the electromyographic signal and a voluntary input by the electro-oculography signal, it may process them as follows.
  • When the level of the EMG signal is at or above its threshold value and the level of the electro-oculography signal is at or above its threshold value, the HMD 1 may process them as one predetermined voluntary input operation.
  • Alternatively, the HMD 1 may use whichever of the EMG signal level and the electro-oculography signal level is relatively larger, and process it as the voluntary input operation associated with that signal.
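The second option above, using the relatively larger signal, might look like the following sketch. Interpreting "relatively larger" as the larger margin above each signal's own threshold is an assumption of this sketch (the levels of the two signal types are not directly comparable), as are the function and return names.

```python
def resolve_simultaneous(emg_level, eog_level, threshold_a, threshold_b):
    """When both signals are at or above their thresholds, pick the one
    that exceeds its own threshold by the larger margin and use its
    associated voluntary input; otherwise there is nothing to resolve."""
    if emg_level >= threshold_a and eog_level >= threshold_b:
        return "emg" if (emg_level - threshold_a) >= (eog_level - threshold_b) else "eog"
    return None
```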
  • According to the third embodiment, in addition to adjusting the output content according to the estimation of the user's unconscious psychological state, the user can also perform conscious voluntary input operations using the electromyogram signal and the electro-oculography signal, which increases and diversifies the input means for the HMD.
  • As a modification, the EMG signal of the facial muscles may be used only for acquiring information on the unconscious psychological state, and only the electro-oculography signal may be used for the voluntary input.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Measurement And Recording Of Electrical Phenomena And Electrical Characteristics Of The Living Body (AREA)

Abstract

Provided is a feature relating to a head-mounted display device (HMD) that enables the psychological/emotional state of a user to be inferred and grasped quickly and more deeply, and enables output content to be suitably adjusted on that basis. An HMD according to the present invention is provided with devices 2 and 4 that acquire electromyogram signals from the facial expression muscles of a user, and uses the electromyogram signals as input information. The HMD adjusts the output content in accordance with a psychological state inferred on the basis of the electromyogram signals. As the adjustment of the output content, the HMD adjusts the type or information quantity of the displayed GUI image.

Description

Head-mounted display device
 The present invention relates to a technique of a head-mounted display device (HMD: Head Mounted Display).
 HMDs such as eyeglass-type and goggle-type devices worn on the user's head have been realized. The HMD displays an image on a display surface of a transmissive or non-transmissive display method. The image can be displayed in a form such as augmented reality (AR) or virtual reality (VR). For example, the HMD can superimpose and display a virtual image on the real image viewed by the user through the display surface, based on an OS, an application program, or the like.
 On the other hand, for electronic devices, techniques have been developed that acquire a biological signal such as the user's electro-oculography signal, estimate the user's psychological/emotional state based on the acquired biological signal, and perform some control.
 Examples of the prior art include JP-A-2017-70602 (Patent Document 1) and JP-A-2019-118448 (Patent Document 2). Patent Document 1 describes, as an "information processing method" and the like, the aim of "intuitively grasping what state the user's own psychological state is in", along with the following. Eyeglasses (eyewear) as shown in its FIG. 1 have bioelectrodes mounted on the nose pads and bridge portion, and acquire electro-oculography signals from the bioelectrodes. The processing device of the eyeglasses transmits sensor signals, electro-oculography signals, and the like to an external device or a server. The external device displays a predetermined figure formed from a plurality of objects representing the user's psychological state based on the sensor signals, the electro-oculography signals, and the like.
 Patent Document 2 describes, as a "mental state estimation system" and the like, that it "can not only estimate the mental state quickly and accurately, but also detect switching and instantaneous changes of the mental state", along with the following. This system estimates the mental state using feature quantities related to muscle activity in the facial muscles. The system uses surface electrodes to acquire myoelectric signals, and analyzes time-series data of the myoelectric signals to acquire the feature quantities.
JP-A-2017-70602 (Patent Document 1); JP-A-2019-118448 (Patent Document 2)
 For example, Patent Document 1 describes that a system including the eyeglasses calculates psychological parameters from the user's electro-oculography signal, and that it measures the user's blinks, gaze movement, body movement, and the like to acquire psychological states such as the user's vitality, concentration, and calmness.
 In electronic devices and the like, with techniques that attempt to infer and acquire the psychological/emotional state from the electro-oculography signal based on the movement of the user's eyes, it is generally difficult to infer and grasp, from the electro-oculography signal, unconscious psychological/emotional states that rarely appear in facial expressions or movements. When a state such as discomfort or tiredness appears in the movement of the eyes, it can be inferred from the electro-oculography signal, but unconscious states are unlikely to appear in the electro-oculography signal.
 On the other hand, as in the example of Patent Document 2, there is also a technique for inferring and acquiring the psychological/emotional state from the user's EMG signal. Psychological/emotional states include comfort/discomfort, fatigue, relaxation, concentration, stress, and the like. The technique using the EMG signal makes it easier to infer and grasp the user's unconscious psychological/emotional state than the technique using the electro-oculography signal.
 In the HMD, it is preferable that images such as the graphical user interface (GUI), which are the output content for the user, are displayed suitably so as not to tire the user, not to cause discomfort, and to be easy to understand. Therefore, for an HMD worn on the user's head, a technique of detecting biological signals such as the electro-oculography signal from the facial expression muscles to infer the user's psychological/emotional state is conceivable. In particular, for the HMD, a technique of inferring from biological signals the influence that images of the OS, applications, and the like have on the user's psychological/emotional state, and adjusting the output content on that basis, is conceivable.
 However, in the HMD, when attempting to infer the psychological/emotional state from the electro-oculography signal based on eye movement, the influence of images such as the GUI on the psychology/emotions does not always appear in the eye movement, so that psychology/emotion is difficult to infer. With such a technique, when the output content such as a GUI image is inappropriate for the user, it is difficult to infer and grasp such a psychological/emotional state (that is, the influence that the output content is tiring the user, and so on) until after the user has actually become uncomfortable, tired, or the like under the influence of that output content and it has appeared as a facial expression or movement.
 An object of the present invention is to provide, with respect to HMD technology, a technique capable of quickly and more deeply inferring and grasping the user's psychological/emotional state, and a technique capable of suitably adjusting the output content on that basis.
 A typical embodiment of the present invention has the following configuration. The head-mounted display device of the embodiment includes a device that acquires electromyographic signals from the user's facial expression muscles, and uses the electromyographic signals as input information. Further, the head-mounted display device of the embodiment adjusts the output content in accordance with the psychological state inferred based on the electromyographic signals. Further, as the adjustment of the output content, the head-mounted display device of the embodiment adjusts the type or amount of information of the GUI image to be displayed.
 According to a typical embodiment of the present invention, with respect to HMD technology, the user's psychological/emotional state can be inferred and grasped quickly and more deeply, and the output content can be suitably adjusted on that basis. Problems, configurations, effects, and the like other than those described above will be described in the detailed description of the embodiments.
FIG. 1 shows the configuration outline of the HMD of Embodiment 1 of the present invention. FIG. 2 shows the configuration of the HMD of Embodiment 1. FIG. 3 shows the configuration of the front surface of the HMD of Embodiment 1. FIG. 4 shows a configuration example of the biological signal acquisition unit and the electrodes of the HMD of Embodiment 1. FIG. 5 shows an example of the functional block configuration of the HMD of Embodiment 1. FIG. 6 shows the processing flow at the time of a test in the HMD of Embodiment 1. FIG. 7 shows an output example at the time of a test in the HMD of Embodiment 1. FIG. 8 shows an output example at the time of evaluation input in the HMD of Embodiment 1. FIG. 9 shows the processing flow of user processing when using an application in the HMD of Embodiment 1. FIG. 10 shows an example of the correspondence between the output content and the EMG signals in Embodiment 1. FIG. 11 shows an example of the control information in Embodiment 1. FIG. 12 shows examples of the target facial expression muscles in Embodiment 1. FIG. 13 shows the configuration of the HMD of a modification of Embodiment 1. FIG. 14 shows the processing flow in the HMD of Embodiment 2 of the present invention. FIG. 15 shows an example of user setting information regarding voluntary input operations in Embodiment 2. FIG. 16 shows an example of voluntary input operations and display images in Embodiment 2. FIG. 17 shows the processing flow in the HMD of Embodiment 3 of the present invention. FIG. 18 shows a control example in the HMD of Embodiment 3.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings. In the drawings, the same parts are in principle denoted by the same reference numerals, and repeated description thereof is omitted. In the description, when processing by a program is explained, the program, a function, a processing unit, or the like may be described as the subject; the hardware subject for these is a processor, or a controller, device, computer, or system configured with a processor or the like. The computer, by means of the processor, executes processing according to the program read into the memory, while appropriately using resources such as memory and communication interfaces. Thereby, predetermined functions, processing units, and the like are realized. The processor is configured with, for example, semiconductor devices such as a CPU or GPU. The processor is configured with devices and circuits capable of predetermined operations. The processing is not limited to software program processing and can also be implemented with dedicated circuits. FPGA, ASIC, and the like are applicable as the dedicated circuits. The program may be installed as data in the target computer in advance, or may be distributed as data from a program source to the target computer and installed. The program source may be a program distribution server on a communication network or a non-transitory computer-readable storage medium. The program may be composed of a plurality of program modules. In the description, various data and information may be expressed as, for example, tables and lists, but are not limited to such structures and formats. Data and information for identifying various elements may be expressed as identification information, identifiers, IDs, names, numbers, and the like, and these expressions are interchangeable.
 <Embodiment 1>
 The HMD 1, which is the head-mounted display device of Embodiment 1 of the present invention, will be described with reference to FIGS. 1 to 13. The HMD 1 of Embodiment 1 has a function of acquiring and inputting information representing the user's unconscious psychological/emotional state based on the detection of electromyographic signals from the facial expression muscles, and a function of adjusting the output content, such as GUI image display, based on that information. The HMD 1 of Embodiment 1 shown in FIG. 1 and the like includes a function and devices 2 and 4 for acquiring electromyographic signals from the user's facial expression muscles, and uses the acquired electromyographic signals as input information. Further, the HMD 1 of Embodiment 1 suitably adjusts the output content such as GUI images in accordance with the psychological state inferred based on the electromyographic signals. Further, as the adjustment of the output content, the HMD 1 of Embodiment 1 adjusts the type, amount of information, and the like of the GUI images to be displayed.
 In Embodiment 1, the inference and grasp of the psychological state are realized by control based on the correspondence between the EMG signals and the output content (for example, the images displayed on the display surface 11), and information such as parameter values representing the psychological state need not be handled directly. Note that an HMD of a modification may perform control based on the correspondence among the three elements of the EMG signals, the psychological state, and the output content, and may handle information such as parameter values representing the psychological state, for example by reading and writing them.
 [Terms]
 Supplementary explanations of terms are given here. The user's psychological/emotional state (sometimes collectively referred to as the psychological state) includes states such as comfort/discomfort, fatigue, stress, vitality, relaxation, concentration, arousal, and anger.
An electromyogram signal may also be described as a muscle potential signal, and an electromyogram may be abbreviated as EMG. An electromyogram plots the weak changes in electric potential generated in a muscle on axes of potential versus time. In medicine, electromyograms are used, for example, as an auxiliary diagnostic tool for involuntary movements.
An ocular potential signal may also be described as an electrooculogram signal, and an electrooculogram may be abbreviated as EOG. The ocular potential signal is related to eye movement and records the potential generated by movement of the eyeball. The ocular potential signal also appears as a signal change at the time of blinking, gaze shifts, and the like, and is affected by biological phenomena other than eye movement (for example, smiling).
[HMD (1)]
FIG. 1 shows, as an outline of the configuration of the HMD 1 of Embodiment 1, schematic views of the HMD 1 worn on the user's head. (A) shows a glasses-type HMD, and (B) shows a goggle-type HMD. The HMD 1 is not limited to a stand-alone configuration; it may be provided with a remote controller, or may be connected by communication to an external device such as a smartphone, a PC, or a server. The remote controller is an operation device through which the user can input operations by hand. The HMD 1 performs, for example, short-range wireless communication with the operation device. By operating the operation device, the user can input instructions to the HMD 1, move a cursor on the display surface 11, and so on.
An external device such as a smartphone holds, for example, application programs and data. Such programs and data may be provided from the external device to the HMD 1, and data may be transferred from the HMD 1 to the external device for storage or display.
The application may be, for example, one that supports AR or VR functions. Examples include applications that guide the user through a space and applications that support work tasks. For example, based on program processing by the OS or an application, the HMD 1 generates virtual images for guidance or work support and displays them on the display surface 11. The images include various GUI images such as cursors, buttons, commands, icons, menus, and windows.
The HMD 1 of Embodiment 1 includes the devices 2 and 4, which are specific to handling electromyogram signals. The devices 2 and 4 are electromyogram signal detection devices, in other words electrode mounting portions, arranged so as to contact regions of the facial expression muscles. The device 2 is arranged so as to contact the vicinity of the cheek in particular, and detects electromyogram signals from the facial muscles near the cheek. The device 4 is arranged so as to contact the vicinity of the eyebrows in particular, and detects electromyogram signals from the facial muscles near the eyebrows. The controller of the HMD 1 performs control processing using these electromyogram signals.
As a distinctive function, the HMD 1 of Embodiment 1 has an output content adjustment function that uses electromyogram signals. Based on the electromyogram signals acquired through the devices 2 and 4, the HMD 1 automatically adjusts the output content, such as GUI images produced by the OS, applications, and user interface functions, in accordance with the inferred unconscious psychological state.
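The patent leaves the concrete correspondence between EMG activity and output content unspecified. As a purely illustrative sketch (all thresholds, state labels, and dictionary fields below are hypothetical, not taken from the patent), such a mapping from per-region EMG activity levels to GUI verbosity might look like:

```python
# Hypothetical sketch: map normalized EMG activity (0..1) per facial region
# to a coarse inferred state, then choose GUI output content accordingly.
# Thresholds and labels are invented for illustration only.

def infer_state(cheek_level: float, brow_level: float) -> str:
    """Infer a coarse psychological-state label from EMG activity levels."""
    if brow_level > 0.6:       # strong brow (corrugator-side) activity
        return "stressed"
    if cheek_level > 0.6:      # strong cheek (zygomaticus-side) activity
        return "relaxed"
    return "neutral"

def adjust_gui(state: str) -> dict:
    """Select the amount and style of displayed GUI information."""
    if state == "stressed":
        return {"detail": "low", "guidance": "step-by-step"}
    if state == "relaxed":
        return {"detail": "high", "guidance": "summary"}
    return {"detail": "medium", "guidance": "standard"}
```

In this sketch the psychological state appears only as an intermediate label; as the text notes for Embodiment 1, an implementation could equally map EMG features to output content directly, without materializing such a label.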
[HMD (2)]
FIG. 2 shows, as a perspective view, the detailed configuration of the HMD 1 of FIG. 1, in particular the glasses-type HMD 1 of (A). For explanation, the directions X, Y, and Z are shown. In a coordinate system referenced to the user's head and the HMD 1, the X direction is the front-rear direction, the Y direction is the left-right direction, and the Z direction is the up-down direction. The HMD 1 includes a glasses-type housing 10, a display device including the display surface 11, and the devices 2 and 4 related to electromyogram signals. Mounted on the housing 10 are a controller, the display device, a camera 12, a distance measuring sensor 13, a sensor unit 14, a biological signal acquisition unit 20, an audio output device 19, and the like.
The housing 10 has, as its parts, a binocular portion 10a provided with the display surface 11, a bridge portion 10b connecting the left-eye and right-eye parts of the binocular portion 10a, left and right temple portions 10c extending outward from the binocular portion 10a, and so on. The binocular portion 10a is provided with the display surface 11, the camera 12, the distance measuring sensor 13, the device 4, electrodes 5, and the like. Electrodes 5 for detecting ocular potential signals are provided near the nose pads of the bridge portion 10b. Mounted on the temple portions 10c are the controller, the biological signal acquisition unit 20, the sensor unit 14, an audio output device 19 including a speaker and an earphone terminal, and the like.
A real image of the outside world passes through the display surface 11, and images are displayed superimposed on that real image. In this example, the display device including the display surface 11 uses a transmissive display method, but it is not limited to this and may use a non-transmissive (in other words, VR-type) display method. For example, in the case of a goggle-type housing as shown in FIG. 1B, the display device may use a non-transmissive display method.
The camera 12 comprises, for example, two cameras arranged on the left and right sides of the housing 10, and captures images of a range including the area in front of the HMD 1. The distance measuring sensor 13 measures the distance between the HMD 1 and objects in the outside world; it may be a TOF (Time Of Flight) sensor, a stereo camera, or a sensor of another type. The sensor unit 14 is the portion provided with various sensors other than the camera 12 and the distance measuring sensor 13, and includes a group of sensors for detecting the position and orientation of the HMD 1. The positions at which the camera 12, the distance measuring sensor 13, and the sensor unit 14 are arranged are not limited to the illustrated positions.
The device 2 is arranged near the cheek, and the device 4 near the eyebrows. The device 2 is configured as one left-right set of a right-eye-side device 2R and a left-eye-side device 2L, with a shape symmetric about the X-Z plane. The device 4 is likewise configured as one left-right set of a right-eye-side device 4R and a left-eye-side device 4L, also symmetric about the X-Z plane. The devices 2 and 4 are provided as extensions of the glasses-type housing 10. The devices 2 (2R, 2L) are connected to the lower sides of the left and right temple portions 10c so as to extend forward (in the illustrated V direction). The housing 25 of the device 2 has, for example, a roughly arm-like shape extending in the V direction, but is not limited to this. The left and right devices 4 (4R, 4L) are connected to the upper side of the binocular portion 10a so as to project toward the eyebrows. In the case of the goggle type as in FIG. 1B, the devices 4 are arranged inside the goggle-type housing. The devices 2 and 4 may each be detachably connected to the housing 10, or may be fixedly provided as parts of the housing 10 so as to extend continuously from it (from the temple portions 10c or the binocular portion 10a).
The devices 2R and 2L each have electrodes 3 on the surface of the housing 25 that faces inward in the Y direction. The devices 4R and 4L each have electrodes 3 on the surface that faces the eyebrows in the X direction. The electrodes 3 are sensor electrodes for detecting electromyogram signals. In the device 2, the electrodes 3 and a voice input device 18 including a microphone are arranged along the V direction, which is its longitudinal direction. The devices 2 and 4 also incorporate circuits and the like that connect the electrodes 3 to the biological signal acquisition unit 20 in the temple portions 10c.
The biological signal acquisition unit 20 is built into the left and right temple portions 10c. As shown in FIG. 4 described later, the biological signal acquisition unit 20 includes an electromyogram signal sensor 201 and an ocular potential signal sensor 202. The electromyogram signal sensor 201 detects electromyogram signals from the electrodes 3 of the device 2 and the electrodes 3 of the device 4, and the ocular potential signal sensor 202 detects ocular potential signals from the electrodes 5. The biological signal acquisition unit 20 acquires these electromyogram signals and ocular potential signals as biological signals and performs predetermined processing on them. The biological signal acquisition unit 20 cooperates with the controller 101 (FIG. 4) of the HMD 1; it may also be implemented as part of the controller 101.
In Embodiment 1, the body parts targeted for electromyogram signal detection, that is, the facial expression muscle regions contacted by the sensor electrodes 3, include two locations in particular: near the cheeks and near the eyebrows. The configuration is not limited to this; it suffices to target at least one facial expression muscle location.
In Embodiment 1, the HMD 1 optionally uses the ocular potential signals from the electrodes 5 as information independent of the electromyogram signals from the electrodes 3. For example, the HMD 1 may determine states such as blinking and gaze movement from the ocular potential signals. As a modified example, the HMD may be configured without the electrodes 5 for ocular potential signals.
[Devices and electrodes]
FIG. 3 shows the configuration of the front face of the HMD 1 of FIG. 2, and in particular shows a configuration example of the electrodes 3 of the devices 2 and 4 for electromyogram signals, the electrodes 5 for ocular potential signals, and the like. Electrodes 5a and 5b are arranged as the electrodes 5 near the nose pads; the electrodes 5 contact the vicinity of the user's nose bridge (FIG. 12).
In the device 2R, two electrodes 3, namely electrodes 3a and 3b, are arranged as one pair p1. In the device 2L, electrodes 3c and 3d are arranged as one pair p2. The housings 25 of the devices 2 (2R, 2L) are arranged so as to tilt obliquely outward to the left and right relative to the Z direction, for example. The electrodes 3 of the device 2 protrude slightly from the inward-facing surface of the housing 25 in the Y direction, making it easy for them to contact the skin near the cheek.
On the rear side of each of the devices 4 (4R, 4L), two electrodes 3 are arranged on the left and right. In the device 4R, electrodes 3e and 3f are arranged as one pair p3; in the device 4L, electrodes 3g and 3h are arranged as one pair p4. The electrodes 3 of the device 4 protrude slightly from the rear surface, making it easy for them to contact the skin near the eyebrows.
The group of electrodes 3 (3a to 3d) of the device 2 on the cheek side is referred to as electrodes 3A, and the group of electrodes 3 (3e to 3h) of the device 4 on the eyebrow side as electrodes 3B. The electrodes 3A are sensor electrodes for detecting electromyogram signals from the facial muscles near the cheeks, and contact the skin near the cheeks. The electrodes 3B are sensor electrodes for detecting electromyogram signals from the facial muscles near the eyebrows, and contact the skin near the eyebrows. The electrodes 3 may have a disk-like or other shape.
Among the plurality of electrodes 3 for detecting electromyogram signals, two electrodes 3 are used as one pair. An electromyogram signal is detected as the potential difference between the paired electrodes 3, and the potential difference between the two electrodes 3 forms one channel of the electromyogram signal. The ocular potential signal is detected for each electrode 5 individually: the potential difference between one electrode 5 and the ground electrode 21 forms one channel of the ocular potential signal.
The ground electrode 21 is a ground electrode for the electrodes 3 and 5 of the biological signal acquisition unit 20, shared by the plurality of sensor electrodes. The ground electrode 21 is arranged on a part of the housing 10, for example on a head-facing surface near the ear hook at the rear of a temple portion 10c, and contacts the vicinity of the ear.
The type, number, positions, shapes, and other aspects of the electrodes and devices for acquiring biological signals mounted on the HMD are not limited to the example of Embodiment 1. It suffices for the HMD to be provided with devices including electrodes such that the electromyogram signal of at least one facial expression muscle can be acquired.
[About load]
The band 26 of FIG. 2 connects the rear ends of the ear hooks of the left and right temple portions 10c of the housing 10. This stably fixes the housing 10 to the user's head and makes the contact between the electrodes 3 and 5 and the skin more reliable.
With the HMD 1 worn on the user's head, the devices 2 (2R, 2L) are arranged so as to contact the vicinity of the cheekbones, and the devices 4 (4R, 4L) the vicinity of the eyebrows. In particular, each device 2 (2R, 2L) is arranged so that the part including its electrodes 3 rests in contact on the skin over the facial expression muscles near the cheekbone. The device 2 (in particular the electrodes 3) thereby serves as a fulcrum supporting part of the load of the HMD 1. In a conventional, typical glasses-type HMD, the load of the entire HMD is supported at the nose and ears, where the housing mainly makes contact. In the HMD 1 of Embodiment 1, the load of the entire HMD 1 is supported not only at the nose and ears but also at the devices 2, so the load is distributed over these multiple points. The shape of the device 2 and related aspects are designed so that the contact between the electrodes 3 and the facial expression muscle regions, and the state of the load on the skin, are favorable.
Thus, the HMD 1 of Embodiment 1 can reduce the weight of the housing 10 and other parts borne by the user's ears and nose compared with a conventional HMD lacking the devices 2 and the like. This improves the user's wearing comfort and prevents discomfort due to the weight of the HMD. Furthermore, while the load is supported by the devices 2, the contact between the electrodes 3 and the facial expression muscle regions becomes better, and detection of the electromyogram signals becomes more stable.
The inner surface of the housing 25 of the device 2 may, for example, be arranged obliquely relative to the X-Z plane or be formed as a curved surface so as to fit the cheek well. In a cross section perpendicular to the V direction, the device 2 may be elliptical or have another shape, and it may be curved along the V direction. The device 2 is not limited to a rigid body and may be composed of an elastic body.
It is more preferable to have a mechanism that can adjust the arrangement, shape, and other aspects of the device 2 to accommodate individual differences among users. As one example, the HMD 1 has a mechanism that can adjust the position of the device 2 relative to the temple portion 10c of the housing 10 by sliding, bending, extension, or contraction in each direction. The device 2 may, for example, extend and retract in the V direction. The tip of the device 2 may also be movable inward or outward in the Y direction, or in other directions, about its connection point to the temple portion 10c as a fulcrum. By incorporating an elastic body into the temple portion 10c or the device 2, the device 2 may be pressed against the face on its inner side. This holds the device 2 stably and keeps the electrodes 3 of the device 2 in constant contact with the facial expression muscle regions. Similarly, for the device 4 as well, a mechanism that can adjust its arrangement, shape, and other aspects is more preferable.
In this example, the voice input device 18 including a microphone is also mounted near the tip of the device 2. Since this microphone is placed near the mouth, sound is easy to pick up. The devices 2 and 4 are not limited to this and may carry other components.
The HMD 1 may also be provided with a mechanism that the user can adjust so that the devices 2 and 4 do not contact the face when biological signals are not used, that is, when the function is in its off state. This may be a mechanism that allows the devices 2 and 4 to be detached from the housing 10, a mechanism that spreads the devices 2 and 4 outward away from the face, or the like.
[Biological signal acquisition unit]
FIG. 4 shows a configuration example of the biological signal acquisition unit 20 and the sensor electrodes. The biological signal acquisition unit 20 includes the electromyogram signal sensor 201 and the ocular potential signal sensor 202. In FIG. 4, the electromyogram signal sensor 201 acquires two channels of electromyogram signals (signals sg1 and sg2) from the electrodes 3A (3a to 3d) of the device 2, and two channels of electromyogram signals (signals sg3 and sg4) from the electrodes 3B (3e to 3h) of the device 4. The ocular potential signal sensor 202 acquires two channels of ocular potential signals (signals sg5 and sg6) from the electrodes 5 (5a, 5b). Through these sensors, the biological signal acquisition unit 20 acquires biological signals from the user's head and face.
The electromyogram signal obtained from a pair of two electrodes 3 is treated as one channel. For example, in the device 2R, the potential difference between the detection signal of the electrode 3a (its difference from the ground electrode 21) and the detection signal of the electrode 3b (its difference from the ground electrode 21) is the signal sg1, one electromyogram signal. This signal sg1 is the electromyogram signal of the facial muscles near the right cheek. Similarly, the signal sg2 from the electrodes 3c and 3d of the device 2L is the electromyogram signal of the facial muscles near the left cheek, the signal sg3 from the electrodes 3e and 3f of the device 4R is that of the facial muscles near the right eyebrow, and the signal sg4 from the electrodes 3g and 3h of the device 4L is that of the facial muscles near the left eyebrow.
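The channel derivation just described, in which each electrode potential is referenced to the common ground electrode 21 and a channel is the difference of the two referenced signals, can be sketched as follows (the sample values are hypothetical, in arbitrary units):

```python
def channel_signal(v_a, v_b, v_ground):
    """One EMG channel from an electrode pair: each electrode potential is
    first referenced to the shared ground electrode, and the channel is the
    difference of the two referenced signals. Algebraically the ground term
    cancels, which is why the differential pair rejects common-mode noise."""
    return [(a - g) - (b - g) for a, b, g in zip(v_a, v_b, v_ground)]

# e.g. signal sg1 from electrodes 3a and 3b of device 2R
sg1 = channel_signal([1.0, 2.0, 1.5], [0.5, 1.0, 1.5], [0.2, 0.2, 0.2])
```

Note that any disturbance appearing equally on both electrodes (relative to ground) subtracts out, leaving only the local muscle activity.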
Although not shown, the electromyogram signal sensor 201 and the ocular potential signal sensor 202 include an amplifier circuit, a noise removal circuit, a rectifier circuit, and the like, and process the detected signals with those circuits. The biological signal acquisition unit 20 acquires the four (4-channel) electromyogram signals (signals sg1 to sg4) from the electromyogram signal sensor 201 and the two (2-channel) ocular potential signals (signals sg5 and sg6) from the ocular potential signal sensor 202, processes these biological signals, and stores them in memory as data. The controller 101 performs predetermined control processing using the data of these biological signals.
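The rectification stage in the analog chain above has a simple digital counterpart. As a minimal sketch, not a description of the patent's actual circuitry, rectification followed by moving-average smoothing yields an activity envelope from a raw signal:

```python
def rectify(samples):
    """Full-wave rectification: take the absolute value of each sample."""
    return [abs(s) for s in samples]

def envelope(samples, window=3):
    """Smooth a rectified signal with a trailing moving average,
    producing a slowly varying activity-level envelope."""
    out = []
    for i in range(len(samples)):
        w = samples[max(0, i - window + 1): i + 1]
        out.append(sum(w) / len(w))
    return out
```

For example, `envelope(rectify(raw))` turns a zero-mean oscillating EMG trace into a non-negative level suitable for thresholding.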
The processing performed by the biological signal acquisition unit 20 may include, as arithmetic processing of the electromyogram signals, processing that computes predetermined feature quantities. Examples of feature quantities of an electromyogram signal include levels such as intensity and amplitude, as well as period, frequency, and the like.
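The patent does not name concrete formulas for these feature quantities; a generic sketch of two common choices (an amplitude-type feature and a frequency-related one) is:

```python
import math

def rms(samples):
    """Root-mean-square level: a standard intensity/amplitude feature
    for a windowed EMG signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossings(samples):
    """Number of sign changes in the window: a simple frequency-related
    feature (more crossings roughly means higher-frequency content)."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
```

Features like these, computed per channel over short windows, are the kind of per-signal quantities the unit could hand to the controller for the output adjustment described earlier.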
[HMD-Functional block]
FIG. 5 shows an example of the functional block configuration of the HMD 1. The HMD 1 includes a processor 101, a memory 102, the camera 12, the distance measuring sensor 13, the sensor unit 14, a display device 103 including the display surface 11, a communication device 104, the voice input device 18 including a microphone, the audio output device 19 including a speaker and the like, an operation input unit 105, a battery 106, the biological signal acquisition unit 20, and so on. These elements are connected to one another through a bus or the like.
The processor 101 comprises a CPU, ROM, RAM, and the like, and constitutes the controller of the HMD 1. By executing processing according to the control program 31 and the application program 32 in the memory 102, the processor 101 realizes the functions of the OS, middleware, applications, and other functions. The memory 102 comprises a non-volatile storage device and the like and stores the various data and information handled by the processor 101 and other elements. The memory 102 also stores, as temporary information, images and detection information acquired by the camera 12 and the like.
The camera 12 acquires images by converting light incident through its lens into electric signals with an image sensor. When a TOF sensor is used, for example, the distance measuring sensor 13 calculates the distance to an object from the time taken for emitted light to hit the object and return. The sensor unit 14 includes, for example, an acceleration sensor 141, a gyro sensor (in other words, an angular velocity sensor) 142, a geomagnetic sensor 143, and a GPS receiver 144, and uses the detection information of these sensors to detect the position, orientation, movement, and other states of the HMD 1. The HMD is not limited to these and may further include an illuminance sensor, a proximity sensor, an atmospheric pressure sensor, and the like.
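The TOF calculation mentioned here follows directly from the round-trip nature of the measurement: light travels to the object and back, so the one-way distance is half the speed of light times the measured time. A minimal sketch:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(round_trip_seconds):
    """TOF ranging: the emitted light covers the sensor-to-object path
    twice, so the distance to the object is half the round-trip length."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

For instance, a round trip of about 6.67 nanoseconds corresponds to an object roughly one meter away.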
The display device 103 includes a display drive circuit and the display surface 11, and displays images on the display surface 11 based on the display information 34. The communication device 104 includes communication processing circuits, antennas, and the like corresponding to various predetermined communication interfaces; examples of such interfaces include mobile networks, Wi-Fi (registered trademark), Bluetooth (registered trademark), and infrared. The communication device 104 performs wireless communication processing with external devices, and also performs short-range communication processing with the operation device.
The voice input device 18 converts input sound from the microphone into audio data. The audio output device 19 outputs sound from the speaker or the like based on audio data. The voice input device may include a voice recognition function, and the audio output device may include a voice synthesis function. The operation input unit 105 is the part that accepts operation inputs to the HMD 1, such as power on/off and volume adjustment, and comprises hardware buttons, touch sensors, and the like. The battery 106 supplies electric power to each part.
The memory 102 stores a control program 31, an application program 32, setting information 33, display information 34, biological signal data 35, control information 36, and the like. The control program 31 is a program for realizing the functions specific to Embodiment 1. The application program 32 is a program that realizes predetermined functions, for example guiding the user or supporting work through AR. The setting information 33 includes system setting information and user setting information related to each function. The display information 34 is information and data for displaying images on the display surface 11. The biological signal data 35 is the biological signal data acquired by the biological signal acquisition unit 20 and data derived by processing it. The control information 36 is management and control information related to the function specific to Embodiment 1 (the output content adjustment function using electromyogram signals); as described later, it includes information and data for tests, evaluation, and learning, and for controlling the output content based on the psychological state inferred from the biological signals.
 プロセッサ101によるコントローラは、処理によって実現される機能ブロックの構成例として、通信制御部101A、表示制御部101B、データ処理部101C、およびデータ取得部101Dを有する。通信制御部101Aは、外部装置との通信の際等に通信デバイス104を用いた通信処理を制御する。表示制御部101Bは、表示情報34を用いて表示デバイス103の表示面11への画像表示を制御する。データ処理部101Cは、生体信号データ35や制御情報36を読み書きしながら、特有の機能に係わる処理を行う。データ取得部101Dは、処理に必要な各種の情報・データを取得する。データ取得部101Dは、生体信号取得部20から生体信号データを取得し、カメラ12、測距センサ13、およびセンサ部14等から各種のデータを取得する。 The controller by the processor 101 has a communication control unit 101A, a display control unit 101B, a data processing unit 101C, and a data acquisition unit 101D as a configuration example of a functional block realized by processing. The communication control unit 101A controls communication processing using the communication device 104 when communicating with an external device or the like. The display control unit 101B controls the image display on the display surface 11 of the display device 103 by using the display information 34. The data processing unit 101C performs processing related to a unique function while reading and writing the biological signal data 35 and the control information 36. The data acquisition unit 101D acquires various information / data necessary for processing. The data acquisition unit 101D acquires biological signal data from the biological signal acquisition unit 20, and acquires various data from the camera 12, the distance measuring sensor 13, the sensor unit 14, and the like.
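The division into functional blocks described above (communication control unit 101A, display control unit 101B, data processing unit 101C, and data acquisition unit 101D) can be illustrated with a minimal sketch. All class, method, and variable names below are assumptions for illustration only; the specification defines no programming interface.

```python
# Illustrative sketch of two of the controller's functional blocks.
# All names are assumptions; the specification defines no API.

class DataAcquisitionUnit:  # plays the role of data acquisition unit 101D
    def __init__(self, biosignal_source, sensors):
        self.biosignal_source = biosignal_source  # e.g. biological signal acquisition unit 20
        self.sensors = sensors                    # e.g. camera 12, distance sensor 13

    def acquire(self):
        # Collect EMG samples and sensor readings into one snapshot.
        return {
            "emg": self.biosignal_source(),
            "sensors": {name: read() for name, read in self.sensors.items()},
        }


class DataProcessingUnit:  # plays the role of data processing unit 101C
    def process(self, snapshot):
        # Derive a simple feature (mean absolute amplitude) from the EMG samples.
        emg = snapshot["emg"]
        return sum(abs(x) for x in emg) / len(emg)


# Usage with stub inputs standing in for real devices:
dau = DataAcquisitionUnit(lambda: [0.1, -0.2, 0.3], {"range": lambda: 1.5})
feature = DataProcessingUnit().process(dau.acquire())
```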
 [表情筋]
 図12は、顔の表情筋の例を模式的に示す。実施の形態1のHMD1に備える電極3によって筋電図信号を取得できる表情筋としては、例えば次のものがある。すなわち、その表情筋は、図12の(a)~(g)のように、前頭筋、皺眉筋、眼輪筋、上唇挙筋、笑筋、小頬骨筋、および大頬骨筋が挙げられる。
[Facial muscle]
FIG. 12 schematically shows examples of facial expression muscles. Facial expression muscles from which an electromyographic signal can be acquired by the electrodes 3 provided in the HMD1 of the first embodiment include the following. That is, as shown in (a) to (g) of FIG. 12, the facial expression muscles include the frontalis muscle, the corrugator supercilii muscle, the orbicularis oculi muscle, the levator labii superioris muscle, the risorius muscle, the zygomaticus minor muscle, and the zygomaticus major muscle.
 これらの表情筋のうち、前頭筋、皺眉筋、眼輪筋、および上唇挙筋については、それらの少なくとも1つから信号を検出するための電極3が配置される。実施の形態1の例では、両眼部10aの付近において、両眼部10aの上側に配置されたデバイス4に電極3Bが配置されている。この電極3Bによって、それらの表情筋、特に皺眉筋から、効率的に筋電図信号の検出が可能である。 Among these facial expression muscles, for the frontalis muscle, the corrugator supercilii muscle, the orbicularis oculi muscle, and the levator labii superioris muscle, electrodes 3 are arranged for detecting a signal from at least one of them. In the example of the first embodiment, the electrode 3B is arranged on the device 4 disposed above the binocular portion 10a, in the vicinity of the binocular portion 10a. With this electrode 3B, the electromyographic signal can be detected efficiently from those facial expression muscles, particularly from the corrugator supercilii muscle.
 笑筋、小頬骨筋、および大頬骨筋については、それらの少なくとも1つから信号を検出するための電極3が配置される。実施の形態1の例では、テンプル部10cから頬に沿って伸ばすように配置されたデバイス2に電極3Aが配置されている。この電極3Aによって、それらの表情筋、特に小頬骨筋から大頬骨筋にかけた部分から、効率的に筋電図信号の検出が可能である。 For the risorius muscle, the zygomaticus minor muscle, and the zygomaticus major muscle, electrodes 3 are arranged for detecting a signal from at least one of them. In the example of the first embodiment, the electrode 3A is arranged on the device 2 disposed so as to extend from the temple portion 10c along the cheek. With this electrode 3A, the electromyographic signal can be detected efficiently from those facial expression muscles, particularly from the region extending from the zygomaticus minor muscle to the zygomaticus major muscle.
 鼻筋や眼輪筋の付近には、眼電位信号を取得するための電極5が配置される。また、実施の形態1のHMD1は、装着者の眼よりも下に位置する頬骨付近の表情筋に対して配置される電極3Aによって、前述のように、HMD1の一部の荷重が支えられる。この対象となる表情筋は、例えば、眼輪筋、上唇挙筋、笑筋、小頬骨筋または大頬骨筋である。 Electrodes 5 for acquiring electrooculogram signals are arranged near the nasal muscles and the orbicularis oculi muscles. Further, in the HMD1 of the first embodiment, as described above, a part of the load of the HMD1 is supported by the electrodes 3A arranged for the facial muscles near the cheekbones located below the wearer's eyes. The target facial muscles are, for example, the orbicularis oculi muscle, the levator labii superior muscle, the risorius muscle, the zygomaticus minor muscle, or the zygomaticus major muscle.
 [心理状態]
 表情筋の筋電図信号からは心理状態が推測できる。表情筋の筋電図信号から心理状態を推測・評価する方法については、公知技術を適用してもよい。公知技術に基づいて、筋電図信号から推測できる心理状態としては、前述のように、快・不快、リラックス、集中、ストレス、フラストレーション、疲労感、覚醒度、怒り等の状態が挙げられる。また、それらの各種の心理・感情については、度合いとしても推測可能である。例えば、快適さの度合いを複数の段階で数値化してもよい。
[Psychological state]
The psychological state can be inferred from the electromyographic signals of the facial expression muscles. A known technique may be applied as a method of estimating and evaluating the psychological state from the electromyographic signals of the facial expression muscles. As described above, psychological states that can be inferred from the EMG signal based on known techniques include comfort/discomfort, relaxation, concentration, stress, frustration, fatigue, arousal, anger, and the like. In addition, the degree of each of these psychological and emotional states can also be estimated. For example, the degree of comfort may be quantified in multiple stages.
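As one hedged illustration of quantifying the degree of comfort in multiple stages, a feature derived from the EMG signal could be binned into discrete levels. The normalization of the feature to [0, 1] and the threshold values are assumptions for illustration only, not part of the specification.

```python
def comfort_level(feature, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Map a normalized comfort feature in [0, 1], derived from the EMG
    signal, to one of five discrete stages (0 = very uncomfortable,
    4 = very comfortable).  The thresholds are illustrative assumptions."""
    # Count how many thresholds the feature meets or exceeds.
    return sum(1 for t in thresholds if feature >= t)
```

A finer or coarser quantization is obtained simply by changing the threshold tuple.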
 実施の形態1では、筋電図信号から推測される心理状態は、出力内容であるGUI画像についてのユーザによる評価の状態と対応付けられ、簡単に言えば、ユーザが出力内容に対し感じている状態に相当する。 In the first embodiment, the psychological state estimated from the EMG signal is associated with the state of the user's evaluation of the GUI image that is the output content; simply put, it corresponds to the state that the user feels toward the output content.
 HMD1は、表示面11に画像を表示するとともに、デバイス2等を通じて筋電図信号を取得し、筋電図信号で推測される心理状態に応じて、出力内容であるGUI画像等を調整する。これにより、実施の形態1のHMD1は、ユーザの身体的動きに影響が出る前に、例えば大きなストレスとして現れる前に、より好適な出力内容となるように調整する。これにより、より好適なHMD使用環境をユーザに提供できる。例えば、HMD1は、誘導や作業支援のアプリケーションによる出力としてGUI画像を表示する場合に、筋電図信号からユーザの快・不快等の度合いをみながら、GUI画像を調整することができる。HMD1は、例えば作業支援の指示や説明のGUI画像の種類、量、大きさ、細かさ、速さ等を調整することができる。 The HMD1 displays an image on the display surface 11, acquires the electromyographic signal through the device 2 and the like, and adjusts the GUI image or the like that is the output content according to the psychological state estimated from the electromyographic signal. Thereby, the HMD1 of the first embodiment adjusts the output content to be more suitable before the user's physical movement is affected, for example, before it appears as large stress. This makes it possible to provide the user with a more suitable HMD usage environment. For example, when the HMD1 displays a GUI image as the output of a guidance or work support application, it can adjust the GUI image while monitoring the degree of the user's comfort, discomfort, and the like from the EMG signal. The HMD1 can adjust, for example, the type, amount, size, fineness, speed, and the like of GUI images for work support instructions and explanations.
 [テスト出力および評価入力]
 実施の形態1のHMD1は、ユーザの心理状態を推測・評価するために、事前にテストとしての出力内容を出力する。HMD1は、このテスト出力時に、ユーザの筋電図信号と、ユーザによる主観評価値とを取得する。HMD1は、取得した情報・データに基づいて、出力内容と心理状態と筋電図信号との対応関係についての学習を行う。HMD1は、学習に基づいて、出力内容の調整の制御のための制御情報(図5での制御情報36)を作成・更新する。このテスト出力等の処理は、ユーザ個人差に適合するためのキャリブレーションにも相当する。HMD1は、このテスト、評価、および学習等の処理に基づいて、制御情報として、ユーザ個人毎の辞書を作成し保持する。
[Test output and evaluation input]
The HMD1 of the first embodiment outputs the output content as a test in advance in order to estimate and evaluate the psychological state of the user. At the time of this test output, the HMD 1 acquires the user's EMG signal and the user's subjective evaluation value. Based on the acquired information / data, the HMD1 learns about the correspondence between the output content, the psychological state, and the EMG signal. Based on learning, HMD1 creates and updates control information (control information 36 in FIG. 5) for controlling adjustment of output contents. This process such as test output also corresponds to calibration for adapting to individual user differences. The HMD1 creates and holds a dictionary for each user as control information based on the processing such as this test, evaluation, and learning.
 図6は、HMD1によるテスト出力等の処理のフローを示し、ステップS601~S605を有する。図6のフローは、テスト単位毎に同様に繰り返しである。ユーザは、HMD1を頭部に装着する。HMD1は、生体信号に係わる機能のオン/オフの状態を、ユーザによって設定可能である。HMD1は、機能がオン状態の場合には以下の処理を行う。 FIG. 6 shows a flow of processing such as test output by HMD1 and has steps S601 to S605. The flow of FIG. 6 is similarly repeated for each test unit. The user wears the HMD1 on the head. In the HMD1, the on / off state of the function related to the biological signal can be set by the user. The HMD1 performs the following processing when the function is on.
 最初、ステップS601で、HMD1は、ユーザの心理状態を初期化するための出力を行う。この初期化とは、心理状態をなるべくニュートラルにすること、リラックスさせることである。この初期化の出力は、例えば、ユーザがリラックスできるような映像や音楽の再生表示や、リラックスさせる旨のメッセージの出力等が挙げられる。この出力は、HMD1によって行ってもよいし、HMD1とは別の外部機器(ディスプレイ等)を用いてもよい。 First, in step S601, the HMD1 performs output for initializing the psychological state of the user. This initialization means making the psychological state as neutral and relaxed as possible. Examples of this initialization output include playback and display of video or music that lets the user relax, output of a message prompting the user to relax, and the like. This output may be performed by the HMD1, or an external device (such as a display) separate from the HMD1 may be used.
 次に、ステップS602で、HMD1は、ユーザに心理的な状態を体験させる、テストのための出力を行う。HMD1は、このテスト出力の最中に、デバイス2等を通じて、ユーザの筋電図信号を取得する。このテスト出力は、快・不快等の様々な心理・感情を生起させるための様々な映像や音楽の再生表示や、HMD1を使用した作業についてのGUI画像を通じた作業指示等が挙げられる。HMD1は、時間毎に映像や音楽の種類や作業の種類を変化させながら、複数回のテスト単位の繰り返しとして、このテスト出力を行う。HMD1は、作業指示の際に、表示面11に表示するGUIを含む画像を変化させる。HMD1は、テスト毎の出力内容の情報とともに、取得した筋電図信号のデータを記憶する。 Next, in step S602, the HMD1 performs output for a test that lets the user experience psychological states. During this test output, the HMD1 acquires the user's electromyographic signal through the device 2 and the like. Examples of this test output include playback and display of various videos and music for evoking various psychological and emotional states such as comfort and discomfort, and work instructions through GUI images for work using the HMD1. The HMD1 performs this test output as a repetition of a plurality of test units while changing the type of video, music, and work over time. The HMD1 changes the image including the GUI displayed on the display surface 11 at the time of the work instruction. The HMD1 stores the acquired EMG signal data together with the output content information for each test.
 ステップS603で、HMD1は、ユーザに、上記テストに対する、快・不快等の心理・感情の度合いを主観評価として入力してもらう。HMD1は、例えば、表示面11に主観評価入力のためのGUI画像を表示し、ユーザに各回のテスト単位毎の評価値を選択入力してもらう。例えば、あるテスト単位における作業指示のGUI画像について、快・不快等の度合いが選択入力できるようにする。この主観評価は、テスト時の出力に対するユーザの心理状態に対応する。 In step S603, the HMD1 asks the user to input the degree of psychology / emotion such as comfort / discomfort for the above test as a subjective evaluation. For example, the HMD 1 displays a GUI image for subjective evaluation input on the display surface 11, and asks the user to select and input an evaluation value for each test unit. For example, the degree of comfort / discomfort can be selectively input for the GUI image of the work instruction in a certain test unit. This subjective evaluation corresponds to the user's psychological state with respect to the output during the test.
 ステップS604で、HMD1は、上記テスト出力内容情報と筋電図信号と主観評価入力情報とを関連付けてデータとして記憶する。 In step S604, the HMD1 stores the test output content information, the EMG signal, and the subjective evaluation input information in association with each other as data.
 ステップS605で、HMD1は、上記テストの処理によって蓄積したデータを用いて、出力内容(例えばGUI画像)と心理状態と筋電図信号との関係について学習し、学習結果を記録する。この学習は、ニューラルネットによる機械学習等を適用してもよい。そして、HMD1は、学習結果に基づいて、筋電図信号と出力内容との対応関係を制御情報(図5での制御情報36)として設定・カスタマイズする。この制御情報は、どのような筋電図信号の状態の場合に、どのような出力内容にするかを規定・制御するための情報である。この制御情報は、例えばその対応関係を規定するテーブル等でもよい。この制御情報は、筋電図信号と出力内容との対応関係とすればよく、それらの間に心理状態を表す値を介在させることは省略できる。テストに応じて、この制御情報は更新される。 In step S605, the HMD1 learns about the relationship between the output content (for example, GUI image), the psychological state, and the EMG signal using the data accumulated by the processing of the above test, and records the learning result. For this learning, machine learning using a neural network or the like may be applied. Then, the HMD 1 sets and customizes the correspondence relationship between the EMG signal and the output content as control information (control information 36 in FIG. 5) based on the learning result. This control information is information for defining and controlling what kind of output content is to be obtained in what kind of EMG signal state. This control information may be, for example, a table or the like that defines the correspondence. This control information may be a correspondence between the EMG signal and the output content, and it is possible to omit interposing a value representing a psychological state between them. This control information is updated according to the test.
 その後、HMD1は、機能のオン状態では、上記制御情報に基づいて、生体信号の状態に応じた出力内容の調整の制御を実行することができる。 After that, when the function is on, the HMD1 can control the adjustment of the output content according to the state of the biological signal based on the above control information.
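The correspondence between EMG signal states and preferred output content (control information 36) built in steps S604 and S605 can be sketched minimally as a table derived from the test records. The discretization of the EMG feature into states, the rating scheme, and all names are illustrative assumptions; as the text notes, machine learning such as a neural network may be used instead of a table.

```python
# Sketch: from (EMG feature, shown content, subjective rating) records
# gathered during the test units, keep the best-rated content for each
# discretized EMG state.  Binning and ratings are illustrative assumptions.
from collections import defaultdict

def build_control_table(records, bins=4):
    """records: iterable of (emg_feature in [0, 1], content_id, rating)."""
    scores = defaultdict(lambda: defaultdict(list))
    for feature, content, rating in records:
        state = min(int(feature * bins), bins - 1)  # discretize the EMG state
        scores[state][content].append(rating)
    # For each state, pick the content with the highest mean rating.
    return {
        state: max(per_content,
                   key=lambda c: sum(per_content[c]) / len(per_content[c]))
        for state, per_content in scores.items()
    }

# Usage: in this toy data, low-comfort states rate brief explanations
# higher, high-comfort states rate detailed explanations higher.
records = [
    (0.10, "detailed", 2), (0.15, "brief", 5),
    (0.90, "detailed", 5), (0.85, "brief", 3),
]
table = build_control_table(records)
```

Re-running the build as new test or in-use evaluations accumulate corresponds to updating the per-user dictionary.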
 [テスト出力例]
 図7は、上記テスト出力の一例を示し、HMD1の表示面11にテスト用のGUI画像が表示される例を示す。本例のテストは、ユーザに対する作業課題を、GUI画像による作業の指示や説明として表示するものである。図7の(A),(B),(C)の3つの例は、その際の説明の量を変えたGUI画像の例である。(A),(B),(C)の画像の順に、説明の量が多い。例えば、(A)の画像701は、表示面11の大きさに対応した所定の大きさの領域内において、右側には、仮想的なオブジェクトの画像710(四角や円で示す)が表示されている。このオブジェクト710は、作業のためにユーザが操作可能である。操作は、HMD1の本体の機能に応じたものであり、公知技術として、例えば操作器による操作、手のジェスチャの認識、音声認識、視線検出による操作、等が挙げられる。画像701の領域内において、左側には、作業説明の画像720が表示されている。なお、透過型表示方式の場合、ユーザから見ると、表示面11では、実物上に図7のような画像(画像710や画像720)が重畳表示されて見える。
[Test output example]
FIG. 7 shows an example of the test output, in which a GUI image for a test is displayed on the display surface 11 of the HMD1. In the test of this example, a work task for the user is displayed as work instructions or explanations by a GUI image. The three examples (A), (B), and (C) in FIG. 7 are examples of GUI images in which the amount of explanation is varied, the amount of explanation decreasing in the order of (A), (B), (C). For example, in the image 701 of (A), an image 710 of a virtual object (indicated by a square and a circle) is displayed on the right side within an area of a predetermined size corresponding to the size of the display surface 11. The object 710 can be operated by the user for the work. The operation depends on the functions of the main body of the HMD1, and known techniques include, for example, operation with an operating device, recognition of hand gestures, voice recognition, operation by line-of-sight detection, and the like. In the area of the image 701, an image 720 of the work explanation is displayed on the left side. In the case of the transparent display method, as seen by the user, an image such as that of FIG. 7 (the image 710 and the image 720) appears superimposed on the real object on the display surface 11.
 作業説明等の画像720は、(A)の画像701では説明量が相対的に多い画像720A、(B)の画像702では説明量が中程度の画像720B、(C)の画像703では説明量が少ない画像720Cとされている。ユーザは、これらのテストの各画像に対し、画像720による説明の量等が自分にとって適切であったか、快適であったか、わかりやすかったか等について、主観評価を入力する(ステップS603)。出力内容としてどのような説明量等の画像とすれば好適であるかは、ユーザ個人に応じて異なる。説明量が多い画像の方が好適と感じる場合もあるし、説明量が少ない画像の方が好適と感じる場合もある。あるアプリケーションの作業に対するユーザの熟練度等にも応じて、そのユーザが必要とする好適な指示・説明の情報量等が変わる。 The image 720 of the work explanation and the like is image 720A with a relatively large amount of explanation in the image 701 of (A), image 720B with a medium amount of explanation in the image 702 of (B), and image 720C with a small amount of explanation in the image 703 of (C). For each image of these tests, the user inputs a subjective evaluation as to whether the amount of explanation by the image 720 and the like was appropriate, comfortable, easy to understand, and so on (step S603). What amount of explanation and other image attributes are suitable as the output content differs for each individual user. Some users feel that an image with a large amount of explanation is more suitable, while others feel that an image with a small amount of explanation is more suitable. The amount of information in the instructions and explanations suitable for a user also changes depending on, for example, the user's skill level for the work of a certain application.
 [評価入力例]
 図8は、主観評価入力(ステップS603)の際の出力の一例を示す。図8の(A)の画像801は、HMD1が表示面11に主観評価入力用のGUI画像を表示する例である。画像801は、「説明の量はどうでしたか?」といった主観評価入力を促すメッセージの画像と、主観評価値を入力するための目盛り等のGUI部品による画像810とを有する。画像810の目盛りでは、説明量の感じ方に関して、不足、適切、過多といったように、適切さや満足の度合いが表されている。ユーザは、操作によってその目盛り上で選択した値を主観評価値として入力できる。本例では、目盛りの画像810は、「適切」を中心の値として、左右に不適切性が大きくなる値として、「不足」と「過多」が配置されている。評価入力形式はこれに限らずに可能である。
[Evaluation input example]
FIG. 8 shows an example of the output at the time of the subjective evaluation input (step S603). The image 801 of (A) of FIG. 8 is an example in which the HMD1 displays a GUI image for subjective evaluation input on the display surface 11. The image 801 includes an image of a message prompting subjective evaluation input, such as "How was the amount of explanation?", and an image 810 of a GUI component such as a scale for inputting the subjective evaluation value. The scale of the image 810 expresses the degree of appropriateness and satisfaction with the perceived amount of explanation, such as insufficient, appropriate, and excessive. The user can input the value selected on the scale by an operation as the subjective evaluation value. In this example, in the image 810 of the scale, "appropriate" is placed as the central value, and "insufficient" and "excessive" are placed to the left and right as values of increasing inappropriateness. The evaluation input format is not limited to this.
 図8の(B)の画像802は、評価入力の別の表示例を示す。本例は、評価入力用の画像820は、(A)と同様に、説明量の感じ方に関する評価値を入力するための目盛りであるが、バーの操作によって「快適」と「不快」との間で、快・不快の度合いを評価値として入力できる形式である。 The image 802 of (B) of FIG. 8 shows another display example of the evaluation input. In this example, the image 820 for evaluation input is, as in (A), a scale for inputting an evaluation value regarding the perceived amount of explanation, but in a format in which the degree of comfort or discomfort between "comfortable" and "uncomfortable" can be input as the evaluation value by operating a bar.
 HMD1は、上記のようなテストおよび評価入力の学習に基づいて、図6のステップS604,S605で、ユーザ毎およびアプリケーション毎の出力内容調整のための制御情報を設定する。HMD1は、ユーザの評価値を、制御情報のユーザ毎の辞書に反映する。HMD1は、出力内容とユーザの心理状態(対応する生体信号の状態)との対応関係の分類等に応じて、出力内容調整の候補となるGUI画像等を決めておき、制御情報に設定しておく。その後、HMD1は、ユーザがHMD1のアプリケーションを実際に利用する時に、制御情報に基づいて、GUI画像等の出力内容を適宜に調整する。ユーザ毎の辞書の更新に応じて、出力内容調整機能の感度がしだいに高くなり、より好適な調整が可能となる。 Based on the learning from the test and evaluation input as described above, the HMD1 sets the control information for adjusting the output content for each user and each application in steps S604 and S605 of FIG. 6. The HMD1 reflects the user's evaluation values in the per-user dictionary of the control information. The HMD1 determines in advance GUI images and the like that are candidates for output content adjustment according to the classification of the correspondence between the output content and the user's psychological state (the corresponding biological signal state), and sets them in the control information. After that, when the user actually uses an application of the HMD1, the HMD1 appropriately adjusts the output content such as the GUI image based on the control information. As the per-user dictionary is updated, the sensitivity of the output content adjustment function gradually increases, enabling more suitable adjustment.
 なお、実際のアプリケーション利用時のGUI画像等の出力と、図7のようなテスト出力時の出力とは、基本的に同じような形式のものとしてもよいし、あるいは、テスト出力は実際の出力に比べて簡易化したものとしてもよい。 The output of the GUI image and the like during actual application use and the output at the time of the test output as shown in FIG. 7 may basically have the same format, or the test output may be simplified compared with the actual output.
 [変形例-評価入力]
 図8の(C)の画像803は、実施の形態1の変形例における、主観評価入力に関する別の表示例を示す。HMD1は、実際のアプリケーション利用時のユーザ処理として表示面11にGUI画像を表示している最中に、その時の出力内容に対するユーザによる主観評価値の入力ができるように、評価入力用の画像830を表示する。なお、本例では、テスト出力の画像と、実際のアプリケーション利用時の画像とが同じようなものであるとする。HMD1は、アプリケーション利用時のユーザ処理中に、表示面11に例えば図7の(B)の画像702と同様であるような画像803を表示する。この画像803は、オブジェクトの画像710や説明の画像720(702B)を有する。さらに、その際、HMD1は、それらの画像とともに、表示面11の領域の一部に、評価入力用の画像830を表示する。HMD1は、主となる画像に対し邪魔にならないように、画像830を領域の端等に表示してもよい。ユーザは、その画像830に対し、その時の出力内容(例えば画像702B)に対する主観評価値をすぐに入力して、評価に反映することができる。
[Transformation example-evaluation input]
The image 803 of (C) of FIG. 8 shows another display example regarding the subjective evaluation input in a modified example of the first embodiment. While displaying a GUI image on the display surface 11 as user processing during actual application use, the HMD1 displays an image 830 for evaluation input so that the user can input a subjective evaluation value for the output content at that time. In this example, it is assumed that the image of the test output and the image during actual application use are similar. During user processing when using the application, the HMD1 displays on the display surface 11, for example, an image 803 similar to the image 702 of (B) of FIG. 7. This image 803 includes the image 710 of the object and the image 720 (702B) of the explanation. Furthermore, at that time, the HMD1 displays the image 830 for evaluation input in a part of the area of the display surface 11 together with those images. The HMD1 may display the image 830 at the edge of the area or the like so as not to interfere with the main images. The user can immediately input a subjective evaluation value for the output content at that time (for example, the image 702B) on the image 830 and have it reflected in the evaluation.
 この変形例によれば、HMD1は、出力内容調整制御に係わるデータの収集に関して、このような実利用時の評価入力を、事前のテストの補完または代替として利用することができる。なお、図8の(C)のように実利用時に評価入力用の画像830が表示される場合、ユーザは、その画像830を無視して評価入力をしなくてもよく、好きな時に評価入力をすればよい。評価入力用の画像830は、所定のユーザ操作に応じて表示させるようにしてもよいし、一定時間毎に表示させてもよい。 According to this modified example, regarding the collection of data related to the output content adjustment control, the HMD1 can use such evaluation input during actual use as a complement to or substitute for the preliminary test. When the image 830 for evaluation input is displayed during actual use as in (C) of FIG. 8, the user may ignore the image 830 without entering an evaluation, and may enter an evaluation whenever he or she likes. The image 830 for evaluation input may be displayed in response to a predetermined user operation, or may be displayed at regular time intervals.
 他の変形例としては、図7のようなテスト出力の最中において、テスト単位毎の評価対象画像(例えば画像720)の表示とともに、図8の(C)のように評価入力用の画像830を表示してもよい。なお、図7と図8の(A)のように、評価対象と評価入力とで画面および時間を分ける場合では、評価対象がより明確となる。図8の(C)のように、画面内に評価対象と評価入力とを混合する場合では、評価の効率化が可能である。 As another modified example, during the test output as shown in FIG. 7, the image 830 for evaluation input may be displayed as in (C) of FIG. 8 together with the display of the evaluation target image (for example, the image 720) for each test unit. When the screen and the time are divided between the evaluation target and the evaluation input as in FIG. 7 and (A) of FIG. 8, the evaluation target becomes clearer. When the evaluation target and the evaluation input are mixed within the screen as in (C) of FIG. 8, the evaluation can be made more efficient.
 [出力内容調整]
 ユーザの心理状態(それに対応する筋電図信号の状態)に応じた出力内容の調整は、GUI画像の情報量の変更に限らず、以下のような様々な要素が可能である。出力内容調整の対象として、図7の画像720の説明文の量や細かさに限らず、以下も可能である。GUI画像の種類、大きさ、色、明るさ、文字種類(フォント)等の変更も可能である。GUI画像の種類としては、各種のGUI部品(Widgetやコントロールとも呼ばれる)も挙げられる。GUI部品の例は、ウィンドウ、テキストボックス、ダイアログボックス、ボタン、アイコン、バー、リスト、メニュー等が挙げられる。例えば、同じ説明文である場合でも、どのようなGUI部品、色、文字種類等を用いて提示するかによって、心理に与える影響が異なる。説明画像の大きさ(面積等)や表示位置等によっても、与える影響が異なる。例えば、表示面11の領域に占める説明画像の割合を増減する調整としてもよい。説明文を一定の位置に置くか、オブジェクトの近くに置くか等によっても、与える影響が異なる。文字で説明するか、絵や図形やアイコンで説明するか等によっても、与える影響が異なる。
[Output content adjustment]
The adjustment of the output content according to the psychological state of the user (the state of the corresponding EMG signal) is not limited to the change of the information amount of the GUI image, and various elements such as the following are possible. The target of the output content adjustment is not limited to the amount and fineness of the explanatory text of the image 720 of FIG. 7, and the following is also possible. It is also possible to change the type, size, color, brightness, character type (font), etc. of the GUI image. Examples of the type of GUI image include various GUI parts (also called Widgets and controls). Examples of GUI components include windows, text boxes, dialog boxes, buttons, icons, bars, lists, menus, and the like. For example, even if the same explanation is used, the effect on psychology differs depending on what GUI parts, colors, character types, etc. are used for presentation. The effect is different depending on the size (area, etc.) and display position of the explanatory image. For example, adjustment may be made to increase or decrease the ratio of the explanatory image to the area of the display surface 11. The effect depends on whether the description is placed in a fixed position or near the object. The effect will differ depending on whether the explanation is in text or in pictures, figures, or icons.
 HMD1の出力は、画像表示に限らず、HMD1の機能に応じて音声や振動や発光の出力等も可能である。振動や発光を可能とする場合、HMD1は、筐体10等に、振動器や発光器を備える。出力内容の調整の制御に関しては、それらの各種の出力およびそれらの組合せに関する制御が可能である。音声や発光や振動の制御の場合、それらの有無、種類、量、タイミング等の調整が可能である。画像と一緒にまたは代わりに音声や発光や振動を用いることでも、心理に与える影響が異なる。説明を音声で読み上げること、BGMを流すこと、音声やBGMの種類の変更等でも、与える影響が異なる。出力する映像やBGM等の視聴コンテンツの種類でも、与える影響が異なる。例えば、ユーザが退屈に感じていると推測される場合には、視聴コンテンツの種類を変更してもよい。 The output of HMD1 is not limited to image display, but it is also possible to output voice, vibration, light emission, etc. according to the function of HMD1. When vibration and light emission are possible, the HMD 1 includes a vibrator and a light emitting device in the housing 10 and the like. Regarding the control of the adjustment of the output contents, it is possible to control the various outputs thereof and their combinations. In the case of voice, light emission, and vibration control, it is possible to adjust the presence / absence, type, amount, timing, etc. of them. The use of sound, light emission, and vibration with or instead of images also has different psychological effects. The effect is different depending on whether the explanation is read aloud, BGM is played, or the type of voice or BGM is changed. The effect is different depending on the type of viewing content such as output video or BGM. For example, if the user is presumed to be bored, the type of viewing content may be changed.
 出力内容の調整は、出力の速さの調整としてもよい。例えば、作業説明用の複数の文章や複数の画像があってそれらを自動送りする場合の速度を調整可能である。 The output content may be adjusted as the output speed. For example, if there are a plurality of sentences or a plurality of images for work explanation and they are automatically sent, the speed can be adjusted.
 また、HMD1がユーザに提供するアプリケーションとして、難易度が変更できるゲームの場合には、調整の対象として、そのゲームの難易度とすることもできる。HMD1は、ゲーム中における時々の筋電図信号から、ユーザにとって適度な難易度になるようにリアルタイムで調整を行う。あるいは、ストーリーが分岐可能であり、ユーザにスリルを感じさせることが目的であるコンテンツの場合には、調整の対象としては、その分岐によるストーリーとすることもできる。HMD1は、ストーリー進行上、筋電図信号から、ユーザが適度なスリルを感じるように、分岐を選択してストーリーを変更する。 Further, in the case of a game whose difficulty level can be changed as an application provided by HMD1 to a user, the difficulty level of the game can be set as an adjustment target. The HMD1 adjusts in real time from the occasional EMG signals during the game so that the difficulty level is appropriate for the user. Alternatively, in the case of content in which the story can be branched and the purpose is to make the user feel thrill, the adjustment target may be the story based on the branch. The HMD1 changes the story by selecting a branch from the EMG signal so that the user feels an appropriate thrill in the progress of the story.
 [処理フロー]
 図9は、実施の形態1のHMD1における、ユーザによる実際のアプリケーション利用時のユーザ処理時に、出力内容の調整をリアルタイムで実行する際の処理フローを示す。この処理は、図6のようなテストおよび評価に基づいて、筋電図信号と出力内容との対応関係が制御情報として一旦設定された後の処理である。図9は、ステップS901~S906を有する。例として、HMD1に備える作業支援等のアプリケーションが、表示面11へのGUI画像の表示によって、ユーザに対し作業に関する指示や説明を与える場合の処理を説明する。
[Processing flow]
FIG. 9 shows a processing flow in the HMD1 of the first embodiment for adjusting the output content in real time during user processing when the user actually uses an application. This processing is performed after the correspondence between the EMG signal and the output content has once been set as control information based on the test and evaluation as shown in FIG. 6. FIG. 9 has steps S901 to S906. As an example, processing will be described in which an application such as work support provided in the HMD1 gives the user instructions and explanations regarding work by displaying a GUI image on the display surface 11.
 ステップS901で、HMD1は、ユーザが利用するアプリケーションの処理を実行して、例えば作業指示・説明等を決定し、出力内容調整に関する前述の制御情報、もしくは初期設定情報に基づいて、まずは最初の出力内容を決定して出力する。例えば、HMD1は、図7の(B)と同様なGUI画像を表示する。 In step S901, the HMD1 executes the processing of the application used by the user, determines, for example, a work instruction or explanation, and first determines and outputs the initial output content based on the above-described control information for output content adjustment or on initial setting information. For example, the HMD1 displays a GUI image similar to that of (B) of FIG. 7.
 ステップS902で、HMD1は、上記出力とともに、デバイス2,4および生体信号取得部20を通じて筋電図信号を時系列データとして取得する。なお、HMD1は、出力前から筋電図信号を取得していてもよい。 In step S902, the HMD1 acquires the electromyographic signal as time-series data through the devices 2 and 4 and the biological signal acquisition unit 20 together with the above output. The HMD1 may have acquired the EMG signal before the output.
 ステップS903で、HMD1は、取得したユーザの筋電図信号に対応付けられる心理状態から、現在の出力内容が適切であるかどうか、例えばGUI画像による説明量が多いか少ないか等を、制御情報に基づいて推測する。HMD1は、例えば、その時の筋電図信号の特徴量が、不快やストレスの高さ等を表していた場合には、説明量が多すぎるまたは少なすぎるとユーザが感じていると推測でき、快やストレスの低さ等を表していた場合には、説明量が適切であるとユーザが感じていると推測できる。 In step S903, the HMD1 estimates, based on the control information, from the psychological state associated with the acquired EMG signal of the user, whether the current output content is appropriate, for example, whether the amount of explanation by the GUI image is too large or too small. For example, when the feature amount of the EMG signal at that time indicates discomfort, high stress, or the like, the HMD1 can infer that the user feels that the amount of explanation is too large or too small; when it indicates comfort, low stress, or the like, the HMD1 can infer that the user feels that the amount of explanation is appropriate.
 ステップS904で、HMD1は、上記推測に基づいて、ユーザの心理状態として、快・不快、ストレス、満足等の状態が、制御情報に基づいて、所定の許容範囲内であるかを判断する。許容範囲は、閾値等で規定できる。例えば、ある筋電図信号の特徴量の状態は、快適さの度合いを表しており、その快適さの度合いが閾値以上である場合には、ユーザがその時の出力内容を快適や満足に感じていると推測できる。あるいは、ある筋電図信号の特徴量の状態は、不快さの度合いを表しており、その不快さの度合いが閾値以上である場合には、ユーザがその時の出力内容を不快や不満に感じていると推測できる。 In step S904, based on the above estimation, the HMD1 determines, based on the control information, whether the state of the user's psychology, such as comfort/discomfort, stress, and satisfaction, is within a predetermined permissible range. The permissible range can be defined by a threshold value or the like. For example, the state of a feature amount of a certain EMG signal represents the degree of comfort, and when the degree of comfort is equal to or higher than the threshold value, it can be inferred that the user feels that the output content at that time is comfortable or satisfactory. Alternatively, the state of a feature amount of a certain EMG signal represents the degree of discomfort, and when the degree of discomfort is equal to or higher than the threshold value, it can be inferred that the user feels that the output content at that time is uncomfortable or unsatisfactory.
 ステップS904で許容範囲内の場合(Y)にはステップS905を飛ばしてステップS906へ進み、許容範囲外の場合(N)にはステップS905へ進む。 If it is within the permissible range (Y) in step S904, step S905 is skipped and the process proceeds to step S906, and if it is out of the permissible range (N), the process proceeds to step S905.
 ステップS905では、HMD1は、制御情報に基づいて出力内容の調整を行う。HMD1は、例えば、快適さが低く説明量が多すぎるとユーザが感じていると推測した場合には、説明量を低減するように、出力内容の調整を決定する。HMD1は、出力内容の調整として、例えば図7の(B)のような画像720Bを、より簡潔な説明となるように、図7の(C)のような画像720Cに変更する。逆に、HMD1は、快適さが低く説明量が少ないとユーザが感じていると推測した場合には、説明量を増加するように、出力内容の調整を決定する。HMD1は、例えば図7の(B)のような画像720Bを、より多くの情報を含む説明となるように、図7の(A)のような画像720Aに変更する。 In step S905, the HMD1 adjusts the output content based on the control information. The HMD1 determines, for example, to adjust the output content so as to reduce the amount of explanation when it is estimated that the user feels that the comfort is low and the amount of explanation is too large. As an adjustment of the output content, the HMD 1 changes, for example, the image 720B as shown in FIG. 7 (B) to the image 720C as shown in FIG. 7 (C) so as to provide a more concise explanation. On the contrary, when it is estimated that the user feels that the comfort is low and the explanation amount is small, the HMD1 determines the adjustment of the output content so as to increase the explanation amount. HMD1 changes, for example, the image 720B as shown in FIG. 7 (B) to the image 720A as shown in FIG. 7 (A) so as to include more information.
When the HMD 1 infers from the EMG signals that the user feels the current output content to be satisfactory within the allowable range, it decides to keep the output content (for example, the image 720) unchanged. This corresponds to proceeding from step S904 to step S906. In step S906, the HMD 1 checks whether the user processing is to end or to continue; if it ends (Y), this flow ends, and if it continues (N), the process returns to step S901 and the same control is repeated at each time interval.
[Correspondence]
As a supplement, an example of the correspondence between EMG signals, psychological states, and output content will be described, taking the corrugator supercilii muscle and the zygomaticus major muscle shown in FIG. 12 as examples of facial expression muscles. The corrugator supercilii is the muscle that furrows the brow, and tension in this muscle generally indicates negative emotions such as frustration, stress, concentration, tension, and discomfort. The zygomaticus major is a muscle used in smiling, and tension in this muscle generally indicates positive emotions such as reassurance and pleasure. The zygomaticus minor is also a smiling muscle, but the zygomaticus major is used here as the representative.
In the first embodiment, the EMG signals detectable from the zygomaticus major correspond to the signals sg1 and sg2 detectable from the electrodes 3A of the device 2 in FIG. 4, and the EMG signals detectable from the corrugator supercilii correspond to the signals sg3 and sg4 detectable from the electrodes 3B of the device 4 in FIG. 4. Roughly, the first embodiment uses the EMG signal from the zygomaticus major as an indicator of a positive state and the EMG signal from the corrugator supercilii as an indicator of a negative state, and estimates the psychological state from their combination.
A typical example of how the EMG signals from these two muscles change with the amount of explanatory text in an image 720 such as that of FIG. 7, in the output of a work-support application for example, is as follows. First, consider the case of the image 701 of (A), where for a given user the explanation is long and carries more information than necessary. In this case, the user has no trouble performing the work since the explanation is ample, so the user is not particularly tense and the level of the EMG signal from the corrugator supercilii is weak. On the other hand, since the explanation is redundant, the experience is not very pleasant, and the level of the EMG signal from the zygomaticus major is also weak.
Next, consider the case of the image 702 of (B), where the length of the explanation is appropriate for the user and the necessary amount of information is provided. Here too, the user has no trouble performing the work and is not particularly tense, so the level of the EMG signal from the corrugator supercilii is weak. Since the explanation is not redundant and the work proceeds comfortably, the level of the EMG signal from the zygomaticus major is strong.
Finally, consider the case of the image 703 of (C), where the explanation is short and the user lacks necessary information. In this case, the user has trouble performing the work and feels frustrated, so the level of the EMG signal from the corrugator supercilii becomes strong. Since the work is difficult to carry out and the situation is unpleasant, the level of the EMG signal from the zygomaticus major is weak.
FIG. 10 is a table summarizing examples of the correspondence between the output content and the EMG signal levels described above. As shown in this table, depending on the amount of explanatory text in the image 720 that constitutes the output content, the combination of the levels of the EMG signals of the two facial expression muscles appears roughly as, for example, three patterns. Pattern 1 corresponds to the example of the image 701 of (A), pattern 2 to the image 702 of (B), and pattern 3 to the image 703 of (C). In this way, the state of the EMG signals from the facial expression muscles, in particular the pattern of strong and weak levels across the combination of plural EMG signals from plural muscles, reflects the user's psychological state. Conversely, by grasping the state of the EMG signals from the facial expression muscles, it is possible to infer how the user feels about the output content.
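The three-pattern correspondence of FIG. 10 can be sketched as a simple lookup on binarized signal levels. This is an illustrative encoding only; the binarization threshold and the level values are assumptions, and the specification does not prescribe any particular data structure:

```python
# Illustrative encoding of the FIG. 10 correspondence: the combination of
# weak/strong levels of the zygomaticus major and corrugator supercilii
# EMG signals maps to one of three patterns of the output content.

PATTERNS = {
    # (zygomaticus, corrugator): (pattern number, inferred state)
    ("weak",   "weak"):   (1, "too much explanation"),
    ("strong", "weak"):   (2, "appropriate explanation"),
    ("weak",   "strong"): (3, "too little explanation"),
}

def classify(zygomaticus_level, corrugator_level, threshold=0.5):
    """Binarize each normalized signal level against a placeholder
    threshold and look up the corresponding pattern."""
    key = ("strong" if zygomaticus_level >= threshold else "weak",
           "strong" if corrugator_level >= threshold else "weak")
    return PATTERNS.get(key, (None, "unclassified"))

print(classify(0.8, 0.1))  # -> (2, 'appropriate explanation')
print(classify(0.1, 0.9))  # -> (3, 'too little explanation')
```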
Based on the tests and evaluations described above, the HMD 1 collects data on, learns, and grasps such correspondence patterns between output content and EMG signals. Using control information based on this correspondence, the HMD 1 can control the adjustment of the output content according to the EMG signals. For example, when the levels of the two types of EMG signals are both weak, it can infer from pattern 1 that the user feels the current GUI image contains too much explanatory text. In that case, as an output-content adjustment, the HMD 1 may attempt to reduce the amount of information in the GUI image using the various means described above.
[Control Information]
FIG. 11 shows a configuration example of the control information (the control information 36 in FIG. 5) related to the function of adjusting the output content according to EMG signals. This control information is created as a setting for each user and each application. The control-information table has columns for the EMG signal, the psychological state, and the output content. The "EMG signal" column stores, for example, patterns or classifications of the feature amounts calculated from the EMG signals, such as intensity ranges. The "psychological state" column, which may be omitted, stores values representing the user's psychological state associated with the EMG feature amounts, such as degrees of comfort/discomfort or satisfaction. The "output content" column stores information defining the candidate output content to be produced when adjusting the output, such as the type of GUI image to display and the amount of explanation.
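A control-information table like that of FIG. 11 could be modeled as rows relating a feature range to a state and an output candidate. All row values below (ranges, state labels, and which image each range selects) are invented placeholders for illustration; the actual table contents are learned per user and per application:

```python
# Hypothetical model of the FIG. 11 control-information table. Each row:
# (feature_min, feature_max, psychological_state, output_content).
# The ranges and labels are illustrative assumptions.

CONTROL_TABLE = [
    (0.0, 0.3, "dissatisfied", "detailed GUI (image 720A)"),
    (0.3, 0.7, "neutral",      "standard GUI (image 720B)"),
    (0.7, 1.0, "satisfied",    "concise GUI (image 720C)"),
]

def lookup_output(feature):
    """Return (psychological state, output content) for a normalized
    EMG feature value in [0, 1]."""
    for lo, hi, state, content in CONTROL_TABLE:
        if lo <= feature < hi or (hi == 1.0 and feature == 1.0):
            return state, content
    raise ValueError("feature value out of range")

print(lookup_output(0.5))  # -> ('neutral', 'standard GUI (image 720B)')
```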
[Effects (1)]
As described above, according to the HMD 1 of the first embodiment, the user's psychological and emotional state can be estimated quickly and in depth using the EMG signals acquired through the device 2 and the like, and the output content can be suitably adjusted on that basis. According to the first embodiment, the GUI images and the like of each application can be adjusted to a suitable state according to the state of the individual user's EMG signals. Furthermore, the output content can be adjusted quickly, before the user's physical movements are affected, for example before the state manifests as significant stress.
The following modifications of the first embodiment are also possible. The control processing such as the output-content adjustment described above need not be performed by the processor of the HMD 1. As a modified system, the HMD may communicate with an external device such as a server, and that external device may perform the control processing.
[Modification 1]
FIG. 13 shows the configuration of the HMD 1 according to Modification 1 of the first embodiment. The arrangements and shapes of the devices 2 and 4 and the sensor electrodes are not limited to the example of FIG. 2 of the first embodiment; various configurations are possible. FIG. 13 is a modification of the structure for attaching the device 2 and the electrodes 3 to the housing 10; the other structural parts are the same as in FIG. 2. In the HMD 1 of Modification 1, the device 2 is provided so as to extend from and connect to the lower side of the binocular portion 10a, which is the front portion of the housing 10. As in the first embodiment, the device 2 is provided as left and right devices 2R and 2L of mutually symmetrical shape. As in the first embodiment, the housing 28 of the device 2 (2R, 2L) is provided with electrodes 3i to 3l as the electrodes 3A for detecting EMG signals from the facial expression muscles near the cheek (the region from the zygomaticus major to the zygomaticus minor). The device 2R on the right-eye side has electrodes 3i and 3j on its lower surface, and the device 2L on the left-eye side has electrodes 3k and 3l on its lower surface. A microphone is provided near the tip of the device 2, as in the first embodiment.
This device 2 functions as a cheekbone pad and is arranged so that a portion of it, including the electrodes 3, contacts and rests on the skin near the cheekbone. In other words, the binocular portion 10a rests on the upper side of the cheekbone via the device 2. Thus, as in the first embodiment, the device 2 supports and distributes part of the load of the HMD 1. In particular, the load of the HMD 1 applied near the nose pads can be distributed by the device 2, improving the user's wearing comfort.
As another modification concerning the mounting of the sensor electrodes, in the case of a goggle-type housing as in FIG. 1(B), the electrodes 3 may be provided on a portion of the housing that contacts the skin, without providing an extension such as the device 2.
<Embodiment 2>
The HMD of the second embodiment will be described with reference to FIG. 14 and other figures. The basic configuration of the second embodiment is the same as that of the first embodiment, and the following description focuses mainly on the parts that differ from the first embodiment. The configuration of the electrodes 3 and the like of the device 2 in the HMD 1 of the second embodiment is the same as in FIG. 2. The HMD of the second embodiment likewise includes the electrodes 3 for detecting the EMG signals used to read the user's unconscious psychological state, as described in the first embodiment. In the second embodiment, the electrodes 3 are also used to detect the user's conscious operation of the facial expression muscles (also referred to as voluntary input). From the EMG signals of the electrodes 3, the HMD 1 detects the user's conscious facial-muscle movements as voluntary input; the user performs a voluntary input by, for example, deliberately raising the right cheek. The HMD 1 uses the detected voluntary-input information as one of the inputs to the HMD 1, for example as input to an application's GUI image.
The HMD 1 distinguishes between the unconscious facial-muscle movements reflecting the user's psychological state described in the first embodiment and the conscious facial-muscle operations described in the second embodiment, switching between them according to the level, such as the intensity, of the EMG signals. When the EMG signals are also used as conscious input signals, the HMD 1 rectifies them before use. The HMD 1 compares the level of the rectified EMG signal with a predetermined threshold; while the level is equal to or higher than the threshold, the signal is treated as conscious voluntary input, and while it is below the threshold, the signal is treated as representing the unconscious psychological state. During periods when the EMG signals are treated as conscious voluntary input, the HMD 1 suspends estimation of the unconscious psychological state. The threshold for the EMG signal level can be set per user.
The HMD 1 of the second embodiment includes circuitry for rectifying the EMG signals acquired from the electrodes 3. The EMG signal sensor 201 of the biological-signal acquisition unit 20 in FIG. 4 includes circuitry for rectifying the signal sg1 and the like. The circuit of the EMG signal sensor 201 cuts off a predetermined high-frequency band from the EMG signal to remove noise, and also cuts off a predetermined low-frequency band to remove the DC component caused by the polarization potential arising between the electrodes 3 and the skin. The EMG signal sensor 201 includes filter circuits and the like for this processing. As one example, the HMD 1 of the second embodiment uses a bandwidth of 1 Hz to 500 Hz of the EMG signal.
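The rectification step described above can be sketched in software as full-wave rectification followed by a short moving-average envelope whose level is then compared with the voluntary-input threshold. This is a minimal sketch under the assumption that band limiting to the 1 Hz to 500 Hz range has already been done by the analog filter circuits; the window size and sample values are illustrative:

```python
# Sketch of EMG rectification and envelope extraction on sampled data.
# Band-pass filtering (1-500 Hz) is assumed done by the analog front end.

def rectify(samples):
    """Full-wave rectification: absolute value of each sample."""
    return [abs(s) for s in samples]

def envelope(samples, window=4):
    """Moving average of the rectified signal; the window size of 4
    samples is an illustrative placeholder."""
    r = rectify(samples)
    out = []
    for i in range(len(r)):
        seg = r[max(0, i - window + 1):i + 1]
        out.append(sum(seg) / len(seg))
    return out

sig = [0.1, -0.5, 0.4, -0.6, 0.5, -0.1]   # raw bipolar EMG samples
env = envelope(sig)
print(all(v >= 0 for v in env))  # the rectified envelope is non-negative
```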
[Processing Flow]
FIG. 14 shows the processing flow of the HMD 1 of the second embodiment. Most steps of the flow of FIG. 14 are the same as those of FIG. 9 of the first embodiment; the main differences are steps S1403 and S1408. In step S1403, the HMD 1 compares the level, such as the intensity, of the rectified EMG signal acquired during user processing with a threshold, and determines whether it is equal to or higher than the threshold. If it is (Y), the process proceeds to step S1408; if it is below the threshold (N), the process proceeds to step S1404. The processing of steps S1404 to S1407 is the same as in the first embodiment.
In step S1408, the HMD 1 determines that the signal is a voluntary input and processes it as such. From the levels of the plural EMG signals (signals sg1 to sg4) at that time, the HMD 1 determines which operation the voluntary input is intended to be. For example, when the level of the signal sg1 corresponding to the facial muscle of the right cheek is the highest, the HMD 1 can determine that the voluntary input is the one associated with the right cheek, and based on the setting information it can identify the operation assigned to that voluntary input (for example, an affirmative-button operation). The HMD 1 passes the information of the operation associated with the identified voluntary input to the OS or an application, which then executes the prescribed processing corresponding to that voluntary input operation (for example, affirmative-button input processing). After step S1408, the process proceeds to step S1407.
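The discrimination of step S1408 can be sketched as picking the strongest channel at or above the threshold and mapping it to its assigned operation. The channel-to-operation mapping and the threshold below are placeholder examples consistent with the assignments discussed in this embodiment, not values fixed by the specification:

```python
# Illustrative step S1408 discrimination: among the four EMG channels
# (sg1-sg4), the strongest rectified level at or above the threshold
# determines which voluntary-input operation is intended.

OPERATIONS = {
    "sg1": "affirmative button",  # right cheek (example assignment)
    "sg2": "negative button",     # left cheek
    "sg3": "cursor up",           # right eyebrow
    "sg4": "cursor down",         # left eyebrow
}

def discriminate(levels, threshold=0.5):
    """levels: dict mapping channel name -> rectified signal level.
    Returns the operation of the strongest qualifying channel, or None
    if no channel reaches the threshold."""
    candidates = {ch: lv for ch, lv in levels.items() if lv >= threshold}
    if not candidates:
        return None
    strongest = max(candidates, key=candidates.get)
    return OPERATIONS[strongest]

print(discriminate({"sg1": 0.9, "sg2": 0.2, "sg3": 0.1, "sg4": 0.1}))
# -> affirmative button
```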
[Voluntary Input]
The voluntary input described above is realized as an operation in which the user deliberately moves, for example, the facial muscles near the cheeks or near the eyebrows shown in FIG. 12. This voluntary input operation can be used as a predetermined input associated with, for example, pressing a hardware or software button on the HMD 1, or with cursor movement control. In the HMD 1 of the second embodiment, the user can configure which input operation each facial-muscle voluntary input is assigned to, for example per user and per application.
FIG. 15 shows an example of user setting information for voluntary input. The HMD 1 displays the user setting information, for example on the display surface 11, allowing the user to configure it. This setting information has "voluntary input" and "operation" columns. In the "voluntary input" column, plural voluntary inputs can be defined from combinations of the EMG signals of the individual facial muscles. For example, "voluntary input 1" corresponds to moving the facial muscle of the right cheek, that is, to the level of the signal sg1 being equal to or higher than the threshold. An operation selected by the user from candidates, for example an "affirmative button", can be assigned to "voluntary input 1". Similarly, "voluntary input 2" corresponds to moving the facial muscle of the left cheek and to the signal sg2, and is assigned, for example, a "negative button" operation. For example, when an application of the HMD 1 displays a work-support GUI image as in FIG. 7, the user can give an operation such as the "affirmative button" via "voluntary input 1" to that application or to a selected object within it, such as the image 710. The application receives such a voluntary input operation through the OS and performs the prescribed processing.
As another example, in the case of cursor movement control, assignments such as making "voluntary input 1" of the right cheek move the cursor right and "voluntary input 2" of the left cheek move it left are possible, as are assignments such as making "voluntary input 3" of the right eyebrow move the cursor up and "voluntary input 4" of the left eyebrow move it down. A voluntary input operation may also be defined by a combination of movements of plural facial muscles; for example, when both "voluntary input 3" of the right eyebrow and "voluntary input 4" of the left eyebrow turn on simultaneously (reach the threshold), this is accepted as a predetermined operation. Note that, instead of user settings, predetermined voluntary inputs may be fixedly associated in advance with predetermined input operations as a design matter of the HMD 1.
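A combined voluntary input of the kind just described can be sketched as requiring both eyebrow channels to be on in the same control cycle. The threshold and the name of the assigned operation are illustrative assumptions:

```python
# Sketch of a combined voluntary input: the operation fires only when both
# eyebrow channels (sg3 and sg4) are on, i.e. at or above the threshold,
# at the same time. Threshold and operation name are placeholders.

def combo_input(sg3_level, sg4_level, threshold=0.5):
    """Return the assigned operation when both eyebrows are raised
    simultaneously, otherwise None."""
    if sg3_level >= threshold and sg4_level >= threshold:
        return "combined operation"  # e.g. a user-assigned command
    return None

print(combo_input(0.8, 0.7))  # both channels on -> 'combined operation'
print(combo_input(0.8, 0.1))  # only one channel on -> None
```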
FIG. 16 shows a case where cursor movement control by voluntary input is accepted in an image 1601 displayed on the display surface 11 by the HMD 1. Here, the left-right direction within the display surface 11 is taken as the x direction and the up-down direction as the y direction. Through the OS or an application, the HMD 1 displays a GUI image 1602 such as an object, together with a cursor 1603 for indicating a position.
The cursor 1603 is, for example, a cross-shaped GUI image. The user can move the cursor 1603 using functions of the HMD 1 main body or the voluntary input via the facial muscles. For example, the user moves the cursor 1603 onto the GUI image 1602 and then presses the affirmative or negative button. Both the cursor movement and the button operations at this time can be performed through the voluntary input described above.
A detailed example of cursor movement control by voluntary input is as follows. The HMD 1 acquires two channels of EMG signals (for example, the signals sg1 and sg2 of FIG. 4) from two facial-muscle locations, for example the right cheek and the left cheek. The HMD 1 controls the movement of the cursor 1603 (direction, amount of movement, and so on) according to the combination of the levels of the two acquired EMG channels. For example, the left and right signals sg1 and sg2 control the position coordinate of the cursor 1603 in the left-right direction (x direction) within the display surface 11.
Let M1 and M2 be the intensity levels of the two EMG signals (for example, the signals sg1 and sg2), let CX be the X-coordinate value of the cursor, let ΔCX be the change in CX during a unit control cycle Δt, and let k be a proportionality constant. ΔCX can then be controlled by the following Equation 1.
Equation 1: ΔCX = k(M1 - M2)
When moving the cursor in two dimensions, two further channels of EMG signals (for example, the right- and left-eyebrow signals sg3 and sg4) are added for the y direction, and the cursor can be controlled in the same manner from the combination of the four EMG channels in total.
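Equation 1 and its two-dimensional extension can be transcribed directly. The gain value k and the signal levels used in the example call are illustrative; the channel pairing (cheeks for x, eyebrows for y) follows the assignments described above:

```python
# Transcription of Equation 1: the cursor displacement per control cycle
# is proportional to the difference of the two opposing EMG levels.

def cursor_delta(m1, m2, k=10.0):
    """Equation 1: delta C_X = k * (M1 - M2). k is a placeholder gain."""
    return k * (m1 - m2)

def cursor_step(pos, sg1, sg2, sg3, sg4, k=10.0):
    """2-D extension: right/left cheek signals (sg1, sg2) drive x,
    right/left eyebrow signals (sg3, sg4) drive y."""
    x, y = pos
    return (x + cursor_delta(sg1, sg2, k), y + cursor_delta(sg3, sg4, k))

print(cursor_delta(0.75, 0.25))                      # -> 5.0
print(cursor_step((0.0, 0.0), 0.75, 0.25, 0.5, 0.5))  # -> (5.0, 0.0)
```

Note that when M1 and M2 are equal the cursor holds its position, so a relaxed face produces no drift on that axis.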
[Effects (2)]
As described above, the HMD of the second embodiment not only adjusts the output content according to the user's unconscious psychological state, as in the first embodiment, but also allows the user to perform voluntary input operations by conscious movements of the facial expression muscles. This increases and diversifies the means of input, such as instructions to the OS and applications of the HMD.
The following modification of the second embodiment is also possible. In this modification, the voluntary input described above may be used during the test output and evaluation input of FIG. 7. For example, while a test image 702 such as that of FIG. 7(B) is displayed, the user can easily enter a subjective evaluation value via voluntary input. For example, the voluntary input of the right-cheek signal sg1 represents an affirmative or comfortable evaluation, and that of the left-cheek signal sg2 represents a negative or uncomfortable evaluation. In response to such "affirmative"/"negative" voluntary inputs, the HMD 1 processes them so that they are reflected as the user's evaluation of the output content at that time.
<Embodiment 3>
The HMD of the third embodiment will be described with reference to FIG. 17 and other figures. In the third embodiment, the electro-oculography (EOG) signals detectable from the electrodes 5 of FIG. 2 are used together with the EMG signals detectable from the electrodes 3. In the HMD of the third embodiment, when a conscious voluntary input operation is performed using the facial-muscle EMG signals as in the second embodiment, the EOG signals from the electrodes 5 (the signals sg5 and sg6 in FIG. 4) are also used for control. As in FIG. 15, the user can configure which voluntary input operation each EMG signal and each EOG signal is assigned to. In the third embodiment it is possible, for example, to control cursor movement with the EMG signals and press buttons with the EOG signals, or conversely to control cursor movement with the EOG signals and press buttons with the EMG signals.
Facial-muscle movements and eye movements influence each other, and so do the EMG signals and the EOG signals. While control is being performed using the EMG signals, eye movements reflected in the EOG signals can interfere with the EMG signal levels used to detect the unconscious psychological state, even when no voluntary input operation is being performed. Therefore, in that case, the HMD of the third embodiment may suspend the estimation and evaluation of the unconscious psychological state. For example, while the level of the EOG signal acquired in response to a voluntary input operation using the user's conscious eye movements is equal to or higher than a threshold, the HMD may suspend the psychological-state estimation and output-content adjustment based on the EMG signals. That is, along the time axis, the HMD switches between voluntary input and psychological-state estimation according to the levels of the EOG and EMG signals.
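The time-axis mode switch just described can be sketched as a per-cycle decision against the two thresholds (threshold A for EMG, threshold B for EOG, introduced in the processing flow below as steps S1703 onward); the numeric threshold values here are illustrative placeholders:

```python
# Sketch of the per-cycle mode switch: when either the rectified EMG level
# reaches threshold A or the EOG level reaches threshold B, the signals are
# handled as voluntary input and the unconscious psychological-state
# estimation is suspended for that cycle.

def select_mode(emg_level, eog_level, threshold_a=0.5, threshold_b=0.5):
    """Return 'voluntary_input' or 'state_estimation' for one cycle."""
    if emg_level >= threshold_a or eog_level >= threshold_b:
        return "voluntary_input"
    return "state_estimation"

print(select_mode(0.8, 0.1))  # EMG above threshold A -> voluntary_input
print(select_mode(0.1, 0.1))  # both below thresholds -> state_estimation
```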
 [Processing flow]
 FIG. 17 shows a flow of user processing of the HMD 1 in the third embodiment. The flow of FIG. 17 differs from the flow of FIG. 14 of the second embodiment mainly in having steps S1702 and S1703. In step S1702, the HMD 1 acquires both the electromyogram signal and the electrooculogram signal. In FIG. 4, the HMD 1 acquires the electromyogram signals (signals sg1 to sg4) from the electrodes 3 of the devices 2 and 4 and the electrooculogram signals (signals sg5 and sg6) from the electrodes 5. In step S1703, the HMD 1 determines whether the level of the electromyogram signal is equal to or higher than a predetermined threshold value (threshold value A for the electromyogram signal) or the level of the electrooculogram signal is equal to or higher than a predetermined threshold value (threshold value B for the electrooculogram signal). Each of these threshold values can be set by the user. If both signals are below their threshold values (N), the process proceeds to step S1704. If either signal is equal to or higher than its threshold value (Y), the process proceeds to step S1708. In steps S1704 to S1706, the HMD 1 adjusts the output content according to the electromyogram signal, as in the second embodiment. In step S1708, the HMD 1 performs voluntary input processing using at least one of the electromyogram signal and the electrooculogram signal whose level was equal to or higher than its threshold value in step S1703.
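The branch of steps S1702 to S1708 can be sketched as follows. The step labels come from the text, but the signal representation (one level per channel) and the threshold values A and B are illustrative assumptions.

```python
# Hypothetical sketch of the FIG. 17 branch. Each signal group is
# represented as a list of per-channel levels (sg1-sg4 EMG, sg5-sg6 EOG);
# threshold_a and threshold_b stand in for the user-set thresholds A and B.

def user_processing_step(emg_signals, eog_signals, threshold_a, threshold_b):
    # S1702: both signal groups have been acquired and are examined here.
    emg_active = any(level >= threshold_a for level in emg_signals)
    eog_active = any(level >= threshold_b for level in eog_signals)
    # S1703: branch on the thresholds.
    if emg_active or eog_active:
        return "S1708_voluntary_input"   # voluntary input processing
    return "S1704_adjust_output"         # adjust output per estimated state
```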
 [Voluntary input]
 FIG. 18 shows an example of voluntary input operation in the third embodiment in the case where both the electromyogram signal and the electrooculogram signal are used. The horizontal axis of FIG. 18 represents time. There are the above-described four channels of electromyogram signals (signals sg1 to sg4) and two channels of electrooculogram signals (signals sg5 and sg6). The HMD 1 performs voluntary input processing according to the state of the combination of these signals. In this example, as the voluntary input setting, the four channels of electromyogram signals are used for cursor movement control (voluntary input A), and the two channels of electrooculogram signals are used for button pressing (voluntary input B).
 For example, at time T1, the levels of all the signals are below the threshold values. At time T1, the output content is adjusted as needed based on the psychological state estimated from the electromyogram signal. For example, as in FIG. 7, the amount of explanation in the image 720 is adjusted. Next, at time T2, the level of the electromyogram signal (at least one of the signals sg1 to sg4) is equal to or higher than the threshold value. At time T2, cursor movement control is performed as the voluntary input A according to the states of the signals sg1 to sg4. For example, as in FIG. 16, the cursor 1603 is moved. Next, at time T3, the level of the electrooculogram signal (at least one of the signals sg5 and sg6) is equal to or higher than the threshold value. At time T3, a button press is performed as the voluntary input B according to the states of the signals sg5 and sg6. For example, as in FIG. 16, the affirmative button or the negative button is pressed while the cursor 1603 is on the image 1602 of the object. During the voluntary input operations at times T2 and T3, the adjustment of the output content based on the psychological-state estimation is temporarily suspended.
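The timeline of FIG. 18 can be walked through with a small classifier. The signal levels and thresholds are illustrative, and the precedence given to the electromyogram channels when both signal groups exceed their thresholds is an assumption.

```python
# Hypothetical walk-through of the FIG. 18 timeline (times T1 to T3).
# Levels are assumed normalized to [0, 1]; thresholds and the
# EMG-before-EOG precedence are illustrative choices.

def classify(emg_levels, eog_levels, thr_a=0.5, thr_b=0.5):
    """Return which processing applies at one point in time."""
    if any(v >= thr_a for v in emg_levels):      # time T2 in the example
        return "voluntary_input_A_cursor_move"
    if any(v >= thr_b for v in eog_levels):      # time T3 in the example
        return "voluntary_input_B_button_press"
    return "adjust_output_by_estimated_state"    # time T1: all below threshold
```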
 Similarly, as another setting example, the button-press operation may be assigned to the voluntary input using the electromyogram signal, and the cursor-movement operation may be assigned to the voluntary input using the electrooculogram signal. It is also possible to assign a plurality of different types of operations using the electromyogram signal and the electrooculogram signal of each channel.
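The user-configurable assignment of operations to signal groups can be represented as a simple binding table; the names below are hypothetical, not taken from the patent.

```python
# Hypothetical binding tables mapping signal groups to operations.
# The default mirrors the example in the text (EMG -> cursor, EOG -> button);
# the swapped table shows the alternative setting.

DEFAULT_BINDINGS = {
    "emg": "cursor_move",   # sg1-sg4 -> cursor movement (voluntary input A)
    "eog": "button_press",  # sg5-sg6 -> button press   (voluntary input B)
}

SWAPPED_BINDINGS = {
    "emg": "button_press",
    "eog": "cursor_move",
}

def dispatch(signal_group, bindings=DEFAULT_BINDINGS):
    """Look up the operation currently assigned to a signal group."""
    return bindings[signal_group]
```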
 When the HMD 1 simultaneously receives both a voluntary input based on the electromyogram signal and a voluntary input based on the electrooculogram signal, it may process them as follows. When the level of the electromyogram signal is equal to or higher than its threshold value and the level of the electrooculogram signal is also equal to or higher than its threshold value, the HMD 1 may process the combination as one predetermined voluntary input operation. Alternatively, the HMD 1 may compare the level of the electromyogram signal with the level of the electrooculogram signal and process the relatively larger signal as the voluntary input operation associated with that signal.
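The second policy above (prefer the relatively larger signal) can be sketched as follows; the operation names reuse the illustrative cursor/button bindings from the earlier example and are assumptions.

```python
# Hypothetical tie-break for simultaneous voluntary inputs: compare the
# two levels and perform the operation bound to the stronger signal.
# Ties are resolved in favor of the EMG input, which is an assumption.

def resolve_simultaneous(emg_level, eog_level,
                         op_emg="cursor_move", op_eog="button_press"):
    """Pick one operation when both inputs arrive at the same time."""
    if emg_level >= eog_level:
        return op_emg
    return op_eog
```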
 [Effects (3)]
 According to the third embodiment, in addition to adjusting the output content according to the estimation of the user's unconscious psychological state, the user can also perform conscious voluntary input operations using the electromyogram signal and the electrooculogram signal, which increases and diversifies the input means available to the HMD.
 The following modification of the third embodiment is also possible. In the HMD of this modification, the electromyogram signal of the facial muscles is used only for acquiring information on the unconscious psychological state, and only the electrooculogram signal is used for voluntary input.
 Although the present invention has been specifically described above based on the embodiments, the present invention is not limited to the above-described embodiments and can be variously modified without departing from the gist thereof.
 1 ... HMD, 2 (2R, 2L) ... device, 3 (3a, 3b, 3c, 3d) ... electrode, 4 (4R, 4L) ... device, 5 ... electrode, 11 ... display surface, 20 ... biological signal acquisition unit.

Claims (13)

  1.  A head-mounted display device comprising:
     a device configured to acquire an electromyogram signal from a facial muscle of a user,
     wherein the electromyogram signal is used as input information.
  2.  The head-mounted display device according to claim 1,
     wherein output content is adjusted according to a psychological state estimated based on the electromyogram signal.
  3.  The head-mounted display device according to claim 2,
     wherein, as the adjustment of the output content, a type or an amount of information of a GUI image to be displayed is adjusted.
  4.  The head-mounted display device according to claim 1,
     wherein the device has an electrode that contacts a portion of a facial muscle at a location other than the nose and ears of the user and detects the electromyogram signal from the facial muscle, and
     a part of a load of the head-mounted display device is supported by the location contacted by the portion of the device including the electrode.
  5.  The head-mounted display device according to claim 1,
     wherein, as the electromyogram signal, a first electromyogram signal generated according to an unconscious state of the facial muscle of the user and a second electromyogram signal generated according to a conscious operation of movement of the facial muscle by the user are detected, and
     the second electromyogram signal is used as an input operation associated by a setting.
  6.  The head-mounted display device according to claim 1, further comprising:
     an electrode for acquiring an electrooculogram signal from the facial muscle of the user,
     wherein the electrooculogram signal generated in response to a conscious operation of movement near an eye by the user is detected, and
     the electrooculogram signal is used as an input operation associated by a setting.
  7.  The head-mounted display device according to claim 2,
     wherein output content serving as a test is output and the electromyogram signal at that time is acquired,
     a subjective evaluation value regarding the psychological state of the user with respect to the output content serving as the test is input based on an operation of the user,
     a correspondence relationship among the output content serving as the test, the electromyogram signal, and the psychological state corresponding to the subjective evaluation value is learned, and
     the adjustment of the output content is controlled based on the correspondence relationship.
  8.  The head-mounted display device according to claim 2,
     wherein a subjective evaluation value regarding the psychological state of the user with respect to the output content during use of an OS or an application is input based on an operation of the user,
     a correspondence relationship among the output content, the electromyogram signal, and the psychological state corresponding to the subjective evaluation value is learned, and
     the adjustment of the output content is controlled based on the correspondence relationship.
  9.  The head-mounted display device according to claim 1,
     comprising, as the device, a first device equipped with a first electrode arranged so as to contact a portion of a facial muscle near a cheekbone of the user,
     wherein a first electromyogram signal acquired from the first electrode of the first device is used as input information.
  10.  The head-mounted display device according to claim 9,
     comprising, as the device, a second device equipped with a second electrode arranged so as to contact a portion of a facial muscle near an eyebrow of the user,
     wherein a second electromyogram signal acquired from the second electrode of the second device is used as input information.
  11.  The head-mounted display device according to claim 10,
     wherein the output content is adjusted according to a psychological state estimated based on a combination of the first electromyogram signal and the second electromyogram signal.
  12.  The head-mounted display device according to claim 9,
     wherein the first device has a shape that extends downward and forward from a temple portion of a housing of the head-mounted display device.
  13.  The head-mounted display device according to claim 1,
     wherein the first device has a shape that extends downward from a binocular portion of a housing of the head-mounted display device in which a display surface is arranged.
PCT/JP2020/037601 2020-10-02 2020-10-02 Head-mounted display device WO2022070415A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/037601 WO2022070415A1 (en) 2020-10-02 2020-10-02 Head-mounted display device


Publications (1)

Publication Number Publication Date
WO2022070415A1

Family

ID=80950095

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/037601 WO2022070415A1 (en) 2020-10-02 2020-10-02 Head-mounted display device

Country Status (1)

Country Link
WO (1) WO2022070415A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017070602A (en) * 2015-10-08 2017-04-13 株式会社ジェイアイエヌ Information processing method, information processing device, and program
JP2018018163A (en) * 2016-07-25 2018-02-01 富士通株式会社 Input device
JP2020014160A (en) * 2018-07-19 2020-01-23 国立大学法人 筑波大学 Transmission type head mounted display device and program


Similar Documents

Publication Publication Date Title
US10877715B2 (en) Emotionally aware wearable teleconferencing system
US10942568B2 (en) Wearable computing device with electrophysiological sensors
KR102630774B1 (en) Automatic control of wearable display device based on external conditions
CN108780228B (en) Augmented reality system and method using images
JP6391465B2 (en) Wearable terminal device and program
CN112567287A (en) Augmented reality display with frame modulation
KR102029219B1 (en) Method for recogniging user intention by estimating brain signals, and brain-computer interface apparatus based on head mounted display implementing the method
JP2017093984A (en) Eye potential detection device, eye potential detection method, eyewear and frame
Wang et al. Intelligent wearable virtual reality (VR) gaming controller for people with motor disabilities
KR20180130172A (en) Mental care system by measuring electroencephalography and method for mental care using this
Shatilov et al. Emerging exg-based nui inputs in extended realities: A bottom-up survey
CN116133594A (en) Sound-based attention state assessment
WO2022070415A1 (en) Head-mounted display device
Mustafa et al. A brain-computer interface augmented reality framework with auto-adaptive ssvep recognition
US11609634B2 (en) Apparatus and method for user interfacing in display glasses
US11513597B2 (en) Human-machine interface system
JP2019023768A (en) Information processing apparatus and video display device
Matthies Reflexive interaction-extending peripheral interaction by augmenting humans
US11493994B2 (en) Input device using bioelectric potential
JP7224032B2 (en) Estimation device, estimation program and estimation method
KR102277358B1 (en) Method and appratus for vr training
Guerreiro Assistive technologies for spinal cord injured individuals: a survey
Guerreiro Myographic Mobile Accessibility for Tetraplegics
CN116830064A (en) System and method for predicting interactive intent

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20956343

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20956343

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP