WO2017013805A1 - Input device, input support method and input support program - Google Patents


Info

Publication number
WO2017013805A1
WO2017013805A1 (PCT/JP2015/071037)
Authority
WO
WIPO (PCT)
Prior art keywords
information
unit
input
switch
mode
Prior art date
Application number
PCT/JP2015/071037
Other languages
French (fr)
Japanese (ja)
Inventor
境 克司
村瀬 有一
Original Assignee
富士通株式会社 (Fujitsu Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 富士通株式会社 (Fujitsu Limited)
Priority to PCT/JP2015/071037
Publication of WO2017013805A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present invention relates to an input device, an input support method, and an input support program.
  • wearable terminals such as smart watches and smart glasses have been introduced into the market, and new services different from conventional personal computers and smartphones have become widespread. Since such a wearable device is worn and used, it is difficult to input by touching the screen.
  • a method of inputting characters by voice is known.
  • a method in which a sensor is attached to a finger and a simple command is input by a gesture of the finger is also known.
  • JP 2006-79221 A; JP 07-271506 A; JP 2015-35103 A
  • An object of one aspect is to provide an input device, an input support method, and an input support program that can improve operability.
  • the input device is provided with a switch for receiving input and is attached to an indicator body.
  • the input device includes a motion sensor, a detection unit that detects motion information output from the motion sensor, and an extraction unit that extracts information relating to a first mode from a first component of the motion information and information relating to a second mode from a second component of the motion information.
  • the input device includes an output unit that outputs one or both of the information related to the first mode and the information related to the second mode using an input pattern for the switch.
  • operability can be improved.
  • FIG. 1 is a diagram illustrating an example of the overall configuration of a system according to the first embodiment.
  • FIG. 2A is a diagram illustrating an example of a wearable device.
  • FIG. 2B is a diagram illustrating an example of a wearable device.
  • FIG. 2C is a diagram illustrating an example of a wearable device.
  • FIG. 2D is a diagram illustrating an example of a wearable device.
  • FIG. 3 is a diagram illustrating an example of a head mounted display.
  • FIG. 4 is a functional block diagram of the functional configuration of the system according to the first embodiment.
  • FIG. 5 is a diagram illustrating an example of a rotation axis of a finger.
  • FIG. 6 is a diagram illustrating an output example from the wearable device.
  • FIG. 7 is a diagram illustrating an example of a menu screen.
  • FIG. 8 is a diagram illustrating an example of a display result of a locus of characters input by handwriting.
  • FIG. 9 is a diagram illustrating an example of the result of character recognition.
  • FIG. 10 is a flowchart showing the flow of processing.
  • FIG. 11 is a diagram illustrating the filtering process.
  • FIG. 12 is a diagram illustrating an example of filtering.
  • FIG. 13 is a diagram for explaining a correction example of the trajectory.
  • FIG. 14 is a flowchart for explaining the flow of processing using the direction of the wearable device.
  • FIG. 15 is a diagram for explaining a storage target based on a score.
  • FIG. 16 is a diagram illustrating an example of issuing a gesture ID.
  • FIG. 17 is an explanatory diagram illustrating an example of a computer that executes an input support program.
  • FIG. 1 is a diagram illustrating an example of the overall configuration of a system according to the first embodiment.
  • the input system shown in FIG. 1 includes a wearable device 10, a head mounted display 30, and an input device 50.
  • the wearable device 10, the head mounted display 30, and the input device 50 are communicably connected via a network and can exchange various types of information.
  • As the network, any type of communication network can be adopted, regardless of whether it is wired or wireless, such as mobile communication for cellular phones, the Internet, a LAN (Local Area Network), or a VPN (Virtual Private Network).
  • a case where the wearable device 10, the head mounted display 30, and the input device 50 communicate by wireless communication will be described as an example.
  • the input system is a system that supports user input.
  • the input system is used for user work support in a factory or the like, and is used when the user takes notes or issues commands to other devices.
  • a user may work while moving in various places. For this reason, by enabling various inputs using the wearable device 10 instead of a fixed terminal such as a personal computer, the user can input while moving in various places.
  • Wearable device 10 is a device that a user wears and uses to detect a user's finger operation and gesture.
  • the wearable device 10 is a device worn on a finger.
  • the wearable device 10 detects a change in the posture of the finger and transmits information related to the change in the posture of the finger to the input device 50.
  • FIGS. 2A to 2C are diagrams illustrating an example of a wearable device.
  • the wearable device 10 has a ring shape like a ring, and can be attached to the finger by passing the finger through the ring as shown in FIG. 2C.
  • a part of the ring is formed thicker and wider than the other part, and is a component built-in part that incorporates main electronic components.
  • the wearable device 10 has a flat inner surface at the component built-in part of the ring. That is, the wearable device 10 has a shape that fits the finger easily when the component built-in part is placed on top of the finger.
  • the wearable device 10 is attached to the finger in a substantially similar orientation with the component built-in part on the upper side of the finger. Further, as shown in FIG. 2A, the wearable device 10 is provided with a switch 14 on the side surface side of the ring. As shown in FIG. 2C, the switch 14 is disposed at a position corresponding to the thumb when the wearable device 10 is worn on the index finger of the right hand. The wearable device 10 is formed in a shape in which the peripheral portion of the switch 14 is raised to the same height as the upper surface of the switch 14. As a result, the wearable device 10 does not turn on the switch 14 simply by placing a finger on the switch 14 portion.
  • FIG. 2D is a diagram illustrating an example of an operation on a wearable device switch.
  • the example of FIG. 2D shows a case where the wearable device 10 is attached to the index finger and the switch 14 is operated with the thumb.
  • the wearable device 10 does not turn on the switch 14 just because the thumb rests on the switch 14; the switch 14 is turned on by pressing it in with the thumb, as shown on the right side of FIG. 2D.
  • In other words, the user rests the thumb on the switch 14 and, when performing input, starts the input by pressing the switch 14 in with the thumb.
  • the switch 14 is turned on when compressed and turned off when extended, and is biased toward the extended position by an elastic body such as a built-in spring. As a result, the switch 14 is turned on while it is pressed with the thumb and turned off when the force of the thumb is released. With this configuration, input cannot be started unless the wearable device 10 is worn in the proper state, and when it is put on the finger, the wearing position is naturally corrected to the proper position. Further, the user can control the input / non-input sections without releasing the thumb from the switch 14.
  • the head mounted display 30 is a device that is worn by the user on the head and displays various types of information so as to be visible to the user.
  • the head mounted display 30 may be compatible with both eyes or may be compatible with only one eye.
  • FIG. 3 is a diagram showing an example of a head mounted display.
  • the head mounted display 30 has a glasses shape corresponding to both eyes.
  • the head-mounted display 30 has transparency in the lens portion so that an external real environment can be visually recognized even when the user is wearing the head-mounted display 30.
  • the head mounted display 30 includes a display unit having transparency in a part of the lens portion, and various types of information such as images can be displayed on the display unit.
  • the head mounted display 30 realizes augmented reality in which the actual environment is expanded by visually recognizing various information in a part of the field of view while allowing the wearing user to visually recognize the actual environment.
  • FIG. 3 schematically shows a display unit 30B provided in a part of the field of view 30A of the user who wears it.
  • the head mounted display 30 has a built-in camera between two lens portions, and the camera can shoot an image in the line of sight of the user wearing the head mounted display 30.
  • the input device 50 is a device that supports various inputs by user's finger operation.
  • the input device 50 is a portable information processing device such as a smartphone or a tablet terminal.
  • the input device 50 may be implemented as one or a plurality of computers provided in a data center or the like. That is, the input device 50 may be a cloud computer as long as it can communicate with the wearable device 10 and the head mounted display 30.
  • Such an input system detects motion information output from a motion sensor included in the wearable device 10.
  • the input system extracts information related to the first mode from the first component of the motion information, and extracts information related to the second mode from the second component of the motion information. Thereafter, the input system outputs one or both of the information related to the first mode and the information related to the second mode using the input pattern for the switch 14.
  • the input system extracts a character trajectory and a gesture from the finger motion information output from the motion sensor worn on the finger, and outputs both or one of them according to the on / off state of the switch 14.
  • the input system can simultaneously process character recognition and gesture events, and can improve the operability of input using the wearable device 10.
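  • As an illustration of this flow (not taken from the patent; the class, function names, and switch-pattern labels are assumptions), the following Python sketch records motion samples while the switch is held and then selects whether the first-mode (trajectory) information, the second-mode (gesture) information, or both are emitted according to the switch input pattern.

```python
# Minimal sketch of switch-gated recording and two-mode output.
# Hypothetical names; the real units are implemented in the wearable device 10.

class InputSession:
    def __init__(self):
        self.samples = []        # motion samples between switch-on and switch-off
        self.recording = False

    def on_switch(self, pressed):
        if pressed:              # switch-on: the current position becomes the reference
            self.samples = []
            self.recording = True
            return None
        self.recording = False   # switch-off: hand the recorded segment to the extractors
        return self.samples

    def on_motion(self, sample):
        if self.recording:
            self.samples.append(sample)

def emit(samples, switch_pattern, extract_trajectory, extract_gestures):
    """Return first-mode and/or second-mode information depending on the switch pattern."""
    out = {}
    if switch_pattern in ("long_press", "both"):
        out["trajectory"] = extract_trajectory(samples)   # first mode: character trajectory
    if switch_pattern in ("click", "both"):
        out["gestures"] = extract_gestures(samples)       # second mode: gestures
    return out
```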
  • FIG. 4 is a functional block diagram of the functional configuration of the system according to the first embodiment.
  • functional configurations of the wearable device 10, the head mounted display 30, and the input device 50 will be described.
  • the wearable device 10 includes a wireless unit 11, a storage unit 12, a posture sensor 13, a switch 14, and a control unit 20.
  • the wearable device 10 may have other devices other than the above devices.
  • the wireless unit 11 is an interface that performs wireless communication control with other devices.
  • a network interface card such as a wireless chip can be adopted.
  • the wireless unit 11 is a device that communicates wirelessly, and transmits and receives various information to and from other devices wirelessly. For example, the wireless unit 11 transmits finger operation information and posture change information to the input device 50.
  • the storage unit 12 is a storage device that stores programs executed by the control unit 20 and various types of information, such as a hard disk or a memory.
  • the storage unit 12 stores various information generated by the control unit 20, intermediate data of processing executed by the control unit 20, and the like.
  • the posture sensor 13 is a device that detects a user's finger operation.
  • the posture sensor 13 is, for example, a triaxial gyro sensor.
  • the posture sensor 13 is built in the wearable device 10 so that the three axes correspond to the rotation axis of the finger when the wearable device 10 is correctly attached to the finger.
  • FIG. 5 is a diagram illustrating an example of the rotation axes of a finger. In the example of FIG. 5, three mutually orthogonal axes X, Y, and Z are shown. In this example,
  • the rotation axis in the movement direction for bending the finger is the Y axis,
  • the rotation axis in the movement direction for swinging the finger left and right is the Z axis, and
  • the rotation axis in the movement direction for turning the finger is the X axis.
  • the switch 14 is a device that accepts input from the user.
  • the switch 14 is provided on the side of the ring of the wearable device 10 as shown in FIG. 2C.
  • the switch 14 is turned on when pressed, and turned off when released.
  • the switch 14 receives an operation input from the user. For example, when the wearable device 10 is worn on the user's index finger, the switch 14 receives an operation input by the user's thumb.
  • the switch 14 outputs operation information indicating the received operation content to the control unit 20.
  • the user performs various inputs by operating the switch 14. For example, the user turns on the switch 14 when starting input by finger operation.
  • the control unit 20 is a device that controls the wearable device 10.
  • an integrated circuit such as a microcomputer, an ASIC (Application Specific Integrated Circuit), or an FPGA (Field Programmable Gate Array) can be employed.
  • the control unit 20 transmits operation information of the switch 14 to the input device 50 via the wireless unit 11. Further, when the switch 14 is turned on, the control unit 20 controls the posture sensor 13 to detect a posture change. The control unit 20 transmits posture change information detected by the posture sensor 13 to the input device 50 via the wireless unit 11.
  • Such a control unit 20 includes a detection unit 21, an extraction unit 22, a locus generation unit 23, a gesture recognition unit 24, and an output unit 25.
  • the detection unit 21 is a processing unit that detects the sensor values output from the posture sensor 13, that is, motion information. Specifically, the detection unit 21 sets the position at the time when the switch 14 is turned on as a reference, continues to record the movement history from that reference until the switch 14 turns from on to off, and outputs the detected movement history to the extraction unit 22 as motion information. As the movement history, coordinates or triaxial gyroscope values can be used, and information combining the two can also be used. The detection unit 21 can also output, for example, the time at which a motion stopped, associated with the corresponding point in the motion information.
  • the extraction unit 22 is a processing unit that extracts, from the motion information detected by the detection unit 21, the various types of information to be output. Specifically, the extraction unit 22 extracts information about the trajectory characterizing a character and information about a gesture performed by the user.
  • the extraction unit 22 extracts, from the motion information received from the detection unit 21, the series of motion information from when the switch 14 is turned on until it is turned off, and outputs it to the trajectory generation unit 23. That is, the extraction unit 22 extracts, as one segment, the series of motion information describing the movement of the finger while the switch 14 is on.
  • the extraction unit 22 also extracts, from the motion information received from the detection unit 21, each period from the start to the end of a finger movement, extracts the motion information within each period, and outputs it to the gesture recognition unit 24. That is, the extraction unit 22 extracts a plurality of segments and, for each segment, the motion information that specifies a finger operation.
  • the trajectory generation unit 23 is a processing unit that generates a movement trajectory of the finger from the series of motion information, extracted by the extraction unit 22, spanning from when the switch 14 is turned on until it is turned off. For example, in the motion information received from the extraction unit 22, the trajectory generation unit 23 sets the position where the switch 14 is turned on as the reference and the position where the switch 14 is turned off as the end position. Then, the trajectory generation unit 23 sets the reference as the start of the trajectory and generates a trajectory connecting the start to the end position. Thereafter, the trajectory generation unit 23 stores the generated trajectory information in the storage unit 12 in association with the generation time. In addition, the trajectory generation unit 23 outputs the generated trajectory information to the output unit 25.
  • the gesture recognition unit 24 is a processing unit that recognizes a gesture performed by the user with the finger from the motion information extracted by the extraction unit 22. For example, the gesture recognition unit 24 sets the position where the switch 14 is turned on as a reference, and sets the position where the finger first stops for a predetermined time as the first end position. Then, the gesture recognition unit 24 identifies the trajectory from the reference to the first end position as the first trajectory.
  • the gesture recognition unit 24 then sets the first end position as the second start position and the position where the finger next stops for a predetermined time as the second end position. Then, the gesture recognition unit 24 specifies the trajectory from the second start position to the second end position as the second trajectory. In this way, the gesture recognition unit 24 identifies each period from the start to the end of a motion and generates a trajectory for each period (a sketch of this pause-based segmentation follows below). Then, the gesture recognition unit 24 identifies each trajectory as a gesture and stores the identified gesture and time in the storage unit 12 in association with each other. In addition, the gesture recognition unit 24 outputs each trajectory to the output unit 25 as gesture information.
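  • A minimal sketch of the pause-based segmentation is shown below, assuming motion samples of the form (time, dx, dy) and illustrative stop thresholds; the whole switch-on-to-switch-off sequence is the character trajectory, while the pause-delimited pieces become the gesture segments.

```python
# Sketch: split a switch-on..switch-off motion sequence at pauses.
# speed_eps and stop_duration are assumed values, not from the patent.

def split_segments(samples, speed_eps=0.02, stop_duration=0.15):
    """samples: list of (t, dx, dy). A stretch where the speed stays below
    speed_eps for at least stop_duration seconds closes the current segment."""
    segments, current = [], []
    stop_start = None
    for t, dx, dy in samples:
        moving = (dx * dx + dy * dy) ** 0.5 >= speed_eps
        if moving:
            stop_start = None
            current.append((t, dx, dy))
        else:
            if stop_start is None:
                stop_start = t
            elif t - stop_start >= stop_duration and current:
                segments.append(current)   # finger stopped long enough: end of a gesture
                current = []
    if current:
        segments.append(current)
    return segments

# In the FIG. 6 example, the full sequence a -> b -> c is the trajectory, and
# split_segments() returns the two gesture pieces a -> b and b -> c.
```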
  • the output unit 25 is a processing unit that transmits various types of information generated by the control unit 20 to the input device 50.
  • the output unit 25 transmits the trajectory information generated by the trajectory generation unit 23 and the gesture information generated by the gesture recognition unit 24 to the input device 50.
  • the output unit 25 also transmits on / off information of the switch 14 to the input device 50.
  • FIG. 6 is a diagram illustrating an output example from the wearable device.
  • the detection unit 21 detects that the user's finger motion starts at a, pauses at b, and then moves to c, and this a → b → c motion is extracted as motion information.
  • the extraction unit 22 extracts a series of operations of a ⁇ b ⁇ c with reference to a, and outputs the extracted sequence to the trajectory generation unit 23. Further, the extraction unit 22 extracts the a ⁇ b trajectory and the b ⁇ c trajectory, and outputs them to the gesture recognition unit 24.
  • the trajectory generation unit 23 specifies the user's finger movement as a trajectory, as shown in (2) of FIG. 6.
  • the gesture recognizing unit 24 specifies the a ⁇ b gesture and the b ⁇ c gesture.
  • Such a gesture corresponds to an operation of turning a dial as shown in (3) of FIG. 6:
  • the gesture of a ⁇ b corresponds to the operation of turning the dial “0”
  • the gesture of b ⁇ c corresponds to the operation of turning the dial “1”.
  • the output unit 25 transmits operation information including the "0" trajectory information shown in (2) of FIG. 6 and the gesture information indicating dial rotation shown in (3) of FIG. 6 to the input device 50.
  • the head mounted display 30 includes a wireless unit 31, a camera 32, a display unit 33, and a control unit 34.
  • the head mounted display 30 may have other devices other than the above devices.
  • the wireless unit 31 is a device that performs wireless communication.
  • the wireless unit 31 transmits and receives various types of information to and from other devices wirelessly.
  • the wireless unit 31 receives image information of an image to be displayed on the display unit 33 and an operation command instructing shooting from the input device 50.
  • the wireless unit 31 transmits image information of an image captured by the camera 32 to the input device 50.
  • the camera 32 is a device that captures an image. As shown in FIG. 3, the camera 32 is provided between two lens portions. The camera 32 captures an image under the control of the control unit 34.
  • the display unit 33 is a device that displays various types of information. As shown in FIG. 3, the display unit 33 is provided in the lens portion of the head mounted display 30. The display unit 33 displays various information. For example, the display unit 33 displays a menu screen described later, a virtual laser pointer, an input locus, and the like.
  • the control unit 34 is a device that controls the head mounted display 30.
  • an electronic circuit such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit), or an integrated circuit such as a microcomputer, ASIC, or FPGA can be employed.
  • the control unit 34 performs control to display the image information received from the input device 50 on the display unit 33.
  • the control unit 34 controls the camera 32 to photograph an image.
  • the control unit 34 controls the wireless unit 31 to transmit the image information of the captured image to the input device 50.
  • the input device 50 includes a wireless unit 51, a storage unit 52, and a control unit 60. Note that the input device 50 may include devices other than the above-described devices.
  • the wireless unit 51 is a device that performs wireless communication.
  • the wireless unit 51 transmits / receives various information to / from other devices wirelessly.
  • the wireless unit 51 receives trajectory information, posture change information, and the like from the wearable device 10.
  • the wireless unit 51 transmits image information of an image to be displayed on the head mounted display 30 and various operation commands to the head mounted display 30.
  • the wireless unit 51 receives image information of an image captured by the camera 32 of the head mounted display 30.
  • the storage unit 52 is a storage device such as a hard disk, an SSD (Solid State Drive), or an optical disk.
  • the storage unit 52 may be a semiconductor memory capable of rewriting data such as RAM (Random Access Memory), flash memory, NVSRAM (Non Volatile Static Random Access Memory).
  • the storage unit 52 stores an OS (Operating System) and various programs executed by the control unit 60.
  • the storage unit 52 stores various programs used for input support.
  • the storage unit 52 stores various data used in programs executed by the control unit 60.
  • the storage unit 52 stores recognition dictionary data 53, memo information 54, and image information 55.
  • the recognition dictionary data 53 is dictionary data for recognizing handwritten characters.
  • the recognition dictionary data 53 stores standard trajectory information of various characters.
  • the memo information 54 is data storing information input by handwriting.
  • an image of a character input by handwriting and character information obtained as a result of recognizing the character input by handwriting are stored in association with each other.
  • the image information 55 is image information of an image taken by the camera 32 of the head mounted display 30.
  • the control unit 60 is a device that controls the input device 50.
  • an electronic circuit such as a CPU or MPU, or an integrated circuit such as a microcomputer, ASIC, or FPGA can be employed.
  • the control unit 60 has an internal memory for storing programs defining various processing procedures and control data, and executes various processes using these.
  • the control unit 60 functions as various processing units by operating various programs.
  • the control unit 60 includes an input detection unit 61, a display control unit 62, a calibration unit 63, an axis detection unit 64, a gesture acquisition unit 65, an event specification unit 66, a character recognition unit 67, a recognition result display unit 68, and an operation command execution unit 69.
  • the input detection unit 61 detects various inputs based on operation information and posture change information received from the wearable device 10. For example, the input detection unit 61 detects an operation on the switch 14 based on the operation information. For example, the input detection unit 61 detects single click, double click, triple click, and long press operation of the switch 14 from the number of times the switch 14 is pressed within a predetermined time. Further, the input detection unit 61 identifies the rotation of the three axes and the movement to each axis from the posture change information received from the wearable device 10, and detects the posture change of the finger.
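  • A rough sketch of this click classification is shown below; the multi-click window and the long-press threshold are assumed values, not figures from the patent.

```python
# Classify switch operations from (timestamp, 'down'/'up') events.

def classify_switch(events, multi_click_window=0.4, long_press_time=0.8):
    presses = []                 # list of (press_start, press_length)
    down_at = None
    for t, kind in events:
        if kind == "down":
            down_at = t
        elif kind == "up" and down_at is not None:
            presses.append((down_at, t - down_at))
            down_at = None

    if not presses:
        return None
    if len(presses) == 1 and presses[0][1] >= long_press_time:
        return "long_press"

    # Count the presses that start within the multi-click window of the first one.
    first = presses[0][0]
    count = sum(1 for start, _ in presses if start - first <= multi_click_window)
    return {1: "single_click", 2: "double_click", 3: "triple_click"}.get(count, "multi_click")
```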
  • the display control unit 62 performs various display controls. For example, the display control unit 62 generates image information of various screens according to the detection result by the input detection unit 61, and transmits the generated image information to the head mounted display 30 via the wireless unit 51. As a result, an image of image information is displayed on the display unit 33 of the head mounted display 30. For example, when a double click is detected by the input detection unit 61, the display control unit 62 displays a menu screen on the display unit 33 of the head mounted display 30.
  • FIG. 7 is a diagram showing an example of the menu screen.
  • the menu screen 70 displays items of “1 calibration”, “2 memo input”, “3 memo browsing”, and “4 shooting”.
  • the item “1 calibration” designates a calibration mode for calibrating the posture information of the detected finger.
  • the item “2 Memo Input” specifies a memo input mode for inputting a memo by handwriting.
  • the item “3 Browsing memos” designates a browsing mode for browsing inputted memos.
  • the item “4 shooting” is for designating a shooting mode in which an image is shot by the camera 32 of the head mounted display 30.
  • the input device 50 is capable of selecting an item on the menu screen 70 by handwriting input, a cursor, a finger gesture, or the like.
  • When a number is input by handwriting and recognized, the display control unit 62 determines that the mode of the item corresponding to the recognized number has been selected. Further, the display control unit 62 displays a cursor on the screen and moves the cursor in accordance with the change in finger posture detected by the input detection unit 61. For example, when rotation about the Y axis is detected, the display control unit 62 moves the cursor in the horizontal direction of the screen at a speed corresponding to the rotation.
  • Similarly, when rotation about another axis (for example, the Z axis) is detected, the display control unit 62 moves the cursor in the vertical direction of the screen at a speed corresponding to the rotation.
  • When a selection operation is performed with the cursor positioned on an item, the display control unit 62 determines that the mode of that item has been selected.
  • When a mode has been selected, the display control unit 62 removes the menu screen 70 from the display.
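  • A small sketch of this cursor control follows, under the assumption that the cursor velocity is simply proportional to the detected rotation rate; the axis-to-direction mapping and the gains are illustrative only.

```python
# Move the cursor at a speed proportional to the finger rotation rate.

def update_cursor(cursor, rotation, dt, gain_h=300.0, gain_v=300.0):
    """cursor: (x, y) in pixels; rotation: angular rates in rad/s keyed by axis;
    dt: elapsed time in seconds."""
    x, y = cursor
    x += gain_h * rotation.get("Y", 0.0) * dt   # one axis drives horizontal motion
    y += gain_v * rotation.get("Z", 0.0) * dt   # the other drives vertical motion
    return (x, y)

# Example: a steady 0.5 rad/s rotation for 0.1 s moves the cursor by about 15 px.
print(update_cursor((100.0, 100.0), {"Y": 0.5}, 0.1))
```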
  • the calibration unit 63 calibrates the detected posture information of the finger. For example, when the calibration mode is selected on the menu screen 70, the calibration unit 63 calibrates the detected finger posture information.
  • the wearable device 10 may be worn in a misaligned state rotated in the circumferential direction with respect to the finger.
  • When the wearable device 10 is worn in such a shifted state with respect to the finger, the posture change detected by the wearable device 10 may be offset by the amount of that rotation, and the detected motion may differ from the user's intention.
  • In such a case, the user selects the calibration mode on the menu screen 70.
  • After selecting the calibration mode on the menu screen 70, the user opens and closes the hand on which the wearable device 10 is worn.
  • the wearable device 10 transmits posture change information of the posture change of the finger when the hand is opened and closed to the input device 50.
  • the calibration unit 63 detects the movement of the finger when the finger wearing the wearable device 10 is bent and stretched by opening and closing the hand based on the posture change information.
  • the calibration unit 63 calibrates the reference direction of the finger movement based on the detected finger movement.
  • the axis detection unit 64 detects an axis indicating the posture, based on the change in finger posture detected by the input detection unit 61. For example, the axis detection unit 64 detects an axis that moves in accordance with the change in the posture of the finger. For example, the axis detection unit 64 calculates a direction vector of an axis that passes through the origin of a three-dimensional space and moves in the X, Y, and Z directions according to the rotation directions and rotation speeds about the X, Y, and Z rotation axes. When detecting motion only from posture, the wrist becomes harder to move the farther it is turned away from the facing direction. Further, when the palm is horizontal, the degree of freedom in the vertical direction is high, but the degree of freedom in the horizontal direction may be low.
  • the axis detection unit 64 may therefore change the vertical and horizontal pointing sensitivity with respect to the center point of the axis direction corrected by the calibration unit 63. For example, the axis detection unit 64 calculates the axis direction vector by correcting rotation in the left-right direction of the hand to be larger than rotation in the up-down direction of the hand. That is, for the same amount of rotation, the axis detection unit 64 corrects the movement amount due to left-right rotation to be larger than the movement amount due to up-down rotation. Further, the axis detection unit 64 may increase the sensitivity as the distance from the center point in the corrected axis direction increases.
  • the axis detection unit 64 calculates the axis direction vector by correcting the rotation to be greater as the distance from the center point in the axis direction increases. That is, when the rotation amount is the same, the axis detection unit 64 corrects the movement amount due to the rotation in the peripheral portion away from the center point in the axis direction larger than the movement amount due to the rotation near the center point. Thereby, since the sensitivity of rotation is set corresponding to the ease of movement of the wrist, accurate pointing can be facilitated.
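  • The following sketch illustrates this anisotropic sensitivity: left-right rotation is weighted more heavily than up-down rotation, and the sensitivity grows with the distance from the calibrated center direction. The gain values are assumptions for illustration.

```python
# Pointing update with direction-dependent and distance-dependent gain.

def pointing_delta(yaw_rate, pitch_rate, current_offset, dt,
                   gain_lr=1.5, gain_ud=1.0, edge_boost=0.5):
    """yaw_rate / pitch_rate: angular rates (rad/s) for left-right / up-down motion.
    current_offset: (h, v) deviation of the axis from the calibrated center."""
    h, v = current_offset
    dist = (h * h + v * v) ** 0.5
    boost = 1.0 + edge_boost * dist            # more sensitive away from the center
    dh = gain_lr * boost * yaw_rate * dt       # left-right corrected to move more
    dv = gain_ud * boost * pitch_rate * dt
    return (h + dh, v + dv)
```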
  • the control unit 60 can also display a virtual laser pointer linked to the axis detected by the axis detection unit 64 on the display unit 33 of the head mounted display 30.
  • For example, when the calibration mode is selected on the menu screen 70, the control unit 60 generates image information for a screen on which a virtual laser pointer linked to the axis detected by the axis detection unit 64 is arranged. Then, the control unit 60 controls the wireless unit 51 to transmit the generated image information to the head mounted display 30. As a result, the virtual laser pointer image is displayed on the display unit 33 of the head mounted display 30.
  • the gesture acquisition unit 65 is a processing unit that acquires gesture information from the wearable device 10. For example, the gesture acquisition unit 65 receives gesture information including a locus recognized by the wearable device 10 and outputs the gesture information to the event identification unit 66. When the gesture acquisition unit 65 receives a gesture identifier or the like for identifying a gesture from the wearable device 10, the gesture acquisition unit 65 outputs the received gesture identifier to the event specifying unit 66.
  • the event identification unit 66 is a processing unit that identifies an event corresponding to a gesture.
  • the event specifying unit 66 stores, in advance, gesture information or gesture identifiers in association with gesture contents. The event specifying unit 66 then specifies the gesture content corresponding to the gesture information or gesture identifier received from the gesture acquisition unit 65.
  • the event specifying unit 66 outputs the specified gesture content to the operation command executing unit 69.
  • the character recognition unit 67 is a processing unit that executes character recognition on the trajectory information acquired from the wearable device 10. For example, the character recognition unit 67 compares the acquired trajectory with the standard trajectories of various characters stored in the recognition dictionary data 53, specifies the character with the highest similarity, and outputs the character code of the specified character. The character recognition unit 67 also outputs the recognized character and character code to the operation command execution unit 69.
  • the character recognition unit 67 can also correct the trajectory specified from the trajectory information, or the result of the character recognition, and then perform character recognition again. For example, when the trajectory contains an "upper-left splash" (a sweep stroke toward the upper left that is rarely used in ordinary kanji), the character recognition unit 67 can delete the part of the trajectory corresponding to that splash and perform character recognition.
  • When performing handwritten input of characters, the user can also input by long-pressing the switch 14 for each character; that is, the switch 14 is released once per character.
  • the character recognizing unit 67 can also record a handwritten input locus character by character and recognize characters from the locus character by character.
  • FIG. 8 is a diagram illustrating an example of a display result of a locus of characters input by handwriting.
  • FIG. 8A shows an example in which “bird” is input by handwriting.
  • FIG. 8B is an example in which “God” is input by handwriting.
  • by displaying the portions of the trajectory that move toward the upper left in a lighter manner, the control unit 60 makes it easier to recognize the character indicated by the trajectory.
  • a thin line portion is indicated by a broken line.
  • the character recognition unit 67 performs character recognition on the trajectory indicated by the dark line in FIGS. 8A to 8B.
  • FIG. 9 is a diagram showing an example of the result of character recognition.
  • a thin line portion is indicated by a broken line.
  • FIG. 9 shows the result of character recognition for “Tu” input by handwriting.
  • FIG. 9 shows the candidate characters and the scores indicating the degree of similarity both when the character is recognized without deleting the upper-left trajectory and when it is recognized with the upper-left trajectory deleted.
  • Candidate characters are shown after “code:”. The score indicates the degree of similarity. The larger the value, the higher the degree of similarity.
  • "Mori" input by handwriting has a score of 920 when the character is recognized without deleting the upper-left trajectory and a score of 928 when it is recognized with the upper-left trajectory deleted, so deleting the upper-left trajectory yields the higher score. Deleting the upper-left trajectory can also suppress erroneous conversion. For example, "Tu" entered by handwriting has "Ream", "Slow", and "Han" as recognition candidate characters, but deleting the upper-left trajectory increases the score of the correct candidate, which suppresses erroneous conversion.
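  • The sketch below illustrates this recognize-then-correct idea: the trajectory is recognized as-is, recognized again with the upper-left splash segments removed, and the result with the higher dictionary score is kept. The stroke representation, the splash test, and the scoring interface are simplified assumptions, not the patent's recognition algorithm.

```python
# Recognition with optional removal of upper-left splash strokes.
# Screen coordinates are assumed (y grows downward), so an upper-left motion
# has negative dx and negative dy.

def remove_upper_left_splashes(strokes):
    kept = []
    for stroke in strokes:                      # stroke: list of (x, y) points
        (x0, y0), (x1, y1) = stroke[0], stroke[-1]
        if x1 - x0 < 0 and y1 - y0 < 0:         # overall motion toward the upper left
            continue                            # likely a pen-transition sweep: drop it
        kept.append(stroke)
    return kept

def recognize(strokes, dictionary, score_fn):
    """dictionary: {character: template_strokes}; score_fn returns a similarity score."""
    best = (None, float("-inf"))
    for char, template in dictionary.items():
        s = score_fn(strokes, template)
        if s > best[1]:
            best = (char, s)
    return best

def recognize_with_correction(strokes, dictionary, score_fn):
    raw = recognize(strokes, dictionary, score_fn)
    corrected = recognize(remove_upper_left_splashes(strokes), dictionary, score_fn)
    return corrected if corrected[1] > raw[1] else raw   # keep the higher score
```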
  • the character recognition unit 67 stores the locus, recognition result, character code, recognition execution time, and the like in the storage unit 52 in association with each other.
  • the character recognition unit 67 stores the trajectory of the handwritten character and the recognized character in the memo information 54.
  • the character recognizing unit 67 associates the trajectory information with the recognized character and stores it in the memo information 54 together with the date / time information.
  • Information stored in the memo information 54 can be referred to.
  • the character recognition unit 67 displays information stored in the memo information 54 of the storage unit 52.
  • the character recognition unit 67 displays, in association with each other, the input date and time of the memo, the text obtained by recognizing the handwritten characters, and the image of the handwritten characters.
  • Since the recognized text and the handwritten character image are displayed in association with each other, the user can confirm whether the handwritten characters were correctly recognized.
  • Even when a character is erroneously converted by the trajectory recognition, the user can refer to the corresponding handwritten character image and grasp what was actually written.
  • the handwriting characteristics of the user are recorded in the handwritten character image. For this reason, the image of the handwritten character can also be stored and used for proof that the user has input, for example, like a signature.
  • the recognition result display unit 68 is a processing unit that displays the result of the character recognition performed by the character recognition unit 67. For example, when the character recognized by the character recognition unit 67 has been corrected by removing a splash stroke, the recognition result display unit 68 displays the recognition result before the correction and the recognition result after the correction on the head mounted display 30 or the like. In this way, the user can be asked to confirm whether the recognition result is correct.
  • the operation command execution unit 69 outputs an operation command to another device based on the recognized character or symbol, gesture or the like. For example, when shooting with the camera 32 of the head mounted display 30, the user selects a shooting mode on the menu screen 70. Then, the user presses and operates the switch 14 of the wearable device 10 at a timing when photographing is desired, and inputs a predetermined character by handwriting.
  • the predetermined character may be any of a character, a number, and a symbol, for example, “1”.
  • When the shooting mode is selected on the menu screen 70, the operation command execution unit 69 enters a state of being ready for shooting.
  • the character recognizing unit 67 records a trajectory input by handwriting in a shooting preparation state, and recognizes a character from the trajectory.
  • When the recognized character matches the predetermined character, the operation command execution unit 69 transmits an operation command instructing shooting to the head mounted display 30.
  • When the head mounted display 30 receives the operation command instructing shooting, it performs shooting with the camera 32 and transmits the image information of the captured image to the input device 50.
  • the control unit 60 stores the image information of the captured image in the storage unit 52 as the image information 55.
  • the operation command execution unit 69 can execute a process corresponding to the gesture specified by the event specification unit 66.
  • the operation command execution unit 69 can hold a correspondence table in which the gesture information and the command are associated with each other in the storage unit 52 and issue the operation command corresponding to the gesture according to the correspondence table.
  • a gesture ID described later can be used.
  • the input device 50 can perform an operation by outputting an operation command to another device.
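  • A sketch of such a correspondence table is shown below: gesture IDs (or gesture names) are mapped to operation commands, and the matching command is issued when a gesture arrives. The table entries and command names are hypothetical examples, not commands defined in the patent.

```python
# Gesture-to-command dispatch via a correspondence table.

GESTURE_COMMANDS = {
    ("R", 1): "volume_up",      # one dial turn in one direction
    ("L", 1): "volume_down",    # one dial turn in the other direction
    "shake": "cancel",
}

def execute_gesture(gesture_id, send_command):
    """Look up the gesture and issue the corresponding operation command, if any."""
    command = GESTURE_COMMANDS.get(gesture_id)
    if command is not None:
        send_command(command)
    return command

# Usage:
execute_gesture(("R", 1), lambda cmd: print("issue:", cmd))
```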
  • FIG. 10 is a flowchart showing the flow of processing. As shown in FIG. 10, when the wearable device 10 is powered on (S101: Yes) and the switch 14 is turned on (S102: Yes), the detection unit 21 of the wearable device 10 detects the movement of the fingertip via the posture sensor 13 (S103).
  • the detection unit 21 extracts the motion component of the finger operation (S104). For example, the detection unit 21 detects a sensor value output from the posture sensor 13, that is, motion information.
  • Then, the extraction unit 22 extracts motion patterns from the detected motion information (S105).
  • the trajectory generation unit 23 generates the finger movement transition, that is, trajectory information, using the series of motion information from switch-on to switch-off among the extracted motion patterns (S106).
  • the gesture recognizing unit 24 uses the movement information for each period from the start to the end of the finger movement among the extracted movement patterns to generate a finger operation, that is, gesture information (S107).
  • the output unit 25 outputs the gesture information to the input device 50 (S108). Thereafter, the gesture acquisition unit 65 of the input device 50 recognizes the gesture by the finger operation, and the operation command execution unit 69 executes the operation corresponding to the recognized gesture.
  • the input detection unit 61 of the input device 50 identifies the state change of the switch 14 according to the information received from the wearable device 10 (S110). Thereafter, the input detection unit 61 generates an event for specifying an operation according to the state change of the switch 14 (S111). Then, the display control unit 62 and the operation command execution unit 69 output an operation corresponding to the identified event to the corresponding device to execute the operation (S112).
  • the character recognizing unit 67 specifies the locus operated by the finger using the locus information acquired from the wearable device 10 (S113). Thereafter, the character recognition unit 67 performs character recognition using the identified locus and the recognition dictionary data 53 (S114). In parallel with S114, the character recognition unit 67 corrects the identified locus or the result of character recognition (S115). Then, the recognition result display unit 68 displays information such as character recognition results, correction results, and trajectories on the head mounted display 30 and stores them in the memo information 54 (S116). Thereafter, the operation command execution unit 69 outputs an operation command corresponding to the recognition result (S117).
  • the wearable device 10 and the input device 50 can simultaneously process a gesture event and character recognition, and output a character input, a gesture event, or both depending on the result.
  • Since the input operation can be executed without manually switching the input mode, the user's effort can be reduced and setting mistakes can be suppressed. Therefore, operability can be improved.
  • the input device 50 can improve the character recognition accuracy by filtering the motion information using a cutoff frequency and suppressing the body-sway component. Specifically, when the switch 14 turns from on to off, the character recognition unit 67 of the input device 50 can extract the finger motion pattern based on the result of filtering the motion information with a cutoff frequency corresponding to a preset character type, and can extract the trajectory information from the extracted motion pattern.
  • the character recognition unit 67 holds, in the storage unit 52 or the like, a correspondence table in which character types and cutoff frequencies (fc, in Hz) are associated with each other. For example, it holds "type, fc" pairs such as "default, 3", "numbers, 2", "kana, 0.5" (with a second kana entry, 2), "alphabet, 1.5", and "kanji, 0.2".
  • fc is set high for numbers with short input times
  • fc is set low for kanji characters with long input times.
  • the character recognition unit 67 accepts the setting of the character recognition target from the user in advance and determines fc according to the correspondence table. In this state, when the character recognition unit 67 acquires motion information from the wearable device 10, it filters the input motion information using the determined fc and executes character recognition on the filtering result.
  • FIG. 11 is a diagram for explaining the filtering process.
  • the character recognition unit 67 sets a character type for recognizing an input character (S201). Subsequently, the character recognition unit 67 searches the correspondence table (table) between the character type and fc for the fc corresponding to the set character type (S202). Here, when there are a plurality of set character types, the character recognition unit 67 determines the lowest fc as the setting target (S203). Then, the character recognition unit 67 sets the cutoff frequency fc corresponding to the input character type in the high pass filter (HPF) (S204).
  • When the character recognition unit 67 receives the motion information from the wearable device 10, it first passes the information through a low-pass filter (LPF) to cut spike noise. After that, it passes the information through the high-pass filter in which the cutoff frequency fc is set, thereby suppressing the body-sway component, and performs character recognition using the result.
  • FIG. 12 is a diagram for explaining an example of filtering.
  • FIG. 12 is a diagram for explaining filtering in the HPF.
  • the character recognition unit 67 changes the cutoff frequency fc in accordance with the input target, that is, the character type to be determined.
  • For numbers and other characters with a short input time, the character recognition unit 67 sets fc high, so that the slow components caused mainly by body sway are suppressed and character recognition is performed based on the fast components of the handwriting.
  • For kanji and other characters with a long input time, the character recognition unit 67 sets fc low, so that the slower components of the handwriting are retained and used for character recognition.
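  • The following sketch illustrates the filtering flow described above: fc is looked up in the character-type table (taking the lowest fc when several types are enabled), spike noise is removed with a low-pass filter, and body sway is suppressed with a high-pass filter at fc. The use of simple first-order filter stages and the sampling rate are assumptions; the patent only specifies an LPF followed by an HPF with a type-dependent fc.

```python
import math

FC_TABLE = {"default": 3.0, "number": 2.0, "kana": 0.5, "alphabet": 1.5, "kanji": 0.2}

def select_fc(char_types):
    """Use the lowest fc among the enabled character types."""
    return min(FC_TABLE.get(t, FC_TABLE["default"]) for t in char_types)

def low_pass(samples, fc, fs):
    rc = 1.0 / (2.0 * math.pi * fc)
    dt = 1.0 / fs
    alpha = dt / (rc + dt)
    out, y = [], samples[0]
    for x in samples:
        y = y + alpha * (x - y)     # first-order low-pass (spike-noise smoothing)
        out.append(y)
    return out

def high_pass(samples, fc, fs):
    rc = 1.0 / (2.0 * math.pi * fc)
    dt = 1.0 / fs
    alpha = rc / (rc + dt)
    out, y, prev = [], 0.0, samples[0]
    for x in samples:
        y = alpha * (y + x - prev)  # first-order high-pass (body-sway suppression)
        prev = x
        out.append(y)
    return out

def prefilter(samples, char_types, fs=100.0, spike_fc=20.0):
    if not samples:
        return []
    smoothed = low_pass(samples, spike_fc, fs)
    return high_pass(smoothed, select_fc(char_types), fs)
```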
  • FIG. 13 is a diagram for explaining an example of correction of a trajectory.
  • the posture at which the switch 14 is turned on is set as the posture zero point, and a gain is set for converting the estimated value of the finger posture change from the posture zero point, calculated from the data obtained after extracting the motion component of the finger operation, into a trajectory length. The larger this gain, the longer the trajectory that can be written with a slight change in posture.
  • the gain change profile for trajectory generation is changed according to the direction of the posture change from the posture zero point,
  • the rate of gain change for trajectory generation is changed according to the amount of the posture change from the posture zero point, and
  • the gain at the posture zero point is matched to a common value C;
  • a continuous function satisfying the above conditions is assumed for the gain.
  • For example, the character recognition unit 67 increases the gain for +Y-axis posture changes with respect to the trajectory in the vertical direction, and increases the gain for −Z-axis posture changes with respect to the trajectory in the horizontal direction. As a result, a trajectory in a direction that is difficult to move can be processed in the same way as a trajectory in a direction that is easy to move, so the trajectory can be recognized accurately.
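  • A minimal sketch of such a gain function follows; the functional form (linear growth with the amount of posture change) and the constants are assumptions, chosen only to satisfy the conditions above: continuity, a common value at the posture zero point, and a steeper profile for the hard-to-move directions (+Y for vertical strokes, −Z for horizontal strokes).

```python
# Direction-dependent gain for converting a posture change into trajectory length.

def gain(axis, delta, base_gain=1.0, rate_easy=0.2, rate_hard=0.6):
    """delta: signed posture change (rad) from the posture zero point."""
    hard = (axis == "Y" and delta > 0) or (axis == "Z" and delta < 0)
    rate = rate_hard if hard else rate_easy
    return base_gain * (1.0 + rate * abs(delta))   # continuous; equals base_gain at delta = 0

def posture_to_length(axis, delta):
    """Convert the posture change into a trajectory length using the gain."""
    return gain(axis, delta) * delta
```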
  • In the first embodiment, an example in which both trajectory generation and gesture recognition are performed has been described, but the embodiment is not limited to this. For example, the processing can also be switched according to the orientation of the wearable device 10.
  • FIG. 14 is a flowchart for explaining the flow of processing using the direction of the wearable device.
  • As shown in FIG. 14, when the power of the wearable device 10 is turned on (S301: Yes) and the switch 14 is turned on (S302: Yes), the detection unit 21 of the wearable device 10 specifies, via the posture sensor 13, whether the gravity axis direction at the time the switch 14 is turned on is the Y axis or the X axis, and substitutes the specified result into a variable "G" (S303). Subsequently, the detection unit 21 detects the movement of the fingertip via the posture sensor 13 (S304).
  • the detection unit 21 extracts the motion component of the finger operation (S305). Then, the extraction unit 22 extracts motion patterns from the detected motion information (S306).
  • the trajectory generation unit 23 generates the finger movement transition, that is, trajectory information, using the series of motion information from switch-on to switch-off among the extracted motion patterns (S308).
  • the gesture recognition unit 24 generates gesture information using the motion information for each period from the start to the end of the finger motion among the extracted motion patterns (S309), and the output unit 25 outputs the generated information to the input device 50 (S310).
  • FIG. 15 is a diagram for explaining a storage target based on a score.
  • the character recognition unit 67 recognizes "dimensions" according to the trajectory information received from the wearable device 10 (S1). Then, the character recognition unit 67 calculates the score obtained by comparing the recognition result with the dictionary as 250 points.
  • the character recognition unit 67 executes character recognition after removing the upper left splash from the trajectory information received from the wearable device 10 (S2). Then, the character recognition unit 67 calculates a score value obtained by comparing the corrected recognition result with the dictionary as 200 points.
  • the character recognition unit 67 compares both score values (S3).
  • the character recognition unit 67 stores the corrected recognition result in the memo information 54 if the corrected score value is higher.
  • the character recognition unit 67 associates both recognition results and stores them in the memo information 54.
  • In the above, an example in which the wearable device 10 outputs gesture information including the recognized gesture to the input device 50 has been described, but the embodiment is not limited to this.
  • the wearable device 10 can also output to the input device 50 a gesture ID associated with “rotation direction, number”.
  • FIG. 16 is a diagram showing an example of issuing a gesture ID.
  • When the gesture recognition unit 24 detects one counterclockwise rotation after the switch 14 is turned on, it outputs the gesture ID (R, 1) to the input device 50. Subsequently, when the gesture recognition unit 24 detects another counterclockwise rotation (the second rotation), it outputs the gesture ID (R, 2) to the input device 50. Subsequently, when the gesture recognition unit 24 detects yet another counterclockwise rotation (the third rotation), it outputs the gesture ID (R, 3) to the input device 50.
  • the gesture recognition unit 24 issues a gesture ID every time it passes through a predetermined determination zone, and repeats until the switch 14 is turned off.
  • the input device 50 can also specify a gesture according to the gesture ID. Further, the input device 50 can output this gesture ID to the application, and the application can specify the gesture.
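  • The sketch below illustrates one way such gesture IDs could be issued: the rotation angle is accumulated while the switch is on, and a new (direction, count) pair is emitted each time another determination zone (here, one full revolution) is passed. The zone size and the direction labels are assumptions.

```python
import math

class DialGestureIssuer:
    """Accumulate rotation while the switch is on and issue gesture IDs like ("R", 1)."""

    def __init__(self, zone=2.0 * math.pi):
        self.angle = 0.0
        self.count = 0
        self.zone = zone

    def feed(self, angular_rate, dt):
        """Call per sensor sample while the switch is on; returns a gesture ID or None."""
        self.angle += angular_rate * dt
        if abs(self.angle) >= (self.count + 1) * self.zone:
            self.count += 1
            direction = "R" if self.angle > 0 else "L"
            return (direction, self.count)
        return None

    def reset(self):
        """Call when the switch turns off."""
        self.angle = 0.0
        self.count = 0
```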
  • each component of each part illustrated does not necessarily need to be physically configured as illustrated.
  • the specific form of distribution and integration of each unit is not limited to that shown in the figures, and all or a part thereof can be functionally or physically distributed or integrated in arbitrary units according to various loads and usage conditions.
  • all or an arbitrary part of the various processing functions performed by each device may be executed on a CPU (or a microcomputer such as an MPU or MCU (Micro Controller Unit)).
  • needless to say, all or an arbitrary part of the various processing functions may also be realized as a program analyzed and executed by a CPU (or a microcomputer such as an MPU or MCU), or as hardware based on wired logic.
  • FIG. 17 is an explanatory diagram illustrating an example of a computer that executes an input support program.
  • the computer 300 includes a CPU 310, an HDD (Hard Disk Drive) 320, and a RAM (Random Access Memory) 340. These units 310 to 340 are connected via a bus 400.
  • the HDD 320 stores in advance an input support program 320a that exhibits the same functions as the detection unit 21, the extraction unit 22, the trajectory generation unit 23, the gesture recognition unit 24, and the output unit 25 of the wearable device 10, or the same functions as the processing units of the input device 50 such as the input detection unit 61. Note that the input support program 320a may be divided as appropriate.
  • the HDD 320 stores various information.
  • the CPU 310 reads the input support program 320a from the HDD 320 and executes it, thereby performing the same operations as the processing units of the embodiments. That is, the input support program 320a executes the same operations as the detection unit 21, the extraction unit 22, the trajectory generation unit 23, the gesture recognition unit 24, and the output unit 25, or as the input detection unit 61, the display control unit 62, the calibration unit 63, the axis detection unit 64, the gesture acquisition unit 65, the event specification unit 66, the character recognition unit 67, the recognition result display unit 68, and the operation command execution unit 69.
  • the input support program 320a is not necessarily stored in the HDD 320 from the beginning.
  • For example, the program may be stored in a "portable physical medium" such as a flexible disk (FD), a CD-ROM, a DVD, a magneto-optical disk, or an IC card inserted into the computer 300, and the computer 300 may read and execute the program from that medium.
  • Alternatively, the program may be stored in "another computer (or server)" connected to the computer 300 via a public line, the Internet, a LAN, a WAN, or the like, and the computer 300 may read and execute the program from there.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This input device is mounted on an indicator body and is provided with a switch which receives input. The input device detects motion information outputted from a motion sensor. Further, from a first component of the motion information, the input device extracts information relating to a first mode, and, from a second component of said motion information, extracts information relating to a second mode. Thereafter, the input device uses the input pattern inputted to the switch to output information relating to the first mode and/or information relating to the second mode.

Description

入力装置、入力支援方法および入力支援プログラムInput device, input support method, and input support program
The present invention relates to an input device, an input support method, and an input support program.
In recent years, wearable terminals such as smart watches and smart glasses have entered the market, and new services different from those of conventional personal computers and smartphones have become widespread. Because such wearable devices are worn on the body, input operations such as touching a screen are difficult.
For example, as input to a wearable device, a method of inputting characters by voice is known. A method of attaching a sensor to a finger and inputting simple commands by finger gestures is also known.
JP 2006-79221 A; JP 07-271506 A; JP 2015-35103 A
However, with the above techniques, operability is poor when making input to a wearable device. For example, because the user must manually set whether character input or gesture input is to be used before operating the sensor attached to the finger, the operation is cumbersome and setting mistakes lead to input errors. It is also difficult to determine automatically whether an operation is character input or gesture input, so an input different from the user's intention may be executed.
An object of one aspect is to provide an input device, an input support method, and an input support program that can improve operability.
In a first proposal, the input device is provided with a switch that receives input and is attached to an indicator. The input device includes a motion sensor, a detection unit that detects motion information output from the motion sensor, and an extraction unit that extracts information relating to a first mode from a first component of the motion information and extracts information relating to a second mode from a second component of the motion information. The input device further includes an output unit that outputs either or both of the information relating to the first mode and the information relating to the second mode in accordance with an input pattern on the switch.
According to one embodiment, operability can be improved.
FIG. 1 is a diagram illustrating an example of the overall configuration of a system according to the first embodiment.
FIG. 2A is a diagram illustrating an example of a wearable device.
FIG. 2B is a diagram illustrating an example of a wearable device.
FIG. 2C is a diagram illustrating an example of a wearable device.
FIG. 2D is a diagram illustrating an example of a wearable device.
FIG. 3 is a diagram illustrating an example of a head mounted display.
FIG. 4 is a functional block diagram of the functional configuration of the system according to the first embodiment.
FIG. 5 is a diagram illustrating an example of the rotation axes of a finger.
FIG. 6 is a diagram illustrating an output example from the wearable device.
FIG. 7 is a diagram illustrating an example of a menu screen.
FIG. 8 is a diagram illustrating an example of a display result of the trajectory of characters input by handwriting.
FIG. 9 is a diagram illustrating an example of the result of character recognition.
FIG. 10 is a flowchart illustrating the flow of processing.
FIG. 11 is a diagram illustrating the filtering process.
FIG. 12 is a diagram illustrating an example of filtering.
FIG. 13 is a diagram illustrating an example of trajectory correction.
FIG. 14 is a flowchart illustrating the flow of processing using the orientation of the wearable device.
FIG. 15 is a diagram illustrating storage targets based on scores.
FIG. 16 is a diagram illustrating an example of issuing a gesture ID.
FIG. 17 is an explanatory diagram illustrating an example of a computer that executes an input support program.
Hereinafter, embodiments of an input device, an input support method, and an input support program according to the present invention will be described in detail with reference to the drawings. The present invention is not limited to these embodiments. The embodiments can be combined as appropriate within a consistent range.
[Overall configuration]
FIG. 1 is a diagram illustrating an example of the overall configuration of a system according to the first embodiment. The input system shown in FIG. 1 includes a wearable device 10, a head mounted display 30, and an input device 50. The wearable device 10, the head mounted display 30, and the input device 50 are communicably connected via a network and can exchange various types of information. Any type of communication network, wired or wireless, such as mobile communication for cellular phones, the Internet, a LAN (Local Area Network), or a VPN (Virtual Private Network), can be adopted as this network. In the present embodiment, a case where the wearable device 10, the head mounted display 30, and the input device 50 communicate by wireless communication will be described as an example.
The input system is a system that supports user input. For example, the input system is used to support a user's work in a factory or the like, such as when the user takes notes or issues commands to other devices. The user may work while moving between various locations. For this reason, by enabling various inputs through the wearable device 10 instead of a fixed terminal such as a personal computer, the user can perform input while moving from place to place.
The wearable device 10 is a device that the user wears to detect the user's finger operations and gestures. In the present embodiment, the wearable device 10 is a device worn on a finger. The wearable device 10 detects changes in the posture of the finger and transmits information on the posture changes to the input device 50.
FIGS. 2A to 2C are diagrams illustrating an example of the wearable device. The wearable device 10 has a ring shape and, as shown in FIG. 2C, can be worn by passing a finger through the ring. In the wearable device 10, one part of the ring is formed thicker and wider than the rest and serves as a component housing that contains the main electronic components. The inner surface of the component housing is formed flat. That is, the wearable device 10 has a shape that fits the finger easily when the component housing is positioned on the upper side of the finger.
As shown in FIG. 2C, the wearable device 10 is worn on the finger in substantially the same orientation, with the component housing on the upper side of the finger. As shown in FIG. 2A, a switch 14 is provided on the side surface of the ring. As shown in FIG. 2C, the switch 14 is disposed at a position corresponding to the thumb when the wearable device 10 is worn on the index finger of the right hand. The area around the switch 14 is raised to the same height as the upper surface of the switch 14. As a result, the switch 14 is not turned on merely by resting a finger on it.
FIG. 2D is a diagram illustrating an example of operating the switch of the wearable device. The example of FIG. 2D shows a case where the wearable device 10 is worn on the index finger and the switch 14 is operated with the thumb. As shown on the left side of FIG. 2D, the switch 14 is not turned on merely by resting the thumb on it; as shown on the right side of FIG. 2D, the switch 14 is turned on by pressing it in with the thumb. At the start of input, the user rests the thumb on the input position and begins input by pressing the switch in.
The switch 14 is in the on state when compressed and in the off state when extended, and is biased toward the extended state by an elastic body such as a built-in spring. As a result, the switch 14 is turned on when pressed in with the thumb and turned off when the pressure is released. With this configuration, input cannot be started unless the wearable device 10 is worn in the correct state, so the wearing position is naturally corrected to the correct position when the device is put on the finger. In addition, the user can control input and non-input sections without releasing the thumb from the switch 14.
Returning to FIG. 1, the head mounted display 30 is a device that the user wears on the head and that displays various types of information so as to be visible to the user. The head mounted display 30 may be of a type for both eyes or for one eye only.
FIG. 3 is a diagram illustrating an example of the head mounted display. In the present embodiment, the head mounted display 30 has an eyeglass shape for both eyes. The lens portions are transparent so that the user can see the external real environment while wearing the display. A transparent display unit is built into part of the lens portion, and various types of information such as images can be displayed on the display unit. In this way, the head mounted display 30 realizes augmented reality that extends the real environment by letting the wearing user see various types of information in part of the field of view while still seeing the real environment. FIG. 3 schematically shows the display unit 30B provided in part of the field of view 30A of the wearing user.
The head mounted display 30 also has a camera built in between the two lens portions, and this camera can capture images in the line-of-sight direction of the wearing user.
Returning to FIG. 1, the input device 50 is a device that supports various inputs made by the user's finger operations. The input device 50 is, for example, a portable information processing device such as a smartphone or a tablet terminal. The input device 50 may also be implemented as one or more computers provided in a data center or the like. That is, the input device 50 may be a cloud computer as long as it can communicate with the wearable device 10 and the head mounted display 30.
Such an input system detects motion information output from a motion sensor included in the wearable device 10. The input system extracts information relating to a first mode from a first component of the motion information and extracts information relating to a second mode from a second component of the motion information. The input system then outputs either or both of the information relating to the first mode and the information relating to the second mode in accordance with the input pattern on the switch 14.
That is, the input system extracts a character trajectory and a gesture from the finger movement information output by the motion sensor worn on the finger, and outputs both or one of them according to the on/off state of the switch 14. In this way, the input system can process character recognition and gesture events simultaneously, and can improve the operability of input using the wearable device 10.
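As a reference, the following is a minimal Python sketch of how the output side of this idea could be organized. It is not the embodiment's implementation; the function name route_outputs and the switch_pattern labels are hypothetical.

```python
def route_outputs(trajectory, gestures, switch_pattern):
    """Decide what to emit based on how the switch 14 was operated.

    switch_pattern: a label such as "hold" (switch kept pressed while writing)
    or "click". The mapping below is an assumption for illustration only.
    """
    if switch_pattern == "hold":
        # A held switch delimits one handwriting segment, so both the character
        # trajectory and the gestures extracted from the same motion are emitted.
        return {"trajectory": trajectory, "gestures": gestures}
    if switch_pattern == "click":
        # A short click is treated here as a gesture-only input.
        return {"gestures": gestures}
    return {}
```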
[Configuration of each device]
Next, the functional configuration of the input system shown in FIG. 1 will be described. FIG. 4 is a functional block diagram of the functional configuration of the system according to the first embodiment. Here, the functional configurations of the wearable device 10, the head mounted display 30, and the input device 50 will be described.
(Functional configuration of the wearable device)
As illustrated in FIG. 4, the wearable device 10 includes a wireless unit 11, a storage unit 12, a posture sensor 13, a switch 14, and a control unit 20. The wearable device 10 may also have devices other than those listed above.
The wireless unit 11 is an interface that performs wireless communication control with other devices. A network interface card such as a wireless chip can be adopted as the wireless unit 11. The wireless unit 11 communicates wirelessly and transmits and receives various types of information to and from other devices. For example, the wireless unit 11 transmits finger operation information and posture change information to the input device 50.
The storage unit 12 is a storage device, such as a hard disk or a memory, that stores programs executed by the control unit 20 and various types of information. For example, the storage unit 12 stores various information generated by the control unit 20 and intermediate data of processing executed by the control unit 20.
The posture sensor 13 is a device that detects the user's finger operations. For example, the posture sensor 13 is a three-axis gyro sensor. As shown in FIG. 2C, the posture sensor 13 is built into the wearable device 10 so that, when the wearable device 10 is correctly worn on the finger, its three axes correspond to the rotation axes of the finger. FIG. 5 is a diagram illustrating an example of the rotation axes of a finger. In the example of FIG. 5, three mutually orthogonal axes X, Y, and Z are shown: the rotation axis for bending the finger is the Y axis, the rotation axis for swinging the finger left and right is the Z axis, and the rotation axis for rotating the finger is the X axis. Under the control of the control unit 20, the posture sensor 13 detects rotation about each of the X, Y, and Z axes and outputs the detected three-axis rotation to the control unit 20 as posture change information indicating the posture change of the finger.
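As a reference, the following Python sketch shows one way the three gyro axes could be mapped onto the finger-axis convention of FIG. 5. The class and function names are hypothetical, and the raw axis order is an assumption.

```python
from dataclasses import dataclass

@dataclass
class PostureSample:
    t: float       # timestamp in seconds
    rot_x: float   # angular velocity about X (rotating the finger), rad/s
    rot_y: float   # angular velocity about Y (bending the finger), rad/s
    rot_z: float   # angular velocity about Z (swinging left and right), rad/s

def to_posture_change(raw, t):
    """Map one raw 3-axis gyro reading (a tuple of rad/s values) onto the
    finger-axis convention of FIG. 5. The axis order of `raw` is assumed."""
    gx, gy, gz = raw
    return PostureSample(t=t, rot_x=gx, rot_y=gy, rot_z=gz)
```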
The switch 14 is a device that receives input from the user. As shown in FIG. 2C, the switch 14 is provided on the side surface of the ring of the wearable device 10. The switch 14 is turned on when pressed and turned off when released. For example, when the wearable device 10 is worn on the user's index finger, the switch 14 receives operation input from the user's thumb. The switch 14 outputs operation information indicating the received operation to the control unit 20. The user performs various inputs by operating the switch 14; for example, the user turns on the switch 14 when starting input by finger operation.
The control unit 20 is a device that controls the wearable device 10. An integrated circuit such as a microcomputer, an ASIC (Application Specific Integrated Circuit), or an FPGA (Field Programmable Gate Array) can be adopted as the control unit 20. The control unit 20 transmits the operation information of the switch 14 to the input device 50 via the wireless unit 11. When the switch 14 is turned on, the control unit 20 controls the posture sensor 13 to detect posture changes, and transmits the posture change information detected by the posture sensor 13 to the input device 50 via the wireless unit 11.
The control unit 20 includes a detection unit 21, an extraction unit 22, a trajectory generation unit 23, a gesture recognition unit 24, and an output unit 25.
The detection unit 21 is a processing unit that detects the sensor values output from the posture sensor 13, that is, motion information. Specifically, the detection unit 21 sets the position at the time the switch 14 is turned on as a reference, continues to detect the movement history from that reference until the switch 14 changes from on to off, and outputs the detected movement history to the extraction unit 22 as motion information. Coordinates, three-axis gyro values, or a combination of the two can be used as the movement history. The detection unit 21 can also output the times at which the switch 14 was turned on and off and the stop times during which movement halted, associated with the corresponding portions of the motion information.
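As a reference, a minimal Python sketch of the switch-delimited recording described above follows; the class name MovementRecorder and the pause threshold are hypothetical.

```python
class MovementRecorder:
    """Accumulates motion samples from switch-on to switch-off, together with
    on/off and pause markers (a hypothetical sketch of the detection unit 21)."""

    def __init__(self):
        self.recording = False
        self.history = []      # (t, rot_x, rot_y, rot_z) samples
        self.events = []       # (t, "switch_on" / "switch_off" / "pause") markers

    def on_switch(self, t, pressed):
        if pressed and not self.recording:
            self.recording = True
            self.history = []
            self.events = [(t, "switch_on")]
        elif not pressed and self.recording:
            self.recording = False
            self.events.append((t, "switch_off"))

    def on_sample(self, t, rot_x, rot_y, rot_z, pause_eps=0.05):
        if self.recording:
            self.history.append((t, rot_x, rot_y, rot_z))
            if abs(rot_x) + abs(rot_y) + abs(rot_z) < pause_eps:
                self.events.append((t, "pause"))
```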
The extraction unit 22 is a processing unit that extracts, from the motion information detected by the detection unit 21, the various pieces of information to be output. Specifically, the extraction unit 22 extracts information on the trajectory that characterizes a character and information on the gesture performed by the user.
For example, the extraction unit 22 extracts, from the motion information received from the detection unit 21, the series of motion information covering the period from when the switch 14 is turned on until it is turned off, and outputs it to the trajectory generation unit 23. That is, the extraction unit 22 extracts, as a single segment from switch-on to switch-off, the series of movement information describing how the finger moved.
The extraction unit 22 also extracts, from the motion information received from the detection unit 21, the periods from the start to the end of each finger movement, extracts the motion information within each period, and outputs it to the gesture recognition unit 24. That is, the extraction unit 22 extracts a plurality of segments and, for each segment, extracts the movement information that specifies the finger operation.
The trajectory generation unit 23 is a processing unit that generates the movement trajectory of the finger from the series of motion information, extracted by the extraction unit 22, that covers the period from when the switch 14 is turned on until it is turned off. For example, in the motion information received from the extraction unit 22, the trajectory generation unit 23 sets the position at which the switch 14 was turned on as the reference and the position at which the switch 14 was turned off as the end position. The trajectory generation unit 23 then sets the reference as the start of the trajectory and generates a trajectory connecting the start to the end position. The trajectory generation unit 23 stores the generated trajectory information in the storage unit 12 in association with the generation time, and also outputs it to the output unit 25.
The gesture recognition unit 24 is a processing unit that recognizes, from the motion information extracted by the extraction unit 22, the gesture performed by the user with the finger. For example, the gesture recognition unit 24 sets the position at which the switch 14 was turned on as the reference, and sets the position at which the finger first stops for a predetermined time as the first end position. The gesture recognition unit 24 then identifies the trajectory from the reference to the first end position as the first trajectory.
Subsequently, the gesture recognition unit 24 sets the first end position as the second start position, and sets the position at which the finger next stops for a predetermined time as the second end position. The gesture recognition unit 24 then identifies the trajectory from the second start position to the second end position as the second trajectory. In this way, the gesture recognition unit 24 identifies each period from the start to the end of a movement and generates a trajectory for each period. The gesture recognition unit 24 identifies each trajectory as a gesture, stores the identified gesture and its time in the storage unit 12 in association with each other, and outputs each trajectory to the output unit 25 as gesture information.
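As a reference, the pause-based segmentation described above can be sketched in Python as follows; the thresholds and the function name are assumptions for illustration.

```python
def split_into_gesture_trajectories(history, pause_eps=0.05, pause_min_samples=20):
    """Split one switch-on-to-switch-off movement history into sub-trajectories
    delimited by pauses, roughly as described for the gesture recognition unit 24.
    `history` is a list of (t, rot_x, rot_y, rot_z) samples."""
    trajectories, current, still = [], [], 0
    for sample in history:
        _, rx, ry, rz = sample
        if abs(rx) + abs(ry) + abs(rz) < pause_eps:
            still += 1
            if still >= pause_min_samples and current:
                trajectories.append(current)   # the finger stopped: close this gesture
                current = []
        else:
            still = 0
            current.append(sample)
    if current:
        trajectories.append(current)
    return trajectories
```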
The output unit 25 is a processing unit that transmits the various types of information generated by the control unit 20 to the input device 50. For example, the output unit 25 transmits the trajectory information generated by the trajectory generation unit 23 and the gesture information generated by the gesture recognition unit 24 to the input device 50. The output unit 25 also transmits the on/off information of the switch 14 to the input device 50.
Here, the trajectory information and gesture information generated from a series of finger operations (motion information) detected by the wearable device 10 will be described. FIG. 6 is a diagram illustrating an output example from the wearable device.
As shown in (1) of FIG. 6, the detection unit 21 detects that the user's finger started at a, paused at b, and then moved further to c, and extracts this a → b → c movement as motion information. The extraction unit 22 then extracts the series of operations a → b → c with a as the reference and outputs it to the trajectory generation unit 23. The extraction unit 22 further extracts the trajectory a → b and the trajectory b → c and outputs them to the gesture recognition unit 24.
Then, as shown in (2) of FIG. 6, the trajectory generation unit 23 identifies the user's finger movement as the trajectory "0" from the a → b → c motion information. Meanwhile, the gesture recognition unit 24 identifies the gesture a → b and the gesture b → c. As shown in (3) of FIG. 6, such gestures correspond to operations of turning a dial: the gesture a → b corresponds to turning the dial to "0", and the gesture b → c corresponds to turning the dial to "1". The output unit 25 then transmits operation information including the trajectory information of "0" shown in (2) of FIG. 6 and the gesture information indicating the dial rotation shown in (3) of FIG. 6 to the input device 50.
(Functional configuration of the head mounted display)
As illustrated in FIG. 4, the head mounted display 30 includes a wireless unit 31, a camera 32, a display unit 33, and a control unit 34. The head mounted display 30 may also have devices other than those listed above.
The wireless unit 31 is a device that performs wireless communication and transmits and receives various types of information to and from other devices. For example, the wireless unit 31 receives, from the input device 50, image information of images to be displayed on the display unit 33 and operation commands instructing shooting. The wireless unit 31 also transmits image information of images captured by the camera 32 to the input device 50.
The camera 32 is a device that captures images. As shown in FIG. 3, the camera 32 is provided between the two lens portions and captures images under the control of the control unit 34.
The display unit 33 is a device that displays various types of information. As shown in FIG. 3, the display unit 33 is provided in the lens portion of the head mounted display 30. For example, the display unit 33 displays a menu screen described later, a virtual laser pointer, an input trajectory, and the like.
The control unit 34 is a device that controls the head mounted display 30. An electronic circuit such as a CPU (Central Processing Unit) or MPU (Micro Processing Unit), or an integrated circuit such as a microcomputer, ASIC, or FPGA can be adopted as the control unit 34. The control unit 34 displays the image information received from the input device 50 on the display unit 33. When the control unit 34 receives an operation command instructing shooting from the input device 50, it controls the camera 32 to capture an image and controls the wireless unit 31 to transmit the image information of the captured image to the input device 50.
(Functional configuration of the input device)
As illustrated in FIG. 4, the input device 50 includes a wireless unit 51, a storage unit 52, and a control unit 60. The input device 50 may also have devices other than those listed above.
The wireless unit 51 is a device that performs wireless communication and transmits and receives various types of information to and from other devices. For example, the wireless unit 51 receives trajectory information, posture change information, and the like from the wearable device 10. The wireless unit 51 also transmits image information of images to be displayed on the head mounted display 30 and various operation commands to the head mounted display 30, and receives image information of images captured by the camera 32 of the head mounted display 30.
The storage unit 52 is a storage device such as a hard disk, an SSD (Solid State Drive), or an optical disk. The storage unit 52 may also be a semiconductor memory capable of rewriting data, such as a RAM (Random Access Memory), a flash memory, or an NVSRAM (Non Volatile Static Random Access Memory).
The storage unit 52 stores the OS (Operating System) and various programs executed by the control unit 60. For example, the storage unit 52 stores various programs used for input support, as well as various data used by the programs executed by the control unit 60. For example, the storage unit 52 stores recognition dictionary data 53, memo information 54, and image information 55.
The recognition dictionary data 53 is dictionary data for recognizing handwritten characters. For example, the recognition dictionary data 53 stores standard trajectory information of various characters. The memo information 54 is data storing information input by handwriting. For example, the memo information 54 stores an image of handwritten characters in association with the character information obtained by recognizing them. The image information 55 is image information of images captured by the camera 32 of the head mounted display 30.
The control unit 60 is a device that controls the input device 50. An electronic circuit such as a CPU or MPU, or an integrated circuit such as a microcomputer, ASIC, or FPGA can be adopted as the control unit 60. The control unit 60 has an internal memory for storing programs defining various processing procedures and control data, and executes various processes using them.
The control unit 60 functions as various processing units by running various programs. For example, the control unit 60 includes an input detection unit 61, a display control unit 62, a calibration unit 63, an axis detection unit 64, a gesture acquisition unit 65, an event identification unit 66, a character recognition unit 67, a recognition result display unit 68, and an operation command execution unit 69.
The input detection unit 61 detects various inputs based on the operation information and posture change information received from the wearable device 10. For example, the input detection unit 61 detects operations on the switch 14 from the operation information: it detects a single click, double click, triple click, or long press of the switch 14 from the number of times the switch 14 is pressed within a predetermined time. The input detection unit 61 also identifies rotation about the three axes and movement along each axis from the posture change information received from the wearable device 10, and detects changes in the posture of the finger.
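As a reference, click classification from switch timing can be sketched in Python as follows; the window and long-press thresholds are assumed values, not those of the embodiment.

```python
def classify_clicks(press_times, release_times, window=0.4, long_press=0.8):
    """Classify switch operations from press/release timestamps in seconds.
    The time thresholds are assumptions for illustration only."""
    if not press_times:
        return "none"
    # Long press: a single press held longer than `long_press` seconds.
    if len(press_times) == 1 and release_times and \
            release_times[0] - press_times[0] >= long_press:
        return "long_press"
    # Count presses that fall within `window` seconds of the first press.
    count = sum(1 for t in press_times if t - press_times[0] <= window)
    return {1: "single_click", 2: "double_click", 3: "triple_click"}.get(count, "other")
```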
The display control unit 62 performs various kinds of display control. For example, the display control unit 62 generates image information for various screens according to the detection results of the input detection unit 61 and transmits the generated image information to the head mounted display 30 via the wireless unit 51. As a result, the corresponding image is displayed on the display unit 33 of the head mounted display 30. For example, when the input detection unit 61 detects a double click, the display control unit 62 causes the display unit 33 of the head mounted display 30 to display a menu screen.
FIG. 7 is a diagram illustrating an example of the menu screen. As shown in FIG. 7, the menu screen 70 displays the items "1 Calibration", "2 Memo input", "3 Memo browsing", and "4 Shooting". The "1 Calibration" item designates a calibration mode for calibrating the detected finger posture information. The "2 Memo input" item designates a memo input mode for inputting memos by handwriting. The "3 Memo browsing" item designates a browsing mode for browsing memos that have already been input. The "4 Shooting" item designates a shooting mode for capturing images with the camera 32 of the head mounted display 30.
As an example, the input device 50 according to the present embodiment allows items on the menu screen 70 to be selected by handwriting input, a cursor, finger gestures, and the like. For example, when a handwritten trajectory is recognized as one of the numbers "1" to "4", the display control unit 62 determines that the mode of the item corresponding to the recognized number has been selected. The display control unit 62 also displays a cursor on the screen and moves the cursor according to the finger posture changes detected by the input detection unit 61. For example, when rotation about the Y axis is detected, the display control unit 62 moves the cursor in the horizontal direction of the screen at a speed corresponding to the rotation; when rotation about the Z axis is detected, it moves the cursor in the vertical direction of the screen at a speed corresponding to the rotation. When the cursor is positioned over one of the items on the menu screen 70 and the input detection unit 61 detects a single click, the display control unit 62 determines that the mode of the item under the cursor has been selected. When any item on the menu screen 70 is selected, the display control unit 62 closes the menu screen 70.
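As a reference, the cursor movement described above can be sketched as follows; the gain and sampling interval are assumed values.

```python
def move_cursor(cursor_x, cursor_y, rot_y, rot_z, gain=200.0, dt=0.01):
    """Move an on-screen cursor from detected finger rotation, following the
    description above (Y-axis rotation -> horizontal, Z-axis rotation -> vertical).
    `gain` (pixels per radian) and `dt` (sample interval) are assumed values."""
    cursor_x += gain * rot_y * dt   # speed proportional to the rotation
    cursor_y += gain * rot_z * dt
    return cursor_x, cursor_y
```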
The calibration unit 63 calibrates the detected finger posture information. For example, when the calibration mode is selected on the menu screen 70, the calibration unit 63 calibrates the detected finger posture information.
Here, the wearable device 10 may be worn in a state rotated in the circumferential direction relative to the finger. When the wearable device 10 is worn in such a shifted state, the posture changes detected by the wearable device 10 are offset by the amount of rotation, and the detected motion may differ from the user's intention. In such a case, the user selects the calibration mode on the menu screen 70 and then opens and closes the hand on which the wearable device 10 is worn. The wearable device 10 transmits the posture change information of the finger posture changes caused by opening and closing the hand to the input device 50.
For example, based on the posture change information, the calibration unit 63 detects the movement of the finger when the finger wearing the wearable device 10 is bent and stretched by opening and closing the hand. The calibration unit 63 then calibrates the reference direction of the finger movement based on the detected finger movement.
The axis detection unit 64 detects an axis indicating the posture based on the finger posture changes detected by the input detection unit 61. For example, the axis detection unit 64 detects an axis whose direction moves according to the posture changes of the finger. For example, the axis detection unit 64 calculates the direction vector of an axis that passes through the origin in a three-dimensional space and moves in the X, Y, and Z directions according to the rotation direction and rotation speed about each of the X, Y, and Z rotation axes. Note that, when motion is detected from posture alone, it becomes more difficult to move the wrist widely the farther it is turned away from the straight-ahead direction. Also, when the palm is horizontal, the degree of freedom in the vertical direction is high, but the degree of freedom in the horizontal direction may be low.
Therefore, the axis detection unit 64 may vary the vertical and horizontal pointing sensitivity around the center point of the axis direction corrected by the calibration unit 63. For example, the axis detection unit 64 calculates the axis direction vector while amplifying rotation in the horizontal direction of the hand more than rotation in the vertical direction. That is, for the same amount of rotation, the axis detection unit 64 corrects the movement amount due to horizontal rotation to be larger than that due to vertical rotation. The axis detection unit 64 may also increase the sensitivity with distance from the center point of the corrected axis direction. For example, the axis detection unit 64 calculates the axis direction vector while amplifying rotation more the farther it is from the center point of the axis direction. That is, for the same amount of rotation, the axis detection unit 64 corrects the movement amount due to rotation in the peripheral region far from the center point to be larger than that due to rotation near the center point. Since the rotation sensitivity is thus set according to how easily the wrist can move, accurate pointing becomes easier.
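As a reference, the sensitivity correction described above can be sketched as follows; all gain values are assumptions for illustration.

```python
import math

def corrected_displacement(d_horiz, d_vert, offset_from_center,
                           horiz_gain=1.5, vert_gain=1.0, radial_gain=0.5):
    """Apply the sensitivity corrections described above to raw angular
    displacements (radians). All gain values are assumed for illustration.

    - Horizontal rotation is amplified more than vertical rotation.
    - Both are amplified further the farther the axis is from the center point
      (offset_from_center is an (x, y) offset of the pointing axis).
    """
    radial_factor = 1.0 + radial_gain * math.hypot(*offset_from_center)
    return (horiz_gain * radial_factor * d_horiz,
            vert_gain * radial_factor * d_vert)
```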
The control unit 60 can also cause the display unit 33 of the head mounted display 30 to display a virtual laser pointer linked to the axis detected by the axis detection unit 64. For example, when the calibration mode is selected on the menu screen 70, the control unit 60 generates image information for a screen on which a virtual laser pointer linked to the detected axis is arranged, and controls the wireless unit 51 to transmit the generated image information to the head mounted display 30. As a result, an image of the virtual laser pointer is displayed on the display unit 33 of the head mounted display 30.
The gesture acquisition unit 65 is a processing unit that acquires gesture information from the wearable device 10. For example, the gesture acquisition unit 65 receives gesture information including the trajectory recognized by the wearable device 10 and outputs it to the event identification unit 66. When the gesture acquisition unit 65 receives a gesture identifier or the like identifying a gesture from the wearable device 10, it outputs the received gesture identifier to the event identification unit 66.
The event identification unit 66 is a processing unit that identifies the event corresponding to a gesture. For example, the event identification unit 66 stores in advance associations between gesture information or gesture identifiers and gesture contents. When the event identification unit 66 receives gesture information or a gesture identifier from the gesture acquisition unit 65, it identifies the gesture performed by the user according to the stored associations and outputs the identified gesture content to the operation command execution unit 69.
The character recognition unit 67 is a processing unit that performs character recognition according to the trajectory information acquired from the wearable device 10. For example, the character recognition unit 67 compares the acquired trajectory with the standard trajectories of the various characters stored in the recognition dictionary data 53, identifies the character with the highest similarity, and outputs the character code of the identified character. The character recognition unit 67 also outputs the recognized character and character code to the operation command execution unit 69.
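As a reference, dictionary-based matching of a trajectory against standard character trajectories can be sketched as follows. The distance-based score is a toy stand-in for the actual similarity measure, and the names are hypothetical.

```python
def recognize_character(trajectory, recognition_dictionary):
    """Pick the dictionary character whose standard trajectory is most similar
    to the input trajectory.

    trajectory: list of (x, y) points.
    recognition_dictionary: dict mapping a character to its standard (x, y) list.
    """
    def resample(points, n=32):
        # Crude fixed-length resampling so trajectories of different lengths compare.
        step = max(1, len(points) // n)
        return points[::step][:n]

    def score(a, b):
        a, b = resample(a), resample(b)
        m = min(len(a), len(b))
        dist = sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                   for (ax, ay), (bx, by) in zip(a[:m], b[:m]))
        return 1000.0 / (1.0 + dist)   # higher score means more similar

    best_char, best_traj = max(recognition_dictionary.items(),
                               key=lambda item: score(trajectory, item[1]))
    return best_char, score(trajectory, best_traj)
```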
When the recognized character contains a flick (hane), the character recognition unit 67 can also correct the trajectory specified from the trajectory information or the character recognition result and then perform character recognition again. For example, when the trajectory contains a "flick to the upper left", which is rarely used in ordinary kanji, the character recognition unit 67 can delete the trajectory corresponding to that flick and then perform character recognition.
Here, when performing handwritten character input, the user can also input each character by long-pressing the switch 14; that is, the switch 14 is released once for each character. The character recognition unit 67 can record the handwriting trajectory one character at a time and recognize characters from the trajectory one character at a time.
FIG. 8 is a diagram illustrating an example of a display result of the trajectory of characters input by handwriting. FIG. 8(A) is an example in which "鳥" (bird) is input by handwriting, and FIG. 8(B) is an example in which "神" (god) is input by handwriting. As shown in FIGS. 8(A) and 8(B), the control unit 60 makes the character indicated by the trajectory easier to recognize by displaying the portions of the trajectory that move to the upper left in a lighter, distinguishable form. In the example of FIG. 8, the lightly displayed line portions are indicated by broken lines. The character recognition unit 67 performs character recognition on the trajectory indicated by the dark lines in FIGS. 8(A) and 8(B).
FIG. 9 is a diagram illustrating an example of the result of character recognition. In the example of FIG. 9, the lightly displayed line portions are indicated by broken lines. FIG. 9 shows the result of character recognition for a handwritten "通", comparing the case where recognition is performed without deleting the upper-left trajectory and the case where it is performed after deleting the upper-left trajectory, together with the candidate characters and their similarity scores. The candidate characters are shown after "code:". The score indicates the similarity; the larger the value, the higher the similarity. For example, for a handwritten "森", the score is 920 when the character is recognized without deleting the upper-left trajectory and 928 when it is recognized after deleting it, so the score is higher when the upper-left trajectory is deleted. Deleting the upper-left trajectory also suppresses erroneous conversion. For example, for a handwritten "通", the recognition candidates include "連", "遅", and "遍", but since the score is higher when the upper-left trajectory is deleted, erroneous conversion can be suppressed.
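As a reference, removing upper-left movements from a trajectory before re-running recognition can be sketched as follows; the coordinate convention (x growing rightward, y growing downward) and the length threshold are assumptions.

```python
def drop_upper_left_strokes(trajectory, min_len=5):
    """Remove runs of points that move toward the upper left (the pen-up-like
    flick segments described above) before recognition is run again.
    trajectory: list of (x, y) points in screen coordinates."""
    cleaned, run = [], []
    for prev, cur in zip(trajectory, trajectory[1:]):
        dx, dy = cur[0] - prev[0], cur[1] - prev[1]
        if dx < 0 and dy < 0:           # moving left and up
            run.append(cur)
        else:
            if len(run) < min_len:      # short upper-left moves are kept
                cleaned.extend(run)
            run = []
            cleaned.append(cur)
    if len(run) < min_len:
        cleaned.extend(run)
    return [trajectory[0]] + cleaned if trajectory else cleaned
```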
The character recognition unit 67 also stores the trajectory, the recognition result, the character code, the recognition execution time, and the like in the storage unit 52 in association with each other. For example, the character recognition unit 67 stores the trajectory of the handwritten characters and the recognized characters in the memo information 54. For example, when the memo input mode is selected on the menu screen 70, the character recognition unit 67 associates the trajectory information with the recognized characters and stores them in the memo information 54 together with date and time information. The information stored in the memo information 54 can be referred to later. For example, when the memo browsing mode is selected on the menu screen 70, the character recognition unit 67 displays the information stored in the memo information 54 of the storage unit 52.
As an example, the character recognition unit 67 displays the input date and time of a memo, the text obtained by recognizing the handwritten characters, and the text as an image of the handwritten characters in association with each other. By displaying the recognized text and the image of the handwritten characters together in this way, the user can confirm whether the handwritten characters have been recognized correctly. In addition, even when a character is erroneously converted during trajectory recognition, the user can refer to the corresponding character image and grasp what was handwritten. Furthermore, the image of the handwritten characters records the characteristics of the user's handwriting; by also storing this image, it can be used, like a signature, as proof that the user made the input.
The recognition result display unit 68 is a processing unit that displays the result of the character recognition performed by the character recognition unit 67. For example, when a character recognized by the character recognition unit 67 contained a flick and was corrected, the recognition result display unit 68 displays the recognition result before correction and the recognition result after correction on the head mounted display 30 or the like. In this way, the user can be asked whether the recognition result is correct.
The operation command execution unit 69 outputs operation commands to other devices based on the recognized characters, symbols, gestures, and the like. For example, when shooting with the camera 32 of the head mounted display 30, the user selects the shooting mode on the menu screen 70. Then, at the timing at which the user wishes to shoot, the user long-presses the switch 14 of the wearable device 10 and handwrites a predetermined character. The predetermined character may be any character, number, or symbol, for example, "1".
When the shooting mode is selected on the menu screen 70, the operation command execution unit 69 enters a shooting preparation state. The character recognition unit 67 records the trajectory handwritten in the shooting preparation state and recognizes the character from the trajectory. When the character recognition unit 67 recognizes the predetermined character, the operation command execution unit 69 transmits an operation command instructing shooting to the head mounted display 30. Upon receiving the operation command instructing shooting, the head mounted display 30 captures an image with the camera 32 and transmits the image information of the captured image to the input device 50. The control unit 60 stores the image information of the captured image in the storage unit 52 as the image information 55.
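As a reference, triggering a shooting command from a recognized character can be sketched as follows; send_command is a hypothetical callback standing in for the transmission to the head mounted display 30.

```python
def handle_recognized_character(char, shooting_ready, send_command,
                                trigger_char="1"):
    """Issue a shooting command when the predetermined character is recognized
    in the shooting preparation state. `send_command` is a hypothetical callback
    that transmits an operation command to the head mounted display."""
    if shooting_ready and char == trigger_char:
        send_command({"target": "head_mounted_display", "command": "capture"})
        return True
    return False

# Usage sketch: handle_recognized_character("1", True, print) would "send" a
# capture command (here simply printed) and return True.
```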
The operation command execution unit 69 can also execute processing corresponding to the gesture specified by the event specifying unit 66. For example, the operation command execution unit 69 may hold, in the storage unit 52 or the like, a correspondence table that associates gesture information with commands, and issue the operation command corresponding to a gesture according to that table. A gesture ID, described later, may be used instead of the gesture information. In this way, the input device 50 can operate another device by outputting operation commands to it.
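The correspondence table could be held as a simple dictionary, as in the sketch below; the gesture keys, command names, and the dispatch callable are assumptions made for illustration only.

# Hypothetical gesture-to-command correspondence table.
GESTURE_COMMANDS = {
    "rotate_ccw": "volume_up",      # illustrative pairs only
    "rotate_cw": "volume_down",
    "flick_right": "next_page",
}

def execute_gesture(gesture_key, dispatch):
    """Issue the operation command associated with a recognized gesture."""
    command = GESTURE_COMMANDS.get(gesture_key)
    if command is not None:
        dispatch(command)   # send the command to the target device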
[Process flow]
 FIG. 10 is a flowchart showing the flow of processing. As shown in FIG. 10, when the wearable device 10 is powered on (S101: Yes) and the switch 14 is turned on (S102: Yes), the detection unit 21 of the wearable device 10 detects the movement of the fingertip via the posture sensor 13 (S103).
Subsequently, the detection unit 21 extracts the motion component of the finger operation (S104). For example, the detection unit 21 detects the sensor values output from the posture sensor 13, that is, the motion information.
The extraction unit 22 then extracts, from the extracted motion component, the motion pattern observed since the motion started (S105). The trajectory generation unit 23 generates the movement transition of the finger, that is, trajectory information, using the series of motion information from when the switch 14 is turned on until it is turned off, out of the extracted motion pattern (S106). In parallel with S106, the gesture recognition unit 24 generates the finger operation, that is, gesture information, using the motion information for each period from the start to the end of the finger motion out of the extracted motion pattern (S107), and the output unit 25 outputs the gesture information to the input device 50 (S108). Thereafter, the gesture acquisition unit 65 of the input device 50 recognizes the gesture made by the finger operation, and the operation command execution unit 69 executes the operation corresponding to the recognized gesture.
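Roughly, the device-side split of S106 and S107 feeds one motion buffer to both outputs; the following sketch assumes hypothetical helper functions (segment_motions, make_trajectory, recognize_gesture, send) that the embodiment does not define.

# Hypothetical sketch of S104-S108: one motion buffer feeds both the
# trajectory path (whole on-to-off span) and the gesture path (per motion).
def process_switch_session(samples, segment_motions, make_trajectory,
                           recognize_gesture, send):
    """samples: motion information collected while the switch was on."""
    # Gesture path: evaluate each motion segment (start to end of a finger motion).
    for segment in segment_motions(samples):
        gesture = recognize_gesture(segment)
        if gesture is not None:
            send({"kind": "gesture", "value": gesture})   # S107-S108
    # Trajectory path: use the whole on-to-off series of motion information.
    trajectory = make_trajectory(samples)                  # S106
    send({"kind": "trajectory", "value": trajectory})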
While the switch 14 remains on (S109: No), the processing from S103 onward is repeated. When the switch 14 is turned off (S109: Yes), the input detection unit 61 of the input device 50 identifies the state change of the switch 14 according to the information received from the wearable device 10 (S110). The input detection unit 61 then generates an event that specifies an operation according to the state change of the switch 14 (S111). The display control unit 62 and the operation command execution unit 69 output the operation corresponding to the specified event to the relevant device and have it executed (S112).
In parallel with the processing from S110 onward, the character recognition unit 67 specifies the trajectory drawn by the finger using the trajectory information acquired from the wearable device 10 (S113). The character recognition unit 67 then performs character recognition using the specified trajectory and the recognition dictionary data 53 (S114). In parallel with S114, the character recognition unit 67 corrects the specified trajectory or the character recognition result (S115). The recognition result display unit 68 displays information such as the character recognition result, the correction result, and the trajectory on the head mounted display 30 and stores it in the memo information 54 (S116). Thereafter, the operation command execution unit 69 outputs an operation command corresponding to the recognition result (S117).
[Effects]
 As described above, the wearable device 10 and the input device 50 process gesture events and character recognition simultaneously and, depending on the result, can output a character input, a gesture event, or both. As a result, input operations can be performed without manually switching the input mode, which saves the user effort and suppresses setting mistakes. Operability can therefore be improved.
Although an embodiment of the present invention has been described above, the present invention is not limited to it. In the second embodiment, processing different from that of the first embodiment will therefore be described.
[Filtering]
 The input device 50 can improve character recognition accuracy by filtering the motion information with a cutoff frequency to suppress the body-sway component. Specifically, when the switch 14 is turned from on to off, the character recognition unit 67 of the input device 50 can extract the finger movement pattern from the result of filtering the motion information according to a cutoff frequency corresponding to a preset character type, and extract trajectory information from the extracted movement pattern.
For example, the character recognition unit 67 holds, in the storage unit 52 or the like, a correspondence table that associates character types with cutoff frequencies (fc (Hz)), such as the pairs "default, 3", "digits, 2", "katakana, 0.5", "hiragana, 2", "alphabet, 1.5", and "kanji, 0.2" for "type, fc". These settings are only an example, but fc is set high for character types with short input times, such as digits, and set low for character types with long input times, such as kanji.
The character recognition unit 67 accepts the setting of the character recognition target from the user in advance and determines fc according to the correspondence table. In this state, when the character recognition unit 67 acquires motion information from the wearable device 10, it filters the input motion information using the determined fc and performs character recognition on the filtering result.
FIG. 11 is a diagram for explaining the filtering process. As shown in FIG. 11, the character recognition unit 67 sets the character type for which input characters are to be recognized (S201). The character recognition unit 67 then searches the correspondence table of character types and fc for the fc corresponding to the set character type (S202). When a plurality of character types are set, the character recognition unit 67 selects the lowest fc (S203). The character recognition unit 67 then sets the cutoff frequency fc corresponding to the input character type in a high-pass filter (HPF) (S204).
Thereafter, when the character recognition unit 67 receives the motion information, expressed in the frequency domain, from the wearable device 10, it passes the information through a low-pass filter (LPF) to cut spike noise. It then passes the result through the high-pass filter in which the cutoff frequency fc has been set to suppress the body-sway component, and performs character recognition on that output.
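One way to realize the table lookup (S202-S203) and the two-stage LPF/HPF chain is sketched below using SciPy Butterworth filters; the sampling rate, filter orders, and the LPF cutoff are assumptions, and only the fc table follows the example above.

import numpy as np
from scipy.signal import butter, filtfilt

# Character type -> cutoff frequency fc (Hz), as in the example table.
FC_TABLE = {"default": 3.0, "digits": 2.0, "katakana": 0.5,
            "hiragana": 2.0, "alphabet": 1.5, "kanji": 0.2}

def select_fc(char_types):
    # When several character types are set, use the lowest fc (S203).
    return min(FC_TABLE.get(t, FC_TABLE["default"]) for t in char_types)

def filter_motion(motion, fs, char_types, lpf_cut=20.0):
    """motion: 1-D array of motion information sampled at fs Hz (assumed)."""
    # LPF to cut spike noise (cutoff value is an assumption).
    b, a = butter(2, lpf_cut, btype="low", fs=fs)
    smoothed = filtfilt(b, a, np.asarray(motion, dtype=float))
    # HPF with fc chosen from the character type, to suppress body sway.
    fc = select_fc(char_types)
    b, a = butter(2, fc, btype="high", fs=fs)
    return filtfilt(b, a, smoothed)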
FIG. 12 is a diagram for explaining an example of filtering, namely the filtering performed by the HPF. As shown in FIG. 12, the character recognition unit 67 changes the cutoff frequency fc according to the input target, that is, the character type to be determined. For example, for character types with short input times such as digits, fc is set high, so character recognition is performed mainly on the fast movement components of the handwriting while the slow body-sway components are suppressed. For character types with long input times such as kanji, fc is set low, so the slower movement components of the handwriting are retained for character recognition.
In this way, the accuracy of character recognition can be improved, and the accuracy of operation events based on finger movement also improves.
[Correction of the trajectory]
 For finger operations, there are directions in which the finger moves easily and directions in which it does not, and if both are processed in the same way the trajectory may not be recognized correctly. For example, when the wearable device 10 is worn on the index finger of the right hand and only the finger is moved without moving the wrist, it is generally easier to move to the right than to the left, and easier to move downward than upward. In the second embodiment, therefore, an example is described in which the trajectory is recognized correctly by changing the gain according to the direction.
FIG. 13 is a diagram for explaining an example of trajectory correction. As shown in FIG. 13, the posture at the moment the switch 14 is turned on is taken as the posture zero point, and a gain is set that converts the estimated change in finger posture from the zero point, calculated from the data after the motion component of the finger operation has been extracted, into the length of the trajectory. The larger this gain, the longer the trajectory that can be drawn with a small posture change. For example, the profile of the trajectory-generation gain is varied according to the direction of the posture change from the zero point, the rate of change of the gain is varied according to the amount of the posture change from the zero point, and the gain C at the zero point is kept identical, so that the gain is a continuous function satisfying these conditions.
In this way, the character recognition unit 67 increases the gain for posture changes in the +Y-axis direction for trajectories along the vertical axis, and increases the gain for posture changes in the -Z-axis direction for trajectories along the horizontal axis. As a result, trajectories in directions that are hard to move can be processed in the same way as trajectories in directions that are easy to move, so the trajectory can be recognized accurately.
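A continuous, direction-dependent gain of the kind described could look like the following sketch; the gain constants, the tanh profile, and which sign corresponds to the hard-to-move direction are purely illustrative assumptions.

import math

BASE_GAIN = 1.0                   # gain C at the posture zero point (assumed value)
GAIN_UP, GAIN_DOWN = 1.6, 1.0     # vertical axis: boost the harder +Y direction
GAIN_LEFT, GAIN_RIGHT = 1.6, 1.0  # horizontal axis: boost the harder -Z direction

def trajectory_gain(d_y, d_z):
    """Return per-axis gains for a posture change (d_y, d_z) from the zero point."""
    # Gains grow smoothly with the amount of posture change, depend on its
    # direction, and share the same value (BASE_GAIN) at the zero point.
    g_y = BASE_GAIN + (GAIN_UP if d_y > 0 else GAIN_DOWN) * math.tanh(abs(d_y))
    g_z = BASE_GAIN + (GAIN_LEFT if d_z < 0 else GAIN_RIGHT) * math.tanh(abs(d_z))
    return g_y, g_z

def to_stroke(d_y, d_z):
    """Convert a posture change into a trajectory displacement."""
    g_y, g_z = trajectory_gain(d_y, d_z)
    return g_y * d_y, g_z * d_z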
[Consideration of the gravity direction]
 In the first embodiment, an example was described in which both the trajectory (movement transition) and gesture recognition are performed, but the invention is not limited to this. Here, an example is described in which only one of them is output, based on the posture at the time the switch 14 is turned on.
FIG. 14 is a flowchart for explaining the flow of processing that uses the orientation of the wearable device. As shown in FIG. 14, when the wearable device 10 is powered on (S301: Yes) and the switch 14 is turned on (S302: Yes), the detection unit 21 of the wearable device 10 determines, via the posture sensor 13, whether the gravity-axis direction at the moment the switch 14 was turned on is the Y axis or the X axis, and assigns the result to a variable G (S303). The detection unit 21 then detects the movement of the fingertip via the posture sensor 13 (S304).
Subsequently, the detection unit 21 extracts the motion component of the finger operation (S305). The extraction unit 22 then extracts, from the extracted motion component, the motion pattern observed since the motion started (S306).
Thereafter, when G is the X axis rather than the Y axis (S307: No), the trajectory generation unit 23 generates the movement transition of the finger, that is, trajectory information, using the series of motion information from when the switch 14 is turned on until it is turned off, out of the extracted motion pattern (S308).
On the other hand, when G is the Y axis (S307: Yes), the gesture recognition unit 24 generates the finger operation, that is, gesture information, using the motion information for each period from the start to the end of the finger motion out of the extracted motion pattern (S309), and the output unit 25 outputs the gesture information to the input device 50 (S310).
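The branch at S307 can be expressed as a small dispatcher like the one below; the helper functions, the axis encoding, and the output format are assumptions for illustration.

def handle_session(samples, gravity_axis_at_switch_on,
                   make_trajectory, recognize_gestures, send):
    """Choose between trajectory output and gesture output (S307-S310)."""
    if gravity_axis_at_switch_on == "Y":
        # Gravity along the Y axis at switch-on: treat the input as gestures.
        for gesture in recognize_gestures(samples):
            send({"kind": "gesture", "value": gesture})
    else:
        # Otherwise (X axis): treat the whole on-to-off span as a trajectory.
        send({"kind": "trajectory", "value": make_trajectory(samples)})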
The subsequent processing from S311 to S319 is the same as S109 to S117 described with reference to FIG. 10, so a detailed description is omitted. In the case of FIG. 14, however, by taking the direction (value) of G into account, it is possible to determine along which axis a click operation was performed, such as a double click in the X-axis direction. Compared with FIG. 10, the events generated by operating the switch 14 can therefore be subdivided further, and more detailed command issuance and the like can be performed.
[Items to be stored]
 In the first embodiment, an example of storing the character recognition result was described, but the invention is not limited to this; what is stored can also be changed dynamically according to the correction result. FIG. 15 is a diagram for explaining score-based selection of what to store. As shown in FIG. 15, the character recognition unit 67 recognizes the character "寸" according to the trajectory information received from the wearable device 10 (S1), and calculates a score of 250 points by comparing the recognition result with the dictionary.
Meanwhile, the character recognition unit 67 also performs character recognition on the trajectory information received from the wearable device 10 after removing the hook stroke toward the upper left (S2), and calculates a score of 200 points by comparing the corrected recognition result with the dictionary.
The character recognition unit 67 then compares the two scores (S3). If the score of the corrected result is higher, the character recognition unit 67 stores the corrected recognition result in the memo information 54. If the score before correction is higher, the character recognition unit 67 stores both recognition results in the memo information 54 in association with each other. In this way, the correction method and the score calculation method can be learned, and recognition accuracy can be improved.
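The comparison at S3 maps onto a small decision function such as the one below; the function name, arguments, and returned record layout are assumptions rather than the disclosed implementation.

def choose_memo_entry(raw_result, raw_score, corrected_result, corrected_score):
    """Decide what to store in the memo information based on the two scores."""
    if corrected_score > raw_score:
        # The correction improved the dictionary match: keep only the corrected result.
        return {"text": corrected_result}
    # Otherwise keep both results, associated with each other.
    return {"text": raw_result, "corrected_candidate": corrected_result}

In the FIG. 15 example, the uncorrected score (250) exceeds the corrected score (200), so both results would be stored in association.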
[Gesture output]
 In the first embodiment, an example was described in which the wearable device 10 outputs gesture information including the recognized gesture to the input device 50, but the invention is not limited to this. For example, the wearable device 10 may output to the input device 50 a gesture ID that pairs a rotation direction with a count.
FIG. 16 is a diagram showing an example of issuing gesture IDs. As shown in FIG. 16, when the gesture recognition unit 24 detects one counterclockwise rotation after the switch 14 is turned on, it outputs the gesture ID (R, 1) to the input device 50. When it then detects a further counterclockwise rotation (the second rotation), it outputs the gesture ID (R, 2) to the input device 50, and when it detects yet another counterclockwise rotation (the third rotation), it outputs the gesture ID (R, 3) to the input device 50.
In this way, the gesture recognition unit 24 issues a gesture ID each time the finger passes through a predetermined determination zone, and repeats this until the switch 14 is turned off. The input device 50 can then identify the gesture from this gesture ID. Alternatively, the input device 50 may output the gesture ID to an application, and the application may identify the gesture.
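Incremental issuance per determination zone might be sketched as follows; the direction code "R" and the (direction, count) pairs follow FIG. 16, while the class shape, the zone-crossing callback, and the send interface are assumptions.

class RotationGestureIssuer:
    """Issue a (direction, count) gesture ID each time a determination zone is crossed."""
    def __init__(self, send):
        self.send = send
        self.count = 0

    def on_zone_crossed(self, direction):
        # direction: e.g. "R" for the counterclockwise rotations in FIG. 16.
        self.count += 1
        self.send({"gesture_id": (direction, self.count)})   # (R, 1), (R, 2), ...

    def on_switch_off(self):
        # Issuance repeats until the switch is turned off, then the count resets.
        self.count = 0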
Although embodiments of the present invention have been described above, the present invention may be implemented in various other forms besides the embodiments described above.
[Wearable device]
 In the above embodiments, an example was described in which the wearable device 10 and the input device 50 are implemented as separate devices, but the invention is not limited to this; the wearable device 10 alone may be implemented with the same processing units as the input device 50. The processing units distributed between the wearable device 10 and the input device 50 can also be changed arbitrarily according to the processing capability of the wearable device 10.
[System]
 The components of the units illustrated in the drawings do not necessarily have to be physically configured as illustrated. That is, the specific form of distribution and integration of the units is not limited to what is illustrated, and all or part of them can be functionally or physically distributed or integrated in arbitrary units according to various loads, usage conditions, and the like. Furthermore, all or any part of the processing functions performed by each device may be executed on a CPU (or a microcomputer such as an MPU or MCU (Micro Controller Unit)). Needless to say, all or any part of the processing functions may also be realized as a program analyzed and executed by a CPU (or a microcomputer such as an MPU or MCU), or as hardware using wired logic.
[Hardware]
 The various kinds of processing described in the above embodiments can be realized by executing a prepared program on a computer. An example of a computer that executes a program having the same functions as the above embodiments is therefore described below. FIG. 17 is an explanatory diagram illustrating an example of a computer that executes an input support program.
As shown in FIG. 17, the computer 300 includes a CPU 310, an HDD (Hard Disk Drive) 320, and a RAM (Random Access Memory) 340. These units 310 to 340 are connected via a bus 400.
The HDD 320 stores in advance an input support program 320a that provides the same functions as the detection unit 21, the extraction unit 22, the trajectory generation unit 23, the gesture recognition unit 24, and the output unit 25 of the wearable device 10, or the same functions as the input detection unit 61, the display control unit 62, the calibration unit 63, the axis detection unit 64, the gesture acquisition unit 65, the event specifying unit 66, the character recognition unit 67, the recognition result display unit 68, and the operation command execution unit 69 of the input device 50. The input support program 320a may be divided as appropriate. The HDD 320 also stores various kinds of information.
The CPU 310 reads the input support program 320a from the HDD 320 and executes it, thereby performing the same operations as the processing units of the embodiments. That is, the input support program 320a performs the same operations as the detection unit 21, the extraction unit 22, the trajectory generation unit 23, the gesture recognition unit 24, and the output unit 25, or as the input detection unit 61, the display control unit 62, the calibration unit 63, the axis detection unit 64, the gesture acquisition unit 65, the event specifying unit 66, the character recognition unit 67, the recognition result display unit 68, and the operation command execution unit 69.
The input support program 320a does not necessarily have to be stored in the HDD 320 from the beginning. For example, the program may be stored on a "portable physical medium" such as a flexible disk (FD), CD-ROM, DVD, magneto-optical disk, or IC card inserted into the computer 300, and the computer 300 may read the program from the medium and execute it. Alternatively, the program may be stored in "another computer (or server)" connected to the computer 300 via a public line, the Internet, a LAN, a WAN, or the like, and the computer 300 may read the program from there and execute it.
Description of symbols
 10 wearable device
 11 wireless unit
 12 storage unit
 13 posture sensor
 14 switch
 20 control unit
 21 detection unit
 22 extraction unit
 23 trajectory generation unit
 24 gesture recognition unit
 25 output unit
 30 head mounted display
 31 wireless unit
 32 camera
 33 display unit
 34 control unit
 50 input device
 51 wireless unit
 52 storage unit
 53 recognition dictionary data
 54 memo information
 55 image information
 60 control unit
 61 input detection unit
 62 display control unit
 63 calibration unit
 64 axis detection unit
 65 gesture acquisition unit
 66 event specifying unit
 67 character recognition unit
 68 recognition result display unit
 69 operation command execution unit

Claims (8)

  1.  An input device that includes a switch for receiving input and is worn on an indicator, the input device comprising:
     a motion sensor;
     a detection unit that detects motion information output from the motion sensor;
     an extraction unit that extracts information relating to a first mode from a first component of the motion information and extracts information relating to a second mode from a second component of the motion information; and
     an output unit that outputs either one or both of the information relating to the first mode and the information relating to the second mode, using an input pattern applied to the switch.
  2.  The input device according to claim 1, wherein
     the detection unit detects the motion information output from the motion sensor while the switch is on, and
     the extraction unit extracts, while the switch is on, gesture information on a gesture performed by the user as the information relating to the second mode, using a movement pattern of the indicator from the start to the end of motion in the motion information, and, when the switch is turned off, extracts trajectory information from the series of movement patterns from when the switch was turned on until it was turned off.
  3.  The input device according to claim 2, wherein the output unit outputs the gesture information extracted by the extraction unit when the gravity-axis direction at the time the switch is turned on is a predetermined direction of the switch, and outputs the trajectory information extracted by the extraction unit when the gravity-axis direction is a direction other than the predetermined direction of the switch.
  4.  The input device according to claim 2, wherein, when the switch is turned from on to off, the extraction unit extracts the movement pattern of the indicator from a result of filtering the motion information according to a cutoff frequency corresponding to a preset character type, and extracts trajectory information from the extracted movement pattern.
  5.  The input device according to claim 2, wherein the extraction unit extracts trajectory information from the movement pattern according to a trajectory gain that converts a posture change value of the indicator from a reference into the trajectory information, the reference being the posture at the time the switch was turned on.
  6.  The input device according to claim 1, wherein the output unit further outputs an operation event specified according to a time-series change between on and off of the switch.
  7.  An input support method in which an input device that includes a switch for receiving input and is worn on an indicator executes a process comprising:
     detecting motion information output from a motion sensor;
     extracting information relating to a first mode from a first component of the motion information and extracting information relating to a second mode from a second component of the motion information; and
     outputting either one or both of the information relating to the first mode and the information relating to the second mode, using an input pattern applied to the switch.
  8.  An input support program that causes an input device including a switch for receiving input and worn on an indicator to execute a process comprising:
     detecting motion information output from a motion sensor;
     extracting information relating to a first mode from a first component of the motion information and extracting information relating to a second mode from a second component of the motion information; and
     outputting either one or both of the information relating to the first mode and the information relating to the second mode, using an input pattern applied to the switch.
PCT/JP2015/071037 2015-07-23 2015-07-23 Input device, input support method and input support program WO2017013805A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/071037 WO2017013805A1 (en) 2015-07-23 2015-07-23 Input device, input support method and input support program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2015/071037 WO2017013805A1 (en) 2015-07-23 2015-07-23 Input device, input support method and input support program

Publications (1)

Publication Number Publication Date
WO2017013805A1 true WO2017013805A1 (en) 2017-01-26

Family

ID=57835292

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/071037 WO2017013805A1 (en) 2015-07-23 2015-07-23 Input device, input support method and input support program

Country Status (1)

Country Link
WO (1) WO2017013805A1 (en)

Citations (7)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001236174A (en) * 2000-02-25 2001-08-31 Fujitsu Ltd Device for inputting handwritten character and method for recognizing handwritten character
JP2004258837A * 2003-02-25 2004-09-16 Nippon Hoso Kyokai (NHK) Cursor operation device, method therefor and program therefor
JP2012069114A (en) * 2010-09-21 2012-04-05 Visteon Global Technologies Inc Finger-pointing, gesture based human-machine interface for vehicle
JP2012113525A (en) * 2010-11-25 2012-06-14 Univ Of Aizu Gesture recognition device and gesture recognition method
JP2014518596A * 2011-03-29 2014-07-31 Qualcomm, Incorporated Modular mobile connected pico projector for local multi-user collaboration
JP2014002719A (en) * 2012-06-20 2014-01-09 Samsung Electronics Co Ltd Remote control device, display device and method for controlling the same
JP2014078238A (en) * 2012-10-10 2014-05-01 Samsung Electronics Co Ltd Multi-display device and method for controlling the same

Similar Documents

Publication Publication Date Title
JP6524661B2 (en) INPUT SUPPORT METHOD, INPUT SUPPORT PROGRAM, AND INPUT SUPPORT DEVICE
US10585488B2 (en) System, method, and apparatus for man-machine interaction
EP3258423B1 (en) Handwriting recognition method and apparatus
US9983687B1 (en) Gesture-controlled augmented reality experience using a mobile communications device
EP4327185A1 (en) Hand gestures for animating and controlling virtual and graphical elements
JP5802667B2 (en) Gesture input device and gesture input method
KR101844390B1 (en) Systems and techniques for user interface control
JP6657593B2 (en) Biological imaging apparatus, biological imaging method, and biological imaging program
KR100630806B1 (en) Command input method using motion recognition device
US20150323998A1 (en) Enhanced user interface for a wearable electronic device
CN108027654B (en) Input device, input method, and program
WO2021035646A1 (en) Wearable device and control method therefor, gesture recognition method, and control system
WO2010016065A1 (en) Method and device of stroke based user input
JP2015142317A (en) Electronic apparatus
US11054917B2 (en) Wearable device and control method, and smart control system
CN108369451B (en) Information processing apparatus, information processing method, and computer-readable storage medium
KR20150106823A (en) Gesture recognition apparatus and control method of gesture recognition apparatus
JP6504058B2 (en) INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM
WO2017134732A1 (en) Input device, input assistance method, and input assistance program
US20200143774A1 (en) Information processing device, information processing method, and computer program
JP2017191426A (en) Input device, input control method, computer program, and storage medium
WO2021004413A1 (en) Handheld input device and blanking control method and apparatus for indication icon of handheld input device
JP2015146058A (en) Information processing apparatus, information processing method, and information processing program
WO2017013805A1 (en) Input device, input support method and input support program
JP6481360B2 (en) Input method, input program, and input device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15898963

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15898963

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP