WO2018061743A1 - Terminal portable - Google Patents

Terminal portable (Wearable terminal)

Info

Publication number
WO2018061743A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
unit
input
word
wearable terminal
Prior art date
2016-09-28
Application number
PCT/JP2017/032781
Other languages
English (en)
Japanese (ja)
Inventor
軌行 石井
実 矢口
Original Assignee
コニカミノルタ株式会社 (Konica Minolta, Inc.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
2017-09-12
Publication date
2018-04-05
Application filed by コニカミノルタ株式会社 (Konica Minolta, Inc.)
Publication of WO2018061743A1

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/02Viewing or reading apparatus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/28Constructional details of speech recognition systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/64Constructional details of receivers, e.g. cabinets or dust covers

Definitions

  • the present invention relates to a wearable terminal capable of voice recognition.
  • One known input method for such a terminal is gesture detection, in which a user's hand movement (gesture) is detected by a camera or the like and used as input.
  • However, an ambiguous gesture by the user may cause an erroneous input, which may lead to an operation unintended by the user.
  • Patent Document 1 discloses a head-mounted display (hereinafter, HMD) that can be worn on a user's head and that includes an instruction input means for detecting a specific action of the user and receiving an instruction from the user, and a control means for causing the HMD to perform a specific operation in accordance with the instruction input by the instruction input means; the instruction input means has a motion detection function for detecting the movement of the user's head. Patent Document 1 also contains a reference to speech recognition.
  • Patent Document 1 states that a specific operation of the HMD may be performed according to the order of user actions, that is, that two or more user actions performed at different timings or in parallel may cause the HMD to perform a specific operation, and that the HMD may be made to perform a specific operation when a head movement and a voice input are made simultaneously. In other words, Patent Document 1 mentions detecting a sensor input and a sound individually and performing different operations in accordance with each detection. However, if the voice recognition result is inappropriate, or if the sensor input is inappropriate, there still remains a possibility of malfunction. Patent Document 1 neither discloses nor suggests collating the speech recognition result with the sensor input to determine whether the input is appropriate and, only if it is, determining and executing the corresponding action.
  • The present invention has been made in view of the above circumstances, and an object thereof is to provide a wearable terminal capable of realizing an appropriate action while compensating, by detecting the user's motion, for the instability of input by voice recognition.
  • A wearable terminal reflecting one aspect of the present invention is a wearable terminal worn on a user's body, comprising: a voice input unit for inputting the user's voice; a voice decoding unit that converts the user's voice input by the voice input unit into a word; a motion detection unit for detecting the user's motion; and a control unit that, when the input of the user's voice by the voice input unit and the detection of the user's motion by the motion detection unit occur within a predetermined time interval, collates the word converted by the voice decoding unit with the user's motion detected by the motion detection unit, and determines an action corresponding to the word when it determines that a predetermined relationship is established between them.
  • According to the present invention, it is possible to provide a wearable terminal capable of realizing an appropriate action while compensating for the instability of voice recognition input by detecting the user's motion.
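For illustration only (this sketch is not part of the original disclosure, and all names in it are hypothetical), the decision described in this aspect can be modeled as follows: an action is determined only when the voice input and the motion detection fall within a predetermined time interval and the word/motion pair has the predetermined relationship.

```python
# Minimal sketch of the collation performed by the control unit (hypothetical
# names; "has_relationship" stands in for whatever predetermined relationship
# the terminal defines between a word and a motion).
from typing import Callable, Optional

PREDETERMINED_INTERVAL_S = 1.0  # the embodiment described below uses 1 second as an example


def decide_action(word: str, word_time: float,
                  motion: str, motion_time: float,
                  has_relationship: Callable[[str, str], Optional[str]]) -> Optional[str]:
    """Return the action corresponding to `word`, or None when the two inputs did
    not occur within the predetermined time interval or when no predetermined
    relationship is established between the motion and the word."""
    if abs(word_time - motion_time) > PREDETERMINED_INTERVAL_S:
        return None  # voice input and motion detection too far apart in time
    return has_relationship(motion, word)  # None when no relationship holds
```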
  • FIG. 10 is an example of the table TBL1 stored in the RAM 126.
  • FIGS. 11 and 12 are flowcharts illustrating interrupt routines that are repeatedly executed in the control processing unit 121.
  • FIG. 13 is a diagram showing an example of a display requesting confirmation.
  • FIG. 1 is a perspective view of a head mounted display (hereinafter, HMD) 100 which is a wearable terminal according to the present embodiment.
  • FIG. 2 is a front view of the HMD 100 according to the present embodiment.
  • FIG. 3 is a view of the HMD 100 according to the present embodiment as viewed from above.
  • the right side and the left side of the HMD 100 refer to the right side and the left side for the user wearing the HMD 100.
  • the HMD 100 of this embodiment has a frame 101 as a support member.
  • a frame 101 that is U-shaped when viewed from above has a front part 101a to which two spectacle lenses 102 are attached, and side parts 101b and 101c extending rearward from both ends of the front part 101a.
  • the two spectacle lenses 102 attached to the frame 101 may or may not have refractive power.
  • a cylindrical main body 103 as a support member is fixed to the front portion 101a of the frame 101 on the upper side of the spectacle lens 102 on the right side (which may be on the left side depending on the user's dominant eye).
  • the main body 103 is provided with a display unit 104.
  • a display control unit 104DR (see FIG. 4 described later) that controls display of the display unit 104 based on an instruction from a control processing unit (control unit) 121 described later is disposed in the main body 103. If necessary, a display unit may be arranged in front of both eyes.
  • the display unit 104 includes an image forming unit (not shown) housed in the main body unit 103 and an image display unit 104B.
  • The image display unit 104B, which is a so-called see-through display member, has a generally plate-like shape and is disposed so as to extend downward from the main body 103 and to lie parallel to one of the spectacle lenses 102 (see FIG. 1).
  • Based on the image data input from the display control unit 104DR, the image forming unit emits image light modulated for each pixel, and the image display unit 104B uses this image light to display a color image.
  • Since the image display unit 104B transmits almost all external light, the user can observe an external image (real image) through it.
  • the virtual image of the image displayed on the image display unit 104B is observed while overlapping a part of the external image.
  • the user of the HMD 100 can simultaneously observe the image provided via the image display unit 104B and the external image. Note that when the display unit 104 is in the non-display state, the image display unit 104B is transparent, and only the external image can be observed.
  • A proximity sensor 105 is disposed near the center of the front part 101a of the frame 101, the lens 106a of a camera 106 is disposed near the side of the frame 101, and an illuminance sensor 112 is disposed between the proximity sensor 105 and the lens 106a; all of them face forward.
  • The proximity sensor 105 has a function of detecting whether an object, for example a part of the human body such as a hand or a finger, is present in a detection region within a proximity range in front of its detection surface, and of outputting a signal accordingly, in order to detect that such an object has come close to the user's eyes.
  • The proximity range may be set as appropriate according to the operator's characteristics and preferences; for example, it may be within 200 mm of the detection surface of the proximity sensor. If the distance from the proximity sensor is 200 mm or less, the user can easily move the palm and fingers into and out of the field of view with the arm bent, so that gesture operation using the hands and fingers is easy; this range is also preferable because it reduces the risk of erroneously detecting a human body or furniture other than the user.
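As a trivial illustration (using the 200 mm example value given above; the constant and function name are hypothetical), the proximity check can be expressed as:

```python
# Sketch of the proximity-range check for the proximity sensor 105.
PROXIMITY_RANGE_MM = 200.0  # example value from the text; adjustable per user


def object_in_proximity(distance_mm: float) -> bool:
    """True when an object is within the proximity range in front of the
    detection surface of the proximity sensor."""
    return 0.0 <= distance_mm <= PROXIMITY_RANGE_MM
```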
  • the right sub-body portion 107 is attached to the right side portion 101b of the frame 101
  • the left sub-body portion 108 is attached to the left side portion 101c of the frame 101.
  • the right sub-main body portion 107 and the left sub-main body portion 108 have an elongated plate shape, and have elongated protrusions 107a and 108a on the inner side, respectively.
  • The right sub-main body portion 107 is attached in a positioned state to the side portion 101b of the frame 101 via its elongated protrusion 107a, and the left sub-main body portion 108 is attached in a positioned state to the side portion 101c of the frame 101 via its elongated protrusion 108a.
  • Inside the right sub-main body portion 107 are a geomagnetic sensor 109 (see FIG. 4 described later) for detecting geomagnetism, and an angular velocity sensor 110B and an acceleration sensor 110A (see FIG. 4 described later) that generate outputs corresponding to the posture.
  • the left sub-main unit 108 is provided with a speaker / earphone 111C and a microphone 111B (see FIG. 4 described later).
  • The main body 103 and the right sub-main body portion 107 are connected so as to be able to transmit signals through a wiring HS, and the main body 103 and the left sub-main body portion 108 are connected so as to be able to transmit signals through a wiring (not shown).
  • As schematically illustrated in FIG. 4, the right sub-main body portion 107 is connected to the control unit CTU via a cord CD extending from its rear end.
  • a 6-axis sensor in which an angular velocity sensor and an acceleration sensor are integrated may be used.
  • the HMD can be operated by sound based on an output signal generated from the microphone 111B according to the input sound.
  • the main main body 103 and the left sub main body 108 may be configured to be wirelessly connected.
  • the provision of the color temperature sensor 113 and the temperature sensor 114 is optional.
  • the position where the microphone 111B is provided is arbitrary, but is preferably a position suitable for recording the voice spoken by the user US.
  • FIG. 4 is a block diagram of main circuits of the HMD 100.
  • The control unit CTU includes a control processing unit 121 that generates control signals for the display unit 104 and the other functional devices, an operation unit 122, a GPS receiving unit 123 that receives radio waves from GPS satellites, a communication unit 124 that exchanges data and programs with the outside, a ROM 125 that stores programs and the like, a RAM 126 that stores image data and the like, a power supply circuit 130 that converts the voltage supplied from a battery 127 into appropriate voltages for each unit, a storage device 129 such as an SSD or a flash memory, and a voice recognition unit 111E.
  • An application processor of the kind used in smartphones and the like can be used as the control processing unit 121; its type is not limited. For example, an application processor that includes the hardware necessary for image processing, such as a GPU or codec, as standard can be said to be suitable for a small HMD.
  • the control processing unit 121 controls image display on the display unit 104 via the display control unit 104DR.
  • The control processing unit 121 receives power from the power supply circuit 130, operates according to programs stored in at least one of the ROM 125 and the storage device 129, can input image data from the camera 106 in response to an operation input such as power-on from the operation unit 122 and store it in the RAM 126, and can communicate with the outside via the communication unit 124 as necessary.
  • the microphone 111B collects the voice spoken by the user US, converts it into a signal, and inputs the signal to the voice processing unit 111D.
  • The voice processing unit 111D processes the signal output from the microphone 111B and outputs it as a voice signal to the voice recognition unit 111E of the control unit CTU. The voice recognition unit 111E analyzes the voice signal output from the voice processing unit 111D, converts it into a word, and inputs that information to the control processing unit 121.
  • Here, the microphone 111B and the voice processing unit 111D constitute a voice input unit, and the voice recognition unit 111E constitutes a voice decoding unit. However, when the microphone 111B is externally attached and its signal is received via a pin jack or the like, the voice processing unit 111D alone may constitute the voice input unit.
  • FIG. 5 is a front view when the user US wears the HMD 100 of the present embodiment.
  • FIG. 6 is a diagram illustrating a state in which the user US is facing left while wearing the HMD 100
  • FIG. 7 is a diagram illustrating a state in which the user US is facing right while wearing the HMD 100.
  • FIG. 8 is a diagram showing a state in which the user US viewed from the side is facing upward while wearing the HMD 100
  • FIG. 9 is a diagram showing a state in which the user US, viewed from the side, is facing downward while wearing the HMD 100.
  • FIG. 10 is an example of the table TBL1 stored in the RAM 126, for example.
  • When the control processing unit 121 receives the signal output from the acceleration sensor 110A, it sets a flag based on that signal: when it determines that the head of the user US has turned upward as shown in FIG. 8 and has not subsequently turned downward, the flag is set to "up"; when it determines that the head has turned downward as shown in FIG. 9 and has not subsequently turned upward, the flag is set to "down"; when it determines that the head has turned to the right as shown in FIG. 7 and has not subsequently turned to the left, the flag is set to "right"; when it determines that the head has turned to the left as shown in FIG. 6 and has not subsequently turned to the right, the flag is set to "left"; when it determines that the head is moving up and down between the states of FIGS. 8 and 9, the flag is set to "up and down"; and when it determines that the head is moving left and right between the states of FIGS. 6 and 7, the flag is set to "left and right". In these cases, the control processing unit 121 regards the prescribed sensor input as having been made and executes the control described later (see FIG. 11).
  • the types of flags are not limited to the above.
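The flag determination from the head movements just described can be sketched roughly as follows; representing the detected turns as an ordered list of direction strings is a hypothetical simplification of the acceleration-sensor processing.

```python
# Sketch of deriving the flag from head turns detected via the acceleration
# sensor 110A (hypothetical input representation).
from typing import Optional, Sequence


def head_flag(turns: Sequence[str]) -> Optional[str]:
    """`turns` lists the head turns ("up", "down", "left", "right") detected
    within the observation window, in order. Returns the flag or None."""
    if not turns:
        return None
    kinds = set(turns)
    if kinds == {"up", "down"}:
        return "up and down"      # head moving up and down (FIGS. 8 and 9)
    if kinds == {"left", "right"}:
        return "left and right"   # head moving left and right (FIGS. 6 and 7)
    if len(kinds) == 1:
        return turns[0]           # a single direction that was not reversed
    return None                   # ambiguous movement: no prescribed input
```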
  • Further, when the control processing unit 121 determines that one of eight prescribed words has been input, namely "up", "down", "right", "left", "yes", "no", and two page-turning words, it regards the prescribed voice recognition as having been performed and executes the control described later (see FIG. 12).
  • The words are not limited to the above. It is preferable, however, that each word has a meaning related to a motion of the user US, because the combination then becomes a natural operation; for example, the user swings the head vertically as the motion corresponding to "yes", and swings the head horizontally as the motion corresponding to "no".
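The role of the table TBL1 can be illustrated with a small mapping. Only the ("up and down", "yes") entry is taken from the examples discussed later in this text; the remaining entries are hypothetical placeholders, not the actual contents of FIG. 10.

```python
# Sketch of TBL1 as a mapping from (flag, word) to the next action; cells
# marked "x" in FIG. 10 are simply absent (no predetermined relationship).
from typing import Optional

TBL1 = {
    ("up and down", "yes"): "determine",     # example discussed later in this text
    ("left and right", "no"): "cancel",      # hypothetical entry
    ("right", "next page"): "page forward",  # hypothetical entry
    ("left", "previous page"): "page back",  # hypothetical entry
}


def lookup_action(flag: str, word: str) -> Optional[str]:
    """Return the next action, or None when the predetermined relationship
    between the flag and the word is not established (an "x" cell)."""
    return TBL1.get((flag, word))
```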
  • FIGS. 11 and 12 are flowcharts showing interrupt routines that are repeatedly executed in the control processing unit 121.
  • When the prescribed sensor input occurs before the prescribed voice recognition, the control of the flowchart of FIG. 11 is executed; when the prescribed voice recognition occurs before the prescribed sensor input, the control of the flowchart of FIG. 12 is executed.
  • When the control processing unit 121 determines in step S101 that the prescribed sensor input has not been performed (determination NO), the process immediately exits the interrupt routine. When it determines that the prescribed sensor input has been performed (determination YES), the control processing unit 121 resets and starts its built-in timer in the subsequent step S102.
  • In step S103, the control processing unit 121 determines whether the user's utterance has been input and the prescribed voice recognition has been performed.
  • If not, in step S104 the control processing unit 121 determines whether the time counted by the built-in timer exceeds 1 second; if it does not, the flow returns to step S103 to wait for the user's speech input and the prescribed voice recognition, whereas if the built-in timer exceeds 1 second before the prescribed voice recognition is performed, the interrupt routine is immediately exited.
  • That is, only when the prescribed voice recognition occurs before the timer expires is it determined that both inputs have occurred within the predetermined time interval; in other cases, it is determined that they have not.
  • The predetermined time is not limited to 1 second; it may be fixed or variable, and it is desirable that it can be adjusted according to the characteristics of the device.
  • When the prescribed voice recognition has been performed in time, the control processing unit 121 refers in step S105 to the table TBL1 stored in the RAM 126, compares the flag based on the sensor input with the word of the voice recognition result, and determines whether the predetermined relationship is established. If the predetermined relationship is not established, the flow immediately exits the interrupt routine. If, on the other hand, the predetermined relationship is established between the flag and the word, the control processing unit 121 determines and executes, in step S106, the next action defined in the corresponding cell of the table TBL1, after which the flow exits the interrupt routine.
  • Similarly, when the control processing unit 121 determines in step S201 that the user's utterance has not been input, or that it has been input but the prescribed voice recognition has not been performed (determination NO), the process immediately exits the interrupt routine; when it determines that the utterance of the user US has been input and the prescribed voice recognition has been performed (determination YES), the control processing unit 121 resets and starts its built-in timer in the subsequent step S202.
  • In step S203, the control processing unit 121 determines whether the prescribed sensor input has been performed. If it determines that it has not, the control processing unit 121 further determines in step S204 whether the built-in timer has exceeded 1 second; if it has not, the flow returns to step S203 to wait for the prescribed sensor input, whereas if the built-in timer exceeds 1 second before the prescribed sensor input is performed, the interrupt routine is immediately exited.
  • That is, only when the prescribed sensor input occurs before the timer expires is it determined that both inputs have occurred within the predetermined time interval; in other cases, it is determined that they have not.
  • the predetermined time is not limited to 1 second.
  • When the prescribed sensor input has been performed in time, the control processing unit 121 refers in step S205 to the table TBL1 stored in the RAM 126, compares the flag based on the sensor input with the word of the voice recognition result, and determines whether the predetermined relationship is established. If the predetermined relationship is not established, the flow immediately exits the interrupt routine; if, on the other hand, the predetermined relationship is established between the flag and the word, the control processing unit 121 determines and executes, in step S206, the next action defined in the corresponding cell of the table TBL1, after which the flow exits the interrupt routine.
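The two symmetric routines of FIGS. 11 and 12 can be sketched as follows; `poll_word`, `poll_flag`, `execute`, and the TBL1 mapping are hypothetical stand-ins for the voice recognition result, the sensor-derived flag, the action execution of steps S106/S206, and the table of FIG. 10.

```python
# Sketch of the interrupt routines of FIG. 11 (sensor input first) and
# FIG. 12 (voice recognition first), with hypothetical helper callables.
import time
from typing import Callable, Dict, Optional, Tuple

TIME_LIMIT_S = 1.0  # example value used in the embodiment


def wait_within(limit_s: float, poll: Callable[[], Optional[str]]) -> Optional[str]:
    """Poll until a value arrives or the time limit expires
    (steps S103/S104 and S203/S204)."""
    start = time.monotonic()
    while time.monotonic() - start <= limit_s:
        value = poll()
        if value is not None:
            return value
        time.sleep(0.01)
    return None


def routine_sensor_first(flag: str,
                         poll_word: Callable[[], Optional[str]],
                         tbl1: Dict[Tuple[str, str], str],
                         execute: Callable[[str], None]) -> None:
    """FIG. 11: the prescribed sensor input (flag) has occurred; wait up to the
    time limit for the prescribed voice recognition, then collate TBL1."""
    word = wait_within(TIME_LIMIT_S, poll_word)   # steps S102 to S104
    if word is None:
        return                                    # timer expired: exit routine
    action = tbl1.get((flag, word))               # step S105
    if action is not None:
        execute(action)                           # step S106


def routine_voice_first(word: str,
                        poll_flag: Callable[[], Optional[str]],
                        tbl1: Dict[Tuple[str, str], str],
                        execute: Callable[[str], None]) -> None:
    """FIG. 12: the prescribed voice recognition has occurred; wait up to the
    time limit for the prescribed sensor input, then collate TBL1."""
    flag = wait_within(TIME_LIMIT_S, poll_flag)   # steps S202 to S204
    if flag is None:
        return
    action = tbl1.get((flag, word))               # step S205
    if action is not None:
        execute(action)                           # step S206
```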
  • As a specific example, suppose that a display requesting confirmation, as shown in FIG. 13, is made on the image display unit 104B viewed by the user US, and that a "Yes" button B1 and a "No" button B2 are displayed at the same time.
  • When the "up and down" flag has been set by the sensor input in step S101 (or S203) and the word of the speech recognition result in step S103 (or S201) is "yes", collating the table TBL1 shows that the predetermined relationship is established, so the next action is "determine"; in response, the control processing unit 121 turns on (highlights) the button B1, and the confirmation is thereby affirmed (determined).
  • In contrast, when collating the table TBL1 shows that the predetermined relationship between the sensor input result and the voice recognition result is not established (the corresponding cell contains an x), the interrupt routine is exited without deciding the next action, so neither button B1 nor B2 is turned on. In this way, the voice recognition result of the utterance of the user US is backed up by the head motion of the user US, the appropriate action desired by the user US can be determined, and malfunction can be effectively prevented.
  • Depending on the established combination, the control processing unit 121 may also, as the action, advance the page to four pages ahead, or return the page to four pages back.
  • Here too, when checking the table TBL1 shows that the predetermined relationship between the sensor input result and the voice recognition result is not established (the corresponding cell contains an x), the next action is not decided and the interrupt routine is exited, so that no action unwanted by the user US is performed.
  • The control processing unit 121 may also, between step S205 and step S206, make a display on the image display unit 104B requesting confirmation of whether the page may be turned (see FIG. 13).
  • In that case, the page turning as the next action may be performed in response to the voice recognition result "yes" from the utterance of the user US together with the input of the corresponding movement of the user US.
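The confirmation behavior just described (affirm with a nod and "yes"; do nothing when the combination has no predetermined relationship) can be exercised with a self-contained snippet. The table entries and function below are hypothetical illustrations, not the actual TBL1 of FIG. 10.

```python
# Self-contained illustration of the confirmation display behavior (FIG. 13).
from typing import Optional

TBL1_EXAMPLE = {
    ("up and down", "yes"): "determine",    # nod + "yes": affirm (button B1)
    ("left and right", "no"): "cancel",     # shake + "no": deny (button B2)
}


def on_confirmation(flag: str, word: str) -> str:
    action: Optional[str] = TBL1_EXAMPLE.get((flag, word))
    if action == "determine":
        return "highlight button B1 (Yes)"
    if action == "cancel":
        return "highlight button B2 (No)"
    return "no action (predetermined relationship not established)"


print(on_confirmation("up and down", "yes"))     # -> highlight button B1 (Yes)
print(on_confirmation("left and right", "yes"))  # -> no action (...)
```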
  • FIG. 15 is a diagram showing a state in which the user US puts the hand HD on the right in front of the face while wearing the HMD 100.
  • FIG. 16 is a diagram showing a state in which the user US, wearing the HMD 100, puts the hand HD on the left in front of the face.
  • FIG. 17 is a diagram showing a state in which the user US, wearing the HMD 100, puts the hand HD up in front of the face.
  • FIG. 18 is a diagram showing a state in which the user US, wearing the HMD 100, puts the hand HD down in front of the face.
  • The proximity sensor 105 has, for example, a detection region divided into four parts, and can output a gesture signal that distinguishes in which of the positions shown in FIGS. 15 to 18 the hand HD is located.
  • FIG. 19 shows an example of the table TBL2 stored in the RAM 126, for example.
  • When the control processing unit 121 receives the gesture signal output from the proximity sensor 105, it sets a flag based on that signal: when it determines that the hand HD of the user US has moved above the face as shown in FIG. 17 and has not subsequently moved downward, the flag is set to "up"; when it determines that the hand HD has moved below the face as shown in FIG. 18 and has not subsequently moved upward, the flag is set to "down"; when it determines that the hand HD has moved to the right of the face as shown in FIG. 15 and has not subsequently moved to the left, the flag is set to "right"; when it determines that the hand HD has moved to the left of the face as shown in FIG. 16 and has not subsequently moved to the right, the flag is set to "left"; when it determines that the hand HD is moving up and down between the states of FIGS. 17 and 18, the flag is set to "up and down"; and when it determines that the hand HD is moving left and right between the states of FIGS. 15 and 16, the flag is set to "left and right". In these cases, the control processing unit 121 regards the prescribed sensor input as having been made and executes the control described above (see FIG. 11).
  • the types of flags are not limited to the above.
  • Further, when the control processing unit 121 determines that one of six prescribed words, namely "up", "down", "right", "left", "yes", or "no", has been input, it regards the prescribed voice recognition as having been performed and executes the control described above (see FIG. 12).
  • the word is not limited to the above.
  • In the table TBL2, the combinations for which an action is described in the cell are regarded as having the predetermined relationship, while the combinations whose cell is marked with an x are regarded as not having the predetermined relationship. In this example as well, the control processing unit 121 can execute control according to the flowcharts shown in FIGS. 11 and 12.
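The derivation of the flag from the four-part detection region of the proximity sensor 105 can be sketched in the same spirit; the region labels and the input representation are hypothetical, since the actual format of the gesture signal is not specified in this text.

```python
# Sketch of deriving the flag from the gesture signal of the proximity sensor
# 105 (which of the four detection sub-regions the hand HD visited over time).
from typing import Optional, Sequence

REGIONS = ("up", "down", "right", "left")  # hypothetical labels for the sub-regions


def gesture_flag(visited: Sequence[str]) -> Optional[str]:
    """`visited` lists the sub-regions the hand HD occupied within the
    observation window (FIGS. 15 to 18). Returns the flag or None."""
    positions = [p for p in visited if p in REGIONS]
    if not positions:
        return None
    kinds = set(positions)
    if kinds == {"up", "down"}:
        return "up and down"      # hand moving up and down (FIGS. 17 and 18)
    if kinds == {"left", "right"}:
        return "left and right"   # hand moving left and right (FIGS. 15 and 16)
    if len(kinds) == 1:
        return positions[0]       # hand stayed on one side of the face
    return None                   # ambiguous movement: no prescribed input
```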
  • For example, when the "up and down" flag is set by the input of the proximity sensor 105 in step S101 (or S203) and the word of the speech recognition result in step S103 (or S201) is "yes", collating the table TBL2 shows that the predetermined relationship is established, and the next action is "determine"; in response, the control processing unit 121 turns on the button B1, and the confirmation is thereby affirmed (determined). On the other hand, when the "left and right" flag is set by the input of the proximity sensor 105 in step S101 (or S203) and the word of the speech recognition result in step S103 (or S201) is "no", collating the table TBL2 shows that the predetermined relationship is established, and the next action is "cancel"; in response, the control processing unit 121 turns on the button B2, and the confirmation is thereby denied (cancelled).
  • In contrast, when collating the table TBL2 shows that the predetermined relationship between the sensor input result and the voice recognition result is not established (the corresponding cell contains an x), the interrupt routine is exited without determining the next action, so neither button B1 nor B2 is operated. The rest is the same as in the embodiment described above.
  • the present invention has been described above by taking the HMD as an example, but the present invention is not limited to the HMD and can be applied to all wearable terminals.
  • the motion detection unit for detecting the user's motion is not limited to the above example.
  • For example, the motion detection unit may detect the line of sight from the motion of the user's eyeball, or may detect the motion of the lips accompanying the user's utterance.
  • 100 HMD, 101 frame, 101a front part, 101b side part, 101c side part, 101d long hole, 101e long hole, 102 spectacle lens, 103 main body, 104 display unit, 104B image display unit, 104DR display control unit, 105 proximity sensor, 106 camera, 106a lens, 107 right sub-main body portion, 107a protrusion, 108 left sub-main body portion, 108a protrusion, 109 geomagnetic sensor, 110A acceleration sensor, 110B angular velocity sensor, 111B microphone, 111C speaker/earphone, 111D voice processing unit, 111E voice recognition unit, 112 illuminance sensor, 113 color temperature sensor, 114 temperature sensor, 121 control processing unit, 122 operation unit, 123 GPS receiving unit, 124 communication unit, 127 battery, 129 storage device, 130 power supply circuit, B1/B2 buttons, CD cord, CTU control unit, HD hand, HS wiring, TBL1/TBL2 tables, US user

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Optics & Photonics (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to a wearable terminal capable of realizing an appropriate action while compensating for the instability of voice recognition input by detecting a user's operation. When the voice input of a user to a voice input unit and the detection of the user's operation by an operation detection unit occur within a predetermined time interval, a control unit of the wearable terminal collates the word produced by a voice decoding unit with the user's operation detected by the operation detection unit, and determines an action corresponding to the word if it determines that the word and the user's operation have a predetermined relationship.
PCT/JP2017/032781 2016-09-28 2017-09-12 Terminal portable WO2018061743A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016189413 2016-09-28
JP2016-189413 2016-09-28

Publications (1)

Publication Number Publication Date
WO2018061743A1 true WO2018061743A1 (fr) 2018-04-05

Family

ID=61763498

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/032781 WO2018061743A1 (fr) 2016-09-28 2017-09-12 Terminal portable

Country Status (1)

Country Link
WO (1) WO2018061743A1 (fr)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08234789A (ja) * 1995-02-27 1996-09-13 Sharp Corp Integrated recognition dialogue device
JPH1173297A (ja) * 1997-08-29 1999-03-16 Hitachi Ltd Recognition method using temporal relationship of multimodal expressions by speech and gesture
JP2004233909A (ja) * 2003-01-31 2004-08-19 Nikon Corp Head-mounted display
JP2010511958A (ja) * 2006-12-04 2010-04-15 Electronics and Telecommunications Research Institute Gesture/speech integrated recognition system and method
US20110313768A1 (en) * 2010-06-18 2011-12-22 Christian Klein Compound gesture-speech commands
JP2015526753A (ja) * 2012-06-15 2015-09-10 Honda Motor Co., Ltd. Depth-based scene recognition

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAKAHASHI, FUMITADA ET AL.: "How to discern initiation of recognition and discard unintended action and speech", NIKKEI ELECTRONICS, 30 April 2012 (2012-04-30), pages 48-49 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022535250A (ja) * 2019-06-10 2022-08-05 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, wearable device, and storage medium
JP7413411B2 (ja) 2019-06-10 2024-01-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, wearable device, and storage medium

Similar Documents

Publication Publication Date Title
US10891953B2 (en) Multi-mode guard for voice commands
US11914835B2 (en) Method for displaying user interface and electronic device therefor
US10949057B2 (en) Position-dependent modification of descriptive content in a virtual reality environment
US9261700B2 (en) Systems and methods for performing multi-touch operations on a head-mountable device
US20150109191A1 (en) Speech Recognition
US20170115736A1 (en) Photo-Based Unlock Patterns
US11947728B2 (en) Electronic device for executing function based on hand gesture and method for operating thereof
KR20220002605A Control method, wearable device and storage medium
JP2018206080A Head-mounted display device, program, and control method of head-mounted display device
WO2018061743A1 (fr) Wearable terminal
CN117063142A Systems and methods for adaptive input thresholding
KR20220081136A Method for controlling electronic device using a plurality of sensors, and electronic device therefor
JP6790769B2 Head-mounted display device, program, and control method of head-mounted display device
US20220326950A1 (en) Head-mounted device, control method and control program for head-mounted device
US20240046578A1 (en) Wearable electronic device displaying virtual object and method for controlling the same
US20230196765A1 (en) Software-based user interface element analogues for physical device elements
US11966510B2 (en) Object engagement based on finger manipulation data and untethered inputs
EP4369155A1 Wearable electronic device and method for identifying a controller using the wearable electronic device
US20230065008A1 (en) Electronic device for performing plurality of functions using stylus pen and method for operating same
JP2017157120A Display device and method of controlling display device
KR20220149191A Electronic device for executing function based on hand gesture and operating method therefor
KR20230063829A Wearable electronic device displaying virtual object and method for controlling the same
JP2016212769A Display device, method of controlling display device, and program
WO2022103741A1 Method and device for processing user input for multiple devices
KR20240041772A Wearable electronic device and method for identifying a controller using the wearable electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17855700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17855700

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP