WO2015066949A1 - Human-computer interaction system, method, and device thereof - Google Patents

Human-computer interaction system, method, and device thereof

Info

Publication number
WO2015066949A1
WO2015066949A1 · PCT/CN2013/088813 · CN2013088813W
Authority
WO
WIPO (PCT)
Prior art keywords
user
wearable device
mobile terminal
voice
human
Prior art date
Application number
PCT/CN2013/088813
Other languages
English (en)
French (fr)
Inventor
吴查理斯
Original Assignee
百度在线网络技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 百度在线网络技术(北京)有限公司
Publication of WO2015066949A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/1613 Constructional details or arrangements for portable computers
    • G06F 1/163 Wearable computers, e.g. on a belt

Definitions

  • The present invention relates to the field of electronic device technologies and, in particular, to a human-computer interaction system and method and devices thereof.

Background

  • Human-computer interaction is becoming increasingly diversified and intelligent.
  • Human-computer interaction can be realized by keyboard input, mouse control, graphic recognition technology, speech recognition technology, and the like.
  • The application of speech recognition technology is becoming more and more extensive, reaching fields such as industry, home appliances, communications, automotive electronics, medical care, and consumer electronics.
  • With continuous improvement, speech recognition accuracy has exceeded 90%, and speech recognition has therefore gradually become one of the most important modes of human-computer interaction. For example, in voice dialing, the user can speak a contact's name or a telephone number, and the mobile terminal automatically dials out through voice recognition technology.
  • Speech recognition technology can also be used for voice document retrieval and simple dictation data entry.
  • The embodiments of the present invention aim to solve the above technical problems at least to some extent.
  • A first object of the embodiments of the present invention is to provide a human-computer interaction system that can realize intelligent human-computer interaction without any operation on the mobile terminal, thereby facilitating use and improving the user experience.
  • A second object of the embodiments of the present invention is to provide a human-computer interaction method.
  • A third object of the embodiments of the present invention is to provide another human-computer interaction method.
  • A fourth object of the embodiments of the present invention is to provide a human-computer interaction apparatus.
  • A fifth object of the embodiments of the present invention is to provide a wearable device.
  • A sixth object of the embodiments of the present invention is to provide yet another human-computer interaction method.
  • A seventh object of the embodiments of the present invention is to provide another human-computer interaction apparatus.
  • An eighth object of the embodiments of the present invention is to provide a mobile terminal.
  • To achieve the above objects, an embodiment of the first aspect of the present invention provides a human-computer interaction system, including: a wearable device, configured to receive and record a user's voice information and, after establishing communication with a mobile terminal, transmit the voice information to the mobile terminal; and the mobile terminal, configured to perform voice recognition on the voice information to obtain the user's instruction.
  • In the human-computer interaction system of the embodiment of the present invention, the user's voice information is received and recorded by the wearable device and, after communication with the mobile terminal is established, sent by the wearable device to the mobile terminal. The voice information can thus be recognized without the user taking out the mobile terminal, realizing intelligent human-computer interaction, making interaction fast, simple, and convenient, and improving the user experience.
  • An embodiment of the second aspect of the present invention provides a human-computer interaction method, including: receiving and recording, by a wearable device, a user's voice information; establishing, by the wearable device, communication with a mobile terminal and transmitting the voice information to the mobile terminal; and performing, by the mobile terminal, voice recognition on the voice information to obtain the user's instruction.
  • In the human-computer interaction method of the embodiment of the present invention, the user's voice information is received and recorded by the wearable device and, after communication with the mobile terminal is established, sent by the wearable device to the mobile terminal. The mobile terminal performs voice recognition on the voice information to obtain the user's instruction, and the execution result is finally provided to the user through the wearable device. The voice information is thus recognized without the user taking out the mobile terminal, realizing intelligent human-computer interaction that is fast, simple, convenient, and better matched to user needs, and improving the user experience.
  • An embodiment of the third aspect of the present invention provides another human-computer interaction method, including: receiving a user's voice information; recording the voice information; and establishing communication with a mobile terminal and transmitting the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on the voice information to obtain the user's instruction.
  • In the human-computer interaction method of the embodiment of the present invention, the wearable device first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition to obtain the user's instruction; the execution result is finally provided to the user through the wearable device. The user can thus have the voice information recognized without taking out the mobile terminal, realizing intelligent, convenient human-computer interaction and improving the user experience.
  • An embodiment of the fourth aspect of the present invention provides a human-computer interaction apparatus, including: a microphone and a voice processor, configured to receive and record a user's voice information; a memory, configured to store the voice information; a communicator, configured to establish communication with a mobile terminal; and a controller, configured to send the voice information to the mobile terminal through the communicator, so that the mobile terminal performs voice recognition on the voice information to obtain the user's instruction.
  • The human-computer interaction apparatus of the embodiment of the present invention receives and records the user's voice information through the microphone and the voice processor and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction. The user can thus apply voice recognition technology without taking out and operating the mobile terminal, realizing intelligent, fast, simple, and convenient human-computer interaction and enhancing the user experience.
  • An embodiment of the fifth aspect of the present invention provides a wearable device comprising the human-computer interaction apparatus described above.
  • The wearable device of the embodiment of the present invention first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction; the wearable device finally provides the execution result to the user. The user can thus apply voice recognition technology without taking out and operating the mobile terminal, realizing intelligent human-computer interaction that is faster, simpler, more convenient, and better matched to user needs, and improving the user experience.
  • An embodiment of the sixth aspect of the present invention provides a human-computer interaction method, including: receiving an activation instruction from a wearable device; starting a voice recognition program according to the activation instruction; and receiving the user's voice information sent by the wearable device and recognizing the voice information through the voice recognition program to obtain the user's instruction.
  • In the human-computer interaction method of the embodiment of the present invention, an activation instruction from the wearable device is received, a voice recognition program is started according to the activation instruction, the user's voice information sent by the wearable device is received and recognized through the voice recognition program to obtain the user's instruction, and the execution result is fed back to the wearable device, which provides it to the user. Human-computer interaction thus becomes faster, simpler, and more convenient, better meeting user needs and improving the user experience.
  • An embodiment of the seventh aspect of the present invention provides another human-computer interaction apparatus, including: a first receiving module, configured to receive an activation instruction from a wearable device; a startup module, configured to start a voice recognition program according to the activation instruction; and a second receiving module, configured to receive the user's voice information sent by the wearable device and recognize the voice information through the voice recognition program to obtain the user's instruction.
  • The human-computer interaction apparatus of the embodiment of the present invention receives the user's voice information sent by the wearable device through the second receiving module and recognizes it through the voice recognition program to obtain the user's instruction.
  • The user can thus apply voice recognition technology without taking out the mobile terminal, realizing intelligent, fast, simple, and convenient human-computer interaction and improving the user experience.
  • An embodiment of the eighth aspect of the present invention provides a mobile terminal, including the human-computer interaction apparatus described above.
  • The mobile terminal of the embodiment of the present invention, upon receiving the activation instruction from the wearable device, starts the voice recognition program according to the activation instruction, receives the user's voice information sent by the wearable device, and recognizes the voice information through the voice recognition program to obtain the user's instruction.
  • The execution result is fed back to the wearable device, which provides it to the user, so that human-computer interaction becomes faster, simpler, and more convenient, better meeting user needs and improving the user experience.
  • FIG. 1 is a schematic diagram of a human-machine interaction system according to an embodiment of the present invention.
  • FIG. 2 is a flow chart of a human-computer interaction method according to an embodiment of the present invention.
  • FIG. 3 is a flow chart of a human-computer interaction method according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a human-computer interaction method according to another embodiment of the present invention.
  • FIG. 5 is a flowchart of a human-computer interaction method according to another embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an apparatus for human-computer interaction according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an apparatus for human-computer interaction according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a device for human-computer interaction according to an embodiment of the present invention.
  • FIG. 9 is a flowchart of a human-computer interaction method according to still another embodiment of the present invention.
  • FIG. 10 is a flowchart of a human-computer interaction method according to still another embodiment of the present invention.
  • FIG. 11 is a schematic structural diagram of an apparatus for human-computer interaction according to still another embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of an apparatus for human-computer interaction according to still another embodiment of the present invention.

Detailed description
  • Unless otherwise explicitly specified and defined, the terms "connected" and "connection" should be understood broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; and it may be internal communication between two components.
  • The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
  • To achieve the above objects, the present invention provides a human-computer interaction system, including: a wearable device, which receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal; and the mobile terminal, which performs voice recognition on the voice information to obtain the user's instruction.
  • FIG. 1 is a schematic diagram of a human-machine interaction system in accordance with one embodiment of the present invention.
  • As shown in FIG. 1, the human-computer interaction system includes a wearable device 100 and a mobile terminal 200.
  • The wearable device 100 receives and records the user's voice information and transmits it to the mobile terminal 200 after establishing communication with the mobile terminal 200.
  • Personal accessories that people wear on a daily basis can be given intelligent designs, yielding what is referred to as a wearable device 100, such as smart glasses, smart earphones, smart bracelets, smart wallets, or smart buttons.
  • The wearable device 100 represents a brand-new mode of human-computer interaction; because it is carried on the body, it can accurately capture the specific needs of users and enhance the user experience.
  • The wearable device 100 and the mobile terminal 200 can communicate by wire or wirelessly; for example, the wearable device 100 can communicate with the mobile terminal 200 through WiFi (Wireless Fidelity) or Bluetooth, or through an audio jack on the mobile terminal 200.
  • The user can cause the wearable device 100 to receive and record voice information by actuating a trigger of the wearable device 100; after communication with the mobile terminal 200 is established, the voice information is sent to the mobile terminal 200. The triggering manners, sketched in code below, are specifically as follows:
  • The trigger may be a button or a switch on the wearable device 100, which the user actuates directly; alternatively, the trigger of the wearable device 100 may detect a preset user behavior or a preset voice command.
  • For example, the trigger can detect actions such as the user nodding, raising a hand, or kicking.
  • The trigger can also detect fixed voice information spoken by the user; for example, detecting the user saying "start recording" triggers the wearable device 100. The wearable device 100 can likewise be triggered by the trigger detecting a change in the surrounding temperature or magnetic field.
  • For example, when the user passes through a security gate, the trigger detects the change in the magnetic field, automatically triggers the wearable device 100, and verifies the user's identity by recognizing the user's voiceprint.
  • As another example, the user places a finger on the trigger, which automatically triggers the wearable device 100 when a preset temperature is reached; the trigger can also receive an infrared trigger signal; or the trigger can be a camera on the wearable device 100 that triggers the device by acquiring and detecting changes in the user's image.
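  • As an illustrative aid only (not part of the claimed subject matter), the following minimal Python sketch shows how the trigger sources enumerated above could be funneled into a single activation decision. All class names and threshold values are hypothetical assumptions, not taken from the patent.

```python
# Hypothetical sketch: map the trigger sources described above to one
# "start recording" action. Thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TriggerEvent:
    source: str    # "button", "gesture", "voice", "magnetic", "temperature", "infrared", "camera"
    value: float   # sensor reading or detector confidence

class TriggerDispatcher:
    def __init__(self, start_recording: Callable[[], None]):
        self.start_recording = start_recording
        self.magnetic_delta_uT = 50.0     # e.g. passing a security gate
        self.skin_temperature_C = 30.0    # finger resting on the trigger
        self.detection_confidence = 0.8   # gesture / keyword / camera detectors

    def on_event(self, ev: TriggerEvent) -> bool:
        """Return True if the event activates the wearable device."""
        fired = (
            ev.source == "button"
            or (ev.source in ("gesture", "voice", "camera") and ev.value >= self.detection_confidence)
            or (ev.source == "magnetic" and ev.value >= self.magnetic_delta_uT)
            or (ev.source == "temperature" and ev.value >= self.skin_temperature_C)
            or ev.source == "infrared"
        )
        if fired:
            self.start_recording()
        return fired

# Usage: a "start recording" keyword detected with 90% confidence.
dispatcher = TriggerDispatcher(start_recording=lambda: print("recording started"))
dispatcher.on_event(TriggerEvent(source="voice", value=0.9))
```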
  • The wearable device 100 may first buffer the voice information and then, after establishing communication with the mobile terminal 200, transmit it to the mobile terminal 200 either while recording or after the recording is completed (both modes are sketched below).
  • The recorded voice information may be what the user is currently saying, voice information transmitted to the wearable device 100 by other electronic devices, or voice information recorded by other electronic devices and played back.
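  • A minimal sketch of the two transmission modes just named, under the assumption of a chunked microphone stream and an abstract send callback (both hypothetical interfaces):

```python
# Hypothetical sketch: send voice chunks while recording, or buffer the
# whole utterance and send it once recording completes.
from typing import Iterator, Callable

CHUNK = 3200  # e.g. 100 ms of 16 kHz, 16-bit mono audio

def send_while_recording(mic_chunks: Iterator[bytes], send: Callable[[bytes], None]) -> None:
    for chunk in mic_chunks:      # forward each chunk as soon as it is captured
        send(chunk)

def send_after_recording(mic_chunks: Iterator[bytes], send: Callable[[bytes], None]) -> None:
    buffer = bytearray()
    for chunk in mic_chunks:      # cache the full utterance first
        buffer.extend(chunk)
    send(bytes(buffer))           # transmit once communication is established

# Usage with a stubbed microphone yielding two chunks:
chunks = iter([b"\x00" * CHUNK, b"\x00" * CHUNK])
send_while_recording(chunks, send=lambda b: print(f"sent {len(b)} bytes"))
```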
  • The mobile terminal 200 performs speech recognition on the voice information to obtain the user's instruction.
  • In an embodiment of the present invention, the mobile terminal 200 may recognize the voice information through a voice recognition program to obtain the user's instruction.
  • The recognition may be performed by the mobile terminal 200's own voice recognition program; alternatively, the mobile terminal 200 may send the voice information to a cloud server and invoke the cloud server's voice recognition program, or other methods may be used.
  • In any of these ways the mobile terminal 200 obtains the speech recognition result; the details are not described here.
  • For example, in voice dialing, the user can speak a name already stored in the address book; the mobile terminal 200 obtains the name through voice recognition technology and then automatically dials the phone number corresponding to that name. Voice dialing can also proceed upon a further voice command from the user, such as "dial out".
  • As another example, the user can speak a "report time" instruction; the mobile terminal 200 obtains the instruction through voice recognition technology, makes the corresponding feedback, and broadcasts the current time by voice. A sketch of this recognition and dispatch flow follows.
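  • The following is a minimal, non-authoritative sketch of the recognition path (local program with cloud fallback) and the instruction dispatch for the two examples above; every function here is a hypothetical stub, not an API from the patent.

```python
# Hypothetical sketch: recognize locally or via a cloud server, then map
# the recognized text to an instruction (dialing, reporting the time).
import datetime

def recognize(audio: bytes, local_asr, cloud_asr) -> str:
    text = local_asr(audio)      # the terminal's own voice recognition program
    if not text:                 # fall back to the cloud server's program
        text = cloud_asr(audio)
    return text

def execute(text: str, address_book: dict) -> str:
    if text in address_book:     # a name stored in the address book
        return f"dialing {address_book[text]}"
    if text == "report time":
        return "the time is " + datetime.datetime.now().strftime("%H:%M")
    return "unrecognized instruction"

# Usage with stub recognizers:
result = execute(recognize(b"...", lambda a: "report time", lambda a: ""),
                 {"Alice": "+86-10-0000-0000"})
print(result)  # e.g. "the time is 14:05"
```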
  • The human-computer interaction system of the embodiment of the present invention thus first receives and records the user's voice information through the wearable device, sends it to the mobile terminal after communication is established, and has the mobile terminal perform voice recognition on it to obtain the user's instruction, making the user's operation in human-computer interaction fast and simple and improving the user experience.
  • Moreover, before the execution result is obtained, the wearable device plays a preset audio signal to the user, making human-computer interaction more intelligent, quick, and simple, better meeting user needs and enhancing the user experience.
  • In an embodiment of the present invention, the wearable device 100 can also record the user's voice information in real time and, after receiving it, send an activation instruction to the voice recognition program in the mobile terminal 200. Because the voice information is received through the wearable device 100 while the mobile terminal 200 is not yet ready, the activation instruction sent after the voice information is received wakes up the mobile terminal 200 and the corresponding application in it.
  • Specifically, the user can send the instruction that activates the voice recognition program to the mobile terminal 200 by pressing an operation button on the wearable device 100.
  • After the voice recognition program in the mobile terminal 200 is activated, the wearable device 100 transmits the just-recorded voice information to the mobile terminal 200, which recognizes it through the voice recognition program.
  • The voice information can also be sent to a cloud server and recognized by the voice recognition program on the cloud server.
  • In an embodiment of the present invention, the wearable device 100 plays a preset audio signal to prompt the user before the execution result is received.
  • Recognition of the voice information may fail, or transmission of the voice information may be delayed for network reasons; the wearable device 100 can therefore play a preset audio signal such as "Voice recognition failed, please try again" or "Speech recognition in progress, please wait" to prompt the user. During a two-party call, the preset audio signal can also be sent to the other party.
  • In an embodiment of the present invention, the wearable device 100 itself can actively recognize a small set of simple voice information and, during a two-party call, send the corresponding dual-tone multi-frequency (DTMF) signal to the other party.
  • DTMF can be used to output digital signals during a call, as sketched below.
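  • For illustration, a minimal sketch of DTMF tone synthesis: each key is the sum of one low-group and one high-group sine tone at the standard DTMF frequencies; the sample rate and duration below are arbitrary assumptions.

```python
# Sketch: generate the dual-tone multi-frequency (DTMF) samples for a key.
import math

LOW  = {"1": 697, "2": 697, "3": 697, "4": 770, "5": 770, "6": 770,
        "7": 852, "8": 852, "9": 852, "*": 941, "0": 941, "#": 941}
HIGH = {"1": 1209, "2": 1336, "3": 1477, "4": 1209, "5": 1336, "6": 1477,
        "7": 1209, "8": 1336, "9": 1477, "*": 1209, "0": 1336, "#": 1477}

def dtmf_samples(key: str, rate: int = 8000, seconds: float = 0.1) -> list[float]:
    f_lo, f_hi = LOW[key], HIGH[key]
    return [0.5 * math.sin(2 * math.pi * f_lo * n / rate)
            + 0.5 * math.sin(2 * math.pi * f_hi * n / rate)
            for n in range(int(rate * seconds))]

# Usage: the tone a device could inject into the call audio for key "0".
tone = dtmf_samples("0")
print(len(tone), tone[:3])
```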
  • In an embodiment of the present invention, the wearable device 100 is further configured to shoot according to the user's shooting instruction and send the captured image or video to the mobile terminal 200.
  • The shooting instruction can be input through an operation button on the wearable device 100 or by voice, as in the sketch below.
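  • A small illustrative sketch of this shoot-and-forward step, with hypothetical capture and transport callbacks (nothing here is an API from the patent):

```python
# Hypothetical sketch: shoot on a button press or spoken command and
# forward the capture to the mobile terminal.
from typing import Callable

def handle_shoot_command(command: str, capture: Callable[[], bytes],
                         send_to_terminal: Callable[[bytes], None]) -> bool:
    if command in ("button_press", "take a picture"):  # button or voice input
        send_to_terminal(capture())                    # image/video bytes
        return True
    return False

# Usage with stubs:
handle_shoot_command("take a picture",
                     capture=lambda: b"\xff\xd8...",   # pretend JPEG bytes
                     send_to_terminal=lambda b: print(f"sent {len(b)} bytes"))
```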
  • In an embodiment of the present invention, the mobile terminal 200 may also generate an execution result according to the instruction and feed the execution result back to the wearable device 100, which provides it to the user.
  • The execution result can be provided to the user by playing voice, or by displaying a picture or video.
  • The human-computer interaction system of the embodiment of the present invention thus receives and records the user's voice information through the wearable device, sends it to the mobile terminal after communication is established, and has the mobile terminal perform voice recognition on it; before the execution result is obtained, the wearable device plays a preset audio signal to the user. This makes human-computer interaction more intelligent, keeps operation quick and simple, better meets user needs, and improves the user experience. The overall exchange is sketched below.
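  • The following end-to-end sketch ties the pieces together under assumed message names (ACTIVATE, VOICE, GET_RESULT); it is a toy model of the exchange described above, not the patent's protocol.

```python
# Hypothetical sketch: the wearable device activates the terminal's voice
# recognition program, transmits the recorded voice, and relays either the
# execution result or a preset prompt while the result is pending.
def wearable_session(terminal, audio: bytes) -> str:
    terminal.handle("ACTIVATE", None)            # wake the recognition program
    terminal.handle("VOICE", audio)              # transmit the recorded voice
    result = terminal.handle("GET_RESULT", None)
    return result if result else "Speech recognition in progress, please wait"

class Terminal:
    def __init__(self):
        self.ready = False
        self.result = None
    def handle(self, msg: str, payload):
        if msg == "ACTIVATE":
            self.ready = True                    # start the voice recognition program
        elif msg == "VOICE" and self.ready:
            self.result = "number has been dialed out"   # recognize + execute (stubbed)
        elif msg == "GET_RESULT":
            return self.result

print(wearable_session(Terminal(), b"call Alice"))
```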
  • FIG. 2 is a flow chart of a human-computer interaction method according to an embodiment of the present invention.
  • In this method, voice recognition is performed on the voice information using the voice recognition technology in the mobile terminal without any operation on the mobile terminal itself, making human-computer interaction intelligent, allowing the user to operate quickly and easily, and improving the user experience.
  • The human-computer interaction method includes:
  • The wearable device receives and records the user's voice information.
  • Personal accessories that people wear on a daily basis can be given intelligent designs, yielding what is referred to as a wearable device, such as smart glasses, smart earphones, smart bracelets, smart wallets, or smart buttons.
  • Wearable devices represent a brand-new mode of human-computer interaction; because they are carried on the body, they can accurately capture the specific needs of users and enhance the user experience.
  • The wearable device and the mobile terminal can communicate by wire or wirelessly; for example, the wearable device can communicate with the mobile terminal through WiFi (Wireless Fidelity) or Bluetooth, or through an audio jack on the mobile terminal.
  • The user can cause the wearable device to receive and record voice information by actuating a trigger of the wearable device; after communication with the mobile terminal is established, the voice information is sent to the mobile terminal.
  • The triggering manners are specifically as follows:
  • The trigger may be a button or a switch on the wearable device, which the user actuates directly; alternatively, the trigger of the wearable device may detect a preset user behavior or a preset voice command.
  • For example, the trigger can detect actions such as the user nodding, raising a hand, or kicking.
  • The trigger can also detect fixed voice information spoken by the user; for example, detecting the user saying "start recording" triggers the wearable device. The wearable device can likewise be triggered by the trigger detecting a change in the surrounding temperature or magnetic field.
  • For example, when the user passes through a security gate, the trigger detects the change in the magnetic field, automatically triggers the wearable device, and verifies the user's identity by recognizing the user's voiceprint.
  • As another example, the user places a finger on the trigger, which automatically triggers the wearable device when a preset temperature is reached; the trigger can also receive an infrared trigger signal; or the trigger can be a camera on the wearable device that triggers the device by acquiring and detecting changes in the user's image.
  • The recorded voice information may be what the user is currently saying, voice information transmitted to the wearable device by other electronic devices, or voice information recorded by other electronic devices and played back.
  • The wearable device establishes communication with the mobile terminal, causes the voice recognition program in the mobile terminal to be activated, and sends the voice information to the mobile terminal.
  • Specifically, the wearable device may first buffer the voice information and, after communication is established and the voice recognition program in the mobile terminal is activated, transmit the voice information to the mobile terminal.
  • The voice information may be sent while recording or after the recording is completed.
  • The mobile terminal performs voice recognition on the voice information to obtain the user's instruction.
  • The mobile terminal receives the voice information sent by the wearable device and recognizes it through the voice recognition program to obtain the user's instruction.
  • The recognition may be performed by the mobile terminal's own voice recognition program; alternatively, the mobile terminal may send the voice information to a cloud server and invoke the cloud server's voice recognition program, or other methods may be used.
  • In any of these ways the mobile terminal obtains the speech recognition result; the details are not repeated here.
  • For example, in voice dialing, the user can speak a name already stored in the address book; the mobile terminal obtains the name through voice recognition technology and then automatically dials the corresponding phone number. Voice dialing can also proceed upon a further voice command from the user, such as "dial out".
  • As another example, the user can speak a "report time" instruction; the mobile terminal obtains the instruction through voice recognition technology, makes the corresponding feedback, and broadcasts the current time by voice.
  • In the human-computer interaction method of the embodiment of the present invention, the wearable device thus first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends it to the mobile terminal, which performs voice recognition on it. Applying voice recognition technology in this way makes human-computer interaction intelligent, keeps the user's operation fast and simple, and improves the user experience.
  • FIG. 3 is a flow chart of a human-computer interaction method in accordance with an embodiment of the present invention.
  • In this embodiment, before the execution result is obtained, the wearable device plays a preset audio signal to the user, making human-computer interaction more intelligent; and the execution result is fed back to the wearable device, which provides it to the user.
  • The human-computer interaction method includes:
  • The wearable device receives and records the user's voice information.
  • Personal accessories that people wear on a daily basis can be given intelligent designs, yielding what is referred to as a wearable device, such as smart glasses, smart earphones, smart bracelets, smart wallets, or smart buttons.
  • Wearable devices represent a brand-new mode of human-computer interaction; because they are carried on the body, they can accurately capture the specific needs of users and enhance the user experience.
  • The wearable device and the mobile terminal can communicate by wire or wirelessly; for example, the wearable device can communicate with the mobile terminal through WiFi (Wireless Fidelity) or Bluetooth, or through an audio jack on the mobile terminal.
  • The user can cause the wearable device to receive and record voice information by actuating a trigger of the wearable device; after communication with the mobile terminal is established, the voice information is sent to the mobile terminal.
  • The triggering manners are specifically as follows:
  • The trigger may be a button or a switch on the wearable device, which the user actuates directly; alternatively, the trigger of the wearable device may detect a preset user behavior or a preset voice command. For example, the trigger can detect actions such as the user nodding, raising a hand, or kicking.
  • The trigger can also detect fixed voice information spoken by the user; for example, detecting the user saying "start recording" triggers the wearable device. The wearable device can likewise be triggered by the trigger detecting a change in the surrounding temperature or magnetic field. For example, when the user passes through a security gate, the trigger detects the change in the magnetic field, automatically triggers the wearable device, and verifies the user's identity by recognizing the user's voiceprint.
  • As another example, the user places a finger on the trigger, which automatically triggers the wearable device when a preset temperature is reached; the trigger can also receive an infrared trigger signal; or the trigger can be a camera on the wearable device that triggers the device by acquiring and detecting changes in the user's image.
  • The recorded voice information may be what the user is currently saying, voice information transmitted to the wearable device by other electronic devices, or voice information recorded by other electronic devices and played back.
  • The wearable device establishes communication with the mobile terminal and sends the voice information to the mobile terminal.
  • Specifically, the wearable device may first buffer the voice information and then transmit it to the mobile terminal after communication is established.
  • The voice information may be sent while recording or after the recording is completed.
  • In an embodiment of the present invention, the wearable device records the user's voice information in real time and, after receiving it, transmits an activation instruction that activates the voice recognition program in the mobile terminal. Because the voice information is received through the wearable device while the mobile terminal is not yet ready, the activation instruction sent after the voice information is received wakes up the mobile terminal and the corresponding application in it.
  • Specifically, the user can send the activation instruction to the voice recognition program in the mobile terminal by pressing an operation button on the wearable device.
  • After the voice recognition program in the mobile terminal is activated, the wearable device transmits the just-recorded voice information to the mobile terminal, which recognizes it through the voice recognition program.
  • The voice information can also be sent to a cloud server and recognized by the voice recognition program on the cloud server.
  • In an embodiment of the present invention, the wearable device plays a preset audio signal to prompt the user before the execution result is received.
  • Recognition of the voice information may fail, or transmission of the voice information may be delayed for network reasons; the wearable device can therefore play a preset audio signal such as "Voice recognition failed, please try again" or "Speech recognition in progress, please wait" to prompt the user.
  • During a two-party call, the preset audio signal can also be sent to the other party.
  • In an embodiment of the present invention, the wearable device itself can actively recognize a small set of simple voice information and, during a two-party call, send the corresponding dual-tone multi-frequency (DTMF) signal to the other party.
  • DTMF can be used to output digital signals during a call.
  • For example, a mobile voice system can guide the user: press "1" for Mandarin, "2" for English, or "0" for manual service. The user then operates according to the prompt tones of the voice system.
  • When the user presses a number key, the corresponding command is sent as a DTMF signal; a sketch of mapping a recognized spoken choice to a DTMF key follows.
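  • Tying the voice-menu example to DTMF, a small hypothetical sketch (the menu mapping and send callback are assumptions, not from the patent):

```python
# Hypothetical sketch: recognize a simple spoken menu choice on the
# wearable device and send the corresponding key as a DTMF signal.
MENU = {"Mandarin": "1", "English": "2", "manual service": "0"}

def answer_menu(spoken_choice: str, send_dtmf) -> str:
    key = MENU.get(spoken_choice)
    if key is None:
        return "unrecognized choice"
    send_dtmf(key)    # e.g. inject the tone samples into the call audio
    return f"sent DTMF key {key}"

print(answer_menu("English", send_dtmf=lambda k: None))  # sent DTMF key 2
```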
  • The mobile terminal performs voice recognition on the voice information to obtain the user's instruction.
  • In an embodiment of the present invention, after the mobile terminal receives the activation instruction sent by the wearable device, it starts to recognize the voice information, thereby obtaining the user's instruction and performing the user's requested operation.
  • The recognition may be performed by the mobile terminal's own voice recognition program; alternatively, the mobile terminal may send the voice information to a cloud server and invoke the cloud server's voice recognition program, or other methods may be used.
  • In any of these ways the mobile terminal obtains the speech recognition result; the details are not repeated here.
  • For example, in voice dialing, the user can speak a name already stored in the address book; the mobile terminal obtains the name through voice recognition technology and then automatically dials the corresponding phone number. Voice dialing can also proceed upon a further voice command from the user, such as "dial out".
  • As another example, the user can speak a "report time" instruction; the mobile terminal obtains the instruction through voice recognition technology, makes the corresponding feedback, and broadcasts the current time by voice.
  • The mobile terminal generates an execution result according to the instruction and feeds the execution result back to the wearable device, which provides it to the user.
  • The execution result can be provided to the user by playing voice, or by displaying a picture or video.
  • For example, in voice dialing, the user speaks a name already stored in the address book, and the mobile terminal obtains the name through voice recognition technology and automatically dials the corresponding phone number, possibly upon a further voice command such as "dial out".
  • The mobile terminal performs the dialing action according to the voice command and then feeds the result back to the wearable device, which can announce "number has been dialed out" by voice to inform the user.
  • In an embodiment of the present invention, the wearable device is further configured to shoot according to the user's shooting instruction and send the captured image or video to the mobile terminal.
  • The shooting instruction can be input through an operation button on the wearable device or by voice. In this way, a shot can be completed in seconds, without the complicated steps of taking out the mobile terminal, unlocking the screen, and opening the camera application. This not only saves operation steps but also captures scenes that would usually be missed, such as scenes passing by at high speed.
  • In the human-computer interaction method of the embodiment of the present invention, the wearable device plays a preset audio signal to the user before the execution result is obtained, making human-computer interaction more intelligent; and the execution result is fed back to the wearable device, which provides it to the user, making operation faster and easier, better meeting user needs, and improving the user experience.
  • FIG. 4 is a flow chart of a human-computer interaction method according to another embodiment of the present invention.
  • In this embodiment, voice recognition is performed on the voice information using the voice recognition technology in the mobile terminal without any operation on the mobile terminal itself, making human-computer interaction intelligent, allowing the user to operate quickly and easily, and improving the user experience.
  • The human-computer interaction method includes:
  • Personal accessories that people wear on a daily basis can be given intelligent designs, yielding what is referred to as a wearable device, such as smart glasses, smart earphones, smart bracelets, smart wallets, or smart buttons.
  • Wearable devices represent a brand-new mode of human-computer interaction; because they are carried on the body, they can accurately capture the specific needs of users and enhance the user experience.
  • The wearable device and the mobile terminal can communicate by wire or wirelessly; for example, the wearable device can communicate with the mobile terminal through WiFi (Wireless Fidelity) or Bluetooth, or through an audio jack on the mobile terminal.
  • The user can cause the wearable device to receive and record voice information by actuating a trigger of the wearable device; after communication with the mobile terminal is established, the voice information is sent to the mobile terminal.
  • The triggering manners are specifically as follows:
  • The trigger may be a button or a switch on the wearable device, which the user actuates directly; alternatively, the trigger of the wearable device may detect a preset user behavior or a preset voice command. For example, the trigger can detect actions such as the user nodding, raising a hand, or kicking.
  • The trigger can also detect fixed voice information spoken by the user; for example, detecting the user saying "start recording" triggers the wearable device. The wearable device can likewise be triggered by the trigger detecting a change in the surrounding temperature or magnetic field. For example, when the user passes through a security gate, the trigger detects the change in the magnetic field, automatically triggers the wearable device, and verifies the user's identity by recognizing the user's voiceprint.
  • As another example, the user places a finger on the trigger, which automatically triggers the wearable device when a preset temperature is reached; the trigger can also receive an infrared trigger signal; or the trigger can be a camera on the wearable device that triggers the device by acquiring and detecting changes in the user's image.
  • The wearable device has a function of buffering voice information; the voice information can be recorded and temporarily stored.
  • S403. Establish communication with the mobile terminal and send the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on the voice information to obtain the user's instruction.
  • After establishing communication with the mobile terminal, the wearable device transmits the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction.
  • The voice information may be sent while recording or after the recording is completed.
  • For example, in voice dialing, the wearable device receives and records the voice information and sends it to the mobile terminal, so that the mobile terminal obtains the name through voice recognition technology and then automatically dials the corresponding phone number.
  • Voice dialing can also proceed upon a further voice command from the user, such as "dial out".
  • As another example, the wearable device receives and records a "report time" voice message and sends it to the mobile terminal, so that the mobile terminal obtains the instruction through voice recognition technology, makes the corresponding feedback, and broadcasts the current time through the wearable device by voice.
  • In the human-computer interaction method of the embodiment of the present invention, the wearable device thus first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends it to the mobile terminal, so that the mobile terminal performs voice recognition on it and obtains the user's instruction. Applying voice recognition technology in this way makes human-computer interaction intelligent, keeps the user's operation fast and simple, and improves the user experience.
  • FIG. 5 is a flow chart of a human-computer interaction method according to another embodiment of the present invention.
  • In this embodiment, the user's voice information is received and recorded by the wearable device and, after communication with the mobile terminal is established, sent to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction.
  • The human-computer interaction method includes:
  • S501. Receive the user's voice information.
  • Personal accessories that people wear on a daily basis can be given intelligent designs, yielding what is referred to as a wearable device, such as smart glasses, smart earphones, smart bracelets, smart wallets, or smart buttons.
  • Wearable devices represent a brand-new mode of human-computer interaction; because they are carried on the body, they can accurately capture the specific needs of users and enhance the user experience.
  • The wearable device and the mobile terminal can communicate by wire or wirelessly; for example, the wearable device can communicate with the mobile terminal through WiFi (Wireless Fidelity) or Bluetooth, or through an audio jack on the mobile terminal.
  • The user can cause the wearable device to receive and record voice information by actuating a trigger of the wearable device; after communication with the mobile terminal is established, the voice information is sent to the mobile terminal.
  • The triggering manners are specifically as follows:
  • The trigger may be a button or a switch on the wearable device, which the user actuates directly; alternatively, the trigger of the wearable device may detect a preset user behavior or a preset voice command. For example, the trigger can detect actions such as the user nodding, raising a hand, or kicking.
  • The trigger can also detect fixed voice information spoken by the user; for example, detecting the user saying "start recording" triggers the wearable device. The wearable device can likewise be triggered by the trigger detecting a change in the surrounding temperature or magnetic field. For example, when the user passes through a security gate, the trigger detects the change in the magnetic field, automatically triggers the wearable device, and verifies the user's identity by recognizing the user's voiceprint.
  • As another example, the user places a finger on the trigger, which automatically triggers the wearable device when a preset temperature is reached; the trigger can also receive an infrared trigger signal; or the trigger can be a camera on the wearable device that triggers the device by acquiring and detecting changes in the user's image.
  • The wearable device has a function of caching voice information; the voice information can be recorded and temporarily stored.
  • After establishing communication with the mobile terminal, the wearable device transmits the voice information to the mobile terminal.
  • The voice information may be sent while recording or after the recording is completed.
  • In an embodiment of the present invention, the user sends an activation instruction to the voice recognition program in the mobile terminal by pressing an operation button on the wearable device, thereby activating the voice recognition program. Because the voice information is received through the wearable device while the mobile terminal is not yet ready, the activation instruction sent after the voice information is received wakes up the mobile terminal and the corresponding application in it.
  • After the voice recognition program in the mobile terminal is activated, the wearable device transmits the just-recorded voice information to the mobile terminal, which recognizes it through the voice recognition program.
  • The voice information can also be sent to a cloud server and recognized by the voice recognition program on the cloud server.
  • In an embodiment of the present invention, the wearable device plays a preset audio signal to prompt the user before the execution result is received.
  • Recognition of the voice information may fail, or transmission of the voice information may be delayed for network reasons; the wearable device can therefore play a preset audio signal such as "Voice recognition failed, please try again" or "Speech recognition in progress, please wait" to prompt the user.
  • During a two-party call, the preset audio signal can also be sent to the other party.
  • In an embodiment of the present invention, the wearable device itself can actively recognize a small set of simple voice information and, during a two-party call, send the corresponding dual-tone multi-frequency (DTMF) signal to the other party.
  • DTMF can be used to output digital signals during a call.
  • For example, a mobile voice system can guide the user: press "1" for Mandarin, "2" for English, or "0" for manual service. The user then operates according to the prompt tones of the voice system.
  • When the user presses a number key, the corresponding command is sent as a DTMF signal.
  • S504. Receive the execution result generated by the mobile terminal according to the instruction.
  • After receiving the activation instruction sent by the wearable device, the mobile terminal starts to recognize the voice information, thereby obtaining the user's instruction and generating the execution result according to the instruction.
  • For example, in voice dialing, the user speaks a name already stored in the address book; the mobile terminal obtains the name through voice recognition technology and then automatically dials the corresponding phone number, possibly upon a further voice command such as "dial out".
  • As another example, the user speaks a "report time" instruction; the mobile terminal obtains the instruction through voice recognition technology, makes the corresponding feedback, and broadcasts the current time by voice.
  • The execution result can be provided to the user by playing voice, or by displaying a picture or video.
  • In an embodiment of the present invention, the wearable device is further configured to shoot according to the user's shooting instruction and send the captured image or video to the mobile terminal.
  • The shooting instruction can be input through an operation button on the wearable device or by voice. In this way, a shot can be completed in seconds, without the complicated steps of taking out the mobile terminal, unlocking the screen, and opening the camera application. This not only saves operation steps but also captures scenes that would usually be missed, such as scenes passing by at high speed.
  • The human-computer interaction method of the embodiment of the present invention thus receives and records the user's voice information through the wearable device and, after communication with the mobile terminal is established, sends it to the mobile terminal, so that the mobile terminal performs voice recognition on it; before the execution result is obtained, the wearable device plays a preset audio signal to the user. This makes human-computer interaction more intelligent, keeps operation quick and simple, better meets user needs, and improves the user experience.
  • FIG. 6 is a schematic structural diagram of an apparatus for human-computer interaction according to an embodiment of the present invention.
  • As shown in FIG. 6, the human-computer interaction apparatus includes a wearable device 100.
  • The wearable device 100 includes: a trigger 110, a microphone and voice processor 120, a memory 130, a communicator 140, and a controller 150.
  • Personal accessories that people wear on a daily basis can be given intelligent designs, yielding what is referred to as a wearable device 100, such as smart glasses, smart earphones, smart bracelets, smart wallets, or smart buttons.
  • The wearable device 100 represents a brand-new mode of human-computer interaction; because it is carried on the body, it can accurately capture the specific needs of users and enhance the user experience.
  • The user can cause the microphone and voice processor 120 to receive and record voice information by actuating the trigger 110 of the wearable device 100; after communication with the mobile terminal 200 is established, the voice information is sent to the mobile terminal 200.
  • The trigger 110 may be a button or a switch on the wearable device 100, which the user actuates directly to trigger the microphone and voice processor 120; alternatively, the trigger 110 may detect a preset user behavior or a preset voice command to trigger the microphone and voice processor 120.
  • For example, the trigger 110 can detect actions such as the user nodding, raising a hand, or kicking.
  • The trigger 110 can also detect fixed voice information spoken by the user; for example, detecting the user saying "start recording" triggers the microphone and voice processor 120. The microphone and voice processor 120 can likewise be triggered when the trigger 110 detects a change in the surrounding temperature or magnetic field.
  • For example, when the user passes through a security gate, the trigger 110 detects the change in the magnetic field, automatically triggers the microphone and voice processor 120, and verifies the user's identity by recognizing the user's voiceprint. As another example, the user places a finger on the trigger 110, which automatically triggers the microphone and voice processor 120 when a preset temperature is reached; the trigger 110 can also receive an infrared trigger signal to trigger the microphone and voice processor 120.
  • The memory 130 stores the voice information.
  • The memory 130 has a function of caching voice information; the voice information can be recorded and temporarily stored.
  • The memory 130 can also store picture information, video information, and the like.
  • The communicator 140 establishes communication with the mobile terminal 200.
  • The communicator 140 and the mobile terminal 200 can communicate by wire or wirelessly; for example, the communicator 140 can communicate with the mobile terminal 200 through WiFi (Wireless Fidelity) or Bluetooth, or through an audio jack on the mobile terminal 200.
  • The controller 150 sends the voice information to the mobile terminal 200 through the communicator 140, so that the mobile terminal 200 performs voice recognition on the voice information to obtain the user's instruction.
  • The controller 150 can also control the other components in the wearable device 100.
  • After the controller 150 directs the communicator 140 to establish communication with the mobile terminal 200, it transmits the voice information to the mobile terminal 200, so that the mobile terminal 200 performs voice recognition on it and obtains the user's instruction.
  • The voice information may be sent while recording or after the recording is completed.
  • For example, in voice dialing, the user can speak a name already stored in the address book; the mobile terminal 200 obtains the name through voice recognition technology and then automatically dials the corresponding phone number. Voice dialing can also proceed upon a further voice command from the user, such as "dial out".
  • As another example, the user can speak a "report time" instruction; the mobile terminal 200 obtains the instruction through voice recognition technology, makes the corresponding feedback, and broadcasts the current time by voice.
  • The human-computer interaction apparatus of the embodiment of the present invention thus first receives and records the user's voice information through the microphone and voice processor; after communication with the mobile terminal is established, the controller sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it and obtains the user's instruction.
  • FIG. 7 is a schematic structural diagram of an apparatus for human-computer interaction according to an embodiment of the present invention.
  • a human-machine interaction apparatus includes: a trigger 110, a microphone and voice processor 120, a memory 130, a communicator 140, a controller 150, and a camera 160.
  • the controller 150 is further configured to receive, by the communicator 140, an execution result generated by the mobile terminal 200 according to the instruction, and provide the execution result to the user.
  • the controller 150 can also control other devices such as the camera 160.
  • The execution result can be provided to the user by playing voice, or alternatively by displaying a picture or video.
  • The controller 150 is further configured to send, after the voice information is received, an activation instruction that activates the voice recognition program in the mobile terminal 200. Because the mobile terminal 200 is not yet ready when the controller 150 receives the user's voice information, the controller 150 sends the activation instruction to the voice recognition program in the mobile terminal 200 after receiving the voice information, thereby activating the mobile terminal 200 and the corresponding application in it.
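  • One way to picture this activation handshake: because the terminal may be asleep when speech arrives, the wearable first sends an activate message and only then the buffered audio. The packet format below is an assumption for illustration:

```python
import json

def deliver(recording: bytes, send_packet):
    """Wake the terminal's recognizer first, then deliver the buffered utterance."""
    send_packet(json.dumps({"type": "activate", "target": "speech_recognizer"}).encode())
    send_packet(json.dumps({"type": "audio", "length": len(recording)}).encode())
    send_packet(recording)

deliver(b"\x00\x01", lambda pkt: print(pkt[:40]))
```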
  • The controller 150 can also control other devices in the wearable device 100, for example the camera 160.
  • The camera 160 is configured to shoot according to the user's shooting instruction and to transmit the captured image or video to the mobile terminal 200 through the communicator 140.
  • The shooting instruction can be input through an operation button on the wearable device 100, or by voice. In this way, the shooting operation is completed within seconds, without the tedious steps of taking out the mobile terminal 200, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.
  • the camera 160 can also be configured to trigger the microphone and speech processor 120 by acquiring and detecting changes in the user image such that the microphone and speech processor 120 receive and record the user's voice information.
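  • A frame-differencing check is one plausible reading of "acquiring and detecting changes in the user image"; the sketch below compares successive grayscale frames and fires when the mean pixel change crosses an assumed threshold (the threshold value and frame format are inventions for this example):

```python
def image_changed(prev: list[int], curr: list[int], threshold: float = 12.0) -> bool:
    """Trigger when the mean absolute pixel difference between frames is large."""
    diff = sum(abs(a - b) for a, b in zip(prev, curr)) / max(len(curr), 1)
    return diff > threshold

# usage: two tiny fake grayscale frames
print(image_changed([10, 10, 10, 10], [10, 60, 80, 10]))  # True -> start recording
```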
  • With the apparatus for human-computer interaction of this embodiment, the microphone and voice processor receive and record the user's voice information; after communication with the mobile terminal is established, the controller sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction.
  • Before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent and the operation quick and simple, better meets the user's needs, and improves the user experience.
  • FIG. 8 is a schematic diagram of a device for human-computer interaction according to an embodiment of the present invention.
  • the human-machine interaction device includes: a smart earphone 100 and a mobile phone 200.
  • the smart earphone 100 includes: a microphone and voice processor 110, a memory 120, a communicator 130, a controller 140, a camera 150, a signal switcher 160, and a headphone audio line 170.
  • the handset 200 includes an audio jack 210.
  • a microphone and voice processor 110 is used to receive and record the user's voice information.
  • the user can trigger the smart headset 100 in a plurality of manners, such that the microphone and the voice processor 110 receive and record the voice information of the user, and after establishing communication with the mobile phone 200, send the voice information to the mobile phone 200.
  • the user can trigger the smart earphone 100 by triggering a button or switch on the smart earphone 100.
  • The user can also trigger the smart earphone 100 through detection of the user's preset behavior or a preset voice command, for example nodding or waving a hand, or by speaking fixed voice information such as "start recording".
  • the memory 120 is used to store voice information.
  • The memory 120 can buffer voice information, temporarily recording and storing it.
  • the memory 120 can also store picture information, video information, and the like.
  • Communicator 130 is used to establish communication with handset 200.
  • the communicator 130 and the mobile phone 200 can communicate by wire or wirelessly, for example, the communicator 130 can communicate with the mobile phone 200 through WiFi (Wireless Fidelity) or Bluetooth, or the communicator 130 is inserted into the audio jack 210 on the handset 200 through the headset audio line 170 to communicate with the handset 200.
  • The earphone audio cable 170 can be used by the earphone to play audio or to transmit data.
  • Plugging the earphone audio cable 170 into the audio jack 210 on the handset 200 establishes communication between the smart earphone 100 and the handset 200.
  • the controller 140 is configured to send voice information to the mobile phone 200 through the communicator 130, so that the mobile phone 200 performs voice recognition on the voice information to acquire the user's instruction.
  • After establishing communication with the handset 200, the controller 140 transmits the voice information to the handset 200, so that the handset 200 performs voice recognition on it to obtain the user's instruction.
  • The voice information may be sent while it is being recorded, or sent after recording is complete.
  • For example, when the user wants to voice-dial, the user can speak a name that already exists in the address book of the handset 200; the handset 200 obtains the name through voice recognition and then automatically dials the phone number corresponding to that name.
  • Voice dialing can also be completed on a further voice command from the user, such as "dial out".
  • As another example, when the user wants the current time, the user can speak a "time report" instruction; the handset 200 obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.
  • the controller 140 is further configured to receive, by the communicator 130, an execution result generated by the mobile phone 200 according to the instruction, and provide the execution result to the user.
  • The execution result can be provided to the user by playing voice, or alternatively by displaying a picture or video.
  • The controller 140 is further configured to send, after receiving the voice information, an activation instruction that activates the voice recognition program in the handset 200. Because the handset 200 is not yet ready when the controller 140 receives the user's voice information, the controller 140 sends the activation instruction to the voice recognition program in the handset 200 after receiving the voice information, thereby activating the handset 200 and the corresponding application in it.
  • the controller 140 can also control other devices such as the camera 150.
  • The camera 150 is configured to shoot according to the user's shooting instruction and to transmit the captured image or video to the handset 200 through the communicator 130.
  • The shooting instruction can be input through an operation button on the smart earphone 100, or by voice. In this way, the shooting operation is completed within seconds, without the tedious steps of taking out the handset 200, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.
  • The camera 150 can also acquire and detect changes in the user's image to trigger the microphone and voice processor 110, causing it to receive and record the user's voice information.
  • The signal switcher 160 is used for switching among multiple signal lines. For example, while the user is listening to a song, an audio signal line is occupied; if a call comes in, the signal switcher 160 can switch the audio signal line to the call signal line so that the user can talk with the caller, and when the call ends it switches the call signal line back to the audio signal line playing the song.
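  • The signal switcher behaves like a small state machine: music holds the audio line until a call arrives, and the line is returned when the call ends. A minimal sketch, with invented state names:

```python
class SignalSwitcher:
    """Switches the single audio line between music playback and an incoming call."""

    def __init__(self):
        self.line = "music"
        self.resume = None

    def incoming_call(self):
        self.resume, self.line = self.line, "call"   # remember what to go back to

    def call_ended(self):
        self.line, self.resume = self.resume or "idle", None

s = SignalSwitcher()
s.incoming_call(); print(s.line)  # call
s.call_ended();    print(s.line)  # music
```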
  • With the apparatus for human-computer interaction of this embodiment, the microphone and voice processor of the smart earphone receive and record the user's voice information; after communication with the handset is established, the controller sends the voice information to the handset, so that the handset performs voice recognition on it to obtain the user's instruction.
  • Before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent and the operation quick and simple, better meets the user's needs, and improves the user experience.
  • the present invention also proposes a wearable device.
  • a wearable device comprising the device for human-computer interaction as shown in any of Figures 6 and 7.
  • With the wearable device of this embodiment, the wearable device first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction.
  • Before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent and the operation quick and simple, better meets the user's needs, and improves the user experience.
  • FIG. 9 is a flowchart of a human-computer interaction method according to still another embodiment of the present invention.
  • In this embodiment, an activation instruction from the wearable device is received, the voice recognition program is started according to the activation instruction, the user's voice information sent by the wearable device is received, and the voice information is recognized through the voice recognition program to obtain the user's instruction.
  • By applying speech recognition technology, operation during human-computer interaction becomes quick and easy for the user, which improves the user experience.
  • the human-computer interaction method according to the embodiment of the present invention includes:
  • The activation instruction is used to activate the voice recognition program in the mobile terminal or in a cloud server. Because the mobile terminal is not yet ready when the user's voice information is being received, the mobile terminal and the corresponding application in it are activated by receiving the activation instruction for the voice recognition program after the voice information has been received.
  • the speech recognition program is started according to the activation command.
  • The voice recognition program may be pre-installed in the mobile terminal or in a cloud server, so that the voice information is recognized through voice recognition technology.
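  • Since the recognition program may live on the terminal or on a cloud server, the terminal effectively chooses a backend at run time. A sketch of that dispatch, with hypothetical callables standing in for both backends:

```python
def recognize(audio: bytes, local_model=None, cloud=None) -> str:
    """Prefer the on-device recognizer; otherwise forward the audio to the cloud."""
    if local_model is not None:
        return local_model(audio)
    if cloud is not None:
        return cloud(audio)          # e.g. an HTTP call in a real system
    raise RuntimeError("no recognizer available")

print(recognize(b"...", local_model=lambda a: "time report"))
```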
  • After establishing communication with the wearable device, the mobile terminal receives the user's voice information sent by the wearable device and performs voice recognition on it to obtain the user's instruction.
  • For example, when the user wants to voice-dial, the user can speak a name that already exists in the address book; the mobile terminal obtains the name through voice recognition and then automatically dials the corresponding phone number. Voice dialing can also be completed on a further voice command from the user, such as "dial out".
  • As another example, when the user wants the current time, the user can speak a "time report" instruction; the mobile terminal obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.
  • With the human-computer interaction method of this embodiment, an activation instruction from the wearable device is received, the voice recognition program is started according to the activation instruction, the user's voice information sent by the wearable device is received, and the voice information is recognized through the voice recognition program to obtain the user's instruction. By applying speech recognition technology, operation during human-computer interaction becomes quick and easy for the user, improving the user experience.
  • FIG. 10 is a flow chart of a human-computer interaction method according to still another embodiment of the present invention.
  • In this embodiment, the voice recognition program is started according to an activation instruction, the user's voice information sent by the wearable device is received, and the voice information is recognized through the voice recognition program to obtain the user's instruction. The execution result is fed back to the wearable device so that the wearable device provides it to the user, making operation during human-computer interaction quicker, simpler, and more convenient, better meeting the user's needs and improving the user experience.
  • the human-computer interaction method includes:
  • The activation instruction is used to activate the voice recognition program in the mobile terminal or in a cloud server. Because the mobile terminal is not yet ready when the user's voice information is being received, the mobile terminal and the corresponding application in it are activated by receiving the activation instruction for the voice recognition program after the voice information has been received.
  • The voice recognition program may be pre-installed in the mobile terminal or in a cloud server, so that the voice information is recognized through voice recognition technology.
  • After establishing communication with the wearable device, the mobile terminal receives the user's voice information sent by the wearable device and performs voice recognition on it to obtain the user's instruction.
  • After receiving the activation instruction sent by the wearable device, the mobile terminal starts to recognize the voice information, thereby obtains the user's instruction, and generates an execution result according to the instruction.
  • For example, when the user wants to voice-dial, the user can speak a name that already exists in the address book; the mobile terminal obtains the name through voice recognition and then automatically dials the corresponding phone number. Voice dialing can also be completed on a further voice command from the user, such as "dial out".
  • As another example, when the user wants the current time, the user can speak a "time report" instruction; the mobile terminal obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.
  • the execution result is fed back to the wearable device and provided to the user through the wearable device.
  • The execution result can be provided to the user by playing voice, or alternatively by displaying a picture or video.
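  • Putting the steps of this embodiment together on the terminal side: recognize, execute, then feed the result back so the wearable can play or display it. Everything below is a schematic stand-in with assumed callables, not the patented implementation:

```python
def serve_request(audio: bytes, recognize, execute, send_to_wearable):
    instruction = recognize(audio)          # obtain the user's instruction
    result = execute(instruction)           # generate the execution result
    send_to_wearable({"speak": result})     # wearable plays it back as voice
    return result

serve_request(b"...", lambda a: "time report",
              lambda i: "it is 09:30", lambda msg: print("->", msg))
```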
  • the mobile terminal is further configured to receive an image or video captured by the wearable device.
  • Shooting instructions can be entered via the operation buttons on the wearable device or by voice. In this way, the shooting operation is completed within seconds, without the tedious steps of taking out the mobile terminal, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.
  • With the human-computer interaction method of this embodiment, the mobile terminal's voice recognition program recognizes the voice information to obtain the user's instruction and feeds the execution result back to the wearable device, so that the wearable device provides the execution result to the user; this makes operation during human-computer interaction quicker, simpler, and more convenient, better meets the user's needs, and improves the user experience.
  • FIG. 11 is a schematic structural diagram of an apparatus for human-computer interaction according to still another embodiment of the present invention.
  • the human-machine interaction apparatus includes: a first receiving module 210, a booting module 220, and a second receiving module 230.
  • the first receiving module 210 is configured to receive an activation instruction of the wearable device.
  • The activation instruction is used to activate the voice recognition program in the mobile terminal 200 or in a cloud server. Because the mobile terminal 200 is not yet ready when the user's voice information is being received, after the voice information has been received the first receiving module 210 receives the activation instruction for the voice recognition program, thereby activating the mobile terminal 200 and the corresponding application in it.
  • the startup module 220 is configured to activate the speech recognition program according to the activation instruction.
  • the voice recognition program is a program pre-installed in the mobile terminal 200 or the cloud server.
  • Many applications currently include voice recognition features, for example map software, time-report software, and dialing software.
  • the second receiving module 230 is configured to receive the voice information of the user sent by the wearable device 100, and identify the voice information through the voice recognition program to obtain the user's instruction.
  • After establishing communication with the wearable device 100, the mobile terminal 200 receives the user's voice information sent by the wearable device 100 and performs voice recognition on it to obtain the user's instruction.
  • For example, when the user wants to voice-dial, the user can speak a name that already exists in the address book; the mobile terminal 200 obtains the name through voice recognition and then automatically dials the corresponding phone number.
  • Voice dialing can also be completed on a further voice command from the user, such as "dial out".
  • As another example, when the user wants the current time, the user can speak a "time report" instruction; the mobile terminal 200 obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.
  • With the apparatus for human-computer interaction of this embodiment, the first receiving module receives the activation instruction from the wearable device, the voice recognition program is started according to the activation instruction, and the second receiving module receives the user's voice information sent by the wearable device and recognizes it through the voice recognition program to obtain the user's instruction. By applying speech recognition technology, operation during human-computer interaction becomes quick and easy for the user, improving the user experience.
  • FIG. 12 is a schematic structural diagram of an apparatus for human-computer interaction according to still another embodiment of the present invention.
  • the human-machine interaction apparatus includes: a first receiving module 210, a booting module 220, a second receiving module 230, an executing module 240, and a feedback module 250.
  • The execution module 240 is configured to generate an execution result according to the instruction.
  • After the mobile terminal 200 receives the activation instruction sent by the wearable device 100, it starts to recognize the voice information, thereby obtains the user's instruction, and generates an execution result according to the instruction.
  • For example, when the user wants to voice-dial, the user can speak a name that already exists in the address book; the mobile terminal 200 obtains the name through voice recognition and then automatically dials the corresponding phone number.
  • Voice dialing can also be completed on a further voice command from the user, such as "dial out".
  • As another example, when the user wants the current time, the user can speak a "time report" instruction; the mobile terminal 200 obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.
  • the feedback module 250 is configured to feed back the execution result to the wearable device 100 and provide it to the user through the wearable device 100.
  • The feedback module 250 feeds the execution result back to the wearable device 100; the result can be provided to the user through the wearable device 100 by playing voice, or alternatively by displaying a picture or video.
  • The second receiving module 230 is further configured to receive the image or video captured by the user and sent by the wearable device 100.
  • The shooting instruction can be input through an operation button on the wearable device 100, or by voice. In this way, the shooting operation is completed within seconds, without the tedious steps of taking out the mobile terminal 200, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.
  • With the apparatus for human-computer interaction of this embodiment, the second receiving module recognizes the voice information through the voice recognition program to obtain the user's instruction, and the execution result is fed back to the wearable device so that the wearable device provides it to the user; this makes operation during human-computer interaction quicker, simpler, and more convenient, better meets the user's needs, and improves the user experience.
  • the present invention also proposes a mobile terminal.
  • a mobile terminal comprising the apparatus for human-computer interaction as shown in any one of Figures 11 and 12.
  • With the mobile terminal of this embodiment, the mobile terminal receives an activation instruction from the wearable device, starts the voice recognition program according to the activation instruction, receives the user's voice information sent by the wearable device, and recognizes the voice information through the voice recognition program to obtain the user's instruction; the execution result is fed back to the wearable device so that the wearable device provides it to the user. This makes operation during human-computer interaction quicker, simpler, and more convenient, better meets the user's needs, and improves the user experience.
  • the mobile terminal can be, for example, a hardware device having various operating systems such as a mobile phone, a tablet, a personal digital assistant, an e-book, and the like.
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with such an instruction execution system, apparatus, or device.
  • Computer readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), fiber optic devices, and portable compact disc read-only memory (CD-ROM).
  • The computer readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable way if necessary, and then stored in computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination of them, can be used: discrete logic circuits having logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and so on.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • the above-mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)
  • Telephone Function (AREA)

Abstract

An embodiment of the present invention provides a human-computer interaction system, comprising: a wearable device configured to receive and record a user's voice information and, after establishing communication with a mobile terminal, send the voice information to the mobile terminal; and the mobile terminal, configured to perform voice recognition on the voice information to obtain the user's instruction. Embodiments of the present invention also provide a human-computer interaction method and apparatus. With the human-computer interaction system of the embodiments, the wearable device first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal; the mobile terminal then recognizes the voice information through voice recognition technology to obtain the user's instruction. The user can thus apply voice recognition technology without taking out the mobile terminal to perform any operation, realizing intelligent human-computer interaction, providing convenience to the user, and improving the user experience.

Description

Human-Computer Interaction System, Method, and Apparatus

TECHNICAL FIELD

The present invention relates to the field of electronic device technologies, and in particular to a human-computer interaction system, method, and apparatus.

BACKGROUND

With the continuous development of science and technology, human-computer interaction is becoming increasingly diversified and intelligent. At present, human-computer interaction can be realized through keyboard input, mouse control, graphic recognition technology, speech recognition technology, and the like. Among these, speech recognition technology is applied in more and more fields, such as industry, home appliances, communications, automotive electronics, medical care, and consumer electronics. As recognition accuracy keeps improving, it has now reached over 90%, so speech recognition has gradually become one of the most important modes of human-computer interaction. In voice dialing, for example, the user can speak the other party's name or phone number when dialing, and the mobile terminal can dial out automatically through voice recognition technology. In addition, speech recognition can also be used for voice document retrieval, simple dictation data entry, and so on.

However, when the user interacts with a mobile terminal, operation is still cumbersome and inconvenient. For example, to voice-dial through speech recognition, the user must first take out the mobile terminal, then unlock the screen by touch, and then start the speech recognition program and hold down the voice function key before recognition can actually begin. Thus, although speech recognition reduces the operations of dialing by touching digit keys, the operation remains very tedious, usage efficiency is low, and the user experience is poor. In the voice dialing example above, at least 10 seconds elapse between the user wishing to input a voice instruction and the mobile terminal receiving it, which cannot meet the user's demand for real-time service.

SUMMARY
The embodiments of the present invention aim to solve the above technical problems at least to some extent.

To this end, a first object of the embodiments of the present invention is to provide a human-computer interaction system that realizes intelligent human-computer interaction without any operation on the mobile terminal, thereby providing convenience to the user and improving the user experience.

A second object of the embodiments of the present invention is to provide a human-computer interaction method.

A third object is to provide another human-computer interaction method.

A fourth object is to provide an apparatus for human-computer interaction.

A fifth object is to provide a wearable device.

A sixth object is to provide yet another human-computer interaction method.

A seventh object is to provide yet another apparatus for human-computer interaction.

An eighth object is to provide a mobile terminal.
To achieve the above objects, an embodiment according to a first aspect of the present invention provides a human-computer interaction system, comprising: a wearable device configured to receive and record a user's voice information and, after establishing communication with a mobile terminal, send the voice information to the mobile terminal; and the mobile terminal, configured to perform voice recognition on the voice information to obtain the user's instruction.

With the human-computer interaction system of the embodiments of the present invention, the wearable device can first receive and record the user's voice information and, after communication with the mobile terminal is established, send the voice information to the mobile terminal, so that the user can have the voice information recognized without taking out the mobile terminal. This realizes intelligent human-computer interaction, provides convenience to the user, makes operation during human-computer interaction quicker, simpler, and more convenient, and improves the user experience.

An embodiment according to a second aspect of the present invention provides a human-computer interaction method, comprising the steps of: receiving and recording, by a wearable device, a user's voice information; establishing, by the wearable device, communication with a mobile terminal, and sending the voice information to the mobile terminal; and performing, by the mobile terminal, voice recognition on the voice information to obtain the user's instruction.

With the human-computer interaction method of the embodiments, the wearable device can first receive and record the user's voice information and, after communication with the mobile terminal is established, send the voice information to the mobile terminal; the mobile terminal performs voice recognition on it to obtain the user's instruction, and finally the execution result is provided to the user through the wearable device, so that the user can have the voice information recognized without taking out the mobile terminal. This realizes intelligent human-computer interaction, makes operation quicker, simpler, and more convenient, better meets the user's needs, and improves the user experience.

An embodiment according to a third aspect of the present invention provides another human-computer interaction method, comprising the steps of: receiving a user's voice information; recording the voice information; and establishing communication with a mobile terminal and sending the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on the voice information to obtain the user's instruction.

With this human-computer interaction method, the wearable device first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction; finally the execution result is provided to the user through the wearable device, so that the user can have the voice information recognized without taking out the mobile terminal. This realizes intelligent human-computer interaction, provides convenience to the user, makes operation quicker, simpler, and more convenient, and improves the user experience.

An embodiment according to a fourth aspect of the present invention provides an apparatus for human-computer interaction, comprising: a microphone and voice processor configured to receive and record a user's voice information; a memory configured to store the voice information; a communicator configured to establish communication with a mobile terminal; and a controller configured to send the voice information to the mobile terminal through the communicator, so that the mobile terminal performs voice recognition on the voice information to obtain the user's instruction.

With this apparatus for human-computer interaction, the microphone and voice processor receive and record the user's voice information, and after communication with the mobile terminal is established, the voice information is sent to the mobile terminal so that the mobile terminal performs voice recognition on it to obtain the user's instruction. The user can apply voice recognition technology without taking out the mobile terminal to operate it, realizing intelligent human-computer interaction, providing convenience to the user, making operation quicker, simpler, and more convenient, and improving the user experience.

An embodiment according to a fifth aspect of the present invention provides a wearable device comprising the apparatus for human-computer interaction described above. With this wearable device, the wearable device first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal so that the mobile terminal performs voice recognition on it to obtain the user's instruction; finally the execution result is provided to the user through the wearable device, so that the user can apply voice recognition technology without taking out the mobile terminal, realizing intelligent human-computer interaction, making operation quicker, simpler, and more convenient, better meeting the user's needs, and improving the user experience.

An embodiment according to a sixth aspect of the present invention provides yet another human-computer interaction method, comprising the steps of: receiving an activation instruction from a wearable device; starting a voice recognition program according to the activation instruction; and receiving the user's voice information sent by the wearable device, and recognizing the voice information through the voice recognition program to obtain the user's instruction.

With this human-computer interaction method, an activation instruction from the wearable device is received, the voice recognition program is started according to the activation instruction, the user's voice information sent by the wearable device is received and recognized through the voice recognition program to obtain the user's instruction, and the execution result is fed back to the wearable device so that the wearable device provides it to the user, making operation during human-computer interaction quicker, simpler, and more convenient, better meeting the user's needs, and improving the user experience.

An embodiment according to a seventh aspect of the present invention provides yet another apparatus for human-computer interaction, comprising: a first receiving module configured to receive an activation instruction from a wearable device; a starting module configured to start a voice recognition program according to the activation instruction; and a second receiving module configured to receive the user's voice information sent by the wearable device and recognize the voice information through the voice recognition program to obtain the user's instruction.

With this apparatus for human-computer interaction, the first receiving module receives the user's voice information sent by the wearable device, and the voice information is recognized through the voice recognition program to obtain the user's instruction, so that the user can apply voice recognition technology without taking out the mobile terminal to operate it, realizing intelligent human-computer interaction, making operation quicker, simpler, and more convenient, providing convenience to the user, and improving the user experience.

An embodiment according to an eighth aspect of the present invention provides a mobile terminal comprising the apparatus for human-computer interaction described above. With this mobile terminal, an activation instruction from the wearable device is received, the voice recognition program is started according to the activation instruction, the user's voice information sent by the wearable device is received and recognized through the voice recognition program to obtain the user's instruction, and the execution result is fed back to the wearable device so that the wearable device provides it to the user, making operation during human-computer interaction quicker, simpler, and more convenient, better meeting the user's needs, and improving the user experience.
Additional aspects and advantages of the present invention will be set forth in part in the following description, and in part will become apparent from the description or be learned by practice of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic diagram of a human-computer interaction system according to an embodiment of the present invention;

FIG. 2 is a flowchart of a human-computer interaction method according to an embodiment of the present invention;

FIG. 3 is a flowchart of a human-computer interaction method according to a specific embodiment of the present invention;

FIG. 4 is a flowchart of a human-computer interaction method according to another embodiment of the present invention;

FIG. 5 is a flowchart of a human-computer interaction method according to another specific embodiment of the present invention;

FIG. 6 is a schematic structural diagram of an apparatus for human-computer interaction according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of an apparatus for human-computer interaction according to a specific embodiment of the present invention;

FIG. 8 is a physical schematic diagram of an apparatus for human-computer interaction according to a specific embodiment of the present invention;

FIG. 9 is a flowchart of a human-computer interaction method according to yet another embodiment of the present invention;

FIG. 10 is a flowchart of a human-computer interaction method according to yet another specific embodiment of the present invention;

FIG. 11 is a schematic structural diagram of an apparatus for human-computer interaction according to yet another embodiment of the present invention;

FIG. 12 is a schematic structural diagram of an apparatus for human-computer interaction according to yet another specific embodiment of the present invention.

DETAILED DESCRIPTION
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present invention, and are not to be construed as limiting it.

In the description of the present invention, it should be understood that orientation or position terms such as "center", "longitudinal", "transverse", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", and "outer" indicate orientations or positional relationships based on the drawings; they are used only to facilitate and simplify the description, and do not indicate or imply that the referenced device or element must have a particular orientation or be constructed and operated in a particular orientation, and therefore cannot be construed as limiting the present invention. In addition, the terms "first" and "second" are used only for descriptive purposes and cannot be understood as indicating or implying relative importance.

In the description of the present invention, it should be noted that, unless otherwise expressly specified and limited, the terms "mounted", "connected", and "coupled" are to be understood broadly: for example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; or internal communication between two elements. For those of ordinary skill in the art, the specific meaning of the above terms in the present invention can be understood according to the specific situation.

Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The human-computer interaction system, method, and apparatus according to embodiments of the present invention are described below with reference to the accompanying drawings.

To make human-computer interaction through voice recognition technology more convenient for the user and to improve usage efficiency, the present invention provides a human-computer interaction system comprising: a wearable device configured to receive and record a user's voice information and, after establishing communication with a mobile terminal, send the voice information to the mobile terminal; and the mobile terminal, configured to perform voice recognition on the voice information to obtain the user's instruction.

FIG. 1 is a schematic diagram of a human-computer interaction system according to an embodiment of the present invention.

As shown in FIG. 1, the human-computer interaction system according to the embodiment includes a wearable device 100 and a mobile terminal 200. The wearable device 100 is configured to receive and record the user's voice information and, after establishing communication with the mobile terminal 200, send the voice information to the mobile terminal 200. In the embodiments of the present invention, everyday personal accessories are given an intelligent design, and a device developed in this way that can be worn may be called a wearable device 100, for example the known smart glasses, smart earphones, smart wristbands, smart wallets, or smart buttons. The wearable device 100 is a brand-new mode of human-computer interaction; because it is easy to carry and accompanies the user everywhere, it can understand the user's specific needs more accurately and improve the user experience.

In one embodiment of the present invention, the wearable device 100 and the mobile terminal 200 may communicate in a wired or wireless manner; for example, the wearable device 100 may communicate with the mobile terminal 200 via WiFi (wireless fidelity) or Bluetooth, or through an audio jack on the mobile terminal 200.

In the embodiments of the present invention, the user may actuate a trigger of the wearable device 100 so that the wearable device 100 receives and records the user's voice information and, after establishing communication with the mobile terminal 200, sends it to the mobile terminal 200. The triggering modes are specifically as follows: the trigger may be a button or switch on the wearable device 100, which the user actuates to trigger the wearable device 100; triggering may also be performed by the trigger detecting the user's preset behavior or a preset voice instruction. For example, the trigger may detect actions such as the user nodding, waving a hand, or kicking; or the trigger may detect fixed voice information spoken by the user, for example the user saying "start recording" triggers the wearable device 100. The wearable device 100 may also be triggered by the trigger detecting a change in the surrounding temperature or magnetic field. For example, when the user passes through a security gate for a security check, the trigger detects the change in the magnetic field and automatically triggers the wearable device 100, and the user's identity is judged by recognizing the user's voiceprint. As another example, when the user places a finger on the trigger and a preset temperature is reached, the wearable device 100 is triggered automatically. An infrared trigger signal may also be received through the trigger to trigger the wearable device 100; and the trigger may further be a camera on the wearable device 100, which triggers the wearable device 100 by acquiring and detecting changes in the user's image. In one embodiment of the present invention, the wearable device 100 may first buffer the voice information and then, after establishing communication with the mobile terminal 200, send the voice information to the mobile terminal 200 either while recording it or after recording is complete. The recorded user voice information may be the voice of the user currently speaking, voice information transmitted to the wearable device 100 by another electronic device, or voice information recorded and later played back by another electronic device.

The mobile terminal 200 is configured to perform voice recognition on the voice information to obtain the user's instruction.

In the embodiments of the present invention, after receiving the voice information sent by the wearable device 100, the mobile terminal 200 may recognize it through a voice recognition program to obtain the user's instruction. Recognition may be performed by the mobile terminal 200's own voice recognition program, or the mobile terminal 200 may send the voice information to a cloud server and invoke the cloud server's voice recognition program, or recognition may be performed in other ways; in short, it suffices that the mobile terminal 200 can obtain the voice recognition result, which is not repeated here.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal 200 obtains the name through voice recognition and can then automatically dial the phone number corresponding to that name. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the mobile terminal 200 obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

With the human-computer interaction system of the embodiments of the present invention, the wearable device first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal; the mobile terminal then performs voice recognition on it to obtain the user's instruction, making operation quick and simple for the user during human-computer interaction and improving the user experience.

In another embodiment of the present invention, the mobile terminal performs voice recognition on the voice information to obtain the user's instruction, and before the execution result is obtained the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent, quick, and simple, better meets the user's needs, and improves the user experience.

In the embodiments of the present invention, the wearable device 100 may also record the user's voice information in real time and, after receiving the voice information, send an activation instruction to the voice recognition program in the mobile terminal 200. Because the mobile terminal 200 is not yet ready when the wearable device 100 receives the user's voice information, the wearable device 100 sends the activation instruction to the voice recognition program in the mobile terminal 200 after receiving the voice information, thereby activating the mobile terminal 200 and the corresponding application in it.

Specifically, the user may send the instruction that activates the voice recognition program to the mobile terminal 200 by actuating an operation key on the wearable device 100. After the voice recognition program in the mobile terminal 200 has been activated, the wearable device 100 sends the just-recorded voice information to the mobile terminal 200 for recognition by that program. It will be appreciated that the voice recognition program may also reside on a cloud server, in which case recognition is performed by sending the voice information to the cloud server.

In the embodiments of the present invention, before the wearable device 100 receives the execution result, the wearable device 100 plays a preset audio signal to prompt the user. During this period, recognition of the voice information may fail, or sending the voice information may be delayed for network reasons, so the wearable device 100 may play a preset audio signal to prompt the user, for example "Voice recognition failed, please try again" or "Recognizing, please wait". During a two-party call, the preset audio signal may also be sent to the other party.

In the embodiments of the present invention, the wearable device 100 itself may also actively recognize a small number of simple voice messages and, during a two-party call, send the corresponding dual-tone multi-frequency (DTMF) signals to the other party; DTMF can be used to output digital signals during a call. In the embodiments, the wearable device 100 is further configured to shoot according to the user's shooting instruction and send the captured image or video to the mobile terminal 200. The shooting instruction may be input through an operation button on the wearable device 100 or by voice. In this way the shooting operation is completed within seconds, without the tedious steps of taking out the mobile terminal 200, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.

In the embodiments of the present invention, the mobile terminal 200 may further generate an execution result according to the instruction and feed it back to the wearable device 100, so that the wearable device 100 provides the execution result to the user, either by playing voice or by displaying a picture or video.

With the human-computer interaction system of the embodiments of the present invention, the wearable device receives and records the user's voice information and, after establishing communication with the mobile terminal, sends it to the mobile terminal; the mobile terminal then performs voice recognition on it to obtain the user's instruction. Before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent, quick, and simple, better meets the user's needs, and improves the user experience.
FIG. 2 is a flowchart of a human-computer interaction method according to an embodiment of the present invention.

In this embodiment, the mobile terminal applies voice recognition technology to the voice information without the user performing any operation on the mobile terminal, realizing intelligent human-computer interaction and making operation quick and simple for the user, improving the user experience. Specifically, as shown in FIG. 2, the human-computer interaction method according to the embodiment includes:

S201: The wearable device receives and records the user's voice information.

In the embodiments of the present invention, everyday personal accessories are given an intelligent design, and a device developed in this way that can be worn is called a wearable device, for example the known smart glasses, smart earphones, smart wristbands, smart wallets, or smart buttons. The wearable device is a brand-new mode of human-computer interaction; because it is easy to carry and accompanies the user everywhere, it can understand the user's specific needs more accurately and improve the user experience. The wearable device and the mobile terminal may communicate in a wired or wireless manner; for example, the wearable device may communicate with the mobile terminal via WiFi (wireless fidelity) or Bluetooth, or through an audio jack on the mobile terminal.

In the embodiments of the present invention, the user may actuate a trigger of the wearable device so that the wearable device receives and records the user's voice information and, after establishing communication with the mobile terminal, sends it to the mobile terminal. The triggering modes are specifically as follows: the trigger may be a button or switch on the wearable device, which the user actuates to trigger the wearable device; triggering may also be performed by the trigger detecting the user's preset behavior or a preset voice instruction, for example actions such as the user nodding, waving a hand, or kicking, or fixed voice information spoken by the user, such as "start recording". The wearable device may also be triggered by the trigger detecting a change in the surrounding temperature or magnetic field: for example, when the user passes through a security gate for a security check, the trigger detects the change in the magnetic field, automatically triggers the wearable device, and the user's identity is judged by recognizing the user's voiceprint; as another example, when the user places a finger on the trigger and a preset temperature is reached, the wearable device is triggered automatically. An infrared trigger signal may also be received through the trigger to trigger the wearable device; and the trigger may further be a camera on the wearable device, which triggers the wearable device by acquiring and detecting changes in the user's image.

In the embodiments of the present invention, the recorded user voice information may be the voice of the user currently speaking, voice information transmitted to the wearable device by another electronic device, or voice information recorded and later played back by another electronic device.

S202: The wearable device establishes communication with the mobile terminal, so that the voice recognition program in the mobile terminal is activated, and sends the voice information to the mobile terminal. In the embodiments of the present invention, the wearable device may first buffer the voice information and then, after establishing communication with the mobile terminal so that the voice recognition program in it is activated, send the voice information to the mobile terminal, either while recording it or after recording is complete.

S203: The mobile terminal performs voice recognition on the voice information to obtain the user's instruction.

In the embodiments of the present invention, the mobile terminal receives the voice information sent by the wearable device and recognizes it through a voice recognition program to obtain the user's instruction. Recognition may be performed by the mobile terminal's own voice recognition program, or the mobile terminal may send the voice information to a cloud server and invoke the cloud server's voice recognition program, or recognition may be performed in other ways; in short, it suffices that the mobile terminal can obtain the voice recognition result, which is not repeated here.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal obtains the name through voice recognition and can then automatically dial the corresponding phone number. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the mobile terminal obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

With the human-computer interaction method of the embodiments of the present invention, the wearable device first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal; the mobile terminal then performs voice recognition on it to obtain the user's instruction. By applying voice recognition technology, intelligent human-computer interaction is realized, operation is quick and simple for the user, and the user experience is improved.
FIG. 3 is a flowchart of a human-computer interaction method according to a specific embodiment of the present invention.

In this embodiment, before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent; by feeding the execution result back to the wearable device so that the wearable device provides it to the user, operation becomes quicker and simpler, better meeting the user's needs and improving the user experience. Specifically, as shown in FIG. 3, the human-computer interaction method according to the embodiment includes:

S301: The wearable device receives and records the user's voice information.

In the embodiments of the present invention, everyday personal accessories are given an intelligent design, and a device developed in this way that can be worn is called a wearable device, for example the known smart glasses, smart earphones, smart wristbands, smart wallets, or smart buttons. The wearable device is a brand-new mode of human-computer interaction; because it is easy to carry and accompanies the user everywhere, it can understand the user's specific needs more accurately and improve the user experience. The wearable device and the mobile terminal may communicate in a wired or wireless manner; for example, the wearable device may communicate with the mobile terminal via WiFi (wireless fidelity) or Bluetooth, or through an audio jack on the mobile terminal.

In the embodiments of the present invention, the user may actuate a trigger of the wearable device so that the wearable device receives and records the user's voice information and, after establishing communication with the mobile terminal, sends it to the mobile terminal. The triggering modes are specifically as follows: the trigger may be a button or switch on the wearable device, which the user actuates to trigger the wearable device; triggering may also be performed by the trigger detecting the user's preset behavior or a preset voice instruction, for example actions such as the user nodding, waving a hand, or kicking, or fixed voice information spoken by the user, such as "start recording". The wearable device may also be triggered by the trigger detecting a change in the surrounding temperature or magnetic field: for example, when the user passes through a security gate for a security check, the trigger detects the change in the magnetic field, automatically triggers the wearable device, and the user's identity is judged by recognizing the user's voiceprint; as another example, when the user places a finger on the trigger and a preset temperature is reached, the wearable device is triggered automatically. An infrared trigger signal may also be received through the trigger to trigger the wearable device; and the trigger may further be a camera on the wearable device, which triggers the wearable device by acquiring and detecting changes in the user's image.

In the embodiments of the present invention, the recorded user voice information may be the voice of the user currently speaking, voice information transmitted to the wearable device by another electronic device, or voice information recorded and later played back by another electronic device.

S302: The wearable device establishes communication with the mobile terminal and sends the voice information to the mobile terminal.

In the embodiments of the present invention, the wearable device may first buffer the voice information and then, after establishing communication with the mobile terminal, send it to the mobile terminal, either while recording it or after recording is complete.

In the embodiments of the present invention, the wearable device records the user's voice information in real time and, after receiving the voice information, sends an activation instruction that activates the voice recognition program in the mobile terminal. Because the mobile terminal is not yet ready when the wearable device receives the user's voice information, the wearable device sends the activation instruction to the voice recognition program in the mobile terminal after receiving the voice information, thereby activating the mobile terminal and the corresponding application in it.

Specifically, the user may send the activation instruction to the voice recognition program in the mobile terminal by actuating an operation key on the wearable device. After the voice recognition program in the mobile terminal has been activated, the wearable device sends the just-recorded voice information to the mobile terminal for recognition by that program. It will be appreciated that the voice recognition program may also reside on a cloud server, in which case the user has the voice information recognized by sending it to the cloud server.

In the embodiments of the present invention, before the wearable device receives the execution result, the wearable device plays a preset audio signal to prompt the user. During this period, recognition of the voice information may fail, or sending the voice information may be delayed for network reasons, so the wearable device may play a preset audio signal to prompt the user, for example "Voice recognition failed, please try again" or "Recognizing, please wait". During a two-party call, the preset audio signal may also be sent to the other party.

In the embodiments of the present invention, the wearable device itself may also actively recognize a small number of simple voice messages and, during a two-party call, send the corresponding dual-tone multi-frequency (DTMF) signals to the other party. DTMF can be used to output digital signals during a call. For example, when the user dials 10086, the carrier's voice system assists and prompts the user how to proceed: press "1" for Mandarin, "2" for English, or "0" for a human operator. The user can then operate according to the prompt tones of the voice system; pressing a digit key to send an instruction uses DTMF.
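As a side note on DTMF: each key is encoded as a pair of simultaneous tones, which is how a digit pressed during a call (for example "0" for an operator on the 10086 hotline above) is signaled to the other end. A small sketch of the standard frequency table; only the table itself is standard, the usage around it is illustrative:

```python
# Standard DTMF keypad: each key = (low Hz, high Hz) played together
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

print(DTMF["0"])  # (941, 1336): what the earphone would emit for "press 0"
```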
S303: The mobile terminal performs voice recognition on the voice information to obtain the user's instruction.

In the embodiments of the present invention, after the mobile terminal receives the activation instruction sent by the wearable device, it starts to recognize the voice information, thereby obtains the user's instruction, and completes the user's operation.

In the embodiments of the present invention, recognition may be performed by the mobile terminal's own voice recognition program, or the mobile terminal may send the voice information to a cloud server and invoke the cloud server's voice recognition program, or recognition may be performed in other ways; in short, it suffices that the mobile terminal can obtain the voice recognition result, which is not repeated here.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal obtains the name through voice recognition and can then automatically dial the corresponding phone number. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the mobile terminal obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

S304: The mobile terminal generates an execution result according to the instruction and feeds the execution result back to the wearable device, so that the wearable device provides the execution result to the user.

In the embodiments of the present invention, the execution result may be provided to the user by playing voice, or alternatively by displaying a picture or video. For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal obtains the name through voice recognition and then automatically dials the corresponding phone number, or dials on a further voice instruction such as "dial out". The mobile terminal performs the dial-out action according to the voice instruction and then feeds the result of that action back to the wearable device, and the wearable device may announce "the number has been dialed" by voice as feedback to the user.

In the embodiments of the present invention, the wearable device is further configured to shoot according to the user's shooting instruction and send the captured image or video to the mobile terminal. The shooting instruction may be input through an operation button on the wearable device or by voice. In this way the shooting operation is completed within seconds, without the tedious steps of taking out the mobile terminal, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.

With the human-computer interaction method of the embodiments of the present invention, before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent; by feeding the execution result back to the wearable device so that the wearable device provides it to the user, operation becomes quicker and simpler, better meeting the user's needs and improving the user experience.
FIG. 4 is a flowchart of a human-computer interaction method according to another embodiment of the present invention.

In this embodiment, the mobile terminal applies voice recognition technology to the voice information without the user performing any operation on the mobile terminal, realizing intelligent human-computer interaction and making operation quick and simple for the user, improving the user experience. Specifically, as shown in FIG. 4, the human-computer interaction method according to the embodiment includes:

S401: Receive the user's voice information.

In the embodiments of the present invention, everyday personal accessories are given an intelligent design, and a device developed in this way that can be worn is called a wearable device, for example the known smart glasses, smart earphones, smart wristbands, smart wallets, or smart buttons. The wearable device is a brand-new mode of human-computer interaction; because it is easy to carry and accompanies the user everywhere, it can understand the user's specific needs more accurately and improve the user experience. The wearable device and the mobile terminal may communicate in a wired or wireless manner; for example, the wearable device may communicate with the mobile terminal via WiFi (wireless fidelity) or Bluetooth, or through an audio jack on the mobile terminal.

In the embodiments of the present invention, the user may actuate a trigger of the wearable device so that the wearable device receives and records the user's voice information and, after establishing communication with the mobile terminal, sends it to the mobile terminal. The triggering modes are specifically as follows: the trigger may be a button or switch on the wearable device, which the user actuates to trigger the wearable device; triggering may also be performed by the trigger detecting the user's preset behavior or a preset voice instruction, for example actions such as the user nodding, waving a hand, or kicking, or fixed voice information spoken by the user, such as "start recording". The wearable device may also be triggered by the trigger detecting a change in the surrounding temperature or magnetic field: for example, when the user passes through a security gate for a security check, the trigger detects the change in the magnetic field, automatically triggers the wearable device, and the user's identity is judged by recognizing the user's voiceprint; as another example, when the user places a finger on the trigger and a preset temperature is reached, the wearable device is triggered automatically. An infrared trigger signal may also be received through the trigger to trigger the wearable device; and the trigger may further be a camera on the wearable device, which triggers the wearable device by acquiring and detecting changes in the user's image.

S402: Record the voice information. In the embodiments of the present invention, the wearable device can buffer voice information, temporarily recording and storing it.

S403: Establish communication with the mobile terminal and send the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on the voice information to obtain the user's instruction.

In the embodiments of the present invention, after establishing communication with the mobile terminal, the wearable device sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction. The voice information may be sent while it is being recorded, or after recording is complete.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the wearable device receives and records the voice information and sends it to the mobile terminal, so that the mobile terminal obtains the name through voice recognition and can then automatically dial the corresponding phone number. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the wearable device receives and records this voice message and sends it to the mobile terminal, so that the mobile terminal obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice through the wearable device.

With the human-computer interaction method of the embodiments of the present invention, the wearable device first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it and obtains the user's instruction. By applying voice recognition technology, intelligent human-computer interaction is realized, operation is quick and simple for the user, and the user experience is improved.
FIG. 5 is a flowchart of a human-computer interaction method according to another specific embodiment of the present invention.

In this embodiment, the wearable device receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction. Before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent, quick, and simple, better meets the user's needs, and improves the user experience. Specifically, as shown in FIG. 5, the human-computer interaction method according to the embodiment includes:

S501: Receive the user's voice information.

In the embodiments of the present invention, everyday personal accessories are given an intelligent design, and a device developed in this way that can be worn is called a wearable device, for example the known smart glasses, smart earphones, smart wristbands, smart wallets, or smart buttons. The wearable device is a brand-new mode of human-computer interaction; because it is easy to carry and accompanies the user everywhere, it can understand the user's specific needs more accurately and improve the user experience. The wearable device and the mobile terminal may communicate in a wired or wireless manner; for example, the wearable device may communicate with the mobile terminal via WiFi (wireless fidelity) or Bluetooth, or through an audio jack on the mobile terminal.

In the embodiments of the present invention, the user may actuate a trigger of the wearable device so that the wearable device receives and records the user's voice information and, after establishing communication with the mobile terminal, sends it to the mobile terminal. The triggering modes are specifically as follows: the trigger may be a button or switch on the wearable device, which the user actuates to trigger the wearable device; triggering may also be performed by the trigger detecting the user's preset behavior or a preset voice instruction, for example actions such as the user nodding, waving a hand, or kicking, or fixed voice information spoken by the user, such as "start recording". The wearable device may also be triggered by the trigger detecting a change in the surrounding temperature or magnetic field: for example, when the user passes through a security gate for a security check, the trigger detects the change in the magnetic field, automatically triggers the wearable device, and the user's identity is judged by recognizing the user's voiceprint; as another example, when the user places a finger on the trigger and a preset temperature is reached, the wearable device is triggered automatically. An infrared trigger signal may also be received through the trigger to trigger the wearable device; and the trigger may further be a camera on the wearable device, which triggers the wearable device by acquiring and detecting changes in the user's image.

S502: Record the voice information.

In the embodiments of the present invention, the wearable device can buffer voice information, temporarily recording and storing it.

S503: Establish communication with the mobile terminal and send the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on the voice information to obtain the user's instruction.

In the embodiments of the present invention, after establishing communication with the mobile terminal, the wearable device sends the voice information to the mobile terminal, either while recording it or after recording is complete. By actuating an operation key on the wearable device, the user sends an activation instruction to the voice recognition program in the mobile terminal so that the program is activated. Because the mobile terminal is not yet ready when the wearable device receives the user's voice information, the wearable device sends the activation instruction to the voice recognition program in the mobile terminal after receiving the voice information, thereby activating the mobile terminal and the corresponding application in it. Specifically, the wearable device may send the just-recorded voice information to the mobile terminal for recognition by the voice recognition program. It will be appreciated that the voice recognition program may also reside on a cloud server, in which case the user has the voice information recognized by sending it to the cloud server.

In the embodiments of the present invention, before the wearable device receives the execution result, the wearable device plays a preset audio signal to prompt the user. During this period, recognition of the voice information may fail, or sending the voice information may be delayed for network reasons, so the wearable device may play a preset audio signal to prompt the user, for example "Voice recognition failed, please try again" or "Recognizing, please wait". During a two-party call, the preset audio signal may also be sent to the other party.

In the embodiments of the present invention, the wearable device itself may also actively recognize a small number of simple voice messages and, during a two-party call, send the corresponding dual-tone multi-frequency (DTMF) signals to the other party. DTMF can be used to output digital signals during a call. For example, when the user dials 10086, the carrier's voice system assists and prompts the user how to proceed: press "1" for Mandarin, "2" for English, or "0" for a human operator. The user can then operate according to the prompt tones of the voice system; pressing a digit key to send an instruction uses DTMF.

S504: Receive the execution result generated by the mobile terminal according to the instruction.

In the embodiments of the present invention, after the mobile terminal receives the activation instruction sent by the wearable device, it starts to recognize the voice information, thereby obtains the user's instruction, and generates an execution result according to the instruction.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal obtains the name through voice recognition and can then automatically dial the corresponding phone number. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the mobile terminal obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

S505: Provide the execution result to the user.

In the embodiments of the present invention, the execution result may be provided to the user by playing voice, or alternatively by displaying a picture or video.

In the embodiments of the present invention, the wearable device is further configured to shoot according to the user's shooting instruction and send the captured image or video to the mobile terminal. The shooting instruction may be input through an operation button on the wearable device or by voice. In this way the shooting operation is completed within seconds, without the tedious steps of taking out the mobile terminal, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.

With the human-computer interaction method of the embodiments of the present invention, the wearable device receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction. Before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent, quick, and simple, better meets the user's needs, and improves the user experience.
FIG. 6 is a schematic structural diagram of an apparatus for human-computer interaction according to an embodiment of the present invention.

As shown in FIG. 6, the apparatus for human-computer interaction according to the embodiment includes a wearable device 100, which specifically includes: a trigger 110, a microphone and voice processor 120, a memory 130, a communicator 140, and a controller 150.

In the embodiments of the present invention, everyday personal accessories are given an intelligent design, and a device developed in this way that can be worn is called a wearable device 100, for example the known smart glasses, smart earphones, smart wristbands, smart wallets, or smart buttons. The wearable device 100 is a brand-new mode of human-computer interaction; because it is easy to carry and accompanies the user everywhere, it can understand the user's specific needs more accurately and improve the user experience.

In the embodiments of the present invention, the user may actuate the trigger 110 of the wearable device 100 so that the microphone and voice processor 120 receive and record the user's voice information, which is sent to the mobile terminal 200 after communication with the mobile terminal 200 is established.

Specifically, the triggering modes are as follows: the trigger 110 may be a button or switch on the wearable device 100, which the user actuates to trigger the microphone and voice processor 120; the microphone and voice processor 120 may also be triggered by the trigger 110 detecting the user's preset behavior or a preset voice instruction. For example, the trigger 110 may detect actions such as the user nodding, waving a hand, or kicking; or the trigger 110 may detect fixed voice information spoken by the user, for example the user saying "start recording" triggers the microphone and voice processor 120. The microphone and voice processor 120 may also be triggered by the trigger 110 detecting a change in the surrounding temperature or magnetic field. For example, when the user passes through a security gate for a security check, the trigger 110 detects the change in the magnetic field, automatically triggers the microphone and voice processor 120, and the user's identity is judged by recognizing the user's voiceprint. As another example, when the user places a finger on the trigger 110 and a preset temperature is reached, the microphone and voice processor 120 are triggered automatically; an infrared trigger signal may also be received through the trigger 110 to trigger the microphone and voice processor 120.

The memory 130 is configured to store the voice information.

In the embodiments of the present invention, the memory 130 can buffer voice information, temporarily recording and storing it; the memory 130 can also store picture information, video information, and the like.

The communicator 140 is configured to establish communication with the mobile terminal 200.

In the embodiments of the present invention, the communicator 140 and the mobile terminal 200 may communicate in a wired or wireless manner; for example, the communicator 140 may communicate with the mobile terminal 200 via WiFi (wireless fidelity) or Bluetooth, or through an audio jack on the mobile terminal 200.

The controller 150 is configured to send the voice information to the mobile terminal 200 through the communicator 140, so that the mobile terminal 200 performs voice recognition on the voice information to obtain the user's instruction. The controller 150 can also control other devices in the wearable device 100.

In the embodiments of the present invention, after controlling the communicator 140 to establish communication with the mobile terminal 200, the controller 150 sends the voice information to the mobile terminal 200, so that the mobile terminal 200 performs voice recognition on it to obtain the user's instruction. The voice information may be sent while it is being recorded, or after recording is complete.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal 200 obtains the name through voice recognition and can then automatically dial the corresponding phone number. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the mobile terminal 200 obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

With the apparatus for human-computer interaction of the embodiments of the present invention, the microphone and voice processor first receive and record the user's voice information; after communication with the mobile terminal is established, the controller sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it and obtains the user's instruction. By applying voice recognition technology, intelligent human-computer interaction is realized, operation is quick and simple for the user, and the user experience is improved.
FIG. 7 is a schematic structural diagram of an apparatus for human-computer interaction according to a specific embodiment of the present invention.

As shown in FIG. 7, the apparatus for human-computer interaction according to the embodiment includes: a trigger 110, a microphone and voice processor 120, a memory 130, a communicator 140, a controller 150, and a camera 160.

The controller 150 is further configured to receive, through the communicator 140, an execution result generated by the mobile terminal 200 according to the instruction, and to provide the execution result to the user. The controller 150 can also control other devices, such as the camera 160.

In the embodiments of the present invention, the execution result may be provided to the user by playing voice, or alternatively by displaying a picture or video.

The controller 150 is further configured to send, after the voice information is received, an activation instruction that activates the voice recognition program in the mobile terminal 200. Because the mobile terminal 200 is not yet ready when the controller 150 receives the user's voice information, the controller 150 sends the activation instruction to the voice recognition program in the mobile terminal 200 after receiving the voice information, thereby activating the mobile terminal 200 and the corresponding application in it. The controller 150 can also control other devices in the wearable device 100, for example the camera 160.

The camera 160 is configured to shoot according to the user's shooting instruction and to send the captured image or video to the mobile terminal 200 through the communicator 140.

In the embodiments of the present invention, the camera 160 shoots according to the user's shooting instruction and sends the captured image or video to the mobile terminal 200. The shooting instruction may be input through an operation button on the wearable device 100 or by voice. In this way the shooting operation is completed within seconds, without the tedious steps of taking out the mobile terminal 200, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.

In the embodiments of the present invention, the camera 160 may also be used to trigger the microphone and voice processor 120 by acquiring and detecting changes in the user's image, so that the microphone and voice processor 120 receive and record the user's voice information.

With the apparatus for human-computer interaction of the embodiments of the present invention, the microphone and voice processor receive and record the user's voice information; after communication with the mobile terminal is established, the controller sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction. Before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent, quick, and simple, better meets the user's needs, and improves the user experience.
FIG. 8 is a physical schematic diagram of an apparatus for human-computer interaction according to a specific embodiment of the present invention.

As shown in FIG. 8, the apparatus for human-computer interaction according to the embodiment includes: a smart earphone 100 and a handset 200. The smart earphone 100 includes: a microphone and voice processor 110, a memory 120, a communicator 130, a controller 140, a camera 150, a signal switcher 160, and an earphone audio cable 170. The handset 200 includes an audio jack 210.

The microphone and voice processor 110 are configured to receive and record the user's voice information.

In the embodiments of the present invention, the user may trigger the smart earphone 100 in a number of ways, so that the microphone and voice processor 110 receive and record the user's voice information, which is sent to the handset 200 after communication with the handset 200 is established. Specifically, the user may trigger the smart earphone 100 by actuating a button or switch on it; the smart earphone 100 may also be triggered by detecting the user's preset behavior or a preset voice instruction, for example nodding or waving a hand, or by the user speaking fixed voice information such as "start recording".

The memory 120 is configured to store the voice information.

In the embodiments of the present invention, the memory 120 can buffer voice information, temporarily recording and storing it; the memory 120 can also store picture information, video information, and the like.

The communicator 130 is configured to establish communication with the handset 200.

In the embodiments of the present invention, the communicator 130 and the handset 200 may communicate in a wired or wireless manner; for example, the communicator 130 may communicate with the handset 200 via WiFi (wireless fidelity) or Bluetooth, or by plugging the earphone audio cable 170 into the audio jack 210 on the handset 200. The earphone audio cable 170 can be used by the earphone to play audio or to transmit data, and plugging it into the audio jack 210 on the handset 200 enables communication between the smart earphone 100 and the handset 200.

The controller 140 is configured to send the voice information to the handset 200 through the communicator 130, so that the handset 200 performs voice recognition on the voice information to obtain the user's instruction.

In the embodiments of the present invention, after establishing communication with the handset 200, the controller 140 sends the voice information to the handset 200, so that the handset 200 performs voice recognition on it to obtain the user's instruction. The voice information may be sent while it is being recorded, or after recording is complete.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book of the handset 200; the handset 200 obtains the name through voice recognition and can then automatically dial the corresponding phone number. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the handset 200 obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

The controller 140 is further configured to receive, through the communicator 130, an execution result generated by the handset 200 according to the instruction, and to provide the execution result to the user.

In the embodiments of the present invention, the execution result may be provided to the user by playing voice, or alternatively by displaying a picture or video.

The controller 140 is further configured to send, after receiving the voice information, an activation instruction that activates the voice recognition program in the handset 200. Because the handset 200 is not yet ready when the controller 140 receives the user's voice information, the controller 140 sends the activation instruction to the voice recognition program in the handset 200 after receiving the voice information, thereby activating the handset 200 and the corresponding application in it.

The controller 140 can also control other devices, such as the camera 150.

The camera 150 is configured to shoot according to the user's shooting instruction and to send the captured image or video to the handset 200 through the communicator 130.

In the embodiments of the present invention, the camera 150 shoots according to the user's shooting instruction and sends the captured image or video to the handset 200. The shooting instruction may be input through an operation button on the smart earphone 100 or by voice. In this way the shooting operation is completed within seconds, without the tedious steps of taking out the handset 200, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.

In the embodiments of the present invention, the camera 150 may also acquire and detect changes in the user's image to trigger the microphone and voice processor 110, so that the microphone and voice processor 110 receive and record the user's voice information.

The signal switcher 160 is used for switching among multiple signal lines. For example, while the user is listening to a song, an audio signal line is occupied; if a call comes in, the signal switcher 160 can switch the audio signal line to the call signal line so that the user can talk with the caller, and when the call ends it switches the call signal line back to the audio signal line playing the song.

With the apparatus for human-computer interaction of the embodiments of the present invention, the microphone and voice processor of the smart earphone receive and record the user's voice information; after communication with the handset is established, the controller sends the voice information to the handset, so that the handset performs voice recognition on it to obtain the user's instruction. Before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent, quick, and simple, better meets the user's needs, and improves the user experience.

To implement the above embodiments, the present invention further provides a wearable device.

A wearable device comprises the apparatus for human-computer interaction shown in any of FIG. 6 and FIG. 7.

With the wearable device of the embodiments of the present invention, the wearable device first receives and records the user's voice information and, after establishing communication with the mobile terminal, sends the voice information to the mobile terminal, so that the mobile terminal performs voice recognition on it to obtain the user's instruction. Before the execution result is obtained, the wearable device plays a preset audio signal to the user, which makes human-computer interaction more intelligent, quick, and simple, better meets the user's needs, and improves the user experience.
FIG. 9 is a flowchart of a human-computer interaction method according to yet another embodiment of the present invention.

In this embodiment, an activation instruction from the wearable device is received, the voice recognition program is started according to the activation instruction, the user's voice information sent by the wearable device is received, and the voice information is recognized through the voice recognition program to obtain the user's instruction. By applying voice recognition technology, operation during human-computer interaction becomes quick and easy for the user, improving the user experience. Specifically, as shown in FIG. 9, the human-computer interaction method according to the embodiment includes:

S901: Receive an activation instruction from the wearable device.

In the embodiments of the present invention, the activation instruction is used to activate the voice recognition program in the mobile terminal or in a cloud server. Because the mobile terminal is not yet ready when the user's voice information is being received, after the voice information has been received the mobile terminal and the corresponding application in it are activated by receiving the activation instruction for the voice recognition program.

S902: Start the voice recognition program according to the activation instruction.

In the embodiments of the present invention, the voice recognition program may be pre-installed in the mobile terminal or in a cloud server, so that the voice information is recognized through voice recognition technology.

S903: Receive the user's voice information sent by the wearable device, and recognize the voice information through the voice recognition program to obtain the user's instruction.

In the embodiments of the present invention, after establishing communication with the wearable device, the mobile terminal receives the user's voice information sent by the wearable device and performs voice recognition on it to obtain the user's instruction.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal obtains the name through voice recognition and can then automatically dial the corresponding phone number, or dial on a further voice instruction from the user such as "dial out". As another example, when the user wants the current time, the user may speak a "time report" instruction; the mobile terminal obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

With the human-computer interaction method of the embodiments of the present invention, an activation instruction from the wearable device is received, the voice recognition program is started according to the activation instruction, the user's voice information sent by the wearable device is received, and the voice information is recognized through the voice recognition program to obtain the user's instruction. By applying voice recognition technology, operation during human-computer interaction becomes quick and easy for the user, improving the user experience.
FIG. 10 is a flowchart of a human-computer interaction method according to yet another specific embodiment of the present invention.

In this embodiment, an activation instruction from the wearable device is received, the voice recognition program is started according to the activation instruction, the user's voice information sent by the wearable device is received, and the voice information is recognized through the voice recognition program to obtain the user's instruction; the execution result is fed back to the wearable device so that the wearable device provides it to the user, making operation during human-computer interaction quicker, simpler, and more convenient, better meeting the user's needs, and improving the user experience. Specifically, as shown in FIG. 10, the human-computer interaction method according to the embodiment includes:

S1001: Receive an activation instruction from the wearable device.

In the embodiments of the present invention, the activation instruction is used to activate the voice recognition program in the mobile terminal or in a cloud server. Because the mobile terminal is not yet ready when the user's voice information is being received, after the voice information has been received the mobile terminal and the corresponding application in it are activated by receiving the activation instruction for the voice recognition program.

S1002: Start the voice recognition program according to the activation instruction.

In the embodiments of the present invention, the voice recognition program may be pre-installed in the mobile terminal or in a cloud server, so that the voice information is recognized through voice recognition technology.

S1003: Receive the user's voice information sent by the wearable device, and recognize the voice information through the voice recognition program to obtain the user's instruction.

In the embodiments of the present invention, after establishing communication with the wearable device, the mobile terminal receives the user's voice information sent by the wearable device and performs voice recognition on it to obtain the user's instruction.

S1004: Generate an execution result according to the instruction.

In the embodiments of the present invention, after the mobile terminal receives the activation instruction sent by the wearable device, it starts to recognize the voice information, thereby obtains the user's instruction, and generates an execution result according to the instruction.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal obtains the name through voice recognition and can then automatically dial the corresponding phone number. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the mobile terminal obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

S1005: Feed the execution result back to the wearable device, and provide it to the user through the wearable device.

In the embodiments of the present invention, the execution result may be provided to the user by playing voice, or alternatively by displaying a picture or video.

In the embodiments of the present invention, the mobile terminal is further configured to receive the image or video captured by the wearable device. The shooting instruction may be input through an operation button on the wearable device or by voice. In this way the shooting operation is completed within seconds, without the tedious steps of taking out the mobile terminal, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.

With the human-computer interaction method of the embodiments of the present invention, the mobile terminal's voice recognition program recognizes the voice information to obtain the user's instruction and feeds the execution result back to the wearable device, so that the wearable device provides the execution result to the user; this makes operation during human-computer interaction quicker, simpler, and more convenient, better meets the user's needs, and improves the user experience.
FIG. 11 is a schematic structural diagram of an apparatus for human-computer interaction according to yet another embodiment of the present invention.

As shown in FIG. 11, the apparatus for human-computer interaction according to the embodiment includes: a first receiving module 210, a starting module 220, and a second receiving module 230.

The first receiving module 210 is configured to receive an activation instruction from the wearable device.

In the embodiments of the present invention, the activation instruction is used to activate the voice recognition program in the mobile terminal 200 or in a cloud server. Because the mobile terminal 200 is not yet ready when the user's voice information is being received, after the voice information has been received the first receiving module 210 receives the activation instruction for the voice recognition program, thereby activating the mobile terminal 200 and the corresponding application in it.

The starting module 220 is configured to start the voice recognition program according to the activation instruction.

In the embodiments of the present invention, the voice recognition program is a program pre-installed in the mobile terminal 200 or in a cloud server. At present, many applications include voice recognition features, for example map software, time-report software, and dialing software.

The second receiving module 230 is configured to receive the user's voice information sent by the wearable device 100 and to recognize the voice information through the voice recognition program to obtain the user's instruction.

In the embodiments of the present invention, after establishing communication with the wearable device 100, the mobile terminal 200 receives the user's voice information sent by the wearable device 100 and performs voice recognition on it to obtain the user's instruction.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal 200 obtains the name through voice recognition and can then automatically dial the corresponding phone number. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the mobile terminal 200 obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

With the apparatus for human-computer interaction of the embodiments of the present invention, the first receiving module receives the activation instruction from the wearable device, the voice recognition program is started according to the activation instruction, and the second receiving module receives the user's voice information sent by the wearable device and recognizes it through the voice recognition program to obtain the user's instruction. By applying voice recognition technology, operation during human-computer interaction becomes quick and easy for the user, improving the user experience.
FIG. 12 is a schematic structural diagram of an apparatus for human-computer interaction according to yet another specific embodiment of the present invention.

As shown in FIG. 12, the apparatus for human-computer interaction according to the embodiment includes: a first receiving module 210, a starting module 220, a second receiving module 230, an execution module 240, and a feedback module 250.

The execution module 240 is configured to generate an execution result according to the instruction.

In the embodiments of the present invention, after the mobile terminal 200 receives the activation instruction sent by the wearable device 100, it starts to recognize the voice information, thereby obtains the user's instruction, and generates an execution result according to the instruction.

For example, when the user wants to voice-dial, the user may speak a name already stored in the address book; the mobile terminal 200 obtains the name through voice recognition and can then automatically dial the corresponding phone number. Voice dialing may of course also be completed on a further voice instruction from the user, such as "dial out".

As another example, when the user wants the current time, the user may speak a "time report" instruction; the mobile terminal 200 obtains the instruction through voice recognition, gives the corresponding feedback, and announces the current time by voice.

The feedback module 250 is configured to feed the execution result back to the wearable device 100 and to provide it to the user through the wearable device 100. In the embodiments of the present invention, the feedback module 250 may feed the execution result back to the wearable device 100, and the result may be provided to the user through the wearable device 100 by playing voice, or alternatively by displaying a picture or video.

The second receiving module 230 is further configured to receive the image or video captured by the user and sent by the wearable device 100.

In the embodiments of the present invention, the second receiving module 230 is further configured to receive the image or video captured by the wearable device 100. The shooting instruction may be input through an operation button on the wearable device 100 or by voice. In this way the shooting operation is completed within seconds, without the tedious steps of taking out the mobile terminal 200, unlocking the screen, and opening a camera application; this not only saves operation steps but also captures scenes that are usually missed, such as scenes passing by at high speed.

With the apparatus for human-computer interaction of the embodiments of the present invention, the second receiving module recognizes the voice information through the voice recognition program to obtain the user's instruction, and the execution result is fed back to the wearable device so that the wearable device provides it to the user; this makes operation during human-computer interaction quicker, simpler, and more convenient, better meets the user's needs, and improves the user experience.

To implement the above embodiments, the present invention further provides a mobile terminal.

A mobile terminal comprises the apparatus for human-computer interaction shown in any one of FIG. 11 and FIG. 12.

With the mobile terminal of the embodiments of the present invention, the mobile terminal receives an activation instruction from the wearable device, starts the voice recognition program according to the activation instruction, receives the user's voice information sent by the wearable device, and recognizes the voice information through the voice recognition program to obtain the user's instruction; the execution result is fed back to the wearable device so that the wearable device provides it to the user, making operation during human-computer interaction quicker, simpler, and more convenient, better meeting the user's needs, and improving the user experience.
The mobile terminal may be, for example, a hardware device having any of various operating systems, such as a mobile phone, a tablet computer, a personal digital assistant, or an e-book reader.

Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process; and the scope of the preferred embodiments of the present invention includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.

The logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from the instruction execution system, apparatus, or device). For the purposes of this specification, a "computer readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, such an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), fiber optic devices, and portable compact disc read-only memory (CD-ROM). The computer readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable way if necessary, and then stored in computer memory.

It should be understood that the parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one of the following techniques well known in the art, or a combination of them, can be used: discrete logic circuits having logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGA), field programmable gate arrays (FPGA), and so on.

Those of ordinary skill in the art will understand that all or part of the steps carried by the method of the above embodiments can be completed by instructing the relevant hardware through a program; the program may be stored in a computer readable storage medium and, when executed, includes one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as an independent product, may also be stored in a computer readable storage medium.

The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.

In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples.

Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principles and spirit of the present invention; the scope of the present invention is defined by the claims and their equivalents.

Claims

权利要求书
1、 一种人机交互系统, 其特征在于, 包括穿戴式设备和移动终端,
所述穿戴式设备, 用于接收并记录用户的语音信息, 并在与所述移动终端建立通信 之后, 将所述语音信息发送至所述移动终端; 以及
所述移动终端, 用于对所述语言信息进行语音识别以获取所述用户的指令。
2、 如权利要求 1 所述的人机交互系统, 其特征在于, 所述穿戴式设备实时记录所 述用户的语音信息, 并在接收到所述语音信息之后发送激活所述移动终端中的语音识别 程序的激活指令。
3、 如权利要求 1 所述的人机交互系统, 其特征在于, 所述移动终端根据所述指令 生成执行结果, 并将所述执行结果反馈至所述穿戴式设备, 以使所述穿戴式设备将所述 执行结果提供至所述用户。
4、 如权利要求 3所述的人机交互系统, 其特征在于, 在所述穿戴式设备接收到所 述执行结果之前, 所述穿戴式设备向所述用户播放预设音频信号。
5、 如权利要求 1-4任一项所述的人机交互系统, 其特征在于, 所述穿戴式设备还 用于根据用户的拍摄指令进行拍摄, 并将拍摄的图像或视频发送至所述移动终端。
6、 如权利要求 5所述的人机交互系统, 其特征在于, 所述拍摄指令通过所述穿戴 式设备之上的操作按钮输入, 或者通过语音输入。
7、 如权利要求 1-4任一项所述的人机交互系统, 其特征在于, 所述穿戴式设备为 智能眼镜、 智能耳机、 智能手环、 智能钱包或智能纽扣。
8、 如权利要求 1-4任一项所述的人机交互系统, 其特征在于, 所述穿戴式设备通 过 WiFi 或蓝牙与所述移动终端通信, 或者所述穿戴式设备通过所述移动终端上的音频 插孔与所述移动终端通信。
9、 如权利要求 1所述的人机交互系统, 其特征在于,
通过触发所述穿戴式设备之上的按钮或开关, 使得所述穿戴式设备接收并记录用户 的语音信息;
或者, 通过检测所述用户的预设行为或预设语音指令, 使得所述穿戴式设备接收并 记录用户的语音信息;
或者, 通过检测所述穿戴式设备的温度或磁场的变化, 使得所述穿戴式设备接收并 记录用户的语音信息;
或者, 通过向所述穿戴式设备发射红外触发信号, 使得所述穿戴式设备接收并记录 用户的语音信息;
或者, 通过所述穿戴式设备上的摄像头获取并检测所述用户图像的变化, 使得所述 穿戴式设备接收并记录用户的语音信息。
10、 一种人机交互方法, 其特征在于, 包括以下步骤:
穿戴式设备接收并记录用户的语音信息;
所述穿戴式设备建立与移动终端的通信, 并将所述语音信息发送至所述移动终端; 以及
所述移动终端对所述语言信息进行语音识别以获取所述用户的指令。
11、 如权利要求 10所述的人机交互方法, 其特征在于, 所述穿戴式设备实时记录 所述用户的语音信息, 并在接收到所述语音信息之后发送激活所述移动终端中的语音识 别程序的激活指令。
12、 如权利要求 10所述的人机交互方法, 其特征在于, 还包括:
所述移动终端根据所述指令生成执行结果, 并将所述执行结果反馈至所述穿戴式设 备, 以使所述穿戴式设备将所述执行结果提供至所述用户。
13、 如权利要求 12所述的人机交互方法, 其特征在于, 在所述移动终端将所述执 行结果反馈至所述穿戴式设备之前, 还包括:
所述穿戴式设备向所述用户播放预设音频信号。
14、 如权利要求 10-13任一项所述的人机交互方法, 其特征在于, 还包括: 所述穿戴式设备根据用户的拍摄指令进行拍摄, 并将拍摄的图像或视频发送至所述 移动终端。
15、 如权利要求 14所述的人机交互方法, 其特征在于, 所述拍摄指令通过所述穿 戴式设备之上的操作按钮输入, 或者通过语音输入。
16、 如权利要求 10-13任一项所述的人机交互方法, 其特征在于, 所述穿戴式设备 为智能眼镜、 智能耳机、 智能手环、 智能钱包或智能纽扣。
17、 如权利要求 10-13任一项所述的人机交互方法, 其特征在于, 所述穿戴式设备 通过 WiFi 或蓝牙与所述移动终端通信, 或者所述穿戴式设备通过所述移动终端之上的 音频插孔与所述移动终端通信。
18、 如权利要求 10所述的人机交互方法, 其特征在于,
通过触发所述穿戴式设备之上的按钮或开关, 使得所述穿戴式设备接收并记录用户 的语音信息;
或者, 通过检测所述用户的预设行为或预设语音指令, 使得所述穿戴式设备接收并 记录用户的语音信息;
或者, 通过检测所述穿戴式设备的温度或磁场的变化, 使得所述穿戴式设备接收并 记录用户的语音信息;
或者, 通过向所述穿戴式设备发射红外触发信号, 使得所述穿戴式设备接收并记录 用户的语音信息;
或者, 通过所述穿戴式设备上的摄像头获取并检测所述用户图像的变化, 使得所述 穿戴式设备接收并记录用户的语音信息。
19. A human-computer interaction method, characterized by comprising the following steps:
receiving voice information of a user;
recording the voice information; and
establishing communication with a mobile terminal, and sending the voice information to the mobile terminal, so that the mobile terminal performs speech recognition on the voice information to obtain an instruction of the user.
20. The human-computer interaction method according to claim 19, characterized by further comprising:
receiving an execution result generated by the mobile terminal according to the instruction; and
providing the execution result to the user.
21. The human-computer interaction method according to claim 19, characterized in that the wearable device records the user's voice information in real time and activates the speech recognition program in the mobile terminal after receiving the voice information.
22. The human-computer interaction method according to claim 20, characterized in that, before the receiving of the execution result generated by the mobile terminal according to the instruction, the method further comprises:
playing, by the wearable device, a preset audio signal to the user.
23. The human-computer interaction method according to any one of claims 19-22, characterized by further comprising: shooting according to a shooting instruction of the user, and sending the captured image or video to the mobile terminal.
24. The human-computer interaction method according to claim 23, characterized in that the shooting instruction is input through an operation button on the wearable device, or is input by voice.
25. The human-computer interaction method according to claim 19, characterized in that:
the wearable device is caused to receive and record the user's voice information by triggering a button or a switch on the wearable device;
or the wearable device is caused to receive and record the user's voice information by detecting a preset behavior or a preset voice instruction of the user;
or the wearable device is caused to receive and record the user's voice information by detecting a change in the temperature of, or the magnetic field at, the wearable device;
or the wearable device is caused to receive and record the user's voice information by transmitting an infrared trigger signal to the wearable device;
or the wearable device is caused to receive and record the user's voice information by acquiring and detecting, through a camera on the wearable device, a change in an image of the user.
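Read on their own, claims 19-20 describe the wearable side as a straight-line sequence: receive, record, connect, send, then accept the result. A minimal sketch of that sequence, assuming a callable transport in place of the real communication link (all names are illustrative assumptions):

    def wearable_session(voice_chunks, send_to_terminal):
        recording = bytearray()
        for chunk in voice_chunks:          # receive the user's voice information
            recording.extend(chunk)         # record the voice information
        # establishing communication is folded into send_to_terminal in this sketch
        return send_to_terminal(bytes(recording))  # terminal recognizes and returns a result

    result = wearable_session([b"open ", b"camera"],
                              lambda voice: "executed: " + voice.decode("utf-8"))
    assert result == "executed: open camera"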
26. An apparatus for human-computer interaction, characterized by comprising:
a trigger configured to trigger a microphone and voice processor, so that the microphone and voice processor receives and records voice information of a user;
the microphone and voice processor, configured to receive and record the user's voice information;
a memory configured to store the voice information;
a communicator configured to establish communication with a mobile terminal; and
a controller configured to send the voice information to the mobile terminal through the communicator, so that the mobile terminal performs speech recognition on the voice information to obtain an instruction of the user.
27. The apparatus for human-computer interaction according to claim 26, characterized in that the controller is further configured to receive, through the communicator, an execution result generated by the mobile terminal according to the instruction, and to provide the execution result to the user.
28. The apparatus for human-computer interaction according to claim 26, characterized in that the controller is further configured to send, after the voice information is received, an activation instruction for activating a speech recognition program in the mobile terminal.
29. The apparatus for human-computer interaction according to claim 26, characterized by further comprising:
a camera configured to shoot according to a shooting instruction of the user, and to send the captured image or video to the mobile terminal through the communicator.
30. The apparatus for human-computer interaction according to claim 29, characterized in that the shooting instruction is input through an operation button on the wearable device, or is input by voice.
31. The apparatus for human-computer interaction according to claim 26, characterized in that the communicator is a WiFi or Bluetooth interface and/or an audio interface.
32. The apparatus for human-computer interaction according to claim 26, characterized in that the triggering manner of the trigger includes:
causing the microphone and voice processor to receive and record the user's voice information by triggering a button or a switch on the wearable device;
or causing the microphone and voice processor to receive and record the user's voice information by detecting a preset behavior or a preset voice instruction of the user;
or causing the microphone and voice processor to receive and record the user's voice information by detecting a change in the temperature of, or the magnetic field at, the wearable device;
or causing the microphone and voice processor to receive and record the user's voice information by transmitting an infrared trigger signal to the wearable device;
or causing the microphone and voice processor to receive and record the user's voice information by acquiring and detecting, through the camera, a change in an image of the user.
33. A wearable device, characterized by comprising the apparatus for human-computer interaction according to any one of claims 26-32.
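The apparatus of claim 26 decomposes into five cooperating parts: trigger, microphone and voice processor, memory, communicator, and controller. The following is a minimal component sketch of how the controller could drain the memory through the communicator; every class here is an illustrative stand-in under assumed names, not the patented hardware.

    class Communicator:
        # Stand-in for claim 31: a WiFi or Bluetooth interface and/or an audio interface.
        def __init__(self):
            self.outbox = []

        def send(self, payload):
            self.outbox.append(payload)

    class Controller:
        # Claim 26: forwards the stored voice information through the communicator.
        def __init__(self, memory, communicator):
            self.memory = memory
            self.communicator = communicator

        def flush(self):
            while self.memory:
                self.communicator.send(self.memory.pop(0))

    memory = []                       # the memory stores the voice information
    memory.append(b"voice frame 1")   # written by the microphone and voice processor
    comm = Communicator()
    Controller(memory, comm).flush()
    assert comm.outbox == [b"voice frame 1"]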
34. A human-computer interaction method, characterized by comprising the following steps:
receiving an activation instruction from a wearable device;
starting a speech recognition program according to the activation instruction; and
receiving voice information of a user sent by the wearable device, and recognizing the voice information through the speech recognition program to obtain an instruction of the user.
35. The human-computer interaction method according to claim 34, characterized by further comprising:
generating an execution result according to the instruction; and
feeding the execution result back to the wearable device, and providing the execution result to the user through the wearable device.
36. The human-computer interaction method according to claim 34, characterized by further comprising:
receiving an image or video, captured by the user, sent by the wearable device.
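On the mobile-terminal side, claims 34-35 reduce to three steps plus feedback: accept the activation instruction, start the recognition program, recognize the voice, and return the execution result. A hedged one-function sketch, with a trivial decoder standing in for the speech recognition program and all names assumed:

    def terminal_session(activation_received, voice):
        if not activation_received:            # step 1: receive the activation instruction
            raise RuntimeError("no activation instruction received")
        # step 2: start the speech recognition program (modeled by the decode below)
        instruction = voice.decode("utf-8")    # step 3: recognize the voice to obtain the instruction
        return "executed: " + instruction      # claim 35: execution result to feed back to the wearable

    assert terminal_session(True, b"play music") == "executed: play music"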
37. An apparatus for human-computer interaction, characterized by comprising:
a first receiving module configured to receive an activation instruction from a wearable device;
a starting module configured to start a speech recognition program according to the activation instruction; and
a second receiving module configured to receive voice information of a user sent by the wearable device, and to recognize the voice information through the speech recognition program to obtain an instruction of the user.
38. The apparatus for human-computer interaction according to claim 37, characterized by further comprising:
an execution module configured to generate an execution result according to the instruction; and
a feedback module configured to feed the execution result back to the wearable device and to provide the execution result to the user through the wearable device.
39. The apparatus for human-computer interaction according to claim 37, characterized in that the second receiving module is further configured to receive an image or video, captured by the user, sent by the wearable device.
40. A mobile terminal, characterized by comprising the apparatus for human-computer interaction according to any one of claims 37-39.
PCT/CN2013/088813 2013-11-07 2013-12-06 人机交互系统、方法及其装置 WO2015066949A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310552213.0A CN103558916A (zh) 2013-11-07 2013-11-07 人机交互系统、方法及其装置
CN201310552213.0 2013-11-07

Publications (1)

Publication Number Publication Date
WO2015066949A1 true WO2015066949A1 (zh) 2015-05-14

Family

ID=50013193

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2013/088813 WO2015066949A1 (zh) 2013-11-07 2013-12-06 人机交互系统、方法及其装置

Country Status (2)

Country Link
CN (1) CN103558916A (zh)
WO (1) WO2015066949A1 (zh)

Families Citing this family (64)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9002322B2 (en) 2011-09-29 2015-04-07 Apple Inc. Authentication with secondary approver
WO2014143776A2 (en) 2013-03-15 2014-09-18 Bodhi Technology Ventures Llc Providing remote interactions with host device using a wireless device
CN103928025B (zh) 2014-04-08 2017-06-27 华为技术有限公司 一种语音识别的方法及移动终端
CN105022568B (zh) * 2014-04-30 2018-08-24 青岛北电高科技有限公司 基于智能手表的敲击式人机交互方法和系统
US20150350146A1 (en) 2014-05-29 2015-12-03 Apple Inc. Coordination of message alert presentations across devices based on device modes
KR102201095B1 (ko) * 2014-05-30 2021-01-08 애플 인크. 하나의 디바이스의 사용으로부터 다른 디바이스의 사용으로의 전환
KR102223728B1 (ko) * 2014-06-20 2021-03-05 엘지전자 주식회사 이동단말기 및 그 제어방법
CN104065882A (zh) * 2014-06-23 2014-09-24 惠州Tcl移动通信有限公司 一种基于智能穿戴设备的移动终端拍照控制方法及其系统
CN118192869A (zh) 2014-06-27 2024-06-14 苹果公司 尺寸减小的用户界面
WO2016014601A2 (en) 2014-07-21 2016-01-28 Apple Inc. Remote user interface
CN104091188B (zh) * 2014-07-31 2017-06-20 百度在线网络技术(北京)有限公司 穿戴式设备和智能卡系统
KR102511376B1 (ko) 2014-08-02 2023-03-17 애플 인크. 상황 특정 사용자 인터페이스
US10339293B2 (en) 2014-08-15 2019-07-02 Apple Inc. Authenticated device used to unlock another device
EP3189406B1 (en) 2014-09-02 2022-09-07 Apple Inc. Phone user interface
CN104363544B (zh) * 2014-10-15 2017-10-27 深圳市学立佳教育科技有限公司 Android环境下利用音频接口启动app的外部装置
KR101855392B1 (ko) * 2014-11-05 2018-05-08 전자부품연구원 모듈형 기능 블록을 포함하는 웨어러블 장치 및 이를 이용한 웨어러블 장치의 기능 확장 방법
CN104461290B (zh) * 2014-11-28 2020-07-03 Oppo广东移动通信有限公司 一种拍照控制方法及设备
CN104505091B (zh) * 2014-12-26 2018-08-21 湖南华凯文化创意股份有限公司 人机语音交互方法及系统
CN104683576B (zh) * 2015-02-13 2017-08-25 广东欧珀移动通信有限公司 一种控制播放的方法及播放设备
CN104792015A (zh) * 2015-03-17 2015-07-22 芜湖美的厨卫电器制造有限公司 热水器系统及热水器
KR20170014297A (ko) * 2015-07-29 2017-02-08 엘지전자 주식회사 와치 타입의 이동 단말기 및 그 제어 방법
CN105120528B (zh) * 2015-08-14 2019-06-11 北京奇虎科技有限公司 一种设备间进行配置性设置的方法、装置及系统
CN105093957A (zh) * 2015-08-31 2015-11-25 成都科创城科技有限公司 一种采用手环及智能家居红外模块
CN105224082A (zh) * 2015-09-27 2016-01-06 邱少勐 系统故障实时求助报警装置
CN105244025A (zh) * 2015-10-29 2016-01-13 惠州Tcl移动通信有限公司 一种基于智能佩戴设备的语音识别方法及系统
CN105278110B (zh) * 2015-12-01 2019-02-22 王占奎 智能卫星通讯交互眼镜设备
CN105611165B (zh) * 2015-12-29 2018-05-22 北京灏核鑫京科技有限公司 可穿戴拍照机器人
CN105635460B (zh) * 2015-12-30 2019-09-24 北京搜狗科技发展有限公司 一种用于信息输出的控制方法、移动终端及穿戴式设备
CN105516605A (zh) * 2016-01-20 2016-04-20 广东欧珀移动通信有限公司 一种拍摄方法和装置
CN105974586A (zh) * 2016-05-12 2016-09-28 上海擎感智能科技有限公司 一种智能眼镜、智能眼镜的操控方法及操控系统
DK179186B1 (en) 2016-05-19 2018-01-15 Apple Inc REMOTE AUTHORIZATION TO CONTINUE WITH AN ACTION
US10965800B2 (en) 2016-05-20 2021-03-30 Huawei Technologies Co., Ltd. Interaction method in call and device
DK201770423A1 (en) 2016-06-11 2018-01-15 Apple Inc Activity and workout updates
DK201670622A1 (en) 2016-06-12 2018-02-12 Apple Inc User interfaces for transactions
CN106028217B (zh) * 2016-06-20 2020-01-21 咻羞科技(深圳)有限公司 一种基于音频识别技术的智能设备互动系统及方法
CN106656237A (zh) * 2016-12-19 2017-05-10 杭州联络互动信息科技股份有限公司 穿戴式智能设备
CN106788535A (zh) * 2016-12-22 2017-05-31 歌尔科技有限公司 一种基于可穿戴设备的运动监控方法
CN106875192A (zh) * 2017-02-27 2017-06-20 广东小天才科技有限公司 一种基于移动设备的支付方法及移动设备
US10992795B2 (en) 2017-05-16 2021-04-27 Apple Inc. Methods and interfaces for home media control
US11431836B2 (en) 2017-05-02 2022-08-30 Apple Inc. Methods and interfaces for initiating media playback
CN111343060B (zh) 2017-05-16 2022-02-11 苹果公司 用于家庭媒体控制的方法和界面
US20220279063A1 (en) 2017-05-16 2022-09-01 Apple Inc. Methods and interfaces for home media control
CN107349508A (zh) * 2017-06-28 2017-11-17 重庆金康特智能穿戴技术研究院有限公司 一种基于智能穿戴设备的自闭症交互式引导治愈系统
CN107506915B (zh) * 2017-08-14 2021-04-13 广州耐奇电气科技有限公司 一种用于能源管理人机交互方法
CN107393535A (zh) * 2017-08-29 2017-11-24 歌尔科技有限公司 一种开启终端语音识别功能的方法、装置、耳机及终端
CN107908331B (zh) * 2017-11-17 2020-07-03 广东小天才科技有限公司 一种桌面图标的显示控制方法及电子设备
CN109862170B (zh) * 2017-11-30 2021-02-12 Tcl科技集团股份有限公司 一种通信控制的方法、装置和穿戴设备
CN109901698B (zh) * 2017-12-08 2023-08-08 深圳市腾讯计算机系统有限公司 一种智能交互方法、可穿戴设备和终端以及系统
CN110570864A (zh) * 2018-06-06 2019-12-13 上海擎感智能科技有限公司 一种基于云端服务器的通信方法、系统及云端服务器
CN109269483B (zh) * 2018-09-20 2020-12-15 国家体育总局体育科学研究所 一种动作捕捉节点的标定方法、标定系统及标定基站
CN109859762A (zh) * 2019-01-02 2019-06-07 百度在线网络技术(北京)有限公司 语音交互方法、装置和存储介质
CN111477222A (zh) * 2019-01-23 2020-07-31 上海博泰悦臻电子设备制造有限公司 语音控制终端的方法及智能眼镜
US10996917B2 (en) 2019-05-31 2021-05-04 Apple Inc. User interfaces for audio media control
KR20220027295A (ko) 2019-05-31 2022-03-07 애플 인크. 오디오 미디어 제어를 위한 사용자 인터페이스
US11481094B2 (en) 2019-06-01 2022-10-25 Apple Inc. User interfaces for location-related communications
US11477609B2 (en) 2019-06-01 2022-10-18 Apple Inc. User interfaces for location-related communications
CN110364155A (zh) * 2019-07-30 2019-10-22 广东美的制冷设备有限公司 语音控制报错方法、电器及计算机可读存储介质
CN112581949B (zh) * 2019-09-29 2023-09-01 深圳市万普拉斯科技有限公司 设备控制方法、装置、电子设备及可读存储介质
CN111010482A (zh) * 2019-12-13 2020-04-14 上海传英信息技术有限公司 语音寻回方法、无线设备及计算机可读存储介质
CN111540186A (zh) * 2020-04-09 2020-08-14 安克创新科技股份有限公司 对苹果设备进行播放控制的方法、系统和计算机存储介质
US11392291B2 (en) 2020-09-25 2022-07-19 Apple Inc. Methods and interfaces for media control with dynamic feedback
CN112269468A (zh) * 2020-10-23 2021-01-26 深圳市恒必达电子科技有限公司 基于蓝牙、2.4g、wifi连接获取云端资讯的人机交互智能眼镜、方法及其平台
CN113326018A (zh) * 2021-06-04 2021-08-31 上海传英信息技术有限公司 处理方法、终端设备及存储介质
US11847378B2 (en) 2021-06-06 2023-12-19 Apple Inc. User interfaces for audio routing

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3258468B1 (en) * 2008-11-10 2019-08-21 Google LLC Multisensory speech detection
US9122307B2 (en) * 2010-09-20 2015-09-01 Kopin Corporation Advanced remote control of host application using motion and voice commands
CN102609091A (zh) * 2012-02-10 2012-07-25 北京百纳信息技术有限公司 一种移动终端以及启动移动终端语音操作的方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030130852A1 (en) * 2002-01-07 2003-07-10 Kabushiki Kaisha Toshiba Headset with radio communication function for speech processing system using speech recognition
US20070060118A1 (en) * 2005-09-13 2007-03-15 International Business Machines Corporation Centralized voice recognition unit for wireless control of personal mobile electronic devices
CN102138337A (zh) * 2008-08-13 2011-07-27 W·W·格雷林 具有自包含的语音反馈和语音命令的佩戴型头戴式耳机
CN102483915A (zh) * 2009-06-25 2012-05-30 蓝蚁无线股份有限公司 具有包括导引配对和语音触发操作的语音控制功能的电信装置
CN103209246A (zh) * 2012-01-16 2013-07-17 三星电子(中国)研发中心 一种通过蓝牙耳机控制手持设备的方法及手持设备

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107481721A (zh) * 2017-08-16 2017-12-15 北京百度网讯科技有限公司 用于可穿戴电子设备的语音交互方法和可穿戴电子设备
EP4123444A4 (en) * 2020-05-18 2023-11-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. METHOD AND DEVICE FOR PROCESSING VOICE INFORMATION AS WELL AS STORAGE MEDIUM AND ELECTRONIC DEVICE
US12001758B2 (en) 2020-05-18 2024-06-04 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Voice information processing method and electronic device
CN113823283A (zh) * 2021-09-22 2021-12-21 百度在线网络技术(北京)有限公司 信息处理的方法、设备、存储介质及程序产品
CN113823283B (zh) * 2021-09-22 2024-03-08 百度在线网络技术(北京)有限公司 信息处理的方法、设备、存储介质及程序产品

Also Published As

Publication number Publication date
CN103558916A (zh) 2014-02-05

Similar Documents

Publication Publication Date Title
WO2015066949A1 (zh) 人机交互系统、方法及其装置
KR101571993B1 (ko) 음성 통화 방법, 음성 재생 방법, 장치, 프로그램 및 기록매체
CN105204846B (zh) 多人视频中视频画面的显示方法、装置及终端设备
JP6626440B2 (ja) マルチメディアファイルを再生するための方法及び装置
JP6196398B2 (ja) タッチボタン及び指紋認証を実現する装置、方法、端末機器、並びにプログラム及び記録媒体
WO2017036039A1 (zh) 远程协助方法和客户端
CN104539871B (zh) 多媒体通话方法及装置
CN112806067B (zh) 语音切换方法、电子设备及系统
CN104954719B (zh) 一种视频信息处理方法及装置
WO2015157385A1 (en) Method and system for communication
WO2016015403A1 (zh) 一种接入wi-fi网络的方法及装置
WO2016155304A1 (zh) 无线访问接入点的控制方法及装置
CN109600549A (zh) 拍照方法、装置、设备以及存储介质
JP2022537012A (ja) マルチ端末マルチメディアデータ通信方法及びシステム
CN105242942A (zh) 应用控制方法和装置
CN111696553A (zh) 一种语音处理方法、装置及可读介质
CN105448300A (zh) 用于通话的方法及装置
CN105450861A (zh) 信息提示方法及装置
CN104780256A (zh) 通讯录管理方法和装置、智能终端
CN105187154A (zh) 响应包接收延时的方法及装置
WO2022267640A1 (zh) 视频共享方法、电子设备及存储介质
KR20180056757A (ko) 지문 인증 방법, 장치, 프로그램 및 기록매체
CN105100946A (zh) 视频通讯方法及装置
CN105072243A (zh) 来电提示方法和装置
CN103888612B (zh) 呼叫转移方法、装置及终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13896972

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13896972

Country of ref document: EP

Kind code of ref document: A1