CN110838292A - Voice interaction method, electronic equipment and computer storage medium - Google Patents

Voice interaction method, electronic equipment and computer storage medium Download PDF

Info

Publication number
CN110838292A
CN110838292A (application CN201910935370.7A)
Authority
CN
China
Prior art keywords
instruction
target instruction
type
target
interactive
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910935370.7A
Other languages
Chinese (zh)
Inventor
何瑞澄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Midea Group Co Ltd
Guangdong Midea White Goods Technology Innovation Center Co Ltd
Original Assignee
Midea Group Co Ltd
Guangdong Midea White Goods Technology Innovation Center Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Midea Group Co Ltd and Guangdong Midea White Goods Technology Innovation Center Co Ltd
Priority to CN201910935370.7A
Publication of CN110838292A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223: Execution procedure of a spoken command
    • G10L2015/225: Feedback of the input speech

Abstract

The application discloses a voice interaction method, an electronic device, and a computer storage medium. The method includes the following steps: activating a first type of interactive instruction after voice wake-up, wherein the electronic device can execute only interactive instructions that have been activated; in response to acquiring a first target instruction, executing the first target instruction and activating a second type of interactive instruction associated with the first target instruction, the first target instruction being a target instruction within the first type of interactive instruction; and in response to acquiring a first second-type target instruction, executing the first second-type target instruction, a second target instruction being a target instruction within the second type of interactive instruction. By means of this method, the input range of audio information is limited, irrelevant audio information is prevented from obstructing voice interaction, and the interaction experience is improved.

Description

Voice interaction method, electronic equipment and computer storage medium
Technical Field
The present application relates to the field of voice interaction, and in particular, to a voice interaction method, an electronic device, and a computer storage medium.
Background
Under the wave of AI (Artificial Intelligence), intelligent functions have gradually been integrated into all kinds of electronic devices, giving them voice control capability.
The user first wakes the electronic device from its dormant state with a wake-up word; once awakened, the device can respond to and execute the user's other interactive instructions. However, if no further interactive instruction is received within a certain time, the device re-enters the dormant state and must be woken up again; if it does not re-enter the dormant state, it is easily disturbed by extraneous external sounds and is therefore prone to erroneous operation. The user experience of such an interaction method is poor.
Disclosure of Invention
In order to solve the above problems, the present application provides a voice interaction method, an electronic device, and a computer storage medium that limit the input range of audio information, prevent irrelevant audio information from obstructing voice interaction, and improve the interaction experience.
One technical solution adopted by the present application is to provide a voice interaction method for an electronic device, including: activating a first type of interactive instruction after voice wake-up, wherein the electronic device can execute only interactive instructions that have been activated; in response to acquiring a first target instruction, executing the first target instruction and activating a second type of interactive instruction associated with the first target instruction, the first target instruction being a target instruction within the first type of interactive instruction; and in response to acquiring a first second-type target instruction, executing the first second-type target instruction, a second target instruction being a target instruction within the second type of interactive instruction.
Executing the first second-type target instruction includes: activating a third type of interactive instruction associated with the first second-type target instruction; and in response to acquiring a first third-type target instruction, executing the first third-type target instruction, a third target instruction being a target instruction within the third type of interactive instruction.
Activating the first type of interactive instruction after voice wake-up includes: after voice wake-up, acquiring a main function list and activating the first type of interactive instruction associated with the main function list. Executing the first target instruction and activating the second type of interactive instruction associated with it includes: in response to acquiring the first target instruction, acquiring a sub-function list corresponding to the first target instruction and activating the second type of interactive instruction associated with the sub-function list.
Executing the first second-type target instruction includes: acquiring an intention list corresponding to the first second-type target instruction and activating a third type of interactive instruction associated with the intention list; and in response to acquiring a first third-type target instruction, executing the first third-type target instruction.
The method further includes: performing a voice reminder in response to no voice data being acquired within a first preset time period after voice wake-up; or closing voice wake-up in response to no voice data being acquired within a second preset time period after voice wake-up. The starting point of both the first and the second preset time period is the moment of voice wake-up, and the first preset time period is shorter than the second preset time period.
After the first target instruction is executed and the second type of interactive instruction associated with it is activated, the method further includes: in response to acquiring a second first-type target instruction, executing the second first-type target instruction, activating the second type of interactive instruction associated with it, and ending the activation state of the second type of interactive instruction associated with the first target instruction.
Alternatively, after the first target instruction is executed and the second type of interactive instruction associated with it is activated, the method further includes: in response to acquiring a third first-type target instruction, executing the third first-type target instruction and activating the second type of interactive instruction associated with it, so that the second type of interactive instruction associated with the third first-type target instruction and the second type of interactive instruction associated with the first target instruction are in the activated state at the same time.
Further, in response to acquiring a fourth first-type target instruction, the fourth first-type target instruction is executed, the second type of interactive instruction associated with it is activated, and the activation state of the second type of interactive instruction associated with the first target instruction is ended, so that the second type of interactive instruction associated with the third first-type target instruction and the second type of interactive instruction associated with the fourth first-type target instruction are in the activated state at the same time.
Another technical solution adopted by the present application is to provide an electronic device, which includes a processor, and a microphone assembly and a memory connected to the processor; the microphone assembly is for collecting audio signals, the memory is for storing program data, and the processor is for executing the program data to implement any of the methods provided in the schemes above.
Another technical solution adopted by the present application is to provide a computer storage medium, where the computer storage medium is used to store program data, and the program data is used to implement any one of the methods provided in the above aspects when executed by a processor.
The beneficial effects of this application are as follows. Different from the prior art, the present application provides a voice interaction method for an electronic device, including: activating a first type of interactive instruction after voice wake-up, wherein the electronic device can execute only interactive instructions that have been activated; in response to acquiring a first target instruction, executing the first target instruction and activating a second type of interactive instruction associated with it, the first target instruction being a target instruction within the first type of interactive instruction; and in response to acquiring a first second-type target instruction, executing the first second-type target instruction, a second target instruction being a target instruction within the second type of interactive instruction. In this way, on the one hand, associating the first target instruction with a second type of interactive instruction limits the input range of audio information and prevents irrelevant audio information from obstructing voice interaction; on the other hand, when some interactive instructions are associated with further interactive instructions and continuous interaction is needed, the wake-up state is maintained so that the interaction can continue, improving the interaction experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and other drawings can be obtained from them by those of ordinary skill in the art without creative effort. In the drawings:
FIG. 1 is a schematic flowchart of a first embodiment of a voice interaction method of an electronic device provided in the present application;
FIG. 2 is a schematic flowchart of a second embodiment of the voice interaction method of an electronic device provided in the present application;
FIG. 3 is a schematic diagram illustrating the relationship among the first type of interactive instruction, the second type of interactive instruction, and the third type of interactive instruction in the second embodiment of the voice interaction method of an electronic device provided in the present application;
FIG. 4 is a schematic flowchart of a third embodiment of the voice interaction method of an electronic device provided in the present application;
FIG. 5 is a schematic flowchart of a fourth embodiment of the voice interaction method of an electronic device provided in the present application;
FIG. 6 is a schematic flowchart of a fifth embodiment of the voice interaction method of an electronic device provided in the present application;
FIG. 7 is a schematic flowchart of a sixth embodiment of the voice interaction method of an electronic device provided in the present application;
FIG. 8 is a schematic diagram illustrating the relationship between a target instruction in the first type of interactive instruction and the second type of interactive instruction in the sixth embodiment of the voice interaction method of an electronic device provided in the present application;
FIG. 9 is a schematic structural diagram of an embodiment of an electronic device provided in the present application;
FIG. 10 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are merely illustrative of the application and are not limiting of the application. It should be further noted that, for the convenience of description, only some of the structures related to the present application are shown in the drawings, not all of the structures. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic flowchart of a first embodiment of a voice interaction method of an electronic device provided in the present application, where the method includes:
step 11: activating a first type of interactive instruction after voice awakening; wherein the electronic device can execute the activated interaction instruction.
In some embodiments, the electronic device may be a household appliance, such as: refrigerators, air conditioners, televisions, washing machines, and the like; it can also be a mobile terminal, such as: smart phones, tablet computers, vehicle-mounted computers, and the like.
In some embodiments, the voice wake-up mode may be fixed-word wake-up. For example: when the user needs to wake up the mobile phone, the user speaks the fixed phrase 'the mobile phone is powered on', and the mobile phone enters the voice wake-up state.
It is understood that the fixed words can be set according to the actual needs of the user.
In some embodiments, the voice wake-up mode may be audio wake-up; that is, the electronic device enters the voice wake-up state when it detects that a sound emitted by the user has reached a certain frequency. For example: when the frequency of the user's voice reaches 5000 Hz, the electronic device detects the current sound frequency and enters the voice wake-up state.
It is understood that the trigger frequency can likewise be set according to the actual needs of the user.
In some embodiments, the voice wake-up is performed by touching the electronic device, such as touching a screen of the electronic device, to cause the electronic device to enter a voice wake-up state. It is understood that the number of touches is set according to the user's needs, such as a single touch or two touches or even three touches.
In some embodiments, voice wake-up is performed by pressing a key. For example, when the electronic device is a mobile phone, the voice wake-up state is entered by double-clicking or single-clicking a side key. A dedicated key may also be provided on the electronic device for this purpose. An existing key combination may likewise be used, such as pressing the volume key and the power key simultaneously to enter voice wake-up. Voice wake-up may also be triggered by tapping a virtual key on the electronic device, or by a combination of a physical key and a virtual key, for example pressing the volume key together with the virtual home key.
After voice wake-up, the first type of interactive instruction is activated. It is to be understood that activation here does not refer to execution of the first type of interactive instruction, but rather places the first type of interactive instruction in a ready-to-respond state.
In some embodiments, the first type of interactive instruction may be a set of target instructions of the electronic device, with the target instructions in the set grouped by function. For a smart speaker, for example, all target instructions related to playing music are grouped into a music group, and target instructions related to adjusting the system are grouped into a settings group.
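The function grouping described above can be sketched as follows. This is a minimal illustration: the group names and instruction phrases are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of a first-type instruction set grouped by function,
# following the smart-speaker example. All names are illustrative.
FIRST_TYPE_INSTRUCTIONS = {
    "music": {"play music", "pause music", "next song"},
    "settings": {"turn up volume", "set timer", "open settings"},
}

def is_first_target_instruction(utterance: str) -> bool:
    """Return True if the utterance matches an activated first-type instruction."""
    return any(utterance in group for group in FIRST_TYPE_INSTRUCTIONS.values())
```

With such a grouping, only utterances that hit a member of an activated group count as target instructions; anything else is ignored, which is how the method limits the input range of audio information.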
Step 12: and responding to the acquired first target instruction, executing the first target instruction, and activating a second type of interactive instruction associated with the first target instruction, wherein the first target instruction is a target instruction in the first type of interactive instruction.
It will be appreciated that the first type of interactive instruction includes a number of target instructions. For example: after the mobile phone is woken up, Music, Settings, Photo Album, Calendar, and the like in the mobile phone can be understood as application programs that respond to target instructions in the first type of interactive instruction, and the corresponding target instructions may be 'play music', 'open the settings bar', 'open the photo album', 'check the weather', and so on.
In some embodiments, the electronic device acquires the first target instruction and activates the second type of interactive instruction associated with the first target instruction. For example: after the mobile phone is woken up, it acquires the user's audio information; after NLP (Natural Language Processing), the audio information can be attributed to the music category, that is, a first target instruction is obtained, and the second type of interactive instruction associated with music is activated based on the first target instruction. The second type of interactive instruction may include target instructions in the music domain such as 'singer', 'play current music', 'previous', and 'pause'. It should be noted that the first target instruction may be a fixed instruction set in advance, or a matched instruction obtained through natural language understanding via NLP.
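The activation step can be sketched as a small session-state holder. The instruction-to-group mapping below is an assumption made for illustration, not the patent's actual instruction set.

```python
# Illustrative sketch: executing a first target instruction activates the
# second-type interactive instructions associated with it. The mapping and
# all instruction strings are hypothetical.
ASSOCIATED_SECOND_TYPE = {
    "play music": {"volume up", "previous", "pause", "singer"},
    "open album": {"next photo", "delete photo", "share"},
}

class VoiceSession:
    def __init__(self):
        self.active_second_type: set[str] = set()

    def execute_first_target(self, instruction: str) -> None:
        # Execution itself is omitted; here we only model the activation of
        # the associated second-type interactive instructions.
        self.active_second_type = set(ASSOCIATED_SECOND_TYPE.get(instruction, ()))

    def accepts(self, utterance: str) -> bool:
        # Only activated instructions can be executed, limiting the input range.
        return utterance in self.active_second_type
```

A usage sketch: after `execute_first_target("play music")`, the session accepts "volume up" but rejects "next photo", because only the music-related second-type instructions are active.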
Step 13: In response to acquiring a first second-type target instruction, executing the first second-type target instruction, a second target instruction being a target instruction within the second type of interactive instruction.
In some embodiments, when the audio information acquired by the electronic device is a target instruction within the second type of interactive instruction, the electronic device treats that target instruction as the first second-type target instruction and responds to it. For example: the first target instruction acquired by the electronic device is 'play Jay Chou's Won't Cry', and the first second-type target instruction acquired is 'turn the volume up a little'; in response, the electronic device immediately increases the volume of the currently playing music.
It is to be understood that when the first second-type target instruction is itself associated with a third type of interactive instruction, the electronic device activates that third type of interactive instruction in response to the first second-type target instruction.
In one application scenario, the electronic device is an air conditioner. The voice interaction system of the air conditioner is woken up by voice, and the first type of interactive instruction is activated. The user's audio information is then received: when the audio information is detected to be a first target instruction within the first type of interactive instruction, the air conditioner responds to the first target instruction and activates the second type of interactive instruction associated with it; when the audio information is detected not to belong to any target instruction in the first type of interactive instruction, the electronic device reports an instruction input error to remind the user to re-input.
If the audio information input by the user is 'air sweep', 'air sweep' is detected as a first target instruction, and the air conditioner activates the second type of interactive instruction associated with 'air sweep'; the target instructions in that second type may be 'sweep left-right', 'sweep up-down', and the like. When the audio information 'sweep left-right' is subsequently received and detected as a target instruction of the second type, the air conditioner responds to this first second-type target instruction and begins sweeping air left and right. If the audio information input by the user does not belong to any target instruction in the second type of interactive instruction, the air conditioner feeds this back to the user.
In another application scenario, the electronic device is an intelligent steam-baking oven. The user puts the food to be cooked into the oven and wakes up its voice interaction system by voice, activating the first type of interactive instruction, whose target instructions may be 'steam', 'bake', 'menu', and 'custom'. The audio signal received from the user is 'steam'; in response to this first target instruction, the second type of interactive instruction associated with it is activated, whose target instructions may be 'time', 'temperature', 'humidity', and 'return'. The audio signal next received from the user is 'time'; in response to this first second-type target instruction, the third type of interactive instruction associated with it is activated. After the user has input all the instructions needed for cooking by voice, the oven begins cooking. If the instructions needed for cooking are 'steam' and 'time', a marker is set in the 'time' instruction designating it as the last instruction; once the user inputs the corresponding time, the oven starts cooking without waiting for further instructions.
During voice input, if the oven receives audio information unrelated to the first, second, and third types of interactive instructions, it prompts an input error or does not respond.
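The "last instruction" marker in the steam-baking oven scenario can be sketched as follows; the step list and field names are assumptions made for illustration.

```python
# Sketch of the "last instruction" marker: once the instruction flagged as
# last has received its value, cooking starts without waiting for anything
# else. The data layout is hypothetical.
RECIPE_STEPS = [
    {"name": "steam", "is_last": False},
    {"name": "time", "is_last": True},   # marked as the final required input
]

def collect_and_start(inputs: dict) -> bool:
    """Return True once every step up to and including the last-marked one has a value."""
    for step in RECIPE_STEPS:
        if step["name"] not in inputs:
            return False          # still waiting for this instruction
        if step["is_last"]:
            return True           # last instruction satisfied: start cooking
    return False
```

With only 'steam' supplied the oven keeps waiting; once 'time' (the marked last instruction) is also supplied, cooking begins.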
In some embodiments, the electronic device acquires a first target instruction, executes it, and activates the second type of interactive instruction associated with it; then, in response to acquiring a first second-type target instruction, it executes that instruction, a second target instruction being a target instruction within the second type of interactive instruction. For example: the smart speaker acquires the first target instruction 'play Jay Chou's Love Confession', responds by playing 'Love Confession', and activates the second type of interactive instruction associated with it, in which the second target instructions may be 'turn up the volume', 'next', 'send the lyrics to the connected phone', 'mute', and so on. The smart speaker then acquires the second target instruction 'next', executes it, and plays the song that follows 'Love Confession', namely 'Nunchucks'.
In some embodiments, when the electronic device acquires voice data, it analyzes the information the voice data contains. When that information corresponds to a target instruction in the first type of interactive instruction, the device executes the target instruction and activates the second type of interactive instruction associated with it; the analyzed information is then matched against the target instructions in the second type of interactive instruction, and any matching target instruction is executed. For example: the smart air conditioner recognizes the received voice information as 'set the air conditioner to 26 degrees, sweep up and down, timer two hours'. Analysis determines that 'set to 26 degrees' is a first target instruction; it is executed, setting the air conditioner temperature to 26 degrees, and the associated second type of interactive instruction is activated, whose target instructions include 'sweep left-right', 'sweep up-down', 'timer half an hour', 'timer one hour', 'timer two hours', 'health mode', and so on. Since the voice information also contains target instructions of the second type, those are executed as well: the air conditioner is set to sweep up and down and to a two-hour timer.
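Matching a single utterance that carries a first target instruction plus several second-type instructions, as in the air-conditioner example, might look like the following sketch. The phrase strings and the simple exact-match strategy are assumptions; a real device would use NLP as described above.

```python
# Hypothetical sketch: one utterance is split into parts, the first-type
# match activates its second-type set, and remaining parts are matched
# against that activated set.
FIRST_TYPE = {"set temperature 26"}
SECOND_TYPE_FOR = {
    "set temperature 26": {"sweep left-right", "sweep up-down",
                           "timer one hour", "timer two hours"},
}

def match_commands(utterance_parts: list[str]) -> list[str]:
    executed = []
    active_second: set[str] = set()
    for part in utterance_parts:
        if part in FIRST_TYPE:
            executed.append(part)
            active_second |= SECOND_TYPE_FOR.get(part, set())
        elif part in active_second:
            executed.append(part)   # only activated second-type parts run
    return executed
```

Note that a second-type phrase spoken before any first target instruction is ignored, since its group has not yet been activated.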
In other embodiments, after voice wake-up, if the electronic device does not acquire voice data within a preset first time period, it performs a voice reminder; if the device does not acquire voice data within a preset second time period, it closes voice wake-up. For example, if no voice data is acquired within one minute of the wake-up moment, the device reminds the user to input voice; if no voice data is acquired within two minutes of the wake-up moment, voice wake-up is closed. The first time period is shorter than the second, and both are counted from the same starting point, namely the moment the electronic device is woken up.
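The two timeout thresholds share one starting point, the wake-up moment. A minimal sketch, using the one-minute and two-minute values from the example above:

```python
# Sketch of the two preset time periods: the shorter one triggers a voice
# reminder, the longer one closes voice wake-up. Values follow the example
# in the text; the action strings are illustrative.
REMIND_AFTER_S = 60    # first preset time period (one minute)
CLOSE_AFTER_S = 120    # second preset time period (two minutes, must be longer)

def timeout_action(seconds_since_wakeup: float) -> str:
    if seconds_since_wakeup >= CLOSE_AFTER_S:
        return "close voice wake-up"
    if seconds_since_wakeup >= REMIND_AFTER_S:
        return "voice reminder"
    return "keep listening"
```

Because both periods are measured from the same wake-up moment, a single elapsed-time value is enough to decide which action applies.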
Different from the prior art, the voice interaction method of the electronic device of the present application includes: activating a first type of interactive instruction after voice wake-up, wherein the electronic device can execute only interactive instructions that have been activated; in response to acquiring a first target instruction, executing the first target instruction and activating a second type of interactive instruction associated with it, the first target instruction being a target instruction within the first type of interactive instruction; and in response to acquiring a first second-type target instruction, executing the first second-type target instruction, a second target instruction being a target instruction within the second type of interactive instruction. In this way, on the one hand, associating the first target instruction with a second type of interactive instruction limits the input range of audio information and prevents irrelevant audio information from obstructing voice interaction; on the other hand, when some interactive instructions require continuous interaction, the wake-up state is maintained so that the interaction can continue, improving the interaction experience.
Referring to fig. 2, fig. 2 is a schematic flowchart of a voice interaction method of an electronic device according to a second embodiment of the present application, where the method includes:
step 21: activating a first type of interactive instruction after voice awakening; wherein the electronic device can execute the activated interaction instruction.
In this embodiment, the voice wake-up mode may be associated-word wake-up. For example: when the user needs to wake up the electronic device and says 'gurry', the electronic device receives the audio information, recognizes through speech recognition that the audio information carries 'gurry', and enters the voice wake-up state.
In this embodiment, the voice wake-up modes of the above embodiment may also be adopted to enter the voice wake-up state and activate the first type of interactive instruction.
Step 22: and responding to the acquired first target instruction, executing the first target instruction, and activating a second type of interactive instruction associated with the first target instruction, wherein the first target instruction is a target instruction in the first type of interactive instruction.
Step 22 has the same or similar technical solutions as those in the above embodiments, and details are not described here.
Step 23: In response to acquiring a first second-type target instruction, executing the first second-type target instruction and activating a third type of interactive instruction associated with it, a second target instruction being a target instruction within the second type of interactive instruction.
In this embodiment, in response to acquiring a first second-type target instruction from the second type of interactive instruction, the electronic device activates the third type of interactive instruction associated with that instruction. It is understood that when the first second-type target instruction has no associated third type of interactive instruction, the electronic device only executes the first second-type target instruction.
For example: the electronic device is an intelligent dishwasher, and the target instructions in the first type of interactive instructions are 'clean', 'help' and 'set'. When the first target instruction 'clean' is acquired, the target instructions of the activated associated second type of interactive instructions are 'standard wash', 'power wash', 'custom' and 'return'; when the first second target instruction acquired is 'custom', the target instructions of the activated associated third type of interactive instructions are 'set time' and 'cleaning force', and step 24 is executed.
Step 24: and responding to the acquired first third target instruction, and executing the first third target instruction, wherein the third target instruction is a target instruction in the third type of interactive instruction.
When the first third target instruction acquired is 'set time', the electronic device responds to the first third target instruction, prompts the user to input a duration, and performs the setting.
It can be understood that when the audio information acquired by the electronic device does not belong to the target instruction in the interactive instructions, the electronic device does not respond to any instruction.
Referring to fig. 3, fig. 3 is a schematic diagram illustrating the relationship among the first type of interactive instruction, the second type of interactive instruction, and the third type of interactive instruction in the second embodiment of the voice interaction method for an electronic device according to the present application.
there are many first target instructions in the first class of interactivity instructions a, including the first target instruction a 1. When the electronic equipment acquires the first target instruction A1, the first target instruction A1 is executed and the second type interaction instruction B associated with the first target instruction A1 is activated. There are many second target instructions in the second class of interactive instructions B, including the first second target instruction B1. When the electronic equipment acquires the first second target instruction B1, the first second target instruction B1 is executed and the third type interaction instruction C associated with the first second target instruction B1 is activated. There are a number of third target instructions in the third class of interactive instructions C, including the first third target instruction C1. When the electronic device acquires the first third target instruction C1, the first third target instruction C1 is executed. It is understood that a plurality of first target instructions in the first type of interactive instructions A are associated with a second type of interactive instructions B except for specific instructions; a plurality of second target instructions in the second type of interaction instruction B are associated with a third type of interaction instruction C except for a specific instruction.
In this embodiment, associating the first target instruction with a second type of interactive instruction and associating the first second target instruction with a third type of interactive instruction limits the input range of the audio information, preventing irrelevant audio information from obstructing voice interaction; on the other hand, when some interactive instructions require continued interaction, the awakened state is maintained so that interaction can continue, improving the interactive experience.
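The three-level activation flow of steps 21 to 24 can be sketched as a simple state machine over a tree of instructions. The sketch below is illustrative only: the tree layout, class name, and instruction strings (following the dishwasher example) are assumptions for demonstration, not the patent's implementation.

```python
# Hypothetical instruction tree: keys are target instructions, and a
# non-empty child dict is the associated next class of interactive
# instructions that gets activated when that instruction is executed.
INSTRUCTION_TREE = {
    "clean": {                      # first-class target instruction
        "standard wash": {},
        "power wash": {},
        "custom": {                 # has an associated third class
            "set time": {},
            "cleaning force": {},
        },
        "return": {},
    },
    "help": {},
    "set": {},
}

class VoiceInteraction:
    def __init__(self, tree):
        self.tree = tree
        self.active = None          # nothing is active before wake-up

    def wake(self):
        # Step 21: voice wake-up activates the first class of instructions.
        self.active = self.tree

    def hear(self, utterance):
        # Steps 22-24: respond only to a currently activated target
        # instruction; anything else gets no response.
        if self.active is None or utterance not in self.active:
            return None
        children = self.active[utterance]
        if children:                # activate the associated next class
            self.active = children
        return f"execute: {utterance}"

device = VoiceInteraction(INSTRUCTION_TREE)
device.wake()
print(device.hear("clean"))      # execute: clean
print(device.hear("custom"))     # execute: custom
print(device.hear("set time"))   # execute: set time
print(device.hear("unrelated"))  # None - not an active target instruction
```

Note how the last call returns `None`: audio information that is not an activated target instruction is simply ignored, matching the remark after step 24.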
Referring to fig. 4, fig. 4 is a schematic flowchart of a voice interaction method of an electronic device according to a third embodiment of the present application, where the method includes:
step 41: and after voice wake-up, acquiring the main function list, and activating a first type of interaction instruction associated with the main function list.
Step 42: and responding to the acquired first target instruction, acquiring a sub-function list corresponding to the first target instruction, and activating a second type of interactive instruction associated with the sub-function list.
Step 43: and responding to the acquired first second target instruction, acquiring an intention list corresponding to the first second target instruction, and activating a third type of interactive instruction associated with the intention list.
The intention list is used for establishing corresponding relations between multiple different expression modes of the same intention and between multiple intentions and corresponding target instructions.
Step 44: and responding to the acquired first third target instruction, and executing the first third target instruction, wherein the third target instruction is a target instruction in the third type of interactive instruction.
In this embodiment, the electronic device is taken as a smart television for specific description.
After the smart television is woken up by voice, a main function list of the smart television, such as 'menu', 'setup', 'program' and 'network television', is acquired, and a first type of interactive instruction associated with the main function list is activated. When the first target instruction acquired is 'program', a sub-function list corresponding to 'program' is acquired, and a second type of interactive instruction associated with the sub-function list is activated, where the target instructions of the second type of interactive instruction include 'variety show', 'movie', 'television drama' and the like. If the first second target instruction acquired is 'movie', an intention list corresponding to 'movie' is acquired, and a third type of interactive instruction associated with the intention list is activated. The target instructions of the third type of interactive instruction associated with the intention list may be 'play first movie', 'fast forward', 'rewind', 'return', 'close' and 'volume up'. When the target instruction 'play first movie' is acquired, the first movie is played in response to the target instruction; if a 'fast forward' instruction is then acquired, the movie is fast-forwarded by 10 seconds in response to that instruction.
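The intention list of step 43 establishes correspondences between multiple phrasings of the same intent and one target instruction. A minimal sketch, where the phrase-to-instruction pairs are invented for illustration and are not taken from the patent:

```python
# Hypothetical intention list for the "movie" sub-function: several
# different expressions of the same intent all resolve to one canonical
# target instruction of the third type.
INTENT_LIST = {
    "play the first movie": "play first movie",
    "start the first one":  "play first movie",
    "skip ahead":           "fast forward",
    "fast forward":         "fast forward",
    "go back":              "rewind",
}

def resolve_intent(utterance, intent_list):
    """Return the canonical target instruction for an utterance, or None
    when the utterance does not match any activated intent."""
    return intent_list.get(utterance.strip().lower())

print(resolve_intent("Skip ahead", INTENT_LIST))  # fast forward
print(resolve_intent("louder", INTENT_LIST))      # None - outside the list
```

This table-lookup form is the simplest reading of "establishing corresponding relations"; a production system would more likely use a trained intent classifier behind the same interface.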
Referring to fig. 5, fig. 5 is a schematic flowchart of a fourth embodiment of a voice interaction method of an electronic device provided in the present application, where the method includes:
step 51: activating a first type of interactive instruction after voice awakening; wherein the electronic device can execute the activated interaction instruction.
Step 52: and responding to the acquired first target instruction, executing the first target instruction, and activating a second type of interaction instruction associated with the first target instruction.
Steps 51 to 52 adopt the same or similar technical solutions as the above embodiments, and are not described in detail here.
Step 53: and responding to the acquired second first target instruction, executing the second first target instruction, activating a second type of interactive instruction associated with the second first target instruction, and ending the activation state of the second type of interactive instruction associated with the first target instruction.
The first target instruction is a target instruction in the first type of interactive instructions.
In some embodiments, after acquiring the first target instruction in the first type of interactive instructions, the electronic device acquires the second first target instruction in the first type of interactive instructions. To prevent both associated second-type instruction sets from being active at the same time, it activates the second type of interactive instructions associated with the second first target instruction and ends the activated state of the second type of interactive instructions associated with the first target instruction.
For example:
Take an electronic device that is a mobile phone as an example. After voice wake-up, the first type of interactive instruction is activated; for example, the target instructions of the first type of interactive instruction include 'open music', 'open camera', 'open xxx application' and the like. The first target instruction acquired is 'open music'; in response to the first target instruction 'open music', the second type of interactive instruction associated with 'open music' is activated, where the target instructions of the second type of interactive instruction include 'pop music', 'folk music', 'singer', 'combination' and the like. The audio information then acquired is 'open camera', and 'open camera' is detected to be a target instruction in the first type of interactive instruction. In response to the 'open camera' target instruction, the second type of interactive instruction associated with 'open camera' is activated, where its target instructions include 'open album', 'take photo', 'settings' and the like; and the activated state of the second type of interactive instruction associated with 'open music' is ended.
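The fourth embodiment keeps only one second-type instruction set active at a time: activating a new one ends the previous one. A sketch under that reading, where the class name and the instruction sets (following the phone example) are illustrative assumptions:

```python
# Hypothetical mapping from a first target instruction to its associated
# second type of interactive instructions.
SECOND_CLASS = {
    "open music":  {"pop music", "folk music", "singer", "combination"},
    "open camera": {"open album", "take photo", "settings"},
}

class SingleActiveSession:
    """Step 53 policy: the newly activated second-type instruction set
    replaces whatever second-type set was active before."""

    def __init__(self):
        self.active_second_class = set()

    def acquire_first_target(self, instruction):
        # Activate the associated second class and end the previous one.
        self.active_second_class = set(SECOND_CLASS.get(instruction, ()))
        return f"execute: {instruction}"

session = SingleActiveSession()
session.acquire_first_target("open music")
session.acquire_first_target("open camera")
print("take photo" in session.active_second_class)  # True
print("pop music" in session.active_second_class)   # False - ended
```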
Referring to fig. 6, fig. 6 is a schematic flowchart of a fifth embodiment of a voice interaction method of an electronic device provided in the present application, where the method includes:
step 61: activating a first type of interactive instruction after voice awakening; wherein the electronic device can execute the activated interaction instruction.
Step 62: and responding to the acquired first target instruction, executing the first target instruction, and activating a second type of interaction instruction associated with the first target instruction.
The first target instruction is a target instruction in the first type of interactive instruction.
Steps 61 to 62 adopt the same or similar technical solutions as the above embodiments, and are not described in detail here.
And step 63: and responding to the acquired third first target instruction, executing the third first target instruction, and activating a second type of interaction instruction associated with the third first target instruction, so that the second type of interaction instruction associated with the third first target instruction and the second type of interaction instruction associated with the first target instruction are in an activated state at the same time.
In some embodiments, after acquiring a first target instruction in the first type of interactive instructions, the electronic device acquires a third first target instruction in the first type of interactive instructions, executes the third first target instruction, and activates the second type of interactive instruction associated with the third first target instruction, so that the second type of interactive instruction associated with the third first target instruction and the second type of interactive instruction associated with the first target instruction are in the activated state at the same time. At this time, the user may input any instruction in either second type of interactive instruction set for the electronic device to execute.
For example:
Take an electronic device that is a mobile phone as an example. After voice wake-up, the first type of interactive instruction is activated; for example, the target instructions of the first type of interactive instruction include 'open music', 'open camera', 'open xxx application' and the like. The first target instruction acquired is 'open music'; in response to it, the music application in the mobile phone is opened and the second type of interactive instruction associated with 'open music' is activated, where the target instructions include 'pop music', 'folk music', 'singer', 'combination', 'play pop music' and the like. The audio information then acquired is 'open camera', and 'open camera' is detected to be a third first target instruction in the first type of interactive instructions. In response to the 'open camera' target instruction, the camera of the mobile phone is opened and the second type of interactive instruction associated with 'open camera' is activated, where its target instructions include 'open album', 'take photo', 'settings' and the like. The second type of interactive instruction associated with 'open music' and the second type of interactive instruction associated with 'open camera' are now active at the same time. The user can input by voice 'play pop music' from the second type of interactive instruction associated with 'open music', and the mobile phone acquires the 'play pop music' instruction and starts playing pop music; the user can then input by voice 'take photo' from the second type of interactive instruction associated with 'open camera', and the mobile phone acquires the 'take photo' instruction and takes a photo.
In this embodiment, the second type of interactive instruction associated with the third first target instruction and the second type of interactive instruction associated with the first target instruction are active at the same time, so that the electronic device can meet multiple requirements of the user at once, voice interaction is more diversified, and user experience is improved.
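In contrast to the fourth embodiment, the fifth embodiment lets several second-type instruction sets be active simultaneously, and the device responds to a second target instruction from any of them. A sketch under that reading; the class name and instruction strings (following the phone example) are illustrative assumptions:

```python
# Hypothetical mapping from a first target instruction to its associated
# second type of interactive instructions.
SECOND_CLASS = {
    "open music":  {"pop music", "folk music", "play pop music"},
    "open camera": {"open album", "take photo", "settings"},
}

class MultiActiveSession:
    """Step 63 policy: every activated second-type set stays active, so
    second target instructions from any of them can be executed."""

    def __init__(self):
        self.active_sets = {}   # first target instruction -> its second class

    def acquire_first_target(self, instruction):
        self.active_sets[instruction] = SECOND_CLASS.get(instruction, set())
        return f"execute: {instruction}"

    def hear_second_target(self, utterance):
        # Respond if the utterance is in any currently active second class.
        for owner, targets in self.active_sets.items():
            if utterance in targets:
                return f"execute: {utterance} (from {owner})"
        return None

s = MultiActiveSession()
s.acquire_first_target("open music")
s.acquire_first_target("open camera")
print(s.hear_second_target("play pop music"))  # both sets remain active
print(s.hear_second_target("take photo"))
```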
Referring to fig. 7, fig. 7 is a schematic flowchart of a sixth embodiment of a voice interaction method of an electronic device provided in the present application, where the method includes:
step 71: activating a first type of interactive instruction after voice awakening; wherein the electronic device can execute the activated interaction instruction.
Step 72: and responding to the acquired first target instruction, executing the first target instruction, and activating a second type of interaction instruction associated with the first target instruction.
The first target instruction is a target instruction in the first type of interactive instruction.
Step 73: and responding to the acquired third first target instruction, executing the third first target instruction, and activating a second type of interaction instruction associated with the third first target instruction, so that the second type of interaction instruction associated with the third first target instruction and the second type of interaction instruction associated with the first target instruction are in an activated state at the same time.
Steps 71 to 73 adopt the same or similar technical solutions as the above embodiments, and are not described in detail here.
Step 74: and responding to the acquired fourth first target instruction, executing the fourth first target instruction, activating a second class of interaction instruction associated with the fourth first target instruction, and ending the activation state of the second class of interaction instruction associated with the first target instruction, so that the second class of interaction instruction associated with the third first target instruction and the second class of interaction instruction associated with the fourth first target instruction are simultaneously in the activation state.
Referring to fig. 8, fig. 8 is a schematic diagram illustrating a relationship between a target instruction and a second type of interactive instruction in a first type of interactive instruction in a sixth embodiment of a voice interaction method for an electronic device according to the present application;
As illustrated in fig. 8, upon voice wake-up, the first type of interactive instructions A is activated, and it includes a plurality of target instructions, such as a first target instruction A1, a second first target instruction A2, a third first target instruction A3, and a fourth first target instruction A4. When the electronic device acquires the first target instruction A1, it executes the first target instruction A1 and activates the second type of interactive instructions B associated with A1. Subsequently, the electronic device acquires the third first target instruction A3, executes it, and activates the second type of interactive instructions B associated with A3; the second type of interactive instructions B associated with A1 and the second type of interactive instructions B associated with A3 are now active at the same time, and the electronic device responds and executes only when a second target instruction from either of the two active second-type instruction sets B is acquired. Subsequently, the electronic device acquires the fourth first target instruction A4, executes it, and activates the second type of interactive instructions B associated with A4; at this time the activated state of the second type of interactive instructions B associated with A1 is ended, the second type of interactive instructions B associated with A4 and the second type of interactive instructions B associated with A3 are active at the same time, and the electronic device again responds and executes only when a second target instruction from either of the two active second-type instruction sets B is acquired.
When the second first target instruction A2 is acquired, the second first target instruction A2 is executed and the second type of interactive instructions B associated with A2 is activated; the activated state of the second type of interactive instructions B associated with the third first target instruction A3 is ended, the second type of interactive instructions B associated with A2 and the second type of interactive instructions B associated with the fourth first target instruction A4 are active at the same time, and the electronic device responds and executes only when a second target instruction from either of the two active second-type instruction sets B is acquired. This embodiment keeps the second type of interactive instructions associated with the two most recently acquired first target instructions simultaneously in the activated state, so as to meet various requirements of users.
By way of example:
Take an electronic device that is an intelligent video speaker as an example. After voice wake-up, the first type of interactive instruction is activated; for example, the target instructions of the first type of interactive instruction include 'open music', 'make a call', 'set daily reminder', 'open video' and the like. The first target instruction acquired is 'open music'; the first target instruction 'open music' is executed and the second type of interactive instruction associated with 'open music' is activated, where the target instructions include 'play pop music', 'folk music', 'singer', 'combination' and the like. The audio information then acquired is 'set daily reminder', and 'set daily reminder' is detected to be a target instruction in the first type of interactive instruction. In response to it, the intelligent video speaker starts the daily reminder function and activates the second type of interactive instruction associated with 'set daily reminder', where its target instructions include 'set reminder date', 'set reminder ringtone', 'set reminder' and the like. At this moment, the second type of interactive instruction associated with 'open music' and the second type of interactive instruction associated with 'set daily reminder' are active at the same time.
Then a fourth first target instruction 'open video' in the first type of interactive instructions is acquired; the fourth first target instruction 'open video' is executed, the second type of interactive instruction associated with 'open video' is activated, and the second type of interactive instruction associated with 'open music' is closed, so that the second type of interactive instruction associated with 'set daily reminder' and the second type of interactive instruction associated with 'open video' are active at the same time, where the target instructions of the second type of interactive instruction associated with 'open video' are 'variety show', 'television drama', 'movie', 'play Running Man' and the like. If a fifth first target instruction in the first type of interactive instructions is then acquired, the fifth first target instruction is executed, the second type of interactive instruction associated with it is activated, and the second type of interactive instruction associated with 'set daily reminder' is closed, so that the second type of interactive instruction associated with the fifth first target instruction and the second type of interactive instruction associated with 'open video' are active at the same time. In this manner, the two most recently activated peer-level interactive instruction sets are always active at the same time.
Different from the prior art, this embodiment increases interaction diversity by keeping two peer-level interactive instruction sets active at the same time; because the two sets with the most recent activation times are the ones kept active, the current requirements of the user can be met more intelligently.
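The "two most recent" policy of the sixth embodiment amounts to a capacity-2 cache of activated second-type instruction sets, evicting the oldest activation. A sketch under that reading; the class name, the capacity parameter, and the instruction strings (following the speaker example) are illustrative assumptions:

```python
from collections import OrderedDict

# Hypothetical mapping from a first target instruction to its associated
# second type of interactive instructions.
SECOND_CLASS = {
    "open music":         {"play pop music", "folk music"},
    "set daily reminder": {"set reminder date", "set reminder ringtone"},
    "open video":         {"variety show", "television drama", "movie"},
}

class TwoMostRecentSession:
    """At most the two most recently activated second-type instruction
    sets stay active; activating another ends the oldest activation."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.active = OrderedDict()   # insertion order = activation order

    def acquire_first_target(self, instruction):
        self.active[instruction] = SECOND_CLASS.get(instruction, set())
        self.active.move_to_end(instruction)     # re-activation counts as newest
        while len(self.active) > self.capacity:
            self.active.popitem(last=False)      # end the oldest activation
        return f"execute: {instruction}"

s = TwoMostRecentSession()
s.acquire_first_target("open music")
s.acquire_first_target("set daily reminder")
s.acquire_first_target("open video")             # ends "open music"
print(list(s.active))  # ['set daily reminder', 'open video']
```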
Referring to fig. 9, fig. 9 is a schematic structural diagram of an embodiment of an electronic device provided in the present application, where the electronic device 90 includes a processor 91, a microphone assembly 92 connected to the processor 91, and a memory 93; the microphone assembly 92 is used for collecting audio signals, the memory 93 is used for storing program data, and the processor 91 is used for executing the program data, so as to realize the following methods:
activating a first type of interactive instruction after voice awakening; wherein the electronic device can execute the activated interactive instruction; responding to the acquired first target instruction, executing the first target instruction, and activating a second type of interactive instruction associated with the first target instruction, wherein the first target instruction is a target instruction in the first type of interactive instruction; and responding to the acquired first and second target instructions, and executing the first and second target instructions, wherein the second target instruction is a target instruction in the second type of interactive instruction.
Optionally, when the processor 91 is configured to execute the program data, the following method is further implemented: activating a third type of interaction instruction associated with the first second target instruction; and responding to the acquired first third target instruction, and executing the first third target instruction, wherein the third target instruction is a target instruction in the third type of interactive instruction.
Optionally, when the processor 91 is configured to execute the program data, the following method is further implemented: after voice awakening, acquiring a main function list, and activating a first type of interaction instruction associated with the main function list; and responding to the acquired first target instruction, acquiring a sub-function list corresponding to the first target instruction, and activating a second type of interactive instruction associated with the sub-function list.
Optionally, when the processor 91 is configured to execute the program data, the following method is further implemented: acquiring an intention list corresponding to the first second target instruction, and activating a third type of interactive instruction associated with the intention list; and responding to the acquired first third target instruction, and executing the first third target instruction.
Optionally, when the processor 91 is configured to execute the program data, the following method is further implemented: performing a voice reminder in response to no voice data being acquired within a first preset time period after voice wake-up; or closing voice wake-up in response to no voice data being acquired within a second preset time period after voice wake-up, wherein the starting time points of the first preset time period and the second preset time period are both the time point of voice wake-up, and the first preset time period is shorter than the second preset time period.
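The two-timeout behavior above can be sketched as a pure decision function. The durations below are invented for illustration; the patent only requires that the first preset time period be shorter than the second, with both measured from the wake-up time point.

```python
def check_timeouts(seconds_since_wake, heard_speech,
                   remind_after=10, close_after=30):
    """Decide the device action given silence duration since wake-up.
    remind_after and close_after are illustrative placeholder values."""
    assert remind_after < close_after  # required ordering of the two periods
    if heard_speech:
        return "listening"
    if seconds_since_wake >= close_after:
        return "close voice wake-up"   # second preset period elapsed
    if seconds_since_wake >= remind_after:
        return "voice reminder"        # first preset period elapsed
    return "listening"

print(check_timeouts(12, False))  # voice reminder
print(check_timeouts(35, False))  # close voice wake-up
```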
Optionally, when the processor 91 is configured to execute the program data, the following method is further implemented: and responding to the acquired second first target instruction, executing the second first target instruction, activating a second type of interactive instruction associated with the second first target instruction, and ending the activation state of the second type of interactive instruction associated with the first target instruction.
Optionally, when the processor 91 is configured to execute the program data, the following method is further implemented: and responding to the acquired third first target instruction, executing the third first target instruction, and activating a second type of interaction instruction associated with the third first target instruction, so that the second type of interaction instruction associated with the third first target instruction and the second type of interaction instruction associated with the first target instruction are in an activated state at the same time.
Optionally, when the processor 91 is configured to execute the program data, the following method is further implemented: and responding to the acquired fourth first target instruction, executing the fourth first target instruction, activating a second class of interaction instruction associated with the fourth first target instruction, and ending the activation state of the second class of interaction instruction associated with the first target instruction, so that the second class of interaction instruction associated with the third first target instruction and the second class of interaction instruction associated with the fourth first target instruction are simultaneously in the activation state.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an embodiment of a computer storage medium 100 provided in the present application, the computer storage medium 100 is used for storing program data 101, and the program data 101, when being executed by a processor, is used for implementing the following method steps:
activating a first type of interactive instruction after voice awakening; wherein the electronic device can execute the activated interactive instruction; responding to the acquired first target instruction, executing the first target instruction, and activating a second type of interactive instruction associated with the first target instruction, wherein the first target instruction is a target instruction in the first type of interactive instruction; and responding to the acquired first and second target instructions, and executing the first and second target instructions, wherein the second target instruction is a target instruction in the second type of interactive instruction.
It will be appreciated that the program data 101, when executed by a processor, is also for implementing any of the embodiment methods described above.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other manners. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units in the other embodiments described above may be stored in a computer-readable storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor (processor) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating embodiments of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application or are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A voice interaction method of an electronic device, comprising:
activating a first type of interactive instruction after voice awakening; wherein the electronic device can execute the activated interactive instruction;
responding to a first acquired target instruction, executing the first target instruction, and activating a second type of interactive instruction associated with the first target instruction, wherein the first target instruction is a target instruction in the first type of interactive instruction;
and responding to the acquired first second target instruction, and executing the first second target instruction, wherein the second target instruction is a target instruction in the second type of interactive instruction.
2. The method of claim 1,
the executing the first second target instruction comprises:
activating a third type of interaction instruction associated with the first second target instruction;
and responding to the acquired first third target instruction, and executing the first third target instruction, wherein the third target instruction is a target instruction in the third type of interactive instruction.
3. The method of claim 1, wherein activating the first type of interactive instruction after voice wake-up comprises:
after voice wake-up, acquiring a main function list and activating the first type of interactive instruction associated with the main function list;
and wherein executing the first target instruction and activating the second type of interactive instruction associated with the first target instruction in response to acquiring the first target instruction comprises:
in response to acquiring the first target instruction, acquiring a sub-function list corresponding to the first target instruction and activating the second type of interactive instruction associated with the sub-function list.
4. The method of claim 3, wherein executing the first second target instruction comprises:
acquiring an intention list corresponding to the first second target instruction and activating a third type of interactive instruction associated with the intention list; and
in response to acquiring a first third target instruction, executing the first third target instruction.
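Claims 3 and 4 tie each activation level to a list fetched at runtime: a main function list, then a sub-function list keyed by the first target instruction, then an intention list keyed by the second. A hypothetical sketch follows; the list contents and the `activate_levels` helper are invented for illustration.

```python
# Hypothetical three-level lists (main function -> sub-function -> intention).
MAIN_FUNCTIONS = ["air conditioner", "fan"]
SUB_FUNCTIONS = {"air conditioner": ["set temperature", "set mode"]}
INTENTIONS = {"set temperature": ["raise", "lower"]}

def activate_levels(first_cmd, second_cmd):
    """Return the instruction set active after a first- and a second-type
    target instruction have been executed, per claims 3-4."""
    active = set(MAIN_FUNCTIONS)                      # main function list
    active |= set(SUB_FUNCTIONS.get(first_cmd, []))   # sub-function list
    active |= set(INTENTIONS.get(second_cmd, []))     # intention list
    return active

active = activate_levels("air conditioner", "set temperature")
assert "raise" in active and "fan" in active
```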
5. The method of claim 1, further comprising:
issuing a voice reminder in response to no voice data being acquired within a first preset time period after voice wake-up; or
exiting the wake-up state in response to no voice data being acquired within a second preset time period after voice wake-up, wherein the starting points of both the first preset time period and the second preset time period are the moment of voice wake-up;
wherein the first preset time period is shorter than the second preset time period.
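The two-timer behaviour of claim 5 (a reminder after the shorter period, wake-up ending after the longer one, both measured from the moment of wake-up) can be sketched as follows; the durations and the `tick` helper are placeholders, not values from the patent.

```python
REMIND_AFTER = 5.0    # first preset time period (placeholder value)
SLEEP_AFTER = 10.0    # second preset time period; must be the longer one

def tick(wake_time, last_voice_time, now):
    """Decide what to do at 'now', given when wake-up happened and whether
    any voice data has arrived since (claim 5: both periods start at wake-up)."""
    if last_voice_time is not None:
        return "listening"            # voice data arrived; timers are moot
    elapsed = now - wake_time
    if elapsed >= SLEEP_AFTER:
        return "end wake-up"          # second period elapsed with no voice
    if elapsed >= REMIND_AFTER:
        return "voice reminder"       # first period elapsed with no voice
    return "listening"

assert tick(0.0, None, 3.0) == "listening"
assert tick(0.0, None, 6.0) == "voice reminder"
assert tick(0.0, None, 11.0) == "end wake-up"
assert tick(0.0, 2.0, 11.0) == "listening"
```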
6. The method of claim 1, wherein after executing the first target instruction and activating the second type of interactive instruction associated with the first target instruction in response to acquiring the first target instruction, the method further comprises:
in response to acquiring a second first target instruction, executing the second first target instruction, activating a second type of interactive instruction associated with the second first target instruction, and ending the activated state of the second type of interactive instruction associated with the first target instruction.
7. The method of claim 1, wherein after executing the first target instruction and activating the second type of interactive instruction associated with the first target instruction in response to acquiring the first target instruction, the method further comprises:
in response to acquiring a third first target instruction, executing the third first target instruction and activating a second type of interactive instruction associated with the third first target instruction, so that the second type of interactive instruction associated with the third first target instruction and the second type of interactive instruction associated with the first target instruction are in the activated state at the same time.
8. The method of claim 7, further comprising:
in response to acquiring a fourth first target instruction, executing the fourth first target instruction, activating a second type of interactive instruction associated with the fourth first target instruction, and ending the activated state of the second type of interactive instruction associated with the first target instruction, so that the second type of interactive instruction associated with the third first target instruction and the second type of interactive instruction associated with the fourth first target instruction are in the activated state at the same time.
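Claims 6-8 differ only in whether a newly activated second-type set replaces an earlier one (claims 6 and 8) or coexists with it (claim 7). A schematic sketch of that bookkeeping; the `replace_set` parameter and the example sets are invented for illustration.

```python
def activate_second_type(active_sets, new_set, replace_set=None):
    """Activate the second-type set associated with a newly executed first
    target instruction. If replace_set is given, its activated state ends
    (claims 6 and 8); otherwise both sets remain active (claim 7)."""
    sets = [s for s in active_sets if s is not replace_set]
    sets.append(new_set)
    return sets

ac = {"set temperature"}; fan = {"set speed"}; light = {"dim"}
state = activate_second_type([], ac)                        # claim 1
state = activate_second_type(state, fan)                    # claim 7: coexist
assert ac in state and fan in state
state = activate_second_type(state, light, replace_set=ac)  # claim 8: replace
assert ac not in state and fan in state and light in state
```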
9. An electronic device, comprising a processor, and a microphone assembly and a memory connected to the processor;
wherein the microphone assembly is configured to collect audio signals, the memory is configured to store program data, and the processor is configured to execute the program data to implement the method of any one of claims 1-8.
10. A computer storage medium storing program data which, when executed by a processor, implements the method of any one of claims 1-8.
CN201910935370.7A 2019-09-29 2019-09-29 Voice interaction method, electronic equipment and computer storage medium Pending CN110838292A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910935370.7A CN110838292A (en) 2019-09-29 2019-09-29 Voice interaction method, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN110838292A 2020-02-25

Family

ID=69574677


Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070043573A1 (en) * 2005-08-22 2007-02-22 Delta Electronics, Inc. Method and apparatus for speech input
US20080046873A1 (en) * 2006-08-17 2008-02-21 Rumsey James K Method and system for providing dynamically configurable and extensible control of electronic devices
CN101405739A (en) * 2002-12-26 2009-04-08 Motorola Inc. (a Delaware corporation) Identification apparatus and method
CN102855873A (en) * 2012-08-03 2013-01-02 海信集团有限公司 Electronic equipment and method used for controlling same
CN102945673A (en) * 2012-11-24 2013-02-27 Anhui USTC iFlytek Information Technology Co., Ltd. Continuous speech recognition method with speech command range changed dynamically
CN105161097A (en) * 2015-07-23 2015-12-16 百度在线网络技术(北京)有限公司 Voice interaction method and apparatus
CN105931639A (en) * 2016-05-31 2016-09-07 Yang Ruochong Speech interaction method capable of supporting multi-hierarchy command words
CN107680591A (en) * 2017-09-21 2018-02-09 百度在线网络技术(北京)有限公司 Voice interactive method, device and its equipment based on car-mounted terminal
CN108182943A (en) * 2017-12-29 2018-06-19 北京奇艺世纪科技有限公司 A kind of smart machine control method, device and smart machine
CN108376058A (en) * 2018-02-09 2018-08-07 斑马网络技术有限公司 Sound control method and device and electronic equipment and storage medium
CN109767762A (en) * 2018-12-14 2019-05-17 深圳壹账通智能科技有限公司 Application control method and terminal device based on speech recognition
CN109901810A (en) * 2019-02-01 2019-06-18 广州三星通信技术研究有限公司 A kind of man-machine interaction method and device for intelligent terminal
CN109903754A (en) * 2017-12-08 2019-06-18 北京京东尚科信息技术有限公司 Method for voice recognition, equipment and memory devices

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination