CN211828111U - Voice interaction system - Google Patents

Info

Publication number
CN211828111U
CN211828111U
Authority
CN
China
Prior art keywords
voice
instruction
module
wake
control
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201921958738.3U
Other languages
Chinese (zh)
Inventor
刘冠华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd
Original Assignee
Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd filed Critical Foshan Shunde Midea Electrical Heating Appliances Manufacturing Co Ltd
Priority to CN201921958738.3U
Application granted
Publication of CN211828111U
Legal status: Active
Anticipated expiration

Abstract

The application discloses a voice interaction system. The voice interaction system comprises a first device and a second device, wherein the first device and the second device both have a voice recognition function, the first device is used for sending a first voice control instruction to the second device, and the second device is used for receiving the first voice control instruction and executing a predetermined operation according to the first voice control instruction. In this voice interaction system, the first device and the second device, both having the voice recognition function, can interact by voice to execute the related control commands; interaction between devices can therefore be implemented across brands without a unified communication protocol standard, so the range of application is wide and the user experience is better.

Description

Voice interaction system
Technical Field
The application relates to the field of household appliances, in particular to a voice interaction system.
Background
With the popularization of smart homes, the number of smart devices in a household keeps growing. However, these devices come from manufacturers of many different brands, and devices of different brands often use different communication protocols, so smart devices of different brands have difficulty communicating or interacting with each other, which degrades the user experience.
SUMMARY OF THE UTILITY MODEL
The embodiment of the application provides a voice interaction system.
The voice interaction system comprises a first device and a second device, wherein the first device and the second device both have a voice recognition function, the first device is used for sending a first voice control instruction to the second device, and the second device is used for receiving the first voice control instruction and executing a preset operation according to the first voice control instruction.
In some embodiments, the first device includes a first voice conversion module and a first voice output module, and the second device includes a second voice receiving module. The first control instruction includes a first wake-up instruction, and the first voice control instruction includes a first voice wake-up instruction; the first voice conversion module is configured to convert the first wake-up instruction into the first voice wake-up instruction, the first voice output module is configured to play the first voice wake-up instruction, and the second device receives the first voice wake-up instruction through the second voice receiving module so as to wake up the second device.
In some embodiments, the second device includes a second voice recognition module, and the second voice recognition module is configured to recognize the received first voice wake-up instruction to wake up the second device.
In some embodiments, the first control instruction includes a first operation instruction, and the first voice control instruction includes a first voice operation instruction; the first voice conversion module is configured to convert the first operation instruction into the first voice operation instruction. After the second device is woken up, the first device plays the first voice operation instruction through the first voice output module, and the second device receives the first voice operation instruction through the second voice receiving module so as to perform a predetermined operation.
In some embodiments, the second device further comprises a second semantic recognition module and a second control module;
the second voice recognition module is further used for recognizing the instruction content of the received first voice operation instruction;
the second semantic recognition module is used for performing semantic analysis on the instruction content of the first voice operation instruction to generate an execution instruction;
and the second control module controls the second equipment to execute preset operation according to the execution instruction.
In some embodiments, the second device includes a second voice conversion module and a second voice output module, the second voice conversion module is configured to generate feedback information and convert the feedback information into a voice feedback instruction after the second device starts to perform the predetermined operation, and the second voice output module is configured to play the voice feedback instruction.
In some embodiments, the first device further includes a first voice receiving module, a first voice recognition module, a first semantic recognition module, and a first control module, where the first voice receiving module is configured to receive the voice feedback instruction, the first voice recognition module is configured to recognize instruction content of the voice feedback instruction, the first semantic recognition module is configured to perform semantic parsing on the instruction content of the voice feedback instruction to generate a confirmation instruction, and the first control module confirms that the second device executes the predetermined operation according to the confirmation instruction.
In some embodiments, the first device is configured to detect an environmental parameter and generate the first control instruction according to the environmental parameter.
In some embodiments, the second device is further configured to send a second voice control instruction to the first device, and the first device is further configured to receive the second voice control instruction and perform a predetermined operation according to the second voice control instruction.
In the voice interaction system of the embodiment of the application, the first device and the second device, both having the voice recognition function, can interact by voice to execute the related control commands; interaction between devices can therefore be implemented across brands without a unified communication protocol standard, so the range of application is wide and the user experience is better.
Advantages of additional aspects of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a block schematic diagram of a voice interaction system of an embodiment of the present application;
FIG. 2 is a schematic block diagram of a voice interaction system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an interaction scenario of a voice interaction system according to an embodiment of the present application;
fig. 4 is an interaction flow diagram of a voice interaction system according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, a voice interaction system 1000 according to an embodiment of the present application includes a first device 100 and a second device 200. The first device 100 and the second device 200 each support voice control, that is, each has a voice recognition function. The first device 100 is configured to send a first voice control instruction to the second device 200, and the second device 200 is configured to receive the first voice control instruction and perform a predetermined operation according to the first voice control instruction.
With the popularization of smart homes, more and more smart devices are used in a user's home. However, these devices come from manufacturers of many different brands, and devices of different brands often use different communication protocols, so smart devices of different brands have difficulty communicating or interacting with each other. Alternatively, multiple smart devices must be connected to the same local area network through a network device such as a router, so that interaction between the devices depends on the network.
In the voice interaction system 1000 according to the embodiment of the present application, the first device 100 and the second device 200, both having the voice recognition function, may interact with each other in a voice manner to execute a related control command, and the interaction between the devices may be implemented across brands without unifying communication protocol standards, so that the interaction application range is wide, and the user experience is better.
Specifically, both the first device 100 and the second device 200 can act as the party issuing an instruction and as the party receiving it. That is, when the first device 100 issues the first voice control instruction, the second device 200 is configured to receive it; conversely, when the second device 200 issues a second voice control instruction, the first device 100 is configured to receive it.
The devices in the voice interaction system 1000 are not limited to only including the first device 100 and the second device 200, and as the number of devices increases, the voice interaction system 1000 may further include a third device, a fourth device, and so on.
The first device 100 and the second device 200 may each be any household appliance having a voice recognition function, such as a television, refrigerator, washing machine, air conditioner, gas stove, range hood, water heater, electric cooker, oven, or dishwasher.
Referring to fig. 2-4, in some embodiments, the first device 100 includes a first voice conversion module 110 and a first voice output module 111. The first voice conversion module 110 is configured to convert the first control instruction into a first voice control instruction, and the first voice output module 111 is configured to play the first voice control instruction. The second device 200 includes a second voice receiving module 212. The second device 200 receives the first voice control instruction through the second voice receiving module 212.
Specifically, the first speech conversion module 110 may be a TTS (text-to-speech) module, so that the first control instruction can be converted from text into speech form, that is, into the first voice control instruction. The first voice output module 111 may be a speaker that plays the converted first voice control instruction.
The second voice receiving module 212 may be a microphone for receiving the first voice control instruction, so that the second device 200 may perform a predetermined operation according to the first voice control instruction.
In such an embodiment, the first control instruction includes a first wake-up instruction, the first voice control instruction includes a first voice wake-up instruction, the first voice conversion module 110 is configured to convert the first wake-up instruction into the first voice wake-up instruction, the first voice output module 111 is configured to play the first voice wake-up instruction, and the second device 200 receives the first voice wake-up instruction through the second voice receiving module 212 to wake up the second device 200.
It is understood that in an interactive system composed of multiple devices, when the first device 100 issues a control instruction to the second device 200, the first device 100 must explicitly identify the control target, i.e. the second device 200, and at the same time the second device 200 must know that it is the controlled target. The memory of the first device 100 stores wake-up instructions for waking up other devices; a wake-up instruction may be a brand device name, such as "brand A device A", or a device action instruction, such as "device A please turn on". The wake-up instruction is generally stored in text form; the first device 100 converts it into speech through the first voice conversion module 110 and plays the converted wake-up instruction through the first voice output module 111. The second device 200 wakes up after receiving the first voice wake-up instruction.
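The sender-side flow just described can be sketched as follows. This is a minimal simulation, not the patent's implementation: the class and method names are hypothetical, and the TTS module is stood in for by a placeholder that merely wraps the text.

```python
from dataclasses import dataclass, field

@dataclass
class FirstDevice:
    # Wake-up instructions for other devices, stored as text in memory
    # (the device names and phrasing here are illustrative).
    wake_instructions: dict = field(default_factory=lambda: {
        "device_a": "Brand A device A, please turn on",
    })
    played: list = field(default_factory=list)  # stands in for the speaker

    def text_to_speech(self, text):
        """Placeholder for the first voice conversion (TTS) module."""
        return f"<audio:{text}>"

    def send_wake_up(self, target):
        """Convert the stored wake-up text to speech and play it."""
        audio = self.text_to_speech(self.wake_instructions[target])
        self.played.append(audio)  # first voice output module plays the audio
        return audio

dev = FirstDevice()
print(dev.send_wake_up("device_a"))  # → <audio:Brand A device A, please turn on>
```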
A device can be in either a sleep state or a working state. When the device is not in use, it is in the sleep state: only the voice receiving module remains active, while the other functional modules are dormant, i.e. in a non-working state. The device can always receive voice instructions from outside, and when it determines that a received voice instruction is the wake-up instruction for itself, it switches from the sleep state to the working state.
The working state herein should be understood broadly to include both a state where the device performs the relevant operation when executing the specific operation instruction, and a state where the device is in a power-on state and can perform the relevant operation but does not perform the relevant operation.
Further, the second device 200 includes a second voice recognition module 213, and the second voice recognition module 213 is configured to recognize the received first voice wake-up command to wake up the second device 200.
Specifically, the second speech recognition module 213 may be an Automatic Speech Recognition (ASR) module, which converts sound into text. It can be understood that the second voice receiving module 212 is configured to receive voice instructions from outside or from other devices; after receiving a voice instruction, the device needs to determine whether to wake itself up. For example, in an interactive system including two or more devices, when one device sends a voice wake-up instruction, each of the other devices can receive it and recognize it through its own voice recognition module to determine whether it itself should wake up.
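The receiver-side decision can be sketched in the same style. Every device "hears" the same utterance, runs it through its own (here simulated) ASR module, and wakes only if its own wake keyword appears in the recognized text; the names and the audio encoding are illustrative assumptions.

```python
def asr(audio):
    """Simulated ASR: strip the audio wrapper to recover the transcript."""
    return audio[len("<audio:"):-1]

class Device:
    def __init__(self, wake_word):
        self.wake_word = wake_word
        self.awake = False  # sleep state: only the voice receiver is active

    def on_voice(self, audio):
        text = asr(audio)            # second voice recognition module
        if self.wake_word in text:   # keyword match decides the wake-up
            self.awake = True

cooker = Device(wake_word="device A")
fridge = Device(wake_word="device B")
utterance = "<audio:Brand A device A, please turn on>"
for d in (cooker, fridge):
    d.on_voice(utterance)
print(cooker.awake, fridge.awake)  # → True False (only the target wakes)
```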
Further, in such embodiments, the first control instruction further comprises a first operation instruction. The first voice control instruction comprises a first voice operation instruction. The first voice conversion module 110 is configured to convert the first operation instruction into a first voice operation instruction.
Specifically, similar to the conversion of the first wake-up instruction into the first voice wake-up instruction, the first voice conversion module 110 converts the first operation instruction stored in the memory of the first device 100 from text form into speech form, and the first voice output module 111 plays it. After being woken up, and thus aware that it is the controlled target, the second device 200 continues to receive the first voice operation instruction through the second voice receiving module 212. Devices that have not been woken up do not execute the corresponding operation even if they receive the first voice operation instruction. The first voice operation instructions can be set independently for different devices; for example, they may include operation instructions for an electric cooker, operation instructions for a refrigerator, and so on. In operation, for example, after the second device 200 is added to the interactive system, its corresponding operation instructions may be entered into the memory of the first device 100, so that the first device 100 stores the operation instructions for controlling the second device 200 and can control it by playing an operation instruction as voice. Conversely, the second device 200 may implement voice control of the first device 100 through the same process, which is not described again here.
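The enrollment step described above, where operation instructions for a newly added device are entered into the first device's memory, might look like this minimal sketch (the registry structure and names are assumed, not specified by the patent):

```python
class Controller:
    """Stands in for the first device's memory of operation instructions."""

    def __init__(self):
        # target device -> {action name: spoken instruction text}
        self.operation_instructions = {}

    def enroll(self, device, instructions):
        """Store the operation instructions for a newly added device."""
        self.operation_instructions[device] = dict(instructions)

    def instruction_for(self, device, action):
        """Look up the text to be converted to speech and played."""
        return self.operation_instructions[device][action]

ctrl = Controller()
ctrl.enroll("rice_cooker", {"cook": "start cooking", "keep_warm": "keep warm"})
print(ctrl.instruction_for("rice_cooker", "cook"))  # → start cooking
```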
Further, in this embodiment, the second device 200 further comprises a second semantic recognition module 214 and a second control module 215. The second voice recognition module 213 is further configured to recognize instruction content of the received first voice operation instruction.
The second semantic recognition module 214 is configured to perform semantic parsing on the instruction content of the first voice operation instruction to generate an execution instruction. The second control module 215 controls the second device 200 to perform a predetermined operation according to the execution instruction.
For the wake-up instruction, the second device 200 only needs to recognize, through its voice recognition module, that it is being woken up, so that its other circuits are powered on and start working. The wake-up instruction carries no complex semantics, so the second device 200 does not need to understand its specific meaning; it only needs to detect keywords in the text converted from the wake-up instruction to decide whether to wake up.
For the operation instruction, the content is more complex: converting the speech into text through the voice recognition module alone is not enough to carry out the operation, and the device must also perform semantic recognition on the converted content. Specifically, the second semantic recognition module 214 may be a natural language processing (NLP) module, which enables natural-language communication between human and machine so that the device can understand the meaning of natural-language text, that is, recognize the instruction content of the first voice operation instruction. The second control module may be an MCU, configured to control the second device 200 to perform the predetermined operation according to the instruction content.
In actual operation, suppose the second device 200 is an electric cooker and the first voice operation instruction is "start cooking". The second voice recognition module 213 recognizes the content text of the first voice operation instruction as "start cooking", the second semantic recognition module 214 recognizes the semantics of that text as starting the cooking program, and the second control module 215 controls the electric cooker to begin the predetermined operation of cooking.
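The recognition-parsing-control chain in the rice cooker example can be sketched as keyword intent matching; a real NLP module would be far richer, and the intent names here are invented for illustration.

```python
# Second semantic recognition module, reduced to a lookup table of intents.
INTENTS = {
    "start cooking": "START_COOK_PROGRAM",
    "keep warm": "KEEP_WARM_PROGRAM",
}

def semantic_parse(text):
    """Map recognized text to an execution instruction (None if unknown)."""
    return INTENTS.get(text.strip().lower())

def control(execution_instruction):
    """Second control module (MCU): dispatch the execution instruction."""
    if execution_instruction == "START_COOK_PROGRAM":
        return "cooking started"
    return "unsupported"

# ASR has already produced the text "start cooking"; parse and execute it.
print(control(semantic_parse("start cooking")))  # → cooking started
```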
In some embodiments, the second device 200 further comprises a second speech conversion module 210 and a second speech output module 211.
The second voice conversion module 210 is configured to convert wake-up and operation instructions into the corresponding voice instructions when the second device 200 intends to control other devices; when the second device 200 is itself the controlled target, it is further configured to convert the feedback information into a voice feedback instruction after the first voice operation instruction has been received and the predetermined operation has begun.
The second voice output module 211 is configured to play the voice instructions converted by the second voice conversion module 210, including the voice feedback instruction.
Specifically, the second speech conversion module 210 may be a TTS text-to-speech module, and the second speech output module 211 may be a speaker.
The feedback information is information generated by the second device 200 and sent to the first device 100 after the second device 200 starts to execute the predetermined operation corresponding to the first operation instruction; it reports the result of that operation. From the feedback information, the first device 100 can determine whether the second device 200 successfully received and executed the corresponding control instruction.
In such an embodiment, the first device 100 further comprises a first speech receiving module 112, a first speech recognition module 113, a first semantic recognition module 114, and a first control module 115.
The first voice receiving module 112 is configured to receive the voice feedback instruction, the first voice recognition module 113 is configured to recognize the instruction content of the received voice feedback instruction, the first semantic recognition module 114 is configured to perform semantic parsing on the instruction content of the voice feedback instruction to generate a confirmation instruction, and the first control module 115 confirms, according to the confirmation instruction, that the second device 200 has executed the predetermined operation.
Similar to the second device 200 recognizing the first voice operation instruction, the first voice receiving module 112 may be a microphone for receiving the voice feedback instruction, and the first voice recognition module 113 may be an ASR module for converting the received voice feedback instruction into text, that is, recognizing the instruction content of the voice feedback instruction. The first semantic recognition module 114 may be an NLP module for recognizing the meaning of the instruction content, that is, performing semantic parsing on the instruction content of the voice feedback instruction to generate a confirmation instruction, and the first control module 115 may be an MCU for confirming that the second device 200 has performed a predetermined operation according to the confirmation instruction.
Continuing the electric cooker example above: after cooking starts, the second device 200 converts the text feedback information "cooking started" into a voice feedback instruction through the second voice conversion module 210 and plays it through the second voice output module 211. The first device 100 receives the "cooking started" voice feedback instruction through the first voice receiving module 112; the first voice recognition module 113 recognizes the instruction content as the text "cooking started"; the first semantic recognition module 114 parses the semantics of that text and generates a confirmation instruction; and the first control module 115 confirms from the confirmation instruction that the second device 200 has completed the operation. At this point, the voice interaction between the first device 100 and the second device 200 is complete.
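The feedback leg can be sketched end to end: the second device reports "cooking started" as speech, and the first device recognizes it and confirms the operation. Module boundaries are simulated with plain functions and an assumed audio wrapper.

```python
def tts(text):
    """Second voice conversion module: text feedback -> 'speech'."""
    return f"<audio:{text}>"

def asr(audio):
    """First voice recognition module: 'speech' -> transcript."""
    return audio[len("<audio:"):-1]

def confirm(feedback_text, expected):
    """First semantic recognition + control module: generate a confirmation
    when the recognized feedback matches the expected operation result."""
    return feedback_text == expected

voice_feedback = tts("cooking started")   # played by the second device's speaker
recognized = asr(voice_feedback)          # heard by the first device's microphone
print(confirm(recognized, "cooking started"))  # → True
```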
Preferably, in some embodiments, if the first device 100 does not receive the feedback information within a predetermined time, it resends the wake-up instruction and the operation instruction in sequence at a first time interval until the feedback sent by the second device 200 is received. If the first device 100 still receives no feedback after the predetermined time, it issues an alarm prompt, for example a voice prompt, or sends a text message to the user's mobile device to warn that the second device 200 may be abnormal.
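The retry-and-alarm policy described above might be sketched as follows; the interval and deadline values and the function names are illustrative, not taken from the patent.

```python
import time

def control_with_retry(send, feedback_received, interval_s, deadline_s,
                       now=None, sleep=None):
    """send(): plays the wake-up + operation instructions once.
    feedback_received(): True once the voice feedback was recognized.
    Resend at interval_s until feedback arrives; alarm past deadline_s."""
    now = now or time.monotonic
    sleep = sleep or time.sleep
    start = now()
    while True:
        send()
        if feedback_received():
            return "confirmed"
        if now() - start >= deadline_s:
            return "alarm"  # e.g. voice prompt or message to the user's phone
        sleep(interval_s)

# Simulated run: feedback arrives on the third attempt.
attempts = {"n": 0}
def fake_send(): attempts["n"] += 1
def fake_feedback(): return attempts["n"] >= 3
print(control_with_retry(fake_send, fake_feedback, interval_s=0, deadline_s=60))
# → confirmed
```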
In some embodiments, the first device 100 further comprises a plurality of sensors for detecting relevant parameters in the environment and generating the first control instructions in dependence of the environmental parameters.
Specifically, for example, a smoke concentration sensor may be disposed in the first device 100 to detect the smoke concentration in the current environment; when the current smoke concentration exceeds a set standard, the first device 100 can send a first control instruction to the range hood to turn it on or increase its fan power. As another example, the first control instruction can be sent to the gas stove to turn it off or reduce its flame.
It should be noted that the first device 100 may be any household appliance, the sensor for detecting the environmental parameter may be disposed on a suitable device according to the design requirement of the actual product, and in the interactive system, a plurality of devices may cooperate to respectively detect different environmental parameters. For example, a sensor for detecting the concentration of oil smoke can be arranged in a kitchen appliance, so that the range hood or the gas stove can be conveniently controlled. For another example, the sensor for detecting the ambient light can be arranged in indoor electric appliances such as an air conditioner and the like, so that the brightness adjustment of a ceiling lamp or a television can be conveniently controlled.
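A sensor-driven trigger of the kind described, here for the smoke concentration example, could be sketched like this (the threshold value and the instruction wording are assumptions):

```python
SMOKE_THRESHOLD = 300  # hypothetical sensor units; the patent sets no value

def on_smoke_reading(concentration):
    """Generate a first control instruction from an environmental parameter,
    or None when the reading is within the set standard."""
    if concentration > SMOKE_THRESHOLD:
        # This text would be converted to speech and played at the range hood.
        return "range hood, please turn on and increase fan speed"
    return None

print(on_smoke_reading(450))  # → range hood, please turn on and increase fan speed
print(on_smoke_reading(100))  # → None
```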
In other embodiments, the first control instruction may also originate from the user, who sends it to the second device 200 via the first device 100.
It is to be understood that in the above interaction process the roles of the first device 100 and the second device 200 are not fixed; that is, the first device 100 that issues a control instruction in the above embodiment may be the controlled second device 200 in other embodiments.
In such an embodiment, the second device 200 is further configured to send a second voice control instruction to the first device 100, and the first device 100 is further configured to receive the second voice control instruction and perform a predetermined operation according to it.
For a specific interaction process, reference may be made to the explanations of corresponding parts in the above embodiments, and details are not described herein again.
In the description of the embodiments of the present application, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on the orientations and positional relationships shown in the drawings, and are only for convenience of describing the embodiments of the present application and for simplicity of description, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed and operated in a particular orientation, and therefore, should not be construed as limiting the embodiments of the present application.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the embodiments of the present application, "a plurality" means two or more unless specifically defined otherwise.
In the description of the embodiments of the present application, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; may be mechanically connected, may be electrically connected or may be in communication with each other; either directly or indirectly through intervening media, either internally or in any other relationship. Specific meanings of the above terms in the embodiments of the present application can be understood by those of ordinary skill in the art according to specific situations.
In embodiments of the present application, unless expressly stated or limited otherwise, the first feature "on" or "under" the second feature may comprise the first and second features being in direct contact, or may comprise the first and second features being in contact, not directly, but via another feature in between. Also, the first feature being "on," "above" and "over" the second feature includes the first feature being directly on and obliquely above the second feature, or merely indicating that the first feature is at a higher level than the second feature. A first feature being "under," "below," and "beneath" a second feature includes the first feature being directly under and obliquely below the second feature, or simply meaning that the first feature is at a lesser elevation than the second feature.
The above disclosure provides many different embodiments or examples for implementing different configurations of embodiments of the application. In order to simplify the disclosure of embodiments of the present application, specific example components and arrangements are described above. Of course, they are merely examples and are not intended to limit the present application. Furthermore, embodiments of the present application may repeat reference numerals and/or reference letters in the various examples, which have been repeated for purposes of simplicity and clarity and do not in themselves dictate a relationship between the various embodiments and/or arrangements discussed. In addition, embodiments of the present application provide examples of various specific processes and materials, but one of ordinary skill in the art may recognize applications of other processes and/or use of other materials.
In the description herein, reference to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions implementing logical functions, may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection having one or more wires (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be captured electronically, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the embodiments of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be implemented by a program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented as a software functional module and sold or used as a stand-alone product, the integrated module may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc.
Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. A voice interaction system is characterized by comprising a first device and a second device, wherein the first device and the second device both have a voice recognition function, the first device is used for sending a first voice control instruction to the second device, and the second device is used for receiving the first voice control instruction and executing a preset operation according to the first voice control instruction.
2. The voice interaction system of claim 1, wherein the first device comprises a first voice conversion module and a first voice output module, the first voice conversion module is configured to convert a first control instruction into the first voice control instruction, and the first voice output module is configured to play the first voice control instruction; the second device comprises a second voice receiving module, and the second device receives the first voice control instruction through the second voice receiving module.
3. The voice interaction system of claim 2, wherein the first control instruction comprises a first wake-up instruction, the first voice control instruction comprises a first voice wake-up instruction, the first voice conversion module is configured to convert the first wake-up instruction into the first voice wake-up instruction, the first voice output module is configured to play the first voice wake-up instruction, and the second device receives the first voice wake-up instruction through the second voice receiving module to wake up the second device.
4. The voice interaction system of claim 3, wherein the second device comprises a second voice recognition module to recognize the received first voice wake-up instruction to wake up the second device.
5. The voice interaction system of claim 4, wherein the first control instruction comprises a first operation instruction, the first voice control instruction comprises a first voice operation instruction, and the first voice conversion module is configured to convert the first operation instruction into the first voice operation instruction; after the second device is woken up, the first device plays the first voice operation instruction through the first voice output module, and the second device receives the first voice operation instruction through the second voice receiving module to perform the predetermined operation.
6. The voice interaction system of claim 5, wherein the second device comprises a second semantic recognition module and a second control module;
the second voice recognition module is used for recognizing the instruction content of the received first voice operation instruction;
the second semantic recognition module is used for performing semantic analysis on the instruction content of the first voice operation instruction to generate an execution instruction;
the second control module controls the second device to execute the predetermined operation according to the execution instruction.
7. The voice interaction system of claim 2, wherein the second device comprises a second voice conversion module and a second voice output module, the second voice conversion module is configured to generate feedback information and convert the feedback information into a voice feedback instruction after the second device starts to perform the predetermined operation, and the second voice output module is configured to play the voice feedback instruction.
8. The voice interaction system of claim 7, wherein the first device comprises a first voice receiving module, a first voice recognition module, a first semantic recognition module, and a first control module, the first voice receiving module is configured to receive the voice feedback instruction, the first voice recognition module is configured to recognize instruction content of the voice feedback instruction, the first semantic recognition module is configured to perform semantic parsing on the instruction content of the voice feedback instruction to generate a confirmation instruction, and the first control module confirms that the second device performs the predetermined operation according to the confirmation instruction.
9. The voice interaction system of claim 2, wherein the first device is configured to detect an environmental parameter and generate the first control instruction based on the environmental parameter.
10. The voice interaction system of claim 1, wherein the second device is configured to send a second voice control command to the first device, and the first device is configured to receive the second voice control command and perform a predetermined operation according to the second voice control command.
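The interaction defined by claims 1–8 can be summarized as follows: the first device converts a control instruction into speech and plays a wake-up instruction followed by an operation instruction; the second device receives both acoustically, recognizes the wake word, parses the operation semantically, executes it, and plays back a voice feedback instruction, which the first device in turn recognizes to confirm execution. Below is a minimal sketch of this flow. All class and method names are illustrative assumptions, not part of the patent (the claims define modules, not an API), and the acoustic voice channel is modeled as plain strings passed between objects:

```python
# Sketch of the claimed voice-channel interaction. Names are
# illustrative; the "voice channel" is a plain string.

class SecondDevice:
    """Controlled appliance: wakes on a keyword, then executes commands."""

    def __init__(self, wake_word="hello device"):
        self.wake_word = wake_word  # first voice wake-up instruction it listens for
        self.awake = False
        self.last_action = None

    def receive_voice(self, utterance):
        # Second voice receiving + recognition modules (claims 2-4):
        # until woken, only the wake word is acted on.
        if not self.awake:
            if utterance == self.wake_word:
                self.awake = True
                return "awake"
            return None
        # Second semantic recognition + control modules (claims 5-6):
        # the operation instruction becomes an execution instruction.
        self.last_action = utterance
        # Second voice conversion + output modules (claim 7):
        # spoken feedback that the predetermined operation has started.
        return f"started: {utterance}"


class FirstDevice:
    """Controlling device: speaks the wake word, then the operation."""

    def __init__(self, peer):
        self.peer = peer
        self.confirmed = False

    def control(self, operation):
        # First voice conversion + output modules (claims 2-3, 5):
        # play the wake-up instruction, then the operation instruction.
        self.peer.receive_voice(self.peer.wake_word)
        feedback = self.peer.receive_voice(operation)
        # First voice receiving/recognition/semantic modules (claim 8):
        # parse the voice feedback into a confirmation instruction.
        self.confirmed = feedback == f"started: {operation}"
        return self.confirmed


kettle = SecondDevice(wake_word="hello kettle")
hood = FirstDevice(peer=kettle)
print(hood.control("boil water"), kettle.last_action)  # → True boil water
```

Because the only shared interface is spoken language, neither device needs the other's network protocol, which is the cross-brand interoperability the abstract claims.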
CN201921958738.3U 2019-11-12 2019-11-12 Voice interaction system Active CN211828111U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201921958738.3U CN211828111U (en) 2019-11-12 2019-11-12 Voice interaction system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201921958738.3U CN211828111U (en) 2019-11-12 2019-11-12 Voice interaction system

Publications (1)

Publication Number Publication Date
CN211828111U true CN211828111U (en) 2020-10-30

Family

ID=73027088

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201921958738.3U Active CN211828111U (en) 2019-11-12 2019-11-12 Voice interaction system

Country Status (1)

Country Link
CN (1) CN211828111U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113157240A (en) * 2021-04-27 2021-07-23 百度在线网络技术(北京)有限公司 Voice processing method, device, equipment, storage medium and computer program product


Similar Documents

Publication Publication Date Title
CN107339786B (en) A kind of system and method for air-conditioning, regulation air-conditioning loudspeaker casting volume
CN104347072A (en) Remote-control unit control method and device and remote-control unit
CN106875945B (en) Voice control method and device and air conditioner
WO2019205134A1 (en) Smart home voice control method, apparatus, device and system
CN107477793A (en) A kind of air purifier, control system of air purifier and method
CN108259280B (en) Method and system for realizing indoor intelligent control
CN104538030A (en) Control system and method for controlling household appliances through voice
CN108198550A (en) A kind of voice collecting terminal and system
CN108848011B (en) Household appliance and voice interaction method and device thereof
CN109028478A (en) Air-conditioning remote control and air-conditioner control system
CN114172757A (en) Server, intelligent home system and multi-device voice awakening method
CN110632854A (en) Voice control method and device, voice control node and system and storage medium
CN211828111U (en) Voice interaction system
CN104713188A (en) Control method and system of air conditioner
CN111754997A (en) Control device and operation method thereof, and voice interaction device and operation method thereof
CN105223862A (en) A kind of household electrical appliance and audio control method thereof and system
CN114067798A (en) Server, intelligent equipment and intelligent voice control method
CN111380097B (en) Range hood, range hood kitchen range linkage system and control method thereof
CN114120999A (en) Equipment control method and device
CN113138559A (en) Device interaction method and device, electronic device and storage medium
CN111211953A (en) Intelligent home cloud service system based on natural language processing and Internet of things technology
CN108737772A (en) Range hood and interaction noise-reduction method
CN110648664A (en) Household appliance control method and device with storage function
WO2018023514A1 (en) Home background music control system
CN106681131A (en) Voice timing type household appliance and timing method thereof

Legal Events

Date Code Title Description
GR01 Patent grant