CN111354336B - Distributed voice interaction method, device, system and household appliance - Google Patents
- Publication number
- CN111354336B (application CN201811560666.7A)
- Authority
- CN
- China
- Prior art keywords
- voice
- command
- instruction
- voice command
- distributed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L15/28—Constructional details of speech recognition systems
- G10L15/30—Distributed recognition, e.g. in client-server systems, for mobile phones or network applications
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
Abstract
The application provides a distributed voice interaction method, device, system, household appliance and storage medium. The method comprises: receiving a voice command; determining whether to respond to the voice command according to feature information of the voice command; and, if it is determined to respond, outputting voice response information according to the voice command. In a distributed voice system, each voice device that receives a user's voice command judges from the command's feature information whether it needs to respond, and only the devices that determine they should respond reply to the user. This greatly reduces the number of voice devices responding to the same voice command and thus reduces redundant replies in the distributed voice system.
Description
Technical Field
The application belongs to the technical field of Internet of things, and particularly relates to a distributed voice interaction method, device and system and household appliances.
Background
At present, many household appliances are equipped with a voice module that can recognize a user's voice command and respond to it by voice, providing the user with a more anthropomorphic mode of human-computer interaction.
In the related art, a voice module replies to the user after receiving the user's voice command. In a distributed voice system, however, a plurality of voice modules exist in the same network and place at the same time, so several of them may respond to a user's voice command within a short period, producing multiple redundant replies and a poor user experience.
Disclosure of Invention
The method determines whether to respond to a voice command according to the command's feature information and replies to the user only when a response is warranted, thereby reducing the number of voice devices replying to the same voice command in a distributed voice system and reducing redundant replies to the user.
An embodiment of a first aspect of the present application provides a distributed voice interaction method, where the method includes:
receiving a voice instruction;
determining whether to respond to the voice command according to the characteristic information of the voice command;
and if the voice command is determined to be responded, outputting voice response information according to the voice command.
With reference to the foregoing first aspect of the embodiments, the present application proposes a first possible implementation manner of the foregoing first aspect of the embodiments, where the determining, according to feature information of the voice command, whether to respond to the voice command includes:
and determining whether to respond to the voice command according to the volume value of the voice command.
With reference to the first possible implementation manner of the foregoing first aspect, the present application proposes a second possible implementation manner of the foregoing first aspect, where the determining, according to a volume value of the voice command, whether to respond to the voice command includes:
if the volume value of the voice command is larger than or equal to the upper limit value of the preset volume interval, determining to respond to the voice command;
discarding the voice command if the volume value of the voice command is smaller than the lower limit value of the preset volume interval;
if the volume value of the voice command is within the preset volume interval, determining whether to respond to the voice command based on the characteristic information of the voice command received by a plurality of voice devices in the distributed voice system.
With reference to the foregoing first aspect of the embodiment, the present application proposes a third possible implementation manner of the foregoing first aspect of the embodiment, where the determining, according to feature information of the voice command, whether to respond to the voice command includes:
based on characteristic information of the voice command received by a plurality of voice devices in the distributed voice system, whether to respond to the voice command is determined.
With reference to the second or third possible implementation manner of the foregoing first aspect, the present application proposes a fourth possible implementation manner of the foregoing first aspect, where the determining, based on the feature information of the voice command received by a plurality of voice devices in the distributed voice system, whether to respond to the voice command includes:
broadcasting an instruction query packet corresponding to the voice instruction in a distributed voice system;
receiving instruction confirmation packets returned by a plurality of voice devices in the distributed voice system;
and determining whether to respond to the voice command according to the characteristic information included in each command confirmation packet.
With reference to the fourth possible implementation manner of the foregoing first aspect, the present application proposes a fifth possible implementation manner of the foregoing first aspect, wherein the determining, according to feature information included in each command confirmation packet, whether to respond to the voice command includes:
If the volume value of the voice command is larger than the volume value in each command confirmation packet, determining to respond to the voice command;
and discarding the voice command if the volume value of the voice command is smaller than the volume value in at least one command confirmation packet.
With reference to the fourth possible implementation manner of the foregoing first aspect, the present application proposes a sixth possible implementation manner of the foregoing first aspect, wherein the determining, according to feature information included in each command confirmation packet, whether to respond to the voice command includes:
if the instruction receiving time of the voice instruction is earlier than the instruction receiving time in each instruction confirmation packet, determining to respond to the voice instruction;
and discarding the voice command if the command receiving time of the voice command is later than the command receiving time in at least one command confirmation packet.
In combination with the foregoing first aspect embodiment, the present application proposes a seventh possible implementation manner of the foregoing first aspect embodiment, where the method further includes:
receiving an instruction query packet broadcast by voice equipment in a distributed voice system;
Determining whether a voice instruction corresponding to the instruction identification information is received or not according to the instruction identification information included in the instruction query packet;
and if it is determined that the voice command corresponding to the command identification information has been received, generating a command confirmation packet and sending the command confirmation packet to the voice device that broadcast the command query packet.
In combination with the foregoing first aspect embodiment, the present application proposes an eighth possible implementation manner of the foregoing first aspect embodiment, where the method further includes:
and if the voice command is determined to be responded, broadcasting response notification information in the distributed voice system, wherein the response notification information comprises command identification information and a response identifier of the voice command.
In combination with the foregoing first aspect embodiment, the present application proposes a ninth possible implementation manner of the foregoing first aspect embodiment, where the method further includes:
receiving response notification information broadcasted in a distributed voice system, wherein the response notification information comprises instruction identification information and a response identifier;
determining whether the voice instruction corresponding to the instruction identification information is received or not according to the instruction identification information;
and if it is determined that the voice command corresponding to the command identification information has been received, discarding the voice command corresponding to the command identification information according to the response identifier.
An embodiment of a second aspect of the present application provides a distributed voice interaction device, including:
the receiving module is used for receiving the voice instruction;
the determining module is used for determining whether to respond to the voice instruction according to the characteristic information of the voice instruction received by the receiving module;
and the output module is used for outputting voice response information according to the voice instruction if the determination module determines to respond to the voice instruction.
With reference to the second aspect of the embodiment, the present application proposes a first possible implementation manner of the second aspect of the embodiment, where the determining module is configured to:
if the volume value of the voice command is larger than or equal to the upper limit value of the preset volume interval, determining to respond to the voice command;
discarding the voice command if the volume value of the voice command is smaller than the lower limit value of the preset volume interval;
if the volume value of the voice command is within the preset volume interval, determining whether to respond to the voice command based on the characteristic information of the voice command received by a plurality of voice devices in the distributed voice system.
With reference to the first possible implementation manner of the foregoing second aspect of the embodiments, the present application proposes a second possible implementation manner of the foregoing second aspect of the embodiments, where, if a volume value of the voice command is located within the preset volume interval, the determining module is further configured to:
Broadcasting an instruction query packet corresponding to the voice instruction in a distributed voice system;
receiving instruction confirmation packets returned by a plurality of voice devices in the distributed voice system;
and determining whether to respond to the voice command according to the characteristic information included in each command confirmation packet.
An embodiment of a third aspect of the present application provides an electrical home appliance, including a memory and a processor;
the memory has executable program code stored therein;
the processor reads the executable program code and runs a program corresponding to the executable program code to implement the distributed voice interaction method described in the embodiment of the first aspect.
An embodiment of a fourth aspect of the present application provides a distributed voice system, including a plurality of household appliances according to the embodiment of the third aspect.
An embodiment of a fifth aspect of the present application proposes a non-transitory computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the distributed voice interaction method according to the embodiment of the first aspect.
The technical scheme provided in the embodiment of the application has at least the following technical effects or advantages:
In the distributed voice system, when a voice device receives a user's voice command, it judges from the command's feature information whether it needs to respond, and only the devices that determine they should respond reply to the user. This greatly reduces the number of voice devices responding to the same voice command and reduces redundant replies in the distributed voice system.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to designate like parts throughout the figures. In the drawings:
FIG. 1 is a schematic diagram of a distributed voice system according to one embodiment of the present application;
FIG. 2 is a flow chart of a distributed voice interaction method according to one embodiment of the present application;
FIG. 3 is a flow chart of a distributed voice interaction method according to another embodiment of the present application;
FIG. 4 is a flow chart of a distributed voice interaction method according to another embodiment of the present application;
FIG. 5 is a flow chart of a distributed voice interaction method according to another embodiment of the present application;
FIG. 6 is a flow chart of a distributed voice interaction method according to another embodiment of the present application;
FIG. 7 is a schematic structural diagram of a distributed voice interaction device according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The embodiments of the present application are mainly directed at the following problem in the related art: a voice device replies to the user after receiving the user's voice command, and in a distributed voice system, where a plurality of voice devices exist at the same time, several devices may reply to the user within a short period, causing multiple redundant replies.
In the distributed voice interaction method of the embodiments of the present application, after receiving a user's voice command, a voice device determines whether to respond to it according to the command's feature information, and outputs voice response information according to the command only when it determines to respond. If it determines that it does not need to respond, it discards the voice command and does not reply to the user. In the distributed voice system, each voice device that receives a user's voice command thus judges for itself, from the feature information, whether it needs to respond, and only the devices that determine they should respond reply to the user. This greatly reduces the number of voice devices responding to the same voice command and reduces redundant replies in the distributed voice system.
The following describes a distributed voice interaction method, a device, a system, a home appliance and a storage medium according to embodiments of the present application with reference to the accompanying drawings.
As shown in fig. 1, the distributed voice system includes a plurality of voice devices that are communicatively coupled via a local area network; fig. 1 schematically depicts only voice device 1, voice device 2 and voice device M.
When not in use, the voice devices in the distributed voice system are in a dormant state, in which they neither recognize nor respond to voice in the surrounding environment. When a user needs to interact with a voice device in the distributed voice system, the device must first be awakened from the dormant state into the working state. In the embodiments of the present application, the voice devices in the distributed voice system share the same wake-up instruction, which may be the brand name of the voice device, a wake-up phrase set by the device manufacturer, a wake-up phrase customized by the user, and so on. When the user speaks the wake-up instruction, one or more voice devices in the distributed voice system receive it and switch from the dormant state to the working state, after which the user can interact with them. Each voice device in the distributed voice system performs the distributed voice interaction method of the embodiments of the present application.
Fig. 2 is a flow chart of a distributed voice interaction method according to an embodiment of the present application. As shown in fig. 2, the distributed voice interaction method includes the following steps:
Step 101: and receiving a voice instruction.
When the voice devices in the distributed voice system have been awakened and are in the working state, the user speaks a voice command, and one or more voice devices in the system receive it.
Step 102: and determining whether to respond to the voice command according to the characteristic information of the voice command.
Each voice device receiving the voice command determines whether to respond to the voice command according to the characteristic information of the voice command.
Step 103: and if the voice command is determined to be responded, outputting voice response information according to the voice command.
When it is determined to respond to the voice command, the voice command is parsed and voice response information corresponding to it is output. For example, if the voice command is "how is the weather today", the output voice response information may report the current day's weather.
With the technical scheme provided by this embodiment, each voice device in the distributed voice system determines by itself whether to respond to the voice command. Only the devices that decide to respond reply to the user; those that decide not to respond discard the command. This reduces the number of devices responding to the same voice command and reduces redundant replies in the distributed voice system, while also saving system resources on the devices that do not respond.
Since the processing procedure of each voice device in the distributed voice system is the same, some embodiments of the present application take a first voice device as an example for the detailed description, where the first voice device is any voice device in the distributed voice system; for convenience, the other voice devices in the system are referred to as second voice devices. The processing procedure of each second voice device is the same as that of the first voice device.
In an embodiment of the present application, the feature information of the voice command is its volume value: the first voice device determines whether to respond according to the volume value at which it received the voice command.
Because the voice devices of a distributed voice system are usually arranged in the same place, the user's distance to each device differs when a voice command is spoken there: a device closer to the user receives the command at a larger volume value, and a device farther away receives it at a smaller one. By deciding whether to respond according to the volume value it received, the devices close to the user reply while those far away do not, reducing the number of replying devices and the redundant replies to the user.
In another embodiment of the present application, each voice device is preconfigured with a preset volume interval. The upper limit of the interval is the volume value detected at a distance of a first preset threshold from the sound source, and the lower limit is the volume value detected at a distance of a second preset threshold from the sound source, where the first preset threshold is smaller than the second preset threshold. Both limits are statistical values: the volume of speech from the sound source is measured multiple times at the first-threshold distance, and a statistic such as the mean or median of those measurements is taken as the upper limit; likewise, repeated measurements at the second-threshold distance yield the lower limit. As an example, the first preset threshold may be 0.5, 1 or 2 meters, the second preset threshold may be 5, 8 or 10 meters, and the preset volume interval may be [15 dB, 55 dB), [20 dB, 60 dB) or [25 dB, 65 dB). The embodiments of the present application do not limit the specific values; in practice the two thresholds can be set as required and the corresponding preset volume interval measured.
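As a rough illustration of how such a preset volume interval might be calibrated from repeated measurements (the helper name and sample values are assumptions for illustration, not part of the patent):

```python
from statistics import mean, median

def calibrate_volume_interval(near_samples_db, far_samples_db, use_median=False):
    """Derive the preset volume interval [lower, upper) from calibration data.

    near_samples_db: volume values (dB) measured repeatedly at the first
                     preset threshold distance (close to the sound source).
    far_samples_db:  volume values (dB) measured repeatedly at the second
                     preset threshold distance (far from the sound source).
    """
    stat = median if use_median else mean
    upper = stat(near_samples_db)  # statistic of the close-range measurements
    lower = stat(far_samples_db)   # statistic of the long-range measurements
    if lower >= upper:
        raise ValueError("far-distance volume should be below near-distance volume")
    return lower, upper

# e.g. four measurements taken at 1 m and four at 8 m from the source
lower, upper = calibrate_volume_interval([54, 56, 55, 57], [19, 21, 20, 20])
```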
For the embodiment above, which determines whether to respond according to the volume value of the voice command, another embodiment of the present application provides a specific determination procedure. As shown in fig. 3, the first voice device determines whether to respond to the voice command through steps 101, S1-S5 and 103:
step 101: the first voice device receives a voice command.
S1: the first voice device judges whether the volume value of the voice command is larger than or equal to the upper limit value of the preset volume interval, if so, the step S2 is executed, and if not, the step S3 is executed.
S2: the first voice device determines to respond to the voice command, after which step 103 is performed.
When the volume value is greater than or equal to the upper limit of the preset volume interval, the distance between the first voice device and the user is smaller than the first preset threshold, i.e. the device is very close to the user. The device therefore decides directly to respond, so that the voice device nearest the user can reply quickly and shorten the user's waiting time. Moreover, its voice response reaches the user's position at a relatively high volume, so the user can hear the reply clearly, improving the quality and efficiency of the voice interaction between the voice device and the user.
S3: the first voice device judges whether the volume value of the voice command is smaller than the lower limit value of the preset volume interval, if yes, the step S4 is executed, and if not, the step S5 is executed.
S4: the first voice device discards the voice command and ends the operation.
When the volume value is smaller than the lower limit of the preset volume interval, the distance between the first voice device and the user is greater than the second preset threshold, i.e. the device is far from the user. The device therefore discards the voice command and does not reply, reducing the number of voice devices responding to the command and the redundant replies to the user.
S5: based on the characteristic information of the voice command received by the plurality of voice devices in the distributed voice system, determining whether to respond to the voice command, if yes, executing step 103, and if not, executing step S4.
Step 103: and if the voice command is determined to be responded, outputting voice response information according to the voice command.
If in step S3 the first voice device determines that the volume value of the voice command is not smaller than the lower limit of the preset volume interval, the volume value lies within the interval, which indicates that the distance between the device and the user is greater than the first preset threshold and no greater than the second. In this case the first voice device performs network bidding in the distributed voice system and decides whether to respond according to the bidding result, i.e. it combines the feature information of the voice command as received by the other voice devices to determine whether it should respond itself. By taking into account all the voice devices in the distributed voice system that received the voice command, only a few devices, or even a single device, respond, reducing the number of responding devices and the redundant replies to the user.
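The three-way decision of steps S1-S5 can be sketched as follows (function and parameter names are illustrative, not from the patent; the network bidding of step S5 is abstracted into a callback):

```python
def decide_response(volume_db, lower, upper, bid_in_network):
    """Sketch of steps S1-S5 of the decision procedure.

    volume_db:      volume value of the received voice command.
    lower, upper:   bounds of the preset volume interval [lower, upper).
    bid_in_network: callable that performs the network bidding of step S5
                    and returns True if this device should respond.
    """
    if volume_db >= upper:    # S1/S2: very close to the user -> respond at once
        return True
    if volume_db < lower:     # S3/S4: far from the user -> discard the command
        return False
    return bid_in_network()   # S5: within the interval -> defer to network bidding

# a device that heard the command at 58 dB with interval [20, 55.5) responds directly
assert decide_response(58, 20, 55.5, lambda: False) is True
```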
One embodiment of the present application provides a specific procedure for determining whether to respond based on the feature information of the voice command received by a plurality of voice devices in the distributed voice system. As shown in fig. 4, when the volume value of the voice command is within the preset volume interval, the first voice device makes this determination through steps 101, S1-S4, A1-A3 and 103:
step 101: the first voice device receives a voice command.
S1: the first voice device judges whether the volume value of the voice command is larger than or equal to the upper limit value of the preset volume interval, if so, the step S2 is executed, and if not, the step S3 is executed.
S2: the first voice device determines to respond to the voice command, after which step 103 is performed.
S3: the first voice device judges whether the volume value of the voice command is smaller than the lower limit value of the preset volume interval, if yes, the step S4 is executed, and if not, the step A1 is executed.
S4: the first voice device discards the voice command and ends the operation.
When the volume value is smaller than the lower limit of the preset volume interval, the distance between the first voice device and the user is greater than the second preset threshold, i.e. the device is far from the user. The device therefore discards the voice command and does not reply, reducing the number of voice devices responding to the command and the redundant replies to the user.
A1: the first voice device broadcasts an instruction query packet corresponding to the voice instruction in the distributed voice system.
When the first voice device determines that the volume value of the voice command is within the preset volume interval, it generates an instruction query packet comprising a first packet identifier, the device identifier of the first voice device, and instruction identification information corresponding to the voice command. The first packet identifier is a character sequence allocated by the first voice device that uniquely identifies the instruction query packet. The instruction identification information uniquely identifies the voice command: the first voice device may use the audio data of the voice command directly as the instruction identification information; it may first convert the voice command to text and use the resulting text as the instruction identification information; or it may hash that text and use the resulting hash value as the instruction identification information.
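The structure of such an instruction query packet, using the hash-of-text variant of the instruction identification information, might look like the following sketch. The field names and the choice of SHA-256 are assumptions for illustration; the application only requires a unique packet identifier, a device identifier, and instruction identification information.

```python
import hashlib
import uuid

def build_query_packet(device_id: str, command_text: str) -> dict:
    """Sketch of an instruction query packet: a unique packet id, the
    broadcasting device's id, and a hash of the recognized command text
    serving as the instruction identification information."""
    instr_id = hashlib.sha256(command_text.encode("utf-8")).hexdigest()
    return {
        "packet_id": uuid.uuid4().hex,  # first packet identifier (unique)
        "device_id": device_id,         # identifies the first voice device
        "instr_id": instr_id,           # instruction identification information
    }
```

The packet would then be broadcast over the local area network, as described next.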
After the first voice device generates the instruction query packet, it broadcasts the packet in the distributed voice system over the local area network. After receiving the instruction query packet, each second voice device (any device in the distributed voice system other than the first voice device) determines, according to the instruction identification information included in the packet, whether it has also received the voice command corresponding to that information.
Specifically, the second voice device parses the instruction identification information in the instruction query packet. If the instruction identification information is audio data, the second voice device converts the audio data to text. It also converts every voice command it has itself received into text, and computes the similarity between the text corresponding to the instruction identification information and the text of each currently received voice command. If a voice command whose similarity exceeds a preset value exists, that command is taken to be the voice command corresponding to the instruction identification information, and the second voice device concludes that it has also received the command; otherwise, it concludes that it has not. The preset value may be, for example, 85% or 90%.
If the instruction identification information is text, the second voice device computes the similarity between that text and the text of each currently received voice command in the same manner, and thereby determines whether it has also received the corresponding voice command.
If the instruction identification information is a hash value, the second voice device converts all voice commands it has received into text, hashes the text of each command to obtain a per-command hash value, and compares the hash value in the instruction query packet with each of these. If a voice command whose similarity exceeds the preset value exists, the second voice device determines that it has also received the voice command corresponding to the instruction identification information; otherwise, it determines that it has not.
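The text-similarity matching described above can be sketched with a standard string-similarity ratio; the 0.85 threshold mirrors the 85% preset value given as an example, while the function names are hypothetical. (For the hash-value variant, note that typical cryptographic hashes of non-identical texts differ completely, so in practice that comparison would reduce to an equality check.)

```python
from difflib import SequenceMatcher

THRESHOLD = 0.85  # the 85% preset value used as an example above

def matches_local_command(query_text: str, local_texts: list) -> bool:
    """Check whether any locally received command's text is similar
    enough to the text derived from the instruction query packet."""
    return any(
        SequenceMatcher(None, query_text, t).ratio() > THRESHOLD
        for t in local_texts
    )
```

For instance, "turn on the light" compared against a locally heard "turn on the lights" scores well above the threshold, whereas an unrelated command such as "play music" does not.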
After the second voice device determines in the above manner that it has also received the voice command, it generates an instruction confirmation packet and sends it to the first voice device according to the device identifier of the first voice device included in the instruction query packet. The instruction confirmation packet comprises a second packet identifier, the device identifier of the second voice device, and the characteristic information of the voice command as received by the second voice device. The second packet identifier is a character sequence allocated by the second voice device that uniquely identifies the instruction confirmation packet. The characteristic information may be the volume value at which, or the time at which, the second voice device received the voice command.
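A matching sketch of the instruction confirmation packet, carrying both kinds of characteristic information described above, could look as follows; again the field names are assumptions for illustration.

```python
import time
import uuid

def build_confirmation_packet(device_id: str, volume: float) -> dict:
    """Sketch of an instruction confirmation packet returned to the
    first voice device by a second voice device that also heard the
    command, carrying its characteristic information."""
    return {
        "packet_id": uuid.uuid4().hex,  # second packet identifier (unique)
        "device_id": device_id,         # identifies the second voice device
        "volume": volume,               # characteristic info: received volume
        "recv_time": time.time(),       # characteristic info: receipt time
    }
```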
A2: the first voice device receives instruction acknowledgement packets returned by a plurality of second voice devices in the distributed voice system.
A3: the first voice device determines whether to respond to the voice command according to the feature information included in each command confirmation packet, if so, step 103 is executed, and if not, step S4 is executed.
Step 103: and if the voice command is determined to be responded, outputting voice response information according to the voice command.
When the volume value of the voice command is within the preset volume interval, the first voice device broadcasts a command query packet in the distributed voice system, so that command confirmation packets returned by the second voice device which also receives the voice command are obtained, and the first voice device determines whether to respond to the voice command by combining characteristic information included in the command confirmation packets, so that the number of voice devices responding to the voice command in the distributed voice system can be reduced, and redundant replies to users are reduced.
In another embodiment of the present application, whether to respond is determined directly based on the characteristic information of the voice command received by a plurality of voice devices in the distributed voice system, without first deciding according to the volume value of the voice command. As shown in fig. 5, this specifically includes:
Step 101: and receiving a voice instruction.
Step 1021: based on characteristic information of a voice command received by a plurality of voice devices in the distributed voice system, whether to respond to the voice command is determined.
Step 103: and if the voice command is determined to be responded, outputting voice response information according to the voice command.
After receiving the voice command, the voice device directly performs in-network arbitration in the distributed voice system and determines whether to respond according to the arbitration result. Because the decision takes into account the situation of all voice devices that received the voice command, only a few devices, or even only one, respond in the distributed voice system, which reduces the number of voice devices responding to the voice command and reduces redundant replies to the user. The system resources of the voice devices that do not respond are also saved.
An embodiment of the present application provides the specific process of making this determination based on the characteristic information of the voice commands received by a plurality of voice devices in the distributed voice system. The flowchart of this distributed voice interaction method is shown in fig. 6: after receiving the voice command, whether to respond is determined directly based on that characteristic information.
As shown in fig. 6, the first voice device makes this determination through the following operations of steps 101, A1-A3, S4, and 103:
step 101: the first voice device receives a voice command.
A1: the first voice device broadcasts an instruction query packet corresponding to the voice instruction in the distributed voice system.
After receiving the voice command, the first voice device generates an instruction query packet comprising a first packet identifier, the device identifier of the first voice device, and instruction identification information corresponding to the voice command. The first packet identifier is a character sequence allocated by the first voice device that uniquely identifies the instruction query packet. The instruction identification information uniquely identifies the voice command: the first voice device may use the audio data of the voice command directly as the instruction identification information; it may first convert the voice command to text and use the resulting text as the instruction identification information; or it may hash that text and use the resulting hash value as the instruction identification information.
After the first voice device generates the instruction query packet, it broadcasts the packet in the distributed voice system over the local area network. After receiving the instruction query packet, each second voice device (any device in the distributed voice system other than the first voice device) determines, according to the instruction identification information included in the packet, whether it has also received the voice command corresponding to that information.
Specifically, the second voice device parses the instruction identification information in the instruction query packet. If the instruction identification information is audio data, the second voice device converts the audio data to text. It also converts every voice command it has itself received into text, and computes the similarity between the text corresponding to the instruction identification information and the text of each currently received voice command. If a voice command whose similarity exceeds a preset value exists, that command is taken to be the voice command corresponding to the instruction identification information, and the second voice device concludes that it has also received the command; otherwise, it concludes that it has not. The preset value may be, for example, 85% or 90%.
If the instruction identification information is text, the second voice device computes the similarity between that text and the text of each currently received voice command in the same manner, and thereby determines whether it has also received the corresponding voice command.
If the instruction identification information is a hash value, the second voice device converts all voice commands it has received into text, hashes the text of each command to obtain a per-command hash value, and compares the hash value in the instruction query packet with each of these. If a voice command whose similarity exceeds the preset value exists, the second voice device determines that it has also received the voice command corresponding to the instruction identification information; otherwise, it determines that it has not.
After the second voice device determines in the above manner that it has also received the voice command, it generates an instruction confirmation packet and sends it to the first voice device according to the device identifier of the first voice device included in the instruction query packet. The instruction confirmation packet comprises a second packet identifier, the device identifier of the second voice device, and the characteristic information of the voice command as received by the second voice device. The second packet identifier is a character sequence allocated by the second voice device that uniquely identifies the instruction confirmation packet. The characteristic information may be the volume value at which, or the time at which, the second voice device received the voice command.
A2: the first voice device receives instruction acknowledgement packets returned by a plurality of second voice devices in the distributed voice system.
A3: the first voice device determines whether to respond to the voice command according to the feature information included in each command confirmation packet, if so, step 103 is executed, and if not, step S4 is executed.
S4: the first voice device discards the voice command and ends the operation.
When step S4 is reached from step A3, the arbitration result indicates that another voice device is better placed to respond. The first voice device therefore discards the voice command and does not reply to the user, which reduces the number of voice devices responding to the voice command and reduces redundant replies to the user.
Step 103: and if the voice command is determined to be responded, outputting voice response information according to the voice command.
The first voice device acquires the command confirmation packet returned by the second voice device which also receives the voice command by broadcasting the command query packet in the distributed voice system, and the first voice device determines whether to respond to the voice command by combining characteristic information included in the command confirmation packet, so that the number of voice devices responding to the voice command in the distributed voice system can be reduced, and redundant replies to users can be reduced.
For the instruction confirmation packets in steps A2 and A3 above, in one embodiment of the present application the characteristic information included in each packet is a volume value. The first voice device compares the volume value of the voice command it received with the volume value in each instruction confirmation packet: if its own volume value is greater than the volume value in every packet, it determines to respond to the voice command; if its volume value is smaller than the volume value in any packet, it discards the voice command.
By comparing the volume value of the voice command it received with the volume value in each instruction confirmation packet, the first voice device can determine whether, among all voice devices that received the command, it is the one closest to the user: when its own volume value exceeds the volume value in every packet, it is the closest device and therefore determines to respond. The user can then clearly hear the voice response output by the first voice device, which improves the quality and efficiency of voice interaction between the voice device and the user.
Conversely, when the volume value of the voice command received by the first voice device is smaller than the volume value in at least one instruction confirmation packet, the first voice device is not the closest to the user; another voice device is closer. The first voice device therefore discards the voice command and does not respond, reducing the number of voice devices responding to the command and reducing redundant replies to the user.
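The loudest-device arbitration rule described above reduces to a single comparison against all returned confirmation packets; the function name is hypothetical. Note that with no confirmation packets at all, the device responds by default, since no other device reported hearing the command.

```python
def should_respond_by_volume(own_volume: float, packet_volumes: list) -> bool:
    """Respond only if this device heard the command loudest, i.e. its
    volume value exceeds the volume in every instruction confirmation
    packet (vacuously true if no packets were returned)."""
    return all(own_volume > v for v in packet_volumes)
```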
In another embodiment of the present application, the characteristic information included in each instruction confirmation packet is an instruction receiving time. The first voice device compares the time at which it received the voice command with the instruction receiving time in each instruction confirmation packet: if its own receiving time is earlier than the receiving time in every packet, it determines to respond to the voice command; if its receiving time is later than the receiving time in at least one packet, it discards the voice command.
By comparing its own receiving time with the receiving time in each instruction confirmation packet, the first voice device can determine whether, among all voice devices that received the command, it received it earliest: when its receiving time is earlier than the time in every packet, it is the earliest receiver and therefore determines to respond. Having the device that received the voice command earliest reply to the user shortens the user's waiting time and speeds up voice interaction.
Conversely, when the first voice device's receiving time is later than the receiving time in at least one instruction confirmation packet, it is not the earliest receiver. The first voice device therefore discards the voice command and does not respond, reducing the number of voice devices responding to the command and reducing redundant replies to the user.
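The earliest-receiver rule is the mirror image of the volume rule, comparing timestamps instead of volume values; again the function name is hypothetical.

```python
def should_respond_by_time(own_time: float, packet_times: list) -> bool:
    """Respond only if this device received the command earliest, i.e.
    its receipt timestamp precedes the receiving time in every
    instruction confirmation packet."""
    return all(own_time < t for t in packet_times)
```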
In another embodiment of the present application, after determining to respond to the voice command, the first voice device further broadcasts response notification information in the distributed voice system over the local area network, where the response notification information includes the instruction identification information of the voice command and a response identifier. The response identifier indicates that a voice device has already determined to respond to the voice command corresponding to the instruction identification information; it may be a pre-configured character, such as 0 or 1.
On receiving the response notification information, a second voice device determines, according to the instruction identification information it contains, whether it has also received the corresponding voice command; the determination process is the same as in step A1 and is not repeated here. Once the second voice device confirms that it received the command, it concludes from the response identifier that it no longer needs to respond and discards the voice command. Its processing of the command is thus terminated in time, which saves the second voice device's system resources, reduces the number of voice devices responding to the command, and reduces redundant replies to the user.
In another embodiment of the present application, the first voice device may likewise receive response notification information broadcast by other voice devices. It determines, according to the instruction identification information included in the notification, whether it has itself received the corresponding voice command; if so, it concludes from the response identifier that it no longer needs to respond and discards the command, thereby terminating its processing in time, saving system resources, reducing the number of voice devices responding to the command, and reducing redundant replies to users.
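Handling a response notification can be sketched as dropping the matching entry from a device's queue of pending commands. The field names (`instr_id`, `response_flag`) and the dictionary-based pending queue are assumptions for illustration, not structures defined by this application.

```python
def on_response_notification(notification: dict, pending: dict) -> None:
    """Drop a locally pending command once another device announces,
    via the response identifier, that it will answer it.
    `pending` maps instruction identification info -> queued command."""
    instr_id = notification.get("instr_id")
    if notification.get("response_flag") == 1 and instr_id in pending:
        # Another device will respond; terminate our processing of this
        # command in time to save system resources.
        del pending[instr_id]
```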
In the distributed voice system, each voice device determines whether to respond to the voice command by itself through the technical scheme provided by any one of the embodiments. Only the voice equipment which determines to respond replies to the user, and the voice equipment which determines not to respond discards the voice instruction, so that the number of the voice equipment which responds to the same voice instruction is reduced, and the redundant reply condition in the distributed voice system is reduced. And system resources of the voice equipment which does not respond can be saved.
In order to implement the foregoing embodiments, the embodiments of the present application further provide a distributed voice interaction device, as shown in fig. 7, where the device includes: a receiving module 100, a determining module 200 and an output module 300.
The receiving module 100 is configured to receive a voice command.
The determining module 200 is configured to determine whether to respond to the voice command according to the feature information of the voice command received by the receiving module 100.
The output module 300 is configured to output voice response information according to the voice command if the determination module 200 determines to respond to the voice command.
In one possible implementation manner of the embodiment of the present application, the determining module 200 is configured to determine whether to respond to the voice command according to the volume value of the voice command.
Specifically, the determining module 200 is further configured to determine to respond to the voice command if the volume value of the voice command is greater than or equal to the upper limit value of the preset volume interval; if the volume value of the voice command is smaller than the lower limit value of the preset volume interval, discarding the voice command; if the volume value of the voice command is within the preset volume interval, determining whether to respond to the voice command based on characteristic information of the voice command received by a plurality of voice devices in the distributed voice system.
In another possible implementation manner of the embodiment of the present application, the determining module 200 is further configured to determine whether to respond to the voice command directly based on the feature information of the voice command received by the plurality of voice devices in the distributed voice system.
In an implementation manner of determining whether to respond based on the feature information of the voice instructions received by the plurality of voice devices in the distributed voice system, the determining module 200 is further configured to broadcast an instruction query packet corresponding to the voice instructions in the distributed voice system; receiving instruction confirmation packets returned by a plurality of voice devices in a distributed voice system; and determining whether to respond to the voice command according to the characteristic information included in each command confirmation packet.
In one possible implementation manner of the embodiment of the present application, the feature information included in the instruction acknowledgement packet is a volume value; the determining module 200 is further configured to determine to respond to the voice command if the volume value of the voice command is greater than the volume value in each command confirmation packet; if the volume value of the voice command is smaller than the volume value in at least one command confirmation packet, discarding the voice command.
In another possible implementation manner of the embodiment of the present application, the feature information included in the instruction acknowledgement packet is an instruction receiving time; the determining module 200 is further configured to determine to respond to the voice command if the command receiving time of the voice command is earlier than the command receiving time in each command confirmation packet; if the instruction receiving time of the voice instruction is later than the instruction receiving time in at least one instruction confirmation packet, discarding the voice instruction.
In a possible implementation manner of the embodiment of the present application, the apparatus further includes: the instruction confirmation module is used for receiving an instruction query packet broadcast by voice equipment in the distributed voice system; determining whether the voice instruction corresponding to the instruction identification information is received or not according to the instruction identification information included in the instruction query packet; if the voice command corresponding to the command identification information is determined to be received by the voice device, a command confirmation packet is generated, and the command confirmation packet is sent to the voice device.
In a possible implementation manner of the embodiment of the present application, the apparatus further includes: and the response notification module is used for broadcasting response notification information in the distributed voice system if the voice command is determined to be responded, wherein the response notification information comprises command identification information of the voice command and a response identifier.
In another possible implementation manner of the embodiment of the present application, the response notification module is further configured to receive response notification information broadcast in the distributed voice system, where the response notification information includes instruction identification information of a voice instruction and a response identifier; determining whether the voice instruction corresponding to the instruction identification information is received or not according to the instruction identification information; if the voice command corresponding to the command identification information is determined to be received by the user, discarding the voice command corresponding to the command identification information according to the response identifier.
It should be noted that the foregoing explanation of the embodiment of the distributed voice interaction method is also applicable to the distributed voice interaction device of this embodiment, so that the description thereof is omitted herein.
In order to achieve the above embodiments, an embodiment of the present application further provides a household electrical appliance comprising a memory and a processor. The memory stores executable program code; the processor reads the executable program code and runs a program corresponding to it, so as to implement the distributed voice interaction method according to any of the above embodiments. The household electrical appliance may be an air conditioner, a washing machine, a refrigerator, a sweeping robot, a microwave oven, or the like.
In order to implement the foregoing embodiments, another embodiment of the present application further provides a distributed voice system, where the distributed voice system includes a plurality of home devices as described in the foregoing embodiments, and each home device is capable of implementing the distributed voice interaction method described in any one of the foregoing embodiments.
In order to implement the above embodiments, an embodiment of the present application further proposes a non-transitory computer readable storage medium, on which a computer program is stored, which computer program, when executed by a processor, implements a distributed voice interaction method according to any of the above embodiments.
It should be noted that:
the algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general-purpose devices may also be used with the teachings herein; the required structure for such devices is apparent from the description above. In addition, the present application is not directed to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings described herein, and the description of specific languages above is provided to disclose preferred embodiments of the present application.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the present application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the application, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the application and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be construed as reflecting the intention that: i.e., the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
Those skilled in the art will appreciate that the modules in the apparatus of an embodiment may be adaptively changed and disposed in one or more apparatuses different from that embodiment. The modules, units, or components of the embodiments may be combined into one module, unit, or component, and they may furthermore be divided into a plurality of sub-modules, sub-units, or sub-components. All features disclosed in this specification (including any accompanying claims, abstract, and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations in which at least some of such features and/or processes or units are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include certain features that are included in other embodiments while omitting others, combinations of features from different embodiments are meant to fall within the scope of the present application and to form further embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the present application may be implemented in hardware, in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some or all of the components of a distributed voice interaction apparatus according to embodiments of the present application may be implemented in practice using a microprocessor or a digital signal processor (DSP). The present application may also be embodied as an apparatus or device program (e.g., a computer program or a computer program product) for performing a part or all of the methods described herein. Such a program embodying the present application may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an Internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing is merely a preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions easily contemplated by those skilled in the art within the technical scope of the present application should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (9)
1. A distributed voice interaction method, the method comprising:
receiving a voice instruction;
if the volume value of the voice command is greater than or equal to the upper limit value of a preset volume interval, determining to respond to the voice command;
discarding the voice command if the volume value of the voice command is less than the lower limit value of the preset volume interval;
if the volume value of the voice command is within the preset volume interval, broadcasting a command query packet corresponding to the voice command in a distributed voice system, receiving command confirmation packets returned by a plurality of voice devices in the distributed voice system, and determining whether to respond to the voice command according to characteristic information included in each command confirmation packet;
if it is determined to respond to the voice command, outputting voice response information according to the voice command;
wherein the determining whether to respond to the voice command according to the characteristic information included in each command confirmation packet comprises:
determining to respond to the voice command if the command receiving time of the voice command is earlier than the command receiving time in each command confirmation packet; and
discarding the voice command if the command receiving time of the voice command is later than the command receiving time in at least one command confirmation packet.
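The decision logic of claim 1 can be sketched as follows. This is an illustrative sketch only: the threshold values, the function and variable names, and the (volume, receive-time) pair format for confirmation packets are assumptions for the example, not details fixed by the patent.

```python
# Illustrative sketch of claim 1's decision logic. The threshold values,
# names, and confirmation-packet format are assumptions for illustration.
LOWER, UPPER = 40.0, 60.0  # assumed preset volume interval

def decide(volume, my_receive_time, confirmations):
    """confirmations: (volume, receive_time) pairs returned by peer devices."""
    if volume >= UPPER:
        return "respond"   # at or above the upper limit: respond directly
    if volume < LOWER:
        return "discard"   # below the lower limit: discard the command
    # Inside the interval: respond only if this device heard the command first
    if all(my_receive_time < t for _, t in confirmations):
        return "respond"
    return "discard"       # at least one peer received the command earlier
```

Note that with an empty confirmation list the in-interval device responds by default, since no peer reports an earlier receive time.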
2. The method of claim 1, wherein said determining whether to respond to said voice command based on characteristic information included in each of said command confirmation packets comprises:
if the volume value of the voice command is greater than the volume value in each command confirmation packet, determining to respond to the voice command;
and discarding the voice command if the volume value of the voice command is less than the volume value in at least one command confirmation packet.
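The claim 2 variant arbitrates by volume value rather than receive time; again a hypothetical sketch with names chosen for illustration:

```python
# Hypothetical sketch of claim 2: arbitrate among devices by volume value.
def decide_by_volume(my_volume, peer_volumes):
    # Respond only if this device recorded the loudest volume; otherwise
    # some peer heard the command louder and should respond instead.
    if all(my_volume > v for v in peer_volumes):
        return "respond"
    return "discard"
```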
3. The method according to claim 1 or 2, characterized in that the method further comprises:
receiving an instruction query packet broadcast by voice equipment in a distributed voice system;
determining whether a voice instruction corresponding to the instruction identification information is received or not according to the instruction identification information included in the instruction query packet;
and if it is determined that the voice command corresponding to the instruction identification information has been received, generating an instruction confirmation packet and sending the instruction confirmation packet to the voice device.
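The query-side behavior of claim 3 can be sketched as follows. The field names (`instruction_id`, `volume`, `receive_time`) and the callback-style `send` interface are assumptions made for the example, not part of the patent:

```python
# Hypothetical handler for a received instruction query packet (claim 3).
# Field names and the send-callback interface are illustrative assumptions.
received = {}  # instruction_id -> {"volume": ..., "receive_time": ...}

def on_query_packet(query, send):
    info = received.get(query["instruction_id"])
    if info is not None:
        # This device also heard the command: return a confirmation packet
        # carrying its own characteristic information for arbitration.
        send({
            "instruction_id": query["instruction_id"],
            "volume": info["volume"],
            "receive_time": info["receive_time"],
        })
```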
4. The method according to claim 1 or 2, characterized in that the method further comprises:
and if the voice command is determined to be responded, broadcasting response notification information in the distributed voice system, wherein the response notification information comprises command identification information and a response identifier of the voice command.
5. The method according to claim 1 or 2, characterized in that the method further comprises:
receiving response notification information broadcasted in a distributed voice system, wherein the response notification information comprises instruction identification information and a response identifier;
determining whether the voice instruction corresponding to the instruction identification information is received or not according to the instruction identification information;
and if it is determined that the voice command corresponding to the instruction identification information has been received, discarding the voice command corresponding to the instruction identification information according to the response identifier.
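Claims 4 and 5 together describe a suppression mechanism: the responding device broadcasts a response notification, and peers that also heard the command drop it. A minimal sketch, assuming a dictionary of pending commands and illustrative field names:

```python
# Sketch of claims 4-5: on receiving a response notification, a peer that
# also heard the command discards it. Field names are assumptions.
pending = {"cmd1": "turn on the light"}  # commands awaiting a decision

def on_response_notification(note):
    # note carries instruction identification info and a response identifier
    if note.get("responded") and note["instruction_id"] in pending:
        pending.pop(note["instruction_id"])  # another device responded
```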
6. A distributed voice interaction apparatus, comprising:
the receiving module is used for receiving the voice instruction;
the determining module is used for determining to respond to the voice command if the volume value of the voice command is greater than or equal to the upper limit value of a preset volume interval; discarding the voice command if the volume value of the voice command is less than the lower limit value of the preset volume interval; and, if the volume value of the voice command is within the preset volume interval, broadcasting a command query packet corresponding to the voice command in a distributed voice system, receiving command confirmation packets returned by a plurality of voice devices in the distributed voice system, and determining whether to respond to the voice command according to characteristic information included in each command confirmation packet;
the determining module is further configured to determine to respond to the voice command if the command receiving time of the voice command is earlier than the command receiving time in each command confirmation packet;
discarding the voice command if the command receiving time of the voice command is later than the command receiving time in at least one command confirmation packet;
and the output module is used for outputting voice response information according to the voice instruction if the determination module determines to respond to the voice instruction.
7. A household appliance, characterized by comprising a memory and a processor;
the memory has executable program code stored therein;
the processor reads the executable program code and runs a program corresponding to the executable program code, so as to implement the distributed voice interaction method of any one of claims 1 to 5.
8. A distributed voice system, comprising a plurality of household appliances according to claim 7.
9. A non-transitory computer readable storage medium having stored thereon a computer program, wherein the computer program when executed by a processor implements the distributed voice interaction method of any of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811560666.7A CN111354336B (en) | 2018-12-20 | 2018-12-20 | Distributed voice interaction method, device, system and household appliance |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811560666.7A CN111354336B (en) | 2018-12-20 | 2018-12-20 | Distributed voice interaction method, device, system and household appliance |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111354336A CN111354336A (en) | 2020-06-30 |
CN111354336B true CN111354336B (en) | 2023-12-19 |
Family
ID=71196683
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811560666.7A Active CN111354336B (en) | 2018-12-20 | 2018-12-20 | Distributed voice interaction method, device, system and household appliance |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111354336B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112071306A (en) * | 2020-08-26 | 2020-12-11 | 吴义魁 | Voice control method, system, readable storage medium and gateway equipment |
CN113111199B (en) * | 2021-03-31 | 2023-02-03 | 青岛海尔科技有限公司 | Method and device for continuing playing of multimedia resource, storage medium and electronic device |
CN113990312A (en) * | 2021-10-18 | 2022-01-28 | 珠海格力电器股份有限公司 | Equipment control method and device, electronic equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104978956A (en) * | 2014-04-14 | 2015-10-14 | 美的集团股份有限公司 | Voice control method and system |
CN106469040A (en) * | 2015-08-19 | 2017-03-01 | 华为终端(东莞)有限公司 | Communication means, server and equipment |
CN107146614A (en) * | 2017-04-10 | 2017-09-08 | 北京猎户星空科技有限公司 | A kind of audio signal processing method, device and electronic equipment |
CN107895578A (en) * | 2017-11-15 | 2018-04-10 | 百度在线网络技术(北京)有限公司 | Voice interactive method and device |
CN108351872A (en) * | 2015-09-21 | 2018-07-31 | 亚马逊技术股份有限公司 | Equipment selection for providing response |
CN108766422A (en) * | 2018-04-02 | 2018-11-06 | 青岛海尔科技有限公司 | Response method, device, storage medium and the computer equipment of speech ciphering equipment |
CN108922528A (en) * | 2018-06-29 | 2018-11-30 | 百度在线网络技术(北京)有限公司 | Method and apparatus for handling voice |
Also Published As
Publication number | Publication date |
---|---|
CN111354336A (en) | 2020-06-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111354336B (en) | Distributed voice interaction method, device, system and household appliance | |
US11223497B2 (en) | Method and apparatus for providing notification by interworking plurality of electronic devices | |
CN104853405A (en) | Intelligent networking method and intelligent device | |
CN109361703B (en) | Voice equipment binding method, device, equipment and computer readable medium | |
CN111063343B (en) | Voice interaction method and device, electronic equipment and medium | |
CN109450747B (en) | Method and device for awakening smart home equipment and computer storage medium | |
CN204810556U (en) | Smart machine | |
CN110875045A (en) | Voice recognition method, intelligent device and intelligent television | |
US20170005966A1 (en) | Information sending method and information sending apparatus | |
US10652185B2 (en) | Information sending method and information sending apparatus | |
CN113031452B (en) | Method and system for batch processing of intelligent household equipment control instructions | |
CN112581959B (en) | Intelligent equipment control method, system and voice server | |
CN113053369A (en) | Voice control method and device of intelligent household appliance and intelligent household appliance | |
CN108932947B (en) | Voice control method and household appliance | |
CN111667825A (en) | Voice control method, cloud platform and voice equipment | |
CN112420051A (en) | Equipment determination method, device and storage medium | |
CN111954868A (en) | Multi-voice assistant control method, device, system and computer readable storage medium | |
CN109597996B (en) | Semantic analysis method, device, equipment and medium | |
CN109409883B (en) | Cooperative processing method based on intelligent contract, household appliance and server | |
CN109286861A (en) | Information query method, device and its equipment of smart machine | |
WO2020024508A1 (en) | Voice information obtaining method and apparatus | |
CN115547352A (en) | Electronic device, method, apparatus and medium for processing noise thereof | |
CN109976168B (en) | Decentralized intelligent home control method and system | |
CN112216279A (en) | Voice transmission method, intelligent terminal and computer readable storage medium | |
CN113630298A (en) | Intelligent control system, method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||