CN110632854A - Voice control method and device, voice control node and system and storage medium - Google Patents


Info

Publication number
CN110632854A
Authority
CN
China
Prior art keywords
voice
voice control
control node
address information
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910977334.7A
Other languages
Chinese (zh)
Inventor
张勇
颜明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Pingzheng Intelligent Technology Co Ltd
Original Assignee
Shenzhen Pingzheng Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Pingzheng Intelligent Technology Co Ltd filed Critical Shenzhen Pingzheng Intelligent Technology Co Ltd
Priority to CN201910977334.7A priority Critical patent/CN110632854A/en
Publication of CN110632854A publication Critical patent/CN110632854A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2642 Domotique, domestic, home control, automation, smart house
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • Selective Calling Equipment (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention relates to a voice control method and device, a voice control node and system, and a storage medium. When a voice instruction is determined to have been acquired, a node judges whether the device corresponding to the address information included in the instruction has a controlled relationship with the local voice control node. If it does, the node controls the corresponding local device to execute the action included in the instruction; otherwise, the node sends the instruction together with its voice recognition certainty factor to the remote target voice control node that has a controlled relationship with the device, so that the target node executes the copy of the instruction with the highest certainty factor. The voice control nodes are independent peers; each stores a user-modifiable address for every node and a controlled-relationship comparison table mapping each node to the devices it can control. When a voice command is issued, every node that hears it processes it independently; the nodes exchange their results over the wireless communication network and make a comprehensive decision on which node executes the command.

Description

Voice control method and device, voice control node and system and storage medium
Technical Field
The present application belongs to the field of control, and in particular, relates to a voice control method and apparatus, a voice control node and system, and a storage medium.
Background
With the rapid development of smart homes and Internet-of-Things technologies, a large number of voice interaction products have appeared on the market. Such a product connects to various household electrical appliances, so that a user can interact with it by voice and the product in turn controls the appliances.
However, with existing voice interaction products, when there is only one product in the home, the user must speak within a specific area (the product's voice-recognizable range); otherwise the appliance-control function cannot work. Even when there are several independent voice interaction products, they do not cooperate or interwork with one another, which degrades the user experience.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a voice control method and apparatus, a voice control node and system, and a storage medium, so that a user can voice-control home appliances indoors without being limited by the signal receiving range of any single voice control node, and can do so from multiple areas of a home scene without the nodes interfering with one another.
The embodiment of the application is realized as follows:
In a first aspect, an embodiment of the present application provides a voice control method, applied to each voice control node included in a voice control system, where the voice control nodes are communicatively connected in a wireless networking manner. The method includes: when it is determined that a voice instruction has been acquired, judging whether the device corresponding to the address information included in the voice instruction has a controlled relationship with the current voice control node; if yes, controlling the device corresponding to the address information to execute the action included in the voice instruction; if not, sending the voice instruction to a target voice control node that has a controlled relationship with the device corresponding to the address information, so that the target voice control node controls that device to execute the action included in the voice instruction. Each voice control node stores a modifiable address for every voice control node and a controlled-relationship comparison table between each voice control node and the devices it can control. With this scheme, even if the area in which the user issues the voice command is not within the signal receiving range of the target voice control node, the target voice control node can still control the device the command is meant for; that is, compared with the prior art, the user can voice-control home appliances indoors without being limited by the signal receiving range of any single voice control node.
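The execute-locally-or-forward rule of the first aspect can be sketched as below. This is a minimal illustration, not the patent's implementation: the table name `CONTROLLED_BY`, the node and device identifiers, and the callback signatures are all hypothetical.

```python
# Hypothetical controlled-relationship table: device address -> controlling node.
CONTROLLED_BY = {
    "living_room_light": "node1",
    "master_bedroom_ac": "node8",
}

def handle_instruction(local_node, address, action, send_to_node, execute):
    """Execute the action locally if this node controls the device,
    otherwise forward the instruction to the target node."""
    target = CONTROLLED_BY.get(address)
    if target is None:
        return "unknown device"
    if target == local_node:
        execute(address, action)           # controlled relationship holds: act locally
        return "executed locally"
    send_to_node(target, address, action)  # forward over the wireless network
    return f"forwarded to {target}"
```

Because every node holds the same comparison table, a node that merely overhears a command can still determine which peer must carry it out.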
With reference to the embodiment of the first aspect, in a possible implementation manner, a preset voice instruction set is stored in each voice control node, and before judging whether the device corresponding to the address information included in the acquired voice instruction has a controlled relationship with the current voice control node, the method further includes: judging whether monitored voice information matches a preset voice instruction in the preset voice instruction set; if so, determining that the voice instruction has been acquired.
With reference to the embodiment of the first aspect, in a possible implementation manner, the determining whether the monitored voice information matches one preset voice instruction in the preset voice instruction set includes: recognizing the voice audio contained in the voice information to obtain recognized semantics; and judging whether the recognized semantics are matched with a preset voice instruction in the preset voice instruction set.
With reference to the embodiment of the first aspect, in a possible implementation manner, the recognizing the speech audio includes: and identifying the voice audio in an off-line or on-line mode.
With reference to the embodiment of the first aspect, in a possible implementation manner, each voice control node stores a wake-up word, and before judging whether the monitored voice information matches a preset voice instruction in the preset voice instruction set, the method further includes: judging whether the wake-up word is monitored; if yes, entering a voice instruction monitoring mode; if not, remaining in the wake-up word monitoring mode and continuing to judge whether the wake-up word is acquired.
With reference to the embodiment of the first aspect, in a possible implementation manner, the sending the voice instruction to a target voice control node having a controlled relationship with a device corresponding to the address information includes: and sending the voice command and the pre-calculated certainty factor corresponding to the voice command to a target voice control node which has a controlled relationship with the equipment corresponding to the address information, so that the target voice control node selects the voice command with the highest certainty factor from the acquired voice commands to execute.
With reference to the embodiment of the first aspect, in one possible implementation manner, the method further includes: acquiring a custom modification instruction triggered by a user; and modifying, according to the custom modification instruction, the locally stored address of each voice control node and the function definition used to identify the voice control node.
In a second aspect, an embodiment of the present application provides a voice control apparatus, applied to each voice control node included in a voice control system, where the voice control nodes are communicatively connected in a wireless networking manner. The apparatus includes: a first judgment module, configured to judge, when it is determined that a voice instruction has been acquired, whether the device corresponding to the address information included in the voice instruction has a controlled relationship with the current voice control node; and a first execution module, configured to control the device corresponding to the address information to execute the action included in the voice instruction when the first judgment module's result is yes, and to send the voice instruction to a target voice control node that has a controlled relationship with the device corresponding to the address information when the result is no, so that the target voice control node controls that device to execute the action included in the voice instruction. Each voice control node stores a user-settable, modifiable address for every voice control node and a controlled-relationship comparison table between each voice control node and the devices it can control.
With reference to the second aspect, in a possible implementation manner, a preset voice instruction set is stored in each voice control node, and the apparatus further includes a second judgment module and a second execution module. The second judgment module is configured to judge whether monitored voice information matches a preset voice instruction in the preset voice instruction set, and the second execution module is configured to determine that the voice instruction has been acquired when the second judgment module finds a match.
With reference to the second aspect, in a possible implementation manner, the second determining module is configured to identify a voice audio included in the voice information to obtain a semantic meaning after the identification; and judging whether the recognized semantics are matched with a preset voice instruction in the preset voice instruction set.
With reference to the second aspect, in a possible implementation manner, the second determining module is configured to identify the voice audio in an offline or online manner.
With reference to the second aspect, in a possible implementation manner, each voice control node stores a wakeup word, the apparatus further includes a third determining module and a third executing module, where the third determining module is configured to determine whether the wakeup word is monitored, and the third executing module is configured to enter a voice instruction monitoring mode when the third determining module determines that the wakeup word is monitored, and continue to maintain the wakeup word monitoring mode to determine whether the wakeup word is acquired when the third determining module determines that the wakeup word is not monitored.
With reference to the second aspect, in a possible implementation manner, the second execution module is configured to send the voice instruction and the pre-calculated certainty factor corresponding to the voice instruction to a target voice control node having a controlled relationship with the device corresponding to the address information, so that the target voice control node selects the voice instruction with the highest certainty factor from the acquired voice instructions to execute.
With reference to the second aspect, in a possible implementation manner, the apparatus further includes an obtaining module and a modifying module. The obtaining module is configured to obtain a custom modification instruction triggered by a user, and the modifying module is configured to modify, according to that instruction, the locally stored address of each voice control node and the function definition used to identify the voice control node.
In a third aspect, an embodiment of the present application further provides a non-volatile computer-readable storage medium (hereinafter, referred to as a storage medium), on which a computer program is stored, where the computer program is executed by a computer to perform the method in the foregoing first aspect and/or any possible implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a voice control node, including a voice recognition module, an execution module, and a wireless communication networking module. The voice recognition module is configured to judge, when it is determined that a voice instruction has been acquired, whether the device corresponding to the address information included in the voice instruction has a controlled relationship with the current voice control node. The execution module is configured to control the device corresponding to the address information to execute the action included in the voice instruction when the voice recognition module's result is yes. The wireless communication networking module is configured to send the voice instruction to a target voice control node that has a controlled relationship with the device corresponding to the address information when the result is no, so that the target voice control node controls that device to execute the action included in the voice instruction.
In a fifth aspect, an embodiment of the present application provides a voice control system including a plurality of voice control nodes in communication connection with one another. Each voice control node, when determining that a voice instruction has been acquired, judges whether the device corresponding to the address information included in the instruction has a controlled relationship with the current voice control node. It is further configured to control the device corresponding to the address information to execute the action included in the instruction when the judgment result is yes, and to send the instruction together with the certainty factor of its recognition result to a target voice control node that has a controlled relationship with the device when the result is no, so that the target voice control node controls that device to execute the action. Each voice control node stores a modifiable address for every voice control node and a controlled-relationship comparison table between each node and the devices it can control.
With reference to the embodiment of the fifth aspect, in a possible implementation manner, networking is performed between the voice control nodes in a Mesh manner.
With reference to the fifth aspect, in a possible implementation manner, when two different voice control nodes acquire different voice instructions, the two nodes process their respective voice instructions simultaneously.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort. The foregoing and other objects, features, and advantages of the application will be apparent from the accompanying drawings. Like reference numerals refer to like parts throughout the drawings. The drawings are not necessarily drawn to scale; emphasis is instead placed on illustrating the subject matter of the present application.
Fig. 1 shows a schematic structural diagram of a speech control system according to an embodiment of the present application.
Fig. 2 shows a schematic structural diagram of a voice control node according to an embodiment of the present application.
Fig. 3A illustrates a schematic structural diagram of a speech recognition module included in a speech control node according to an embodiment of the present application.
Fig. 3B shows a schematic structural diagram of an execution module included in a voice control node according to an embodiment of the present application.
Fig. 3C is a schematic structural diagram illustrating a wireless communication networking module included in a voice control node according to an embodiment of the present application.
Fig. 4 shows a flowchart of a voice control method provided in an embodiment of the present application.
Fig. 5 shows a block diagram of a voice control apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, relational terms such as "first," "second," and the like may be used solely in the description herein to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Further, the term "and/or" in the present application is only one kind of association relationship describing the associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone.
In addition, the defects of existing voice interaction products (the user must interact with the product within a specific area, otherwise the appliance-control function cannot be realized, which affects the user experience) were identified by the applicant only after careful practice and study. Therefore, both the discovery of these defects and the solutions proposed for them in the following embodiments should be regarded as contributions made by the applicant in the course of this application.
In order to overcome the defects of voice interaction products in the prior art, embodiments of the present application provide a voice control method and apparatus, a voice control node, a system and a storage medium, so that a user can perform a voice control function on a home appliance indoors without being limited by a signal receiving range of the voice control node, and can perform voice control on an intelligent home device in multiple areas of a home scene without mutual interference. The technology can be realized by adopting corresponding software, hardware and a combination of software and hardware. The following describes embodiments of the present application in detail.
First, a voice control system 10 for implementing the voice control method and apparatus according to the embodiment of the present application is described with reference to fig. 1.
The voice control system 10 includes a plurality of voice control nodes 100 (the nodes in fig. 1). Each voice control node 100 controls different objects, and one node may have one or more control objects. For example, in the voice control system 10 shown in fig. 1, node 1 controls the living-room lights, node 2 controls the dining-room and kitchen lights, node 3 controls the master-bedroom lights, node 4 controls the secondary-bedroom lights, node 5 controls the master-bathroom, corridor, and cloakroom lights, node 6 controls the balcony lights, node 7 controls the living-room curtain, node 8 controls the master-bedroom air conditioner, and node 9 controls the secondary-bedroom air conditioner. Of course, each voice control node 100 establishes a controlled relationship with its objects in advance; for example, node 1 establishes a controlled relationship with the living-room light switch in advance, so that node 1 can control the living-room lights.
Of course, the objects controlled by the nodes can be customized and modified by the user.
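The node-to-device mapping of fig. 1, including the user customization just mentioned, can be sketched as a plain lookup table. The device and node names below are illustrative, and the patent does not specify the underlying data structure.

```python
# Hypothetical controlled-relationship comparison table: node -> set of devices.
NODE_DEVICES = {
    "node1": {"living_room_light"},
    "node2": {"dining_room_light", "kitchen_light"},
    "node7": {"living_room_curtain"},
    "node8": {"master_bedroom_ac"},
}

def controlling_node(device):
    """Return the node that has a controlled relationship with `device`."""
    for node, devices in NODE_DEVICES.items():
        if device in devices:
            return node
    return None

def reassign(device, new_node):
    """User customization: move a device under a different node's control."""
    old = controlling_node(device)
    if old is not None:
        NODE_DEVICES[old].discard(device)
    NODE_DEVICES.setdefault(new_node, set()).add(device)
```

Since every node stores a copy of this table, a customization would have to be propagated to all nodes over the wireless network.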
One voice control node 100 may be directly or indirectly in communication connection with another voice control node 100, and correspondingly, one voice control node 100 may be directly or indirectly in data interaction with another voice control node 100, so as to achieve the purpose of communicating among multiple voice control nodes 100. For example, in fig. 1, node 1 directly interacts with node 2, and node 1 interacts with node 3 via node 2 or node 5 or other nodes.
As an alternative embodiment, the plurality of voice control nodes 100 included in the voice control system 10 may be networked in a Mesh (wireless Mesh network) manner.
The individual voice control nodes 100 may be distributed at different locations in the home. When a user wants to control, by voice instruction, an appliance that has a controlled relationship with a certain voice control node 100 (called the target voice control node 100 for ease of distinction), the instruction is normally picked up by that node directly, and the target voice control node 100 then controls the appliance to execute the corresponding operation. In a special case, the target voice control node 100 may fail to pick up the user's voice instruction because of distance (for example, the distance between the target voice control node 100 and the user exceeds the node's signal receiving range) or other environmental factors (for example, the user speaks too quietly). Because the voice control nodes 100 can exchange data with one another, a node placed elsewhere that does successfully acquire the instruction can analyze its content and forward it to the target voice control node 100. It should be noted that voice audio that a voice control node 100 can judge to be a voice instruction includes both appliance address information and an operation action for that appliance; for example, in "turn on the living-room light", "living-room light" is the address information and "turn on" is the operation action. After a voice control node 100 acquires such an instruction, it recognizes the appliance address information and transmits the instruction to the voice control node 100 that can control that appliance (i.e., the target voice control node 100).
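The two-part instruction format described above (operation action plus appliance address) can be illustrated with a toy parser. The phrase tables and the function name are hypothetical; the patent does not disclose how recognized text is decomposed.

```python
# Illustrative phrase tables: spoken action/device phrases -> internal identifiers.
ACTIONS = {"turn on": "on", "turn off": "off", "open": "open", "close": "close"}
DEVICES = {
    "the living-room light": "living_room_light",
    "the living-room curtain": "living_room_curtain",
}

def parse_instruction(text):
    """Split recognized text into (device_address, action), or None if it
    is not a well-formed instruction."""
    for phrase, action in ACTIONS.items():
        if text.startswith(phrase + " "):
            device = DEVICES.get(text[len(phrase) + 1:].strip())
            if device is not None:
                return device, action
    return None
```

The recovered device address is what a node looks up in its controlled-relationship table to decide whether to act locally or forward.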
When heard speech matches a voice instruction included in the preset voice instruction set, the speech is successfully recognized by the voice control node 100; that is, the voice control node 100 is considered to have acquired the voice instruction.
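This acquisition rule amounts to a membership test against the preset set. A minimal sketch, with an illustrative (hypothetical) instruction set:

```python
# Hypothetical preset voice instruction set stored on each node.
PRESET_INSTRUCTIONS = {
    "turn on the living-room light",
    "turn off the living-room light",
    "open the living-room curtain",
}

def instruction_acquired(recognized_text):
    """True only if the recognized speech matches a preset voice instruction."""
    return recognized_text.strip().lower() in PRESET_INSTRUCTIONS
```

Speech that fails the test (casual conversation, unsupported commands) is simply ignored by the node.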
Each voice control node 100 included in the voice control system 10, when determining that a voice instruction has been acquired, judges whether the device corresponding to the address information included in the instruction has a controlled relationship with the current voice control node. It is further configured to control the device corresponding to the address information to execute the action included in the instruction when the judgment result is yes, and to send the instruction to a target voice control node 100 that has a controlled relationship with the device when the result is no, so that the target voice control node 100 controls that device to execute the action. Each voice control node 100 stores a controlled-relationship comparison table between the address information of every voice control node 100 and the devices that node can control.
Of course, as an alternative embodiment, each voice control node 100 may also calculate a certainty factor for the voice instruction when it determines that the instruction has been acquired. When a user issues a voice command, it may be received by several different voice control nodes 100 at the same time. If all of those nodes determine that the command should not be executed by themselves, each sends the command together with its own calculated certainty factor to the target voice control node 100. If the target voice control node 100 thus receives copies of the command with certainty factors from multiple nodes, it selects the copy with the highest certainty factor for execution.
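The comprehensive decision at the target node reduces to picking the highest-certainty copy among the forwarded instructions. A minimal sketch; the function name and the (certainty, instruction) tuple layout are assumptions for illustration.

```python
def select_instruction(forwarded):
    """Pick the instruction copy with the highest recognition certainty.

    forwarded: list of (certainty, instruction) tuples, one per peer node
    that heard and forwarded the command. Returns None if nothing arrived.
    """
    if not forwarded:
        return None
    certainty, instruction = max(forwarded, key=lambda pair: pair[0])
    return instruction
```

Executing only the best copy prevents the same command, heard by several nodes, from being acted on more than once.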
To enable each voice control node 100 to implement the above functions, refer to fig. 2: each voice control node 100 may include a voice recognition module 101, an execution module 102, a wireless communication networking module 103, and a power supply module 104. The modules are interconnected; they may be assembled directly from off-the-shelf modules on the market, or integrated as circuitry on a single circuit board.
It should be noted that the components and structure of the voice control node 100 shown in fig. 2 are exemplary only, and not limiting, and the voice control node 100 may have other components and structures as desired.
Referring to fig. 3A, the voice recognition module 101 may include a microphone 201, a speaker 202, a voice recognition chip 203, and a communication control interface 204.
The voice recognition module 101 can complete a voice audio pickup function through the microphone 201, and complete a voice audio recognition function through the voice recognition chip 203. In addition, the voice recognition chip 203 converts the voice audio determined as the voice command into a control signal, and determines whether the control signal is a command to be executed by the voice control node 100, if so, the control signal is sent to the execution module 102 through the communication control interface 204, and if not, the control signal is sent to the wireless communication networking module 103 through the communication control interface 204 for processing.
Of course, the voice recognition chip 203 may further have an audio noise reduction function, and the voice recognition module 101 may further provide voice feedback to the user through the speaker 202, for example, to prompt the user that the issued voice command cannot be recognized.
As an optional implementation, the voice recognition chip 203 may be an offline voice recognition chip: a preset voice instruction set is stored in the chip's internal memory, and the voice audio acquired by the microphone 201 is matched against this local preset voice instruction set to complete recognition offline. In this case, the voice recognition chip 203 may be, for example, a CI 1002.
As an alternative embodiment, the voice recognition chip 203 may also be an online voice recognition chip. In this embodiment, the voice recognition chip 203 transmits the voice audio acquired by the microphone 201 to the server through the network for online recognition, and acquires the recognition result returned by the server to complete the online voice audio recognition function.
Referring to fig. 3B, the execution module 102 may include a controller (not shown), an infrared transmitting circuit 301, an infrared code bank 303, a relay or silicon controlled rectifier (SCR) array 302, and buttons and LED lamps 304.
The controller is configured to receive a control signal either from the speech recognition module 101 in the same voice control node 100 as itself, or from the speech recognition module 101 of another voice control node 100 via that node's wireless communication networking module 103, and then to route the control signal to the infrared transmitting circuit 301 or to the relay/SCR array 302 according to the signal's type.

The infrared transmitting circuit 301 controls household appliances, such as air conditioners and televisions, that can be operated by an infrared remote controller; it does so by simulating the infrared codes that such a remote controller would send. Accordingly, the control signal includes information identifying the infrared code that the infrared transmitting circuit 301 needs to simulate, and the code information itself is stored in the infrared code bank 303.

The relay or SCR array 302 switches lighting lamps, motorized shades, and other electrical products on and off; its input is the control signal produced by the speech recognition module 101.
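The controller's routing between the infrared transmitter (via the code bank) and the relay/SCR array might look like the following sketch; the dictionary keys, code values, and return strings are invented for illustration and are not from the patent.

```python
IR_CODE_BANK = {                 # stands in for infrared code bank 303
    ("tv", "power"): "0x20DF10EF",
    ("aircon", "25C"): "0x8166A15E",
}

def dispatch(signal):
    """signal: (kind, device, action) produced by the recognition module."""
    kind, device, action = signal
    if kind == "ir":
        # Look up the stored code and hand it to the IR transmitting circuit 301.
        code = IR_CODE_BANK.get((device, action))
        if code is None:
            return "error: no IR code stored"
        return f"ir-transmit {code}"
    elif kind == "relay":
        # Relay/SCR array 302 simply switches the circuit on or off.
        return f"relay {device} {action}"
    return "error: unknown signal type"

print(dispatch(("ir", "tv", "power")))    # ir-transmit 0x20DF10EF
print(dispatch(("relay", "lamp", "on")))  # relay lamp on
```

The point of the split is that IR-controlled appliances need a stored code to replay, while relay-controlled loads only need a switching signal.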
It should be noted that the components and structure of the execution module 102 shown in FIG. 3B are exemplary only, and not limiting, and the execution module 102 may have other components and structures as desired.
Referring to fig. 3C, the wireless communication networking module 103 includes a wireless communication networking chip 401, an antenna 402 and a communication interface 403 for establishing communication connection with other voice control nodes 100. The wireless communication networking module 103 may be a bluetooth module, a WiFi module, a Zigbee module, a 2.4GHz module, or the like.
The wireless communication networking chip 401 stores the communication protocol stack and the networking information required for MESH networking. The communication interface 403 carries the internal communication protocol between the wireless communication networking module and the voice recognition module, which may be UART (Universal Asynchronous Receiver/Transmitter) serial communication, SPI (Serial Peripheral Interface) communication, I2C communication, or the like.
The antenna 402 may be a PCB antenna.
It should be noted that the components and structure of the wireless communication networking module 103 shown in fig. 3C are merely exemplary and not limiting, and the wireless communication networking module 103 may have other components and structures as desired.
The power module 104 supplies power to the other modules inside the voice control node 100. It may be a common off-the-shelf isolated AC (alternating current) to DC (direct current) circuit/module, or a common off-the-shelf non-isolated AC-to-DC circuit or module, which is not described further here.

Referring to fig. 4, a voice control method applied to the voice control node 100 according to an embodiment of the present application will now be described.
Step S110: and when the voice command is determined to be acquired, judging whether equipment corresponding to the address information included in the voice command has a controlled relationship with the current voice control node.
After the voice control nodes 100 are distributed at different locations in the room, each voice control node 100 starts monitoring voice information. The same preset voice instruction set is pre-stored in every voice control node 100; the set includes a plurality of preset voice instructions, and each preset voice instruction consists of an operation action part and an appliance address information part. For example, the set may include "turn on hall lantern", "turn off balcony lamp", "open the living room curtain", "turn on master bedroom lamp", "set master bedroom air conditioner to 25 degrees", and so on.
When a voice control node 100 detects voice information, it judges whether the detected voice information matches a preset voice instruction in the preset voice instruction set. If so, it determines that a voice instruction has been acquired; otherwise, it determines that no voice instruction has been acquired and continues monitoring voice information.
Optionally, the voice control node 100 performs semantic recognition on the voice audio included in the voice information through the voice recognition module 101 included in the voice control node, so as to obtain a recognized semantic. And then judging whether the recognized semantics are matched with a preset voice instruction in a preset voice instruction set.
Alternatively, the speech control node 100 may identify the semantics of the speech audio in an offline or online manner.
It should be noted that "matching" here does not only mean that the semantics expressed by the voice information are completely identical to a preset voice instruction. When the operation action part of the semantics expressed by a piece of voice information is similar in meaning to the operation action part of a preset voice instruction A, and the appliance address information part of the voice information is the same as the address information part of preset voice instruction A, the voice information can also be considered to match preset voice instruction A. For example, the voice control node 100 may consider "switch on the hall lantern" to match "turn on hall lantern".
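This looser matching rule — identical address part, merely similar action part — can be sketched as below. The synonym table, preset instructions, and function names are illustrative assumptions, not details from the patent.

```python
# Hypothetical synonym groups for the operation action part.
ACTION_SYNONYMS = {
    "turn on": {"turn on", "switch on", "open"},
    "turn off": {"turn off", "switch off", "close"},
}

# A tiny stand-in for the preset voice instruction set:
# each entry is (operation action part, appliance address part).
PRESET_INSTRUCTIONS = [
    ("turn on", "hall lantern"),
    ("turn off", "master bedroom lamp"),
]

def matches(action, address):
    """Return the matching preset instruction, or None.
    The address part must be identical; the action part only similar."""
    for preset_action, preset_address in PRESET_INSTRUCTIONS:
        same_address = address == preset_address
        similar_action = action in ACTION_SYNONYMS.get(preset_action, {preset_action})
        if same_address and similar_action:
            return (preset_action, preset_address)
    return None

print(matches("switch on", "hall lantern"))  # ('turn on', 'hall lantern')
print(matches("switch on", "garage door"))   # None
```

A real recognizer would compare semantic similarity rather than a fixed synonym table, but the decision structure — strict on the address, lenient on the action — is the same.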
As an alternative embodiment, each voice control node 100 may operate in at least two modes: a wake-word listening mode and a voice instruction listening mode. When a voice control node 100 is in the wake-word listening mode, it listens for the wake word; once the wake word is detected, it switches from the wake-word listening mode to the voice instruction listening mode.

Accordingly, a wake word, such as "hello, XX", is stored in each voice control node 100, where "XX" may be the name of the provider of the voice control node 100 or a name the user has defined for it. Before the voice control node 100 starts listening for voice instructions, it judges whether the wake word has been detected; as long as it has not, the node remains in wake-word listening mode and keeps judging. Once the wake word is detected, the node switches to voice instruction listening mode and starts listening for voice instructions. After the switch, if the voice control node 100 detects no voice instruction within a preset time, it switches back to wake-word listening mode.
Alternatively, when the voice control node 100 is in the wake-up word listening mode, the power consumption of the voice control node 100 may be reduced by reducing some unnecessary functions (e.g., screen display functions) of the voice control node 100.
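The two-mode behavior above amounts to a small state machine; the sketch below simulates it with a tick counter standing in for the preset timeout, and the wake word and event values are invented for the example.

```python
WAKE_WORD = "hello, xx"   # assumed wake word for the example
TIMEOUT_TICKS = 3         # stands in for the preset time

def run(audio_events):
    """audio_events: iterable of recognized utterances; '' means silence.
    Returns the final mode and the instructions heard."""
    mode, idle, heard = "wake", 0, []
    for utterance in audio_events:
        if mode == "wake":
            if utterance == WAKE_WORD:
                mode, idle = "instruction", 0   # wake word heard: switch modes
        else:  # instruction-listening mode
            if utterance:
                heard.append(utterance)
                idle = 0
            else:
                idle += 1
                if idle >= TIMEOUT_TICKS:
                    mode = "wake"               # timed out: back to wake-word mode
    return mode, heard

print(run(["hi", "hello, xx", "turn on hall lantern", "", "", ""]))
# ('wake', ['turn on hall lantern'])
```

The node ignores speech that is not the wake word, collects instructions after waking, and falls back to the low-power wake-word mode after a stretch of silence.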
It is worth noting that each voice control node 100 controls a different set of appliances. To achieve this, a controlled relationship must be established between each voice control node 100 and the appliances it needs to control. For example, a piece of infrared code information is preset for the infrared transmitting circuit 301 of a certain voice control node 100; because the infrared code simulated from this information can control the television, that voice control node 100 thereby establishes a controlled relationship with the television.
After each voice control node 100 has established controlled relationships with the appliances it can control, these relationships are combined into a controlled relationship comparison table, which is stored in every voice control node 100. The table indicates which voice control node 100 can control which appliance. The address of every voice control node is likewise stored in each voice control node 100.
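The comparison table is just a replicated mapping from node address to the appliances that node controls; a lookup sketch follows, with node names and appliances invented for the example.

```python
# Every node stores the same controlled relationship comparison table.
CONTROLLED_RELATIONSHIPS = {
    "node-1": {"hall lantern", "living room curtain"},
    "node-2": {"tv", "aircon"},
    "node-3": {"master bedroom lamp"},
}

def find_target_node(appliance):
    """Return the address of the node that controls this appliance, or None."""
    for node, appliances in CONTROLLED_RELATIONSHIPS.items():
        if appliance in appliances:
            return node
    return None

print(find_target_node("tv"))            # node-2
print(find_target_node("garage door"))   # None
```

Because every node holds the full table, any node that captures a command can resolve the responsible node locally, without asking a central coordinator.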
In addition, as an optional implementation manner, when the voice control node 100 determines to acquire the voice instruction, the voice instruction may also be forwarded to the current voice control node 100 by another voice control node 100.
After the voice control node 100 determines to acquire the voice command, it is determined whether a controlled relationship exists between the device (electrical appliance) corresponding to the address information included in the voice command and the current voice control node 100.
Step S120: and if so, controlling the equipment corresponding to the address information to execute the action included in the voice instruction.
When a controlled relationship exists between the device (appliance) corresponding to the address information part of the voice instruction and the current voice control node 100, the current voice control node 100 directly controls the device to execute the action corresponding to the operation action part of the voice instruction.
Step S130: and if not, sending the voice command to a target voice control node in controlled relation with the equipment corresponding to the address information, so that the target voice control node controls the equipment corresponding to the address information to execute the action included in the voice command.
When no controlled relationship exists between the device (appliance) corresponding to the address information part of the voice instruction and the current voice control node 100, the current voice control node 100 determines, by querying the controlled relationship comparison table, the target voice control node 100 that has a controlled relationship with that appliance, and then sends the control signal corresponding to the voice instruction to the target voice control node 100.
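Steps S110 through S130 together can be sketched end to end; the table contents, node names, and message strings below are illustrative assumptions rather than the patent's implementation.

```python
# Minimal controlled relationship comparison table for the example.
TABLE = {
    "node-1": {"hall lantern"},
    "node-2": {"tv"},
}

def handle_command(current_node, command):
    """command: (operation action part, appliance address part)."""
    action, appliance = command
    # Step S110: does the current node control the addressed appliance?
    if appliance in TABLE.get(current_node, set()):
        # Step S120: execute locally.
        return f"{current_node} executes: {action} {appliance}"
    # Step S130: find the target node in the table and forward.
    for node, appliances in TABLE.items():
        if appliance in appliances:
            return f"{current_node} forwards to {node}: {action} {appliance}"
    return "no node controls this appliance"

print(handle_command("node-1", ("turn on", "hall lantern")))
# node-1 executes: turn on hall lantern
print(handle_command("node-1", ("turn on", "tv")))
# node-1 forwards to node-2: turn on tv
```

The design choice here is that the capturing node, not a hub, makes the routing decision, which is what frees the user from any single node's reception range.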
Optionally, when the current voice control node 100 acquires the voice instruction, the certainty factor of the voice instruction may also be calculated, and accordingly, the current voice control node 100 sends the voice instruction and the corresponding certainty factor to the target voice control node 100, so that the target voice control node 100 selects the voice instruction with the highest certainty factor from the acquired voice instructions to execute.
According to the voice control method provided by the embodiment of the application, when a voice control node in the voice control system acquires a voice instruction, it first judges whether the device corresponding to the address information included in the instruction has a controlled relationship with the current voice control node. If so, the current voice control node can control the device corresponding to the address information, and it does so. If not, the current voice control node cannot control that device; it then finds, by querying the controlled relationship comparison table, the target voice control node corresponding to the device, and sends the voice instruction to the target voice control node so that the target voice control node controls the device. With this scheme, even if the user issues the voice command outside the signal reception range of the target voice control node, the target voice control node can still control the device the command is intended to control. In other words, compared with the prior art, the user can voice-control household appliances anywhere indoors without being limited by the signal reception range of any single voice control node.
As shown in fig. 5, an embodiment of the present application further provides a voice control apparatus 400, where the voice control apparatus 400 may include: a first determining module 410 and a first executing module 420.
A first determining module 410, configured to determine, when it is determined that a voice instruction is obtained, whether a device corresponding to address information included in the voice instruction has a controlled relationship with a current voice control node;
a first executing module 420, configured to, when the first determining module 410 determines that the controlled relationship exists, control the device corresponding to the address information to execute the action included in the voice instruction; and further configured to, when the first determining module 410 determines that the controlled relationship does not exist, send the voice instruction to a target voice control node having a controlled relationship with the device corresponding to the address information, so that the target voice control node controls the device corresponding to the address information to execute the action included in the voice instruction.
Each voice control node stores a user-settable and user-modifiable address for every voice control node, as well as a controlled relationship comparison table between the voice control nodes and the devices they can control.
Optionally, a preset voice instruction set is stored in each voice control node, and the apparatus may further include a second determining module and a second executing module. The second determining module is used for judging whether the monitored voice information matches one of the preset voice instructions in the preset voice instruction set; the second executing module is used for determining that the voice instruction has been acquired when the second determining module judges that it matches.
Optionally, the second determining module is configured to identify a voice audio included in the voice information to obtain a semantic meaning after identification; and judging whether the recognized semantics are matched with a preset voice instruction in the preset voice instruction set.
Optionally, the second determining module is configured to identify the voice audio in an offline or online manner.
Optionally, a wake word is stored in each voice control node, and the apparatus further includes a third judging module and a third executing module. The third judging module is configured to judge whether the wake word has been detected. The third executing module is configured to enter the voice instruction listening mode when the third judging module judges that the wake word has been detected; when the wake word has not been detected, the wake-word listening mode is maintained and the third judging module continues to judge whether the wake word is detected.
In a possible implementation manner, the second execution module is configured to send the voice instruction and the pre-calculated certainty factor corresponding to the voice instruction to a target voice control node having a controlled relationship with the device corresponding to the address information, so that the target voice control node selects a voice instruction with the highest certainty factor from the acquired voice instructions to execute the voice instruction.
In a possible implementation manner, the apparatus further includes an obtaining module and a modifying module. The obtaining module is configured to obtain a user-defined modification instruction triggered by a user, and the modifying module is configured to modify, according to the user-defined modification instruction, the locally stored address of each voice control node and the function definition used to identify the voice control node.
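Applying such a user-defined modification to the locally stored node configuration might look like the sketch below; the configuration layout, field names, and example values are hypothetical.

```python
# Locally stored configuration: node address and human-readable function definition.
NODE_CONFIG = {
    "node-1": {"address": "0x01", "function": "living room controller"},
    "node-2": {"address": "0x02", "function": "bedroom controller"},
}

def apply_modification(node_id, new_address=None, new_function=None):
    """Apply a user-defined modification instruction to the local config.
    Returns True on success, False if the node is unknown."""
    entry = NODE_CONFIG.get(node_id)
    if entry is None:
        return False
    if new_address is not None:
        entry["address"] = new_address
    if new_function is not None:
        entry["function"] = new_function
    return True

apply_modification("node-2", new_function="study controller")
print(NODE_CONFIG["node-2"]["function"])  # study controller
```

In the patent's scheme the same update would need to be propagated to every node, since each node holds its own copy of the addresses and function definitions.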
The voice control apparatus 400 provided in the embodiment of the present application has the same implementation principle and the same technical effect as those of the foregoing embodiments, and for the sake of brief description, reference may be made to the corresponding contents in the foregoing method embodiments for the parts of the embodiments of the apparatus that are not mentioned.
In addition, an embodiment of the present application further provides a storage medium, where a computer program is stored on the storage medium, and when the computer program is executed by a computer, the steps included in the voice control method are executed.
In summary, in the voice control method and apparatus, voice control node and system, and storage medium provided in the embodiments of the present invention, when a voice control node in the voice control system acquires a voice instruction, it first judges whether the device corresponding to the address information included in the voice instruction has a controlled relationship with the current voice control node. If so, the current voice control node can control the device corresponding to the address information, and it does so. If not, the current voice control node cannot control that device; it then finds, by querying the controlled relationship comparison table, the target voice control node capable of controlling the device, and sends the voice instruction to the target voice control node so that the target voice control node controls the device. With this scheme, even if the user issues the voice command outside the signal reception range of the target voice control node, the target voice control node can still control the device the command is intended to control. In other words, compared with the prior art, the user can voice-control household appliances anywhere indoors without being limited by the signal reception range of any single voice control node.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a notebook computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (10)

1. A voice control method, applied to each voice control node included in a voice control system, the voice control nodes being communicatively connected in a wireless networking manner, the method comprising:
when a voice instruction is determined to be acquired, judging whether equipment corresponding to address information included in the voice instruction has a controlled relationship with a current voice control node;
if yes, controlling the equipment corresponding to the address information to execute the action included in the voice instruction;
if not, sending the voice instruction to a target voice control node in controlled relation with the equipment corresponding to the address information, so that the target voice control node controls the equipment corresponding to the address information to execute the action included in the voice instruction;
the voice control nodes are stored with user modifiable addresses of the voice control nodes and a controlled relation comparison table between the voice control nodes and the devices which can be controlled by the voice control nodes.
2. The method according to claim 1, wherein said sending the voice command to a target voice control node having a controlled relationship with the device corresponding to the address information comprises:
and sending the voice command and the pre-calculated certainty factor corresponding to the voice command to a target voice control node which has a controlled relationship with the equipment corresponding to the address information, so that the target voice control node selects the voice command with the highest certainty factor from the acquired voice commands to execute.
3. The method of claim 1, further comprising:
acquiring a user-defined modification instruction triggered by a user;
and modifying the address of each voice control node stored by the self and the function definition for identifying the voice control node according to the self-defined modification instruction.
4. The method of claim 1, further comprising:
and identifying the acquired voice audio in an off-line or on-line mode to determine whether the voice instruction is acquired.
5. A voice control apparatus, applied to each voice control node included in a voice control system, wherein the voice control nodes are communicatively connected in a wireless networking manner, the apparatus comprising:
the first judgment module is used for judging whether equipment corresponding to address information included in a voice instruction has a controlled relationship with a current voice control node or not when the voice instruction is determined to be acquired;
the first execution module is used for controlling the equipment corresponding to the address information to execute the action included in the voice instruction when the judgment result of the first judgment module is yes; and is further used for
sending the voice instruction, when the judgment result of the first judgment module is no, to a target voice control node that has a controlled relationship with the equipment corresponding to the address information, so that the target voice control node controls the equipment corresponding to the address information to execute the action included in the voice instruction;
the voice control nodes are stored with user modifiable addresses of the voice control nodes and a controlled relation comparison table between the voice control nodes and the devices which can be controlled by the voice control nodes.
6. A storage medium, having stored thereon a computer program which, when executed by a computer, performs the method of any one of claims 1-4.
7. A voice control node, comprising: the system comprises a voice recognition module, an execution module and a wireless communication networking module;
the voice recognition module is used for judging whether equipment corresponding to address information included in a voice instruction has a controlled relationship with a current voice control node or not when the voice instruction is determined to be acquired;
the execution module is used for controlling the equipment corresponding to the address information to execute the action included in the voice instruction when the judgment result of the voice recognition module is yes;
and the wireless communication networking module is used for sending the voice instruction, when the judgment result of the voice recognition module is no, to a target voice control node having a controlled relationship with the equipment corresponding to the address information, so that the target voice control node controls the equipment corresponding to the address information to execute the action included in the voice instruction.
8. A voice control system, characterized by comprising a plurality of voice control nodes in communication connection with one another, wherein each voice control node is used for judging, when a voice instruction is determined to be acquired, whether equipment corresponding to address information included in the voice instruction has a controlled relationship with the current voice control node; is further used for controlling the equipment corresponding to the address information to execute the action included in the voice instruction when the judgment result is yes; and is further used for sending, when the judgment result is no, the voice instruction and the certainty factor of the voice instruction recognition result to a target voice control node having a controlled relationship with the equipment corresponding to the address information, so that the target voice control node controls the equipment corresponding to the address information to execute the action included in the voice instruction;
the voice control nodes are stored with user modifiable addresses of the voice control nodes and a controlled relation comparison table between the voice control nodes and the devices which can be controlled by the voice control nodes.
9. The system according to claim 8, wherein a plurality of said voice control nodes are networked in a Mesh manner.
10. The system according to claim 8, wherein when two different voice control nodes obtain different voice commands, the two different voice control nodes process the respective obtained voice commands simultaneously.
CN201910977334.7A 2019-10-15 2019-10-15 Voice control method and device, voice control node and system and storage medium Pending CN110632854A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910977334.7A CN110632854A (en) 2019-10-15 2019-10-15 Voice control method and device, voice control node and system and storage medium

Publications (1)

Publication Number Publication Date
CN110632854A true CN110632854A (en) 2019-12-31

Family

ID=68975084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910977334.7A Pending CN110632854A (en) 2019-10-15 2019-10-15 Voice control method and device, voice control node and system and storage medium

Country Status (1)

Country Link
CN (1) CN110632854A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583921A (en) * 2020-04-22 2020-08-25 珠海格力电器股份有限公司 Voice control method, device, computer equipment and storage medium
CN112233672A (en) * 2020-09-30 2021-01-15 成都长虹网络科技有限责任公司 Distributed voice control method, system, computer device and readable storage medium
CN113639386A (en) * 2021-07-07 2021-11-12 宁波奥克斯电气股份有限公司 Control method and device for multiple voice air conditioners and air conditioner
CN114495920A (en) * 2022-02-12 2022-05-13 深圳市宏芯达科技有限公司 AI all-in-one chip

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106288229A (en) * 2016-09-20 2017-01-04 珠海格力电器股份有限公司 A kind of air conditioning control method, device, centralized control node and system
CN107622767A (en) * 2016-07-15 2018-01-23 青岛海尔智能技术研发有限公司 The sound control method and appliance control system of appliance system
CN108156497A (en) * 2018-01-02 2018-06-12 联想(北京)有限公司 A kind of control method, control device and control system
CN109215658A (en) * 2018-11-30 2019-01-15 广东美的制冷设备有限公司 Voice awakening method, device and the household appliance of equipment
US20190043502A1 (en) * 2015-12-14 2019-02-07 Shenzhen Light Life Technology Co., Ltd. Voice recognition lamp capable of networking and voice recognition lamp control system thereof
CN110265006A (en) * 2019-04-28 2019-09-20 北京百度网讯科技有限公司 Awakening method, master node, slave node and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111583921A (en) * 2020-04-22 2020-08-25 珠海格力电器股份有限公司 Voice control method, device, computer equipment and storage medium
CN112233672A (en) * 2020-09-30 2021-01-15 成都长虹网络科技有限责任公司 Distributed voice control method, system, computer device and readable storage medium
CN113639386A (en) * 2021-07-07 2021-11-12 宁波奥克斯电气股份有限公司 Control method and device for multiple voice air conditioners and air conditioner
CN114495920A (en) * 2022-02-12 2022-05-13 深圳市宏芯达科技有限公司 AI all-in-one chip

Similar Documents

Publication Publication Date Title
CN110632854A (en) Voice control method and device, voice control node and system and storage medium
CN113516979B (en) Server-provided visual output at a voice interface device
CN108022590B (en) Focused session at a voice interface device
EP3637243B1 (en) Customized interface based on vocal input
EP3455747B1 (en) Voice-controlled closed caption display
WO2019205134A1 (en) Smart home voice control method, apparatus, device and system
CN110853619B (en) Man-machine interaction method, control device, controlled device and storage medium
Yue et al. Voice activated smart home design and implementation
US20140100854A1 (en) Smart switch with voice operated function and smart control system using the same
CN110506452A (en) Load control system based on audio
JP2016502355A (en) Voice-controlled configuration of an automation system
WO2017197186A1 (en) Voice-controlled closed caption display
EP3996333A1 (en) Multi-source smart-home device control
CN110278135B (en) Equipment position searching method, device, gateway and storage medium
CN112838967B (en) Main control equipment, intelligent home and control device, control system and control method thereof
JP2016063415A (en) Network system, audio output method, server, device and audio output program
CN114120996A (en) Voice interaction method and device
CN113658590A (en) Control method and device of intelligent household equipment, readable storage medium and terminal
CN110262276B (en) Intelligent home system based on raspberry group and control method thereof
CN106210002B (en) Control method and device and electronic equipment
CN113940143B (en) System and method for assisting a user in configuring a lighting system
US11818820B2 (en) Adapting a lighting control interface based on an analysis of conversational input
US20220319507A1 (en) Electronic device for identifying electronic device to perform speech recognition and method of operating same
KR20240063131A (en) Launching a hierarchical mobile application
CN113917844A (en) Intelligent household control method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2019-12-31