CN111665737A - Intelligent household scene control method and system

Info

Publication number
CN111665737A
Authority
CN (China)
Prior art keywords
scene, equipment, target, preset, voice
Legal status
Granted; currently active
Application number
CN202010706256.XA
Other languages
Chinese (zh)
Other versions
CN111665737B (en)
Inventor
傅东伟
Current Assignee
Aux Air Conditioning Co Ltd; Ningbo Aux Electric Co Ltd
Original Assignee
Aux Air Conditioning Co Ltd; Ningbo Aux Electric Co Ltd
Application filed by Aux Air Conditioning Co Ltd and Ningbo Aux Electric Co Ltd
Publication of CN111665737A; application granted and published as CN111665737B

Classifications

    • G05B 15/02: Systems controlled by a computer, electric
    • G05B 19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B 2219/2642: Domotique, domestic, home control, automation, smart house
    • Y02P 90/02: Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention relates to the technical field of smart homes, and provides a smart home scene control method and system. The method comprises the following steps: the distributed voice device receives a scene control instruction and sends the scene control audio in the scene control instruction, together with the identifier of the distributed voice device, to the voice server; the voice server performs voice recognition on the scene control audio to obtain the name of the scene to be controlled corresponding to the scene control audio; the voice server determines, according to the identifier of the distributed voice device, target scene information matched with the name of the scene to be controlled from the stored preset scene information, and returns the target scene information to the internet of things server; and the internet of things server controls the smart home devices according to the target scene information.

Description

Intelligent household scene control method and system
Technical Field
The invention relates to the technical field of intelligent home, in particular to an intelligent home scene control method and system.
Background
At present, voice interaction devices for smart homes on the market mostly take the form of standalone smart speakers, and a user conveniently controls such a device by voice, so that the voice command only controls the device that receives it.
However, such a one-to-one control method is too limited: interaction among multiple devices in the smart home network cannot be realized on the basis of a scene to be controlled, so the scene-level requirement of controlling multiple smart home devices across multiple smart spaces cannot be met.
Disclosure of Invention
The invention solves the problem that the existing one-to-one mode of controlling a voice interaction device through voice is too limited: interaction among multiple devices in a smart home network cannot be realized on the basis of a scene to be controlled, and thus the scene-level requirement of controlling multiple smart home devices across multiple smart spaces cannot be met.
In order to solve the above problem, the present invention provides a smart home scene control method applied to a smart home network, where the smart home network includes distributed voice devices, a voice server, an internet of things server, and smart home devices, and the method includes: the distributed voice device receives a scene control instruction and sends the scene control audio in the scene control instruction and the identifier of the distributed voice device to the voice server; the voice server performs voice recognition on the scene control audio to obtain the name of the scene to be controlled corresponding to the scene control audio; the voice server determines, according to the identifier of the distributed voice device, target scene information matched with the name of the scene to be controlled from the stored preset scene information, and returns the target scene information to the internet of things server; and the internet of things server controls the smart home devices according to the target scene information.
Compared with the prior art, the smart home scene control method has the following advantages. When a user needs to perform scene control, a scene control instruction carrying a scene control audio is sent to the distributed voice device. The distributed voice device sends the scene control audio and its own identifier to the voice server. The voice server performs voice recognition on the scene control audio to obtain the name of the scene to be controlled corresponding to the audio, determines, according to the identifier of the distributed voice device, target scene information matched with that name from the stored preset scene information, and sends the target scene information to the internet of things server. The internet of things server then controls the smart home devices according to the target scene information. Because control is driven by the scene control audio received by the distributed voice device, scene control of the smart home devices in the smart home network is realized, the aim of linkage control of multiple devices in the smart home network is fulfilled, the control modes of the smart home devices are enriched, and the scene requirement of controlling multiple smart home devices is met.
Further, the step of determining, by the voice server, target scene information matched with the name of the scene to be controlled from the stored preset scene information according to the identifier of the distributed voice device includes:
the voice server acquires a second user identifier for binding the distributed voice equipment according to the identifier of the distributed voice equipment;
and when there is, among the preset scene identifiers, a target scene identifier whose first user identifier is the same as the second user identifier and whose preset scene name is the same as the name of the scene to be controlled, the voice server determines the target scene identifier as the target scene information matched with the name of the scene to be controlled.
Further, the internet of things server stores preset scene identification, preset control equipment related to the preset scene identification, and a preset control command for controlling the preset control equipment in advance, the preset control equipment is selected from the smart home equipment in advance, and the step of controlling the smart home equipment by the internet of things server according to the target scene information includes:
the Internet of things server determines a target preset scene identifier consistent with the target scene identifier from the preset scene identifiers;
the Internet of things server acquires target preset control equipment related to the target preset scene identification and a target preset control command for controlling the target preset control equipment;
and the Internet of things server sends the target preset control command to corresponding target preset control equipment so as to control the target preset control equipment.
Further, there are a plurality of smart home devices, each smart home device corresponding to an identifier, a third user identifier for binding the smart home device, and a binding position, and the step of determining, by the voice server, target scene information matched with the name of the scene to be controlled from the stored preset scene information according to the identifier of the distributed voice device includes:
when no target scene identifier whose first user identifier is the same as the second user identifier and whose preset scene name is the same as the name of the scene to be controlled exists among the preset scene identifiers, the voice server acquires the binding position of the distributed voice device according to the identifier of the distributed voice device;
the voice server searches for primarily selected intelligent home equipment of which the binding position is the same as that of the distributed voice equipment and the third user identification is the same as that of the second user identification from the intelligent home equipment;
and the voice server determines target scene information matched with the name of the scene to be controlled according to a preset rule and the primarily selected intelligent household equipment and the preset scene information, wherein the target scene information comprises target intelligent household equipment selected from the primarily selected intelligent household equipment and a target control command for controlling the target intelligent household equipment.
Furthermore, each smart home device corresponds to a device type, the voice server stores a preset scene identifier, a preset control device related to the preset scene identifier, and a preset control command for controlling the preset control device in advance, and the step of determining, by the voice server, target scene information matched with the name of the scene to be controlled according to a preset rule and according to the initially selected smart home device and the preset scene information includes:
the voice server acquires the equipment type of the primarily selected intelligent household equipment;
the voice server calculates the similarity between the equipment type of the primarily selected intelligent household equipment and the equipment type of the preset control equipment related to the preset scene identification;
the voice server determines the preset scene identification with the highest similarity as a target preset scene identification and determines preset control equipment related to the target preset scene identification as target preset control equipment;
the voice server determines a first target device from the primary intelligent home device and determines a second target device from the target preset control device, wherein the first target device and the second target device are the same in device type;
and the voice server takes the first target equipment as target intelligent household equipment and takes a preset control command for controlling the second target equipment as a target control command for the target intelligent household equipment.
Further, the step of sending the target preset control command to the corresponding target preset control device by the internet of things server to control the target preset control device includes:
the Internet of things server judges whether the target preset control equipment is on line or not;
and when the target preset control equipment is on line, the Internet of things server sends the target preset control command to the corresponding target preset control equipment so as to control the target preset control equipment.
Further, the method further comprises:
and the voice server receives a control result which is returned by the Internet of things server and used for controlling the intelligent household equipment, and broadcasts the control result in an audio mode.
Further, the method further comprises:
the method comprises the steps that an Internet of things server receives an equipment binding command sent by a mobile terminal, wherein the equipment binding command comprises an equipment identifier, an equipment name, a position to be bound and a user identifier for binding the equipment to be bound;
the Internet of things server establishes a first corresponding relation between the equipment identifier of the equipment to be bound and the equipment name, the position to be bound and the user identifier binding the equipment to be bound and stores the first corresponding relation;
the Internet of things server sends the first corresponding relation to the voice server;
the voice server stores the first correspondence.
Further, the method further comprises:
the internet of things server receives scene configuration information sent by the mobile terminal, wherein the scene configuration information comprises a scene name of a scene to be configured, intelligent household equipment to be added to the scene to be configured, a control instruction for controlling the intelligent household equipment to be added and a user identifier for configuring the scene to be configured, and the intelligent household equipment to be added and a user represented by the user identifier for configuring the scene to be configured have a first corresponding relationship;
the Internet of things server generates a corresponding scene identifier for the scene to be configured;
the Internet of things server establishes a second corresponding relation between the scene identifier of the scene to be configured and the intelligent home equipment to be added, the control instruction for controlling the intelligent home equipment to be added and the user identifier for configuring the scene to be configured, and stores the second corresponding relation;
the Internet of things server sends the second corresponding relation to the voice server;
the voice server stores the second correspondence.
Further, the voice server includes a first voice server and a second voice server, and the voice server performs voice recognition on the scene control audio to obtain a name of a scene to be controlled corresponding to the scene control audio, including:
the first voice server sends the scene control audio and the identification of the distributed voice equipment to the second voice server;
and the second voice server performs voice recognition on the scene control audio to obtain a name of the scene to be controlled corresponding to the scene control audio.
The invention also provides an intelligent home scene control system, which comprises distributed voice equipment, a voice server, an internet of things server and intelligent home equipment, wherein the distributed voice equipment is used for receiving a scene control instruction and sending a scene control audio frequency in the scene control instruction and an identifier of the distributed voice equipment to the voice server; the voice server is used for carrying out voice recognition on the scene control audio to obtain a name of a scene to be controlled corresponding to the scene control audio; the voice server is further used for determining target scene information matched with the name of the scene to be controlled from stored preset scene information according to the identification of the distributed voice equipment, and returning the target scene information to the internet of things server; and the Internet of things server is used for controlling the intelligent household equipment according to the target scene information.
Drawings
Fig. 1 is a schematic view of an application scenario of the intelligent home scenario control method provided by the invention.
Fig. 2 is a schematic flow chart of a smart home scene control method provided by the invention.
Fig. 3 is a schematic flow chart of another intelligent home scene control method provided by the invention.
Fig. 4 is a schematic flow chart of another intelligent home scene control method provided by the invention.
Fig. 5 is a schematic flow chart of another intelligent home scene control method provided by the invention.
Fig. 6 is a schematic flow chart of another intelligent home scene control method provided by the present invention.
Fig. 7 is a schematic flow chart of another intelligent home scene control method provided by the invention.
Description of reference numerals:
10-distributed voice devices; 20-a voice server; 201-a first voice server; 202-a second voice server; 40-an internet of things server; 50-smart home equipment; 60-mobile terminal.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of the intelligent home scenario control method provided by the present invention, and in fig. 1, a distributed voice device 10, a voice server 20, an internet of things server 40, an intelligent home device 50, and a mobile terminal 60 form an intelligent home network. The voice server 20 is in communication connection with the distributed voice devices 10 and the internet of things server 40, and the internet of things server 40 is in communication connection with the smart home devices 50.
The distributed voice equipment 10 acquires a scene control audio sent by a user, and sends the scene control audio and the identifier of the distributed voice equipment 10 to the voice server 20; the voice server 20 performs voice recognition on the scene control audio to obtain a name of a scene to be controlled corresponding to the scene control audio, the voice server 20 determines target scene information matched with the name of the scene to be controlled from the stored preset scene information according to the identifier of the distributed voice equipment 10, and returns the target scene information to the internet of things server 40; the internet of things server 40 controls the smart home devices 50 according to the target scene information.
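This end-to-end path (later detailed as steps S101 to S104) can be summarized in a minimal Python sketch. All class names, function names and data shapes below are illustrative assumptions, not anything prescribed by the patent:

```python
from dataclasses import dataclass


@dataclass
class SceneControlRequest:
    """What the distributed voice device 10 sends to the voice server 20."""
    scene_control_audio: bytes   # audio captured after the device is woken up
    device_identifier: str       # uniquely identifies the distributed voice device


def handle_scene_control(request: SceneControlRequest,
                         recognize_scene_name,        # audio -> name of the scene to be controlled
                         match_target_scene_info,     # (scene name, device id) -> target scene info
                         control_smart_home_devices): # executed by the internet of things server
    """Hypothetical orchestration of the four stages described above."""
    scene_name = recognize_scene_name(request.scene_control_audio)
    target_scene_info = match_target_scene_info(scene_name, request.device_identifier)
    return control_smart_home_devices(target_scene_info)


# Stand-in implementations, just to show how the pieces connect:
result = handle_scene_control(
    SceneControlRequest(b"<audio>", "voice-device-001"),
    recognize_scene_name=lambda audio: "start the lunch break scene",
    match_target_scene_info=lambda name, device_id: {"scene_id": "S1"},
    control_smart_home_devices=lambda info: f"executed scene {info['scene_id']}",
)
print(result)  # executed scene S1
```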
The user may bind the smart home device 50 with a preset user through the mobile terminal 60, or set scene information through the mobile terminal 60, so as to control the smart home device 50 according to the set scene information.
The distributed voice device 10 may be a smart home device 50 having an audio input function, and may receive a scene control audio from a user when the distributed voice device 10 is awakened.
The voice server 20 may be an entity server or a virtual machine capable of implementing the function of the entity server, and may also be a cloud server, etc.
As another implementation manner, the voice server 20 includes a first voice server 201 and a second voice server 202, where the second voice server 202 may be a server provided by a third party and capable of parsing or recognizing voice. In order to minimize interaction between the distributed voice devices 10 and the second voice server 202, and to reduce the influence on the second voice server 202 when the distributed voice devices 10 change, the second voice server 202 performs information interaction with the distributed voice devices 10 through the first voice server 201. The first voice server 201 provides the distributed voice devices 10 with an interface for accessing the second voice server 202. When there are multiple distributed voice devices 10, they all interact with the second voice server 202 through the first voice server 201, which realizes loose coupling between the distributed voice devices 10 and the second voice server 202, reduces the influence on the second voice server 202 when distributed voice devices 10 are added or removed, and allows unified management of the distributed voice devices 10.
The internet of things server 40 may be a server for performing unified control management on the smart home devices 50, and all control commands that need to be controlled on the smart home devices 50 are directly issued to the smart home devices 50 by the internet of things server 40. The internet-of-things server 40 is responsible for judging whether the smart home devices 50 are online or not, analyzing the received network packet, and converting a control command for controlling the smart home devices 50 in the network packet into a command form which can be recognized by the smart home devices 50.
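The "conversion into a command form recognizable by the device" can be pictured as a per-device-type translation step. The command strings and payload fields below are assumptions standing in for whatever protocol each appliance actually speaks:

```python
def to_device_payload(device_type: str, control_command: str) -> dict:
    """Translate an abstract control command into a device-recognizable payload.

    The patent only states that the internet of things server performs such a
    conversion before issuing the command; the mapping here is illustrative.
    """
    if device_type == "air conditioner" and "cooling" in control_command:
        return {"op": "set_mode", "mode": "cool", "temperature_c": 26}
    if device_type == "curtain" and "close" in control_command:
        return {"op": "set_position", "percent_open": 0}
    if device_type == "lighting" and ("close" in control_command or "off" in control_command):
        return {"op": "power", "state": "off"}
    return {"op": "raw", "command": control_command}  # pass unknown commands through


print(to_device_payload("curtain", "close the curtain"))  # {'op': 'set_position', 'percent_open': 0}
```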
The smart home devices 50 are home devices that enable a user to remotely and automatically control the home devices from any internet-connected location in the world using mobile or other networked devices, including, but not limited to, smart appliances, smart curtains, smart lighting, smart entrance guards, smart games, and the like.
In some scenarios, the distributed voice device 10 and the smart home device 50 may be the same device; in other scenarios, the smart home devices 50 may include the distributed voice device 10.
The mobile terminal 60 may be, but is not limited to, a mobile device such as a smartphone, a tablet, a wearable smart device, etc.
Based on the scenario shown in fig. 1, an embodiment of the present invention provides an intelligent home scenario control method applied to an intelligent home network shown in fig. 1, please refer to fig. 2, and fig. 2 is a schematic flow diagram of the intelligent home scenario control method provided in the present invention, where the method includes the following steps:
step S101, the distributed voice equipment receives a scene control instruction, and sends a scene control audio frequency in the scene control instruction and the identification of the distributed voice equipment to a voice server.
In this embodiment, the scene control instruction may be a voice instruction issued by a user for performing scene control after the distributed voice device 10 is woken up. For example, the user says "start the lunch break scene", and the audio corresponding to "start the lunch break scene" is the scene control audio in the scene control instruction.
In this embodiment, the identifier of the distributed voice device 10 may be a symbol or a symbol string used to uniquely characterize the distributed voice device 10, for example, the identifier of the distributed voice device 10 may be an IP address of the distributed voice device 10, or a factory number of the distributed voice device 10, or the like.
In this embodiment, the distributed voice device 10 transmits its own identifier to the voice server 20 together with the scene control audio in the scene control instruction.
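A minimal sketch of the message assembled in step S101 follows. The field names and the JSON encoding are assumptions; the patent only requires that the scene control audio and the device identifier travel together:

```python
import base64
import json


def build_scene_control_message(scene_control_audio: bytes, device_identifier: str) -> str:
    """Package the scene control audio with the identifier of the distributed voice device."""
    return json.dumps({
        "device_identifier": device_identifier,  # e.g. an IP address or factory number
        "scene_control_audio": base64.b64encode(scene_control_audio).decode("ascii"),
    })


# Example: a device identified by a (hypothetical) factory number sends the captured audio.
message = build_scene_control_message(b"<pcm audio bytes>", "AUX-DV-2020-0001")
```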
And S102, carrying out voice recognition on the scene control audio by the voice server to obtain a name of the scene to be controlled corresponding to the scene control audio.
In this embodiment, the voice server 20 performs voice recognition on the scene control audio, which may be to convert the scene control audio into corresponding words, where the words are names of scenes to be controlled.
And S103, the voice server determines target scene information matched with the name of the scene to be controlled from the stored preset scene information according to the identification of the distributed voice equipment, and returns the target scene information to the Internet of things server.
In this embodiment, the preset scene information stored in the voice server 20 may be preset by the user through the mobile terminal 60, the mobile terminal 60 sends the preset scene information preset by the user to the internet of things server 40, and the internet of things server 40 sends the preset scene information to the voice server 20.
In this embodiment, the preset scene information may include, but is not limited to, a user identifier for setting the preset scene information, a preset scene name of the preset scene, the smart home devices 50 related to the preset scene, and a control command for controlling the smart home devices 50, where the smart home devices 50 related to the preset scene are the smart home devices 50 that need to be controlled in the preset scene. For example, the preset scene name set by the user is: "start the lunch break scene", the relevant smart home devices 50 are: the air conditioner 1-1 and the curtain 1-1, wherein the control command for controlling the air conditioner 1-1 is as follows: "turn on the air conditioner cooling mode, set up the temperature 26 °", the control command to the curtain 1-1 is: "close the curtain".
In this embodiment, the target scene information may be an identifier of a preset scene matched with a name of a scene to be controlled, or may be the smart home device 50 matched with the name of the scene to be controlled and a control command for controlling the smart home device 50.
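For illustration, one entry of the preset scene information described above (setting user, preset scene name, related devices and their control commands) can be pictured as a small record. The field names are assumptions, and the example values reproduce the lunch-break scene above:

```python
from dataclasses import dataclass, field


@dataclass
class PresetSceneInfo:
    """Illustrative shape of one entry of preset scene information."""
    preset_scene_id: str                                  # e.g. "S1"
    preset_scene_name: str                                # e.g. "start the lunch break scene"
    first_user_id: str                                    # user who set the preset scene
    control_commands: dict = field(default_factory=dict)  # device identifier -> control command


lunch_break = PresetSceneInfo(
    preset_scene_id="S1",
    preset_scene_name="start the lunch break scene",
    first_user_id="A1",
    control_commands={
        "air conditioner 1-1": "turn on the air conditioner cooling mode, set the temperature to 26 degrees",
        "curtain 1-1": "close the curtain",
    },
)
```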
And step S104, the Internet of things server controls the intelligent household equipment according to the target scene information.
The intelligent home scene control method provided by the embodiment of the invention is based on the scene control audio received by the distributed voice equipment 10, so that the intelligent home equipment in the intelligent home network is subjected to scene control, the aim of linkage control of a plurality of pieces of equipment in the intelligent home network is fulfilled, the control modes of the intelligent home equipment are enriched, and the scene requirement for controlling the plurality of pieces of intelligent home equipment is met.
On the basis of fig. 2, an embodiment of the present invention provides a possible implementation manner in which the voice server 20 determines target scene information matched with a name of a scene to be controlled from preset scene information, please refer to fig. 3, where fig. 3 is a flowchart of an intelligent home scene control method provided by the present invention, and step S103 further includes the following sub-steps:
and step S1031, the voice server obtains a second user identifier for binding the distributed voice equipment according to the identifier of the distributed voice equipment.
In this embodiment, as a specific implementation manner, an APP may run on the mobile terminal. After registering on the APP with a user identifier, the user logs in to the APP with the registered user identifier to bind the distributed voice device 10, that is, an association between the user identifier logged in on the mobile terminal and the identifier of the distributed voice device 10 is established. The second user identifier is the user identifier used to bind the distributed voice device 10, and may be represented by a character string composed of letters, numbers, or other characters. For example, a user logs in to the mobile terminal with the user identifier abc and binds the distributed voice device 10 through the mobile terminal; abc is then the second user identifier for binding the distributed voice device 10.
And a substep S1032, when there is, among the preset scene identifiers, a target scene identifier whose first user identifier is the same as the second user identifier and whose preset scene name is the same as the name of the scene to be controlled, the voice server determines the target scene identifier as the target scene information matched with the name of the scene to be controlled.
In this embodiment, the preset scene information stored in the voice server 20 includes a preset scene identifier of a preset scene, and a preset scene name and a first user identifier corresponding to the preset scene identifier, where the first user identifier is used to represent a user who sets the preset scene, and the first user identifier may be represented by a character string composed of letters, numbers, or other characters, for example, the preset scene information stored in the voice server 20 is as shown in table 1 below:
TABLE 1

Preset scene identifier    Preset scene name              First user identifier
S1                         Start the lunch break scene    A1
S2                         Close the lunch break scene    A1
S3                         Open the theater scene         B1
S4                         Close the theater scene        B2
If the name of the scene to be controlled is "start the lunch break scene" and the second user identifier is A1, the target scene identifier is S1.
It should be noted that, according to the needs of the actual scene, whether the preset scene name is the same as the name of the scene to be controlled may also be decided by determining whether the two names share keywords, or whether the number of shared keywords reaches a preset number.
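Sub-steps S1031 and S1032 amount to a lookup keyed on the user identifier and the scene name. The sketch below uses the Table 1 data; the function name, the record attributes and the whitespace-based keyword matching are assumptions made for illustration:

```python
from collections import namedtuple

# Records shaped like the PresetSceneInfo sketch above (attribute names assumed).
PresetScene = namedtuple("PresetScene", "preset_scene_id preset_scene_name first_user_id")


def find_target_scene_id(preset_scenes, scene_name_to_control, second_user_id, min_shared_keywords=2):
    """Return the preset scene identifier whose first user identifier equals the second
    user identifier and whose preset scene name matches the name of the scene to be
    controlled, either exactly or by sharing at least min_shared_keywords keywords."""
    wanted = set(scene_name_to_control.lower().split())
    for scene in preset_scenes:
        if scene.first_user_id != second_user_id:
            continue
        name = scene.preset_scene_name.lower()
        if name == scene_name_to_control.lower() or len(wanted & set(name.split())) >= min_shared_keywords:
            return scene.preset_scene_id
    return None  # no target scene identifier: fall back to the location-based matching described later


table_1 = [
    PresetScene("S1", "start the lunch break scene", "A1"),
    PresetScene("S2", "close the lunch break scene", "A1"),
    PresetScene("S3", "open the theater scene", "B1"),
    PresetScene("S4", "close the theater scene", "B2"),
]
print(find_target_scene_id(table_1, "start the lunch break scene", "A1"))  # S1
```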
Based on the specific implementation manner of steps S1031 to S1032, the embodiment of the present invention further provides a possible implementation manner for controlling the smart home device by the internet of things server 40 according to the target scene information, and step S104 may include the following sub-steps:
in the substep S1041, the internet of things server determines a target preset scene identifier consistent with the target scene identifier from the preset scene identifiers.
In this embodiment, the internet of things server stores a preset scene identifier in advance, and the target preset scene identifier is a preset scene identifier consistent with the target scene identifier.
In the substep S1042, the internet of things server obtains a target preset control device related to the target preset scene identifier and a target preset control command for controlling the target preset control device.
In this embodiment, the internet of things server stores a preset scene identifier, a preset control device related to the preset scene identifier, and a preset control command for controlling the preset control device in advance, where the preset control device is pre-selected from the smart home devices 50. For example, the smart home devices 50 include an air conditioner, a curtain, a lighting device, a television, and a stereo, and the preset scene identifier, the preset control device related to the preset scene identifier, and the preset control command for controlling the preset control device are shown in table 2 below:
TABLE 2

[Table 2, which appears as an image in the original publication, lists each preset scene identifier together with its related preset control devices and the preset control commands for controlling those devices; the row for scene S1 is described in the example below.]
If the target preset scene identifier is S1, the target preset control devices are the air conditioner a1, the curtain b1 and the lighting device c1, and the target preset control commands include "open air conditioner sleep mode" corresponding to the air conditioner a1, "close curtain" corresponding to the curtain b1 and "close lighting device" corresponding to the lighting device c1.
In the substep S1043, the internet of things server sends the target preset control command to the corresponding target preset control device to control the target preset control device.
In this embodiment, when there are multiple target preset control devices, each target preset control command is sent to its corresponding target preset control device. For example, in the S1 scene of Table 2 above, the command "turn on the air conditioner sleep mode" is sent to the air conditioner a1, the command "close the curtain" is sent to the curtain b1, and the command "turn off the lighting" is sent to the lighting device c1.
As a specific implementation manner, in order to reduce the probability of the execution failure of the target preset control command and improve the user experience, the sub-step may be implemented by:
firstly, the server of the internet of things judges whether the target preset control equipment is on line or not.
In this embodiment, the target preset control device is selected from the smart home devices 50 when the user sets the preset scene information. One way of determining whether the target preset control device is online may be: when network communication between the target preset control device and the internet of things server 40 is normal, the target preset control device is determined to be online; alternatively, it is determined to be online when the network communication quality between the target preset control device and the internet of things server 40 reaches a preset quality standard, or when the average network bandwidth reaches a preset network bandwidth. The present invention does not limit the specific manner of determining whether the target preset control device is online.
Secondly, when the target preset control equipment is on line, the internet of things server sends the target preset control command to the corresponding target preset control equipment so as to control the target preset control equipment.
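Putting sub-steps S1041 to S1043 and the online check together, the internet of things server's side can be sketched as below. The storage layout and the is_online and send_command helpers are assumptions; the example data reproduces the S1 scene described above:

```python
def execute_scene(target_scene_id, scene_devices, is_online, send_command):
    """S1041: find the matching preset scene; S1042: collect its preset control devices
    and commands; S1043: send each command, but only to devices that are online."""
    if target_scene_id not in scene_devices:
        return {"ok": False, "reason": "unknown preset scene identifier"}
    results = {}
    for device_id, command in scene_devices[target_scene_id]:
        if not is_online(device_id):
            results[device_id] = "skipped: device offline"
            continue
        send_command(device_id, command)
        results[device_id] = "command sent"
    return {"ok": True, "results": results}


outcome = execute_scene(
    "S1",
    {"S1": [("air conditioner a1", "open air conditioner sleep mode"),
            ("curtain b1", "close curtain"),
            ("lighting device c1", "close lighting device")]},
    is_online=lambda device_id: True,             # stand-in connectivity check
    send_command=lambda device_id, command: None, # stand-in transport to the device
)
```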
According to the intelligent home scene control method provided by the embodiment of the invention, when the preset scene information is preset by the user, the target scene information matched with the scene control audio and set by the user can be automatically matched, so that the intelligent home equipment can be controlled according to the target scene information, the control of the user is facilitated, and the user experience is improved.
In this embodiment, the user may not have set preset scene information in advance; in that case, no target scene identifier exists among the preset scene identifiers. In order to still determine target scene information matched with the name of the scene to be controlled, so that the smart home devices can finally be controlled according to the target scene information, an embodiment of the present invention further provides another possible implementation manner for determining, according to the identifier of the distributed voice device 10, target scene information matched with the name of the scene to be controlled from the stored preset scene information. Please continue to refer to fig. 3; step S103 further includes the following sub-steps:
and a substep S1033, when there is no target scene identifier in the preset scene identifiers, where the first user identifier is the same as the second user identifier and the preset scene name is the same as the scene name to be controlled, the voice server obtains the binding position of the distributed voice device according to the identifier of the distributed voice device.
In this embodiment, when binding the distributed voice device 10, the user may set a binding position for the distributed voice device 10. The binding position may be the location where the distributed voice device 10 is installed; it may be a room identifier, such as "bedroom" or "living room", or a floor identifier, such as "15th floor".
In the substep S1034, the voice server searches the primarily selected smart home devices with the same binding position as the distributed voice device and the same third user identifier as the second user identifier from the smart home devices.
In this embodiment, the third user identifier is used to represent the user who binds the smart home device 50, and may be represented by a character string composed of letters, numbers, or other characters. There may be multiple smart home devices 50, each corresponding to an identifier, a third user identifier for binding the smart home device 50, and a binding position; the user may also set the binding position when binding the smart home device 50, and the meaning and setting manner of the binding position of the smart home device 50 are similar to those of the distributed voice device 10, so the details are not repeated here.
In this embodiment, the initially selected smart home devices are those smart home devices 50 whose binding position is the same as the binding position of the distributed voice device and whose third user identifier is the same as the second user identifier, that is, the smart home devices 50 that are bound by the same user as the distributed voice device 10 and share its binding position.
And step S1035, the voice server determines target scene information matched with the name of the scene to be controlled according to the preset rule, the primarily selected intelligent household equipment and preset scene information, wherein the target scene information comprises the target intelligent household equipment selected from the primarily selected intelligent household equipment and a target control command for controlling the target intelligent household equipment.
In this embodiment, each smart home device 50 corresponds to a device type, the voice server 20 stores a preset scene identifier, a preset control device related to the preset scene identifier, and a preset control command for controlling the preset control device in advance, and the preset control device is selected from the smart home devices 50.
In this embodiment, the preset rule is used to determine target scene information matched with the name of the scene to be controlled from preset scene information. For example, the preset rule may determine the target scene information according to preset scene information corresponding to a preset control device which is consistent with the initially selected smart home device type and has the highest use frequency, or may determine the target scene information according to preset scene information which has the highest similarity between the device type of the preset control device and the initially selected smart home device type. The following description is given taking one possible implementation as an example:
firstly, the voice server obtains the device type of the primarily selected smart home device.
In this embodiment, there may be one or more initially selected smart home devices. When there are multiple initially selected smart home devices, their device types include all types of those devices. For example, if the initially selected smart home devices are device a, device b and device c, whose types are respectively air conditioner, curtain and air conditioner, then the device types of the initially selected smart home devices are {air conditioner, curtain}.
And secondly, the voice server calculates the similarity between the equipment type of the primarily selected intelligent household equipment and the equipment type of the preset control equipment related to the preset scene identification.
In this embodiment, the current user does not set a preset scene, and the preset scene stored in the voice server 20 is a preset scene preset by another user at this time.
In this embodiment, as a specific implementation manner, the similarity between the device types of the initially selected smart home devices and the device types of the preset control devices related to a preset scene identifier may be determined according to the number of devices having the same device type. For example, there are two preset scenes, and the corresponding preset control devices are as shown in Table 3 below:

TABLE 3

Preset scene identifier    Device types of the preset control devices
S1                         Air conditioner, television, lighting device
S2                         Air conditioner, curtain, lighting device

If the device types of the initially selected smart home devices are {air conditioner, curtain}, the similarity with the device types of S1 is 1, and the similarity with the device types of S2 is 2.
And thirdly, the voice server determines the preset scene identification with the highest similarity as a target preset scene identification and determines the preset control equipment related to the target preset scene identification as target preset control equipment.
In this embodiment, for example, the device types of the initially selected smart home devices are: { air conditioner, curtain }, where the preset scene is as shown in table 3 above, S2 is a target preset scene identifier, and the target preset control device is: the device corresponding to the air conditioner, the device corresponding to the curtain and the device corresponding to the lighting device.
And fourthly, the voice server determines first target equipment from the primarily selected intelligent household equipment and determines second target equipment from the target preset control equipment, wherein the equipment types of the first target equipment and the second target equipment are the same.
In this embodiment, for example, the initially selected smart home devices and device types are: { equipment a (type: air conditioner), equipment b (type: curtain) }, the target preset control equipment and the equipment types are: { device c (type: air conditioner), device d (type: curtain), device e (type: lighting) }, the first target device includes device a and device b, and the second target device includes device c and device d.
And fifthly, the voice server takes the first target equipment as target intelligent home equipment, and takes a preset control command for controlling the second target equipment as a target control command for the target intelligent home equipment.
In this embodiment, when the first target device and the second target device are both multiple, the preset control command controlled by the second target device with the same device type is used as the target control command of the corresponding first target device. For example, the preset control command for the device type of the second target device being the air conditioner is used as the target control command for the device type of the first target device being the air conditioner, and the preset control command for the device type of the second target device being the curtain is used as the target control command for the device type of the first target device being the curtain.
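The five steps above reduce to a set-overlap computation followed by a per-type transfer of commands. The sketch below uses the Table 3 device types; the function name, the data layout and the concrete commands attached to S1 and S2 are assumptions for illustration:

```python
def match_by_device_type(preselected, preset_scenes):
    """preselected: list of (device id, device type) for the initially selected smart home
    devices; preset_scenes: preset scene identifier -> list of (device id, device type,
    preset control command). Returns the target preset scene identifier and the
    (target smart home device, target control command) pairs."""
    pre_types = {dtype for _, dtype in preselected}                                       # step 1

    def similarity(devices):                                                              # step 2
        return len(pre_types & {dtype for _, dtype, _ in devices})

    target_scene_id = max(preset_scenes, key=lambda sid: similarity(preset_scenes[sid]))  # step 3
    command_by_type = {dtype: cmd for _, dtype, cmd in preset_scenes[target_scene_id]}    # steps 4-5
    pairs = [(dev_id, command_by_type[dtype]) for dev_id, dtype in preselected if dtype in command_by_type]
    return target_scene_id, pairs


preset = {
    "S1": [("television x", "television", "turn on the television"),
           ("air conditioner x", "air conditioner", "open air conditioner sleep mode"),
           ("lighting x", "lighting device", "close lighting device")],
    "S2": [("air conditioner y", "air conditioner", "open air conditioner sleep mode"),
           ("curtain y", "curtain", "close curtain"),
           ("lighting y", "lighting device", "close lighting device")],
}
# {air conditioner, curtain} overlaps S2 in two device types, so S2 wins and its commands carry over.
print(match_by_device_type([("device a", "air conditioner"), ("device b", "curtain")], preset))
```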
It should be noted that, after determining the target smart home device and the target control command for controlling the target smart home device, the internet of things server 40 sends the target control command to the corresponding target smart home device, so as to control the target smart home device. As a specific implementation manner, in order to improve the success rate of control, similar to the method described in the sub-step S1043, the internet of things server 40 may also first determine whether the target smart home device is online, and when the target smart home device is online, send the target control command to the corresponding target smart home device.
According to the intelligent home scene control method provided by the embodiment of the invention, when the user does not preset the preset scene information, the target scene information matched with the scene control audio can be automatically matched according to the preset scene information set by other users, so that the intelligent home equipment can be controlled according to the target scene information, the control mode is enriched, and the user experience is improved.
In this embodiment, in order to enable a user to more conveniently and timely obtain a scene control result, an embodiment of the present invention further provides another implementation manner for prompting a scene control result on the basis of fig. 2, please refer to fig. 4, where fig. 4 is a schematic flow diagram of an intelligent home scene control method provided by the present invention, and the method further includes the following steps:
and S105, the voice server receives a control result which is returned by the Internet of things server and used for controlling the intelligent household equipment, and broadcasts the control result in an audio mode.
In this embodiment, the voice server may broadcast an audio message for each smart home device that returns a control result, for example: "The air conditioner in room 1-1 was turned on successfully." When many smart home devices need to be controlled, a single audio message may instead be broadcast, such as "The lunch break scene was started successfully" or "Starting the lunch break scene failed; some devices failed to start".
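A small sketch of how the broadcast text of step S105 might be assembled from per-device control results; the message wording and the threshold for switching to a single aggregate message are assumptions:

```python
def build_broadcast_text(scene_name, results, per_device_limit=3):
    """results maps a device description to True (controlled successfully) or False."""
    if len(results) <= per_device_limit:
        return [f"{device} was {'successfully' if ok else 'not'} controlled"
                for device, ok in results.items()]
    if all(results.values()):
        return [f"{scene_name} was started successfully"]
    return [f"Starting {scene_name} failed: some devices failed to start"]


print(build_broadcast_text("the lunch break scene", {"the air conditioner in room 1-1": True}))
```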
According to the intelligent home scene control method provided by the embodiment of the invention, the user can timely and conveniently obtain the scene control result in an audio broadcasting mode, so that the user experience is improved.
Step S105 in fig. 4 may be used in combination with fig. 3: steps S1031 to S1035 in fig. 3 may replace step S103 in fig. 4, and steps S1041 to S1043 in fig. 3 may replace step S104 in fig. 4.
In this embodiment, for the distributed voice device 10 or the smart home device 50, a user may bind the distributed voice device 10 or the smart home device 50 to facilitate the user to uniformly manage the distributed voice device 10 or the smart home device 50, and please refer to fig. 5, where fig. 5 is a flowchart of an intelligent home scene control method provided by the present invention, and the method further includes the following steps:
step S201, the Internet of things server receives an equipment binding command sent by the mobile terminal, wherein the equipment binding command comprises an equipment identifier, an equipment name, a position to be bound and a user identifier for binding the equipment to be bound.
In this embodiment, the device to be bound may be the distributed voice device 10, or may be the smart home device 50.
It should be noted that, according to actual needs, the device binding command may further include other information about the device to be bound that the user inputs from the mobile terminal, for example, the purchase date and purchase location of the device to be bound.
Step S202, the Internet of things server establishes a first corresponding relation between the equipment identifier of the equipment to be bound and the equipment name, the position and the user identifier of the equipment to be bound and stores the first corresponding relation.
In this embodiment, the internet of things server may perform unified management on the bound devices by using the stored first corresponding relationship, and provide a function of querying, modifying, adding, or deleting the device information that has been bound by the user.
Step S203, the Internet of things server sends the first corresponding relation to the voice server.
Step S204, the voice server stores the first corresponding relation.
In this embodiment, the voice server 20 may find, according to the first corresponding relationship, a user identifier for performing a binding operation on the specified device, so as to find a preset scene set by the user represented by the user identifier.
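Steps S201 to S204 create one record per bound device (the first correspondence) and replicate it from the internet of things server to the voice server. An illustrative sketch follows, with all class and field names assumed:

```python
from dataclasses import dataclass


@dataclass
class DeviceBinding:
    """First correspondence: one record per bound device."""
    device_identifier: str   # identifier of the device to be bound
    device_name: str
    bound_position: str      # e.g. a room or floor identifier
    user_identifier: str     # user identifier performing the binding


class VoiceServer:
    def __init__(self):
        self.bindings = {}

    def store_binding(self, binding):
        # S204: kept so that, given a device identifier, the binding user (and later
        # that user's preset scenes) can be looked up.
        self.bindings[binding.device_identifier] = binding


class IotServer:
    def __init__(self, voice_server):
        self.bindings = {}
        self.voice_server = voice_server

    def handle_device_binding(self, binding):
        self.bindings[binding.device_identifier] = binding  # S202: store the first correspondence
        self.voice_server.store_binding(binding)            # S203: forward it to the voice server


voice = VoiceServer()
iot = IotServer(voice)
iot.handle_device_binding(DeviceBinding("air-conditioner-1-1", "bedroom air conditioner", "bedroom", "abc"))
```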
It should be noted that, although the embodiment of the present invention does not provide a specific implementation for unbinding the bound device, a person skilled in the art may infer an unbinding process opposite to the binding process according to the binding process described in the embodiment of the present invention.
The method for controlling the scene of the smart home provided by the embodiment of the invention provides a method for binding the devices to be bound, establishes a first corresponding relationship between the device identifier of the device to be bound and the device name, the position to be bound and the user identifier for binding the device to be bound, and stores the first corresponding relationship in the internet of things server 40 and the voice server 20, so as to uniformly manage the devices to be bound, quickly find the target scene information corresponding to the scene control audio, and further realize the control of the smart home device 50.
In this embodiment, in order to facilitate a user to control the smart home device 50 according to a scene, an embodiment of the present invention further provides an implementation manner for configuring the scene, please refer to fig. 6, where fig. 6 is a schematic flow diagram of a method for controlling a smart home scene provided by the present invention, and the method further includes the following steps:
step S301, the Internet of things server receives scene configuration information sent by the mobile terminal, wherein the scene configuration information comprises a scene name of a scene to be configured, intelligent household equipment to be added to the scene to be configured, a control instruction for controlling the intelligent household equipment to be added and a user identifier for configuring the scene to be configured, and the intelligent household equipment to be added and a user represented by the user identifier for configuring the scene to be configured have a first corresponding relation.
In this embodiment, a user can only add smart home devices 50 that the user has bound to the scene to be configured, that is, the user identifier that binds the smart home device 50 must be the same as the user identifier that configures the scene to be configured.
Step S302, the Internet of things server generates a corresponding scene identifier for a scene to be configured.
In this embodiment, different users may set the same scene name of the scene to be configured, and in order to facilitate the distinction, the internet of things server generates a scene identifier for uniquely characterizing the scene to be configured for the scene to be configured.
Step S303, the Internet of things server establishes a second corresponding relation between the scene identifier of the scene to be configured and the intelligent household equipment to be added, the control instruction for controlling the intelligent household equipment to be added and the user identifier for configuring the scene to be configured, and stores the second corresponding relation.
In this embodiment, the internet of things server 40 may determine, by using the second corresponding relationship, a target preset scene identifier that is consistent with the target scene identifier, and further determine a target preset control device (i.e., the smart home device 50 added to the target preset scene) related to the target preset scene identifier and a target preset control command for controlling the target preset control device (i.e., a control command for controlling the smart home device 50 in the target preset scene).
And step S304, the Internet of things server sends the second corresponding relation to the voice server.
Step S305, the voice server stores the second corresponding relationship.
In this embodiment, the voice server may determine the target scene identifier matching the name of the scene to be controlled according to the stored second corresponding relationship.
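Steps S301 to S305 follow the same pattern for scene configuration and form the second correspondence. The sketch below also includes the check, noted in step S301, that the devices being added are already bound by the configuring user; all names and the scene-identifier format are assumptions:

```python
import itertools
from dataclasses import dataclass, field

_scene_counter = itertools.count(1)


@dataclass
class SceneConfiguration:
    """Second correspondence: one record per configured scene."""
    scene_name: str
    user_identifier: str                                  # user configuring the scene
    device_commands: dict = field(default_factory=dict)   # device identifier -> control instruction
    scene_identifier: str = ""


def configure_scene(config, bindings, iot_store, voice_store):
    """bindings: device identifier -> binding record (the first correspondence)."""
    # S301: the smart home devices to be added must already be bound by the same user.
    for device_id in config.device_commands:
        binding = bindings.get(device_id)
        if binding is None or binding.user_identifier != config.user_identifier:
            raise ValueError(f"{device_id} is not bound by user {config.user_identifier}")
    config.scene_identifier = f"S{next(_scene_counter)}"   # S302: generate a unique scene identifier
    iot_store[config.scene_identifier] = config            # S303: store on the internet of things server
    voice_store[config.scene_identifier] = config          # S304/S305: forward to and store on the voice server
    return config.scene_identifier
```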
According to the smart home scene control method provided by the embodiment of the invention, both the internet of things server and the voice server store the second correspondence, so the voice server only needs to send the target scene identifier to the internet of things server, which reduces the amount of data transmitted between the voice server and the internet of things server.
In this embodiment, a server provided by a third party is generally used to parse or recognize voice. In order to reduce, as much as possible, the interaction between the distributed voice devices 10 and the server provided by the third party, and to reduce the influence on that server when the distributed voice devices 10 change, on the basis of fig. 2 an embodiment of the present invention further provides another smart home scene control method. Please refer to fig. 7, which is a flowchart of another smart home scene control method provided by the present invention; step S102 includes the following sub-steps:
in sub-step S1021, the first voice server sends the scene control audio and the identifier of the distributed voice device to the second voice server.
In this embodiment, the voice server 20 includes a first voice server 201 and a second voice server 202, and the first voice server 201 is responsible for providing an interface for interaction between the distributed voice device 10 and the second voice server 202, for example, forwarding the scene control audio received by the distributed voice device 10 to the second voice server for voice recognition. The second voice server 202 is a server provided by a third party that can parse or recognize the voice.
In the substep S1022, the second voice server performs voice recognition on the scene control audio to obtain a name of the scene to be controlled corresponding to the scene control audio.
According to the method for controlling the smart home scene provided by the embodiment of the invention, the first voice server 201 provides an interactive interface between the distributed voice device 10 and the second voice server 202, so that the interaction between the distributed voice device 10 and a server provided by a third party is reduced, and the influence on the server provided by the third party when the distributed voice device 10 changes is reduced.
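The role of the first voice server 201 as a thin access layer in front of the third-party second voice server 202 can be sketched as follows; the recognize interface of the second server is an assumption, since the patent does not specify the third-party API:

```python
class FirstVoiceServer:
    """Single access point between the distributed voice devices and the third-party
    speech-recognition service (sub-step S1021)."""

    def __init__(self, second_voice_server):
        # second_voice_server is assumed to expose recognize(audio, device_identifier).
        self.second_voice_server = second_voice_server

    def handle_audio(self, scene_control_audio, device_identifier):
        # Forward the audio and the device identifier; adding or removing distributed
        # voice devices only touches this layer, not the third-party server.
        return self.second_voice_server.recognize(scene_control_audio, device_identifier)


class FakeSecondVoiceServer:
    """Stand-in for the third-party server of sub-step S1022."""
    def recognize(self, audio, device_identifier):
        return "start the lunch break scene"


gateway = FirstVoiceServer(FakeSecondVoiceServer())
print(gateway.handle_audio(b"<audio>", "voice-device-001"))  # start the lunch break scene
```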
It should be noted that substeps S1021 to S1022 in fig. 7 may be used in combination with fig. 3 to 6, that is, in place of step S102 in fig. 3 to 6.
Based on the above-described intelligent home scene control method, an embodiment of the present invention further provides an intelligent home scene control system, where the intelligent home scene control system includes a distributed voice device 10, a voice server 20, an internet of things server 40, and an intelligent home device 50, where:
and the distributed voice equipment 10 is configured to receive the scene control instruction, and send the scene control audio in the scene control instruction and the identifier of the distributed voice equipment to the voice server.
And the voice server 20 is configured to perform voice recognition on the scene control audio to obtain a name of the scene to be controlled corresponding to the scene control audio.
And the voice server 20 is further configured to determine, according to the identifier of the distributed voice device, target scene information matched with the name of the scene to be controlled from the stored preset scene information, and return the target scene information to the voice server.
As a specific embodiment, the preset scene information stored by the voice server 20 includes a preset scene identifier of a preset scene, together with a preset scene name and a first user identifier corresponding to the preset scene identifier, where the first user identifier represents the user who set the preset scene. When determining, according to the identifier of the distributed voice device, target scene information matched with the name of the scene to be controlled from the stored preset scene information, the voice server 20 is further configured to: acquire the second user identifier binding the distributed voice device according to the identifier of the distributed voice device; and, when the preset scene identifiers contain a target scene identifier whose first user identifier is the same as the second user identifier and whose preset scene name is the same as the name of the scene to be controlled, determine that target scene identifier as the target scene information matched with the name of the scene to be controlled.
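As a hedged illustration, the Python sketch below implements the exact-match rule just described: it resolves the second user identifier bound to the speaking device and then searches for a preset scene whose first user identifier and preset scene name both match. The table layouts (PRESET_SCENES, DEVICE_BINDINGS) and the sample identifiers are assumptions used only for this example.

from typing import Optional

# preset scene identifier -> (preset scene name, first user identifier)
PRESET_SCENES = {
    "scene-001": ("movie night", "user-A"),
    "scene-002": ("good morning", "user-B"),
}

# distributed voice device identifier -> second user identifier (binding user)
DEVICE_BINDINGS = {"speaker-01": "user-A"}


def match_scene(device_id: str, scene_name: str) -> Optional[str]:
    """Return the target scene identifier, or None when no exact match exists."""
    second_user = DEVICE_BINDINGS.get(device_id)
    for scene_id, (preset_name, first_user) in PRESET_SCENES.items():
        if first_user == second_user and preset_name == scene_name:
            return scene_id
    return None


print(match_scene("speaker-01", "movie night"))  # -> scene-001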
As a specific implementation, there are multiple smart home devices, and each smart home device corresponds to an identifier, a third user identifier binding the smart home device, and a binding position. When determining, according to the identifier of the distributed voice device, target scene information matched with the name of the scene to be controlled from the stored preset scene information, the voice server 20 is further configured to: when the preset scene identifiers contain no target scene identifier whose first user identifier is the same as the second user identifier and whose preset scene name is the same as the name of the scene to be controlled, acquire the binding position of the distributed voice device according to the identifier of the distributed voice device; search the smart home devices for primarily selected smart home devices whose binding position is the same as that of the distributed voice device and whose third user identifier is the same as the second user identifier; and determine, according to a preset rule, target scene information matched with the name of the scene to be controlled from the primarily selected smart home devices and the preset scene information, where the target scene information includes the target smart home devices selected from the primarily selected smart home devices and the target control commands for controlling the target smart home devices.
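The fallback shortlisting step can be sketched in Python as follows; the SmartHomeDevice fields and the sample records are assumptions, and an actual voice server would query its stored bindings rather than an in-memory list.

from dataclasses import dataclass
from typing import List


@dataclass
class SmartHomeDevice:
    device_id: str
    third_user_id: str   # user who bound the device
    position: str        # binding position, e.g. "living room"
    device_type: str


DEVICES = [
    SmartHomeDevice("light-01", "user-A", "living room", "light"),
    SmartHomeDevice("ac-01", "user-A", "bedroom", "air_conditioner"),
    SmartHomeDevice("tv-01", "user-B", "living room", "tv"),
]


def primarily_selected(devices: List[SmartHomeDevice],
                       voice_position: str,
                       second_user_id: str) -> List[SmartHomeDevice]:
    """Devices in the same position as the voice device and bound by the same user."""
    return [d for d in devices
            if d.position == voice_position and d.third_user_id == second_user_id]


print([d.device_id for d in primarily_selected(DEVICES, "living room", "user-A")])
# -> ['light-01']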
As a specific implementation, each smart home device corresponds to a device type, and the voice server 20 stores in advance preset scene identifiers, the preset control devices related to each preset scene identifier, and the preset control commands for controlling those preset control devices. When determining, according to the preset rule, target scene information matched with the name of the scene to be controlled from the primarily selected smart home devices and the preset scene information, the voice server 20 is further configured to: acquire the device types of the primarily selected smart home devices; calculate the similarity between the device types of the primarily selected smart home devices and the device types of the preset control devices related to each preset scene identifier; determine the preset scene identifier with the highest similarity as the target preset scene identifier, and determine the preset control devices related to the target preset scene identifier as the target preset control devices; determine first target devices from the primarily selected smart home devices and second target devices from the target preset control devices, where the first target devices and the second target devices have the same device types; and take the first target devices as the target smart home devices, and take the preset control commands for controlling the second target devices as the target control commands for the target smart home devices.
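The disclosure does not fix a particular similarity measure; the Python sketch below uses Jaccard similarity over device-type sets as one plausible choice for the preset rule, then keeps only the device types that appear both among the primarily selected devices and in the best-matching preset scene. All table contents are illustrative assumptions.

# preset scene identifier -> {device type of each preset control device: command}
PRESET_SCENE_COMMANDS = {
    "scene-001": {"light": "dim to 20%", "tv": "power on"},
    "scene-002": {"curtain": "open", "light": "full brightness"},
}


def jaccard(a: set, b: set) -> float:
    """Similarity of two device-type sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b) if a | b else 0.0


def match_by_similarity(primary_types: set) -> dict:
    """Return {device type: target control command} for the most similar preset scene."""
    best_id = max(PRESET_SCENE_COMMANDS,
                  key=lambda sid: jaccard(primary_types,
                                          set(PRESET_SCENE_COMMANDS[sid])))
    commands = PRESET_SCENE_COMMANDS[best_id]
    # first and second target devices share a device type, so intersect the type sets
    return {t: commands[t] for t in primary_types & set(commands)}


print(match_by_similarity({"light", "tv", "speaker"}))
# -> commands for 'light' and 'tv' taken from scene-001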
As a specific embodiment, the voice server 20 is further configured to: the first correspondence is stored.
As a specific embodiment, the voice server 20 is further configured to: the second correspondence is stored.
The voice server 20 is configured to send the target scene information to the Internet of Things server.
As a specific embodiment, the voice server 20 is further configured to receive the control result, returned by the Internet of Things server, of controlling the smart home devices, and to broadcast the control result in audio form.
As a specific embodiment, the voice server 20 includes a first voice server 201 and a second voice server 202; the first voice server 201 is configured to send the scene control audio and the identifier of the distributed voice device to the second voice server 202, and the second voice server 202 is configured to perform voice recognition on the scene control audio to obtain the name of the scene to be controlled corresponding to the scene control audio.
The Internet of Things server 40 is configured to control the smart home devices according to the target scene information.
As a specific implementation, the Internet of Things server 40 stores in advance preset scene identifiers, the preset control devices related to each preset scene identifier, and the preset control commands for controlling those preset control devices, where the preset control devices are preselected from the smart home devices. When controlling the smart home devices according to the target scene information, the Internet of Things server 40 is further configured to: determine, from the preset scene identifiers, the target preset scene identifier consistent with the target scene identifier; acquire the target preset control devices related to the target preset scene identifier and the target preset control commands for controlling the target preset control devices; and send the target preset control commands to the corresponding target preset control devices so as to control them.
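A minimal Python sketch of this lookup, assuming a simple in-memory table on the Internet of Things server keyed by preset scene identifier (the layout and the sample entries are assumptions):

# target preset scene identifier -> [(preset control device id, preset command), ...]
IOT_SCENE_TABLE = {
    "scene-001": [("light-01", "dim to 20%"), ("tv-01", "power on")],
}


def resolve_scene(target_scene_id: str):
    """Return the (device, command) pairs stored for the matching preset scene."""
    return IOT_SCENE_TABLE.get(target_scene_id, [])


for device_id, command in resolve_scene("scene-001"):
    print(f"send '{command}' to {device_id}")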
As a specific implementation, when sending the target preset control command to the corresponding target preset control device so as to control it, the Internet of Things server 40 is further configured to: judge whether the target smart home device in the target scene information is online; and, when the target smart home device is online, send the target control command in the target scene information to the corresponding target smart home device so as to control it.
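The online check can be sketched as follows; the presence set and the skip-when-offline policy are assumptions made for illustration, since the disclosure only requires that a command be sent when the target device is online.

ONLINE_DEVICES = {"light-01"}  # assumed presence table kept by the IoT server


def dispatch(device_commands):
    """Send each command only to devices that are currently online."""
    results = {}
    for device_id, command in device_commands:
        if device_id in ONLINE_DEVICES:
            # a real server would push the command over its device channel here
            results[device_id] = f"executed: {command}"
        else:
            results[device_id] = "skipped: device offline"
    return results


print(dispatch([("light-01", "dim to 20%"), ("tv-01", "power on")]))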
As a specific embodiment, the Internet of Things server 40 is further configured to: receive a device binding command sent by the mobile terminal, where the device binding command includes the device identifier and device name of the device to be bound, the position to be bound, and the user identifier binding the device to be bound; establish a first corresponding relation between the device identifier of the device to be bound and the device name, the position to be bound, and the user identifier binding the device to be bound, and store the first corresponding relation; and send the first corresponding relation to the voice server.
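A hedged Python sketch of the first corresponding relation as a per-device record that the Internet of Things server stores and then forwards to the voice server; the field names are assumptions.

from dataclasses import dataclass, asdict


@dataclass
class DeviceBinding:
    device_id: str     # device identifier of the device to be bound
    device_name: str
    position: str      # position to be bound, e.g. "living room"
    user_id: str       # user identifier binding the device


FIRST_CORRESPONDENCE = {}


def bind_device(cmd: DeviceBinding) -> dict:
    FIRST_CORRESPONDENCE[cmd.device_id] = cmd  # stored on the IoT server
    return asdict(cmd)                         # payload forwarded to the voice server


print(bind_device(DeviceBinding("light-01", "floor lamp", "living room", "user-A")))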
As a specific embodiment, the Internet of Things server 40 is further configured to: receive scene configuration information sent by the mobile terminal, where the scene configuration information includes the scene name of the scene to be configured, the smart home devices to be added to the scene to be configured, the control commands for controlling the smart home devices to be added, and the user identifier configuring the scene to be configured, and the smart home devices to be added have the first corresponding relation with the user represented by that user identifier; generate a corresponding scene identifier for the scene to be configured; establish a second corresponding relation between the scene identifier of the scene to be configured and the smart home devices to be added, the control commands for controlling them, and the user identifier configuring the scene to be configured, and store the second corresponding relation; and send the second corresponding relation to the voice server.
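Similarly, the second corresponding relation can be sketched as a scene record keyed by a generated scene identifier; the uuid-based identifier format and the record fields are assumptions made for illustration.

import uuid

SECOND_CORRESPONDENCE = {}


def configure_scene(scene_name, device_commands, user_id):
    """device_commands: list of (device identifier, control command) pairs."""
    scene_id = f"scene-{uuid.uuid4().hex[:8]}"   # generated scene identifier
    record = {
        "scene_name": scene_name,
        "device_commands": device_commands,
        "user_id": user_id,
    }
    SECOND_CORRESPONDENCE[scene_id] = record     # stored on the IoT server
    return scene_id, record                      # also forwarded to the voice server


scene_id, record = configure_scene("movie night", [("light-01", "dim to 20%")], "user-A")
print(scene_id, record)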
In summary, the present invention provides an intelligent home scene control method and system, which are applied to an intelligent home network including distributed voice devices, a voice server, an Internet of Things server, and smart home devices. The method includes: the distributed voice device receives a scene control instruction and sends the scene control audio in the scene control instruction and the identifier of the distributed voice device to the voice server; the voice server performs voice recognition on the scene control audio to obtain the name of the scene to be controlled corresponding to the scene control audio; the voice server determines, according to the identifier of the distributed voice device, target scene information matched with the name of the scene to be controlled from stored preset scene information, and returns the target scene information to the Internet of Things server; and the Internet of Things server controls the smart home devices according to the target scene information. Compared with the prior art, scene control of the smart home devices in the intelligent home network is performed on the basis of the scene control audio received by the distributed voice devices, so that linked control of multiple devices in the network is achieved, the available control modes for the smart home devices are enriched, and scene requirements that involve controlling multiple smart home devices are met.
Although the present invention is disclosed above, it is not limited thereto. Various changes and modifications may be made by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (11)

1. An intelligent home scene control method, applied to an intelligent home network, wherein the intelligent home network comprises distributed voice equipment (10), a voice server (20), an Internet of things server (40) and intelligent home equipment (50), and the method comprises the following steps:
the distributed voice equipment (10) receives a scene control instruction, and sends a scene control audio frequency in the scene control instruction and the identification of the distributed voice equipment to the voice server (20);
the voice server (20) performs voice recognition on the scene control audio to obtain a name of a scene to be controlled corresponding to the scene control audio;
the voice server (20) determines target scene information matched with the name of the scene to be controlled from stored preset scene information according to the identification of the distributed voice equipment, and returns the target scene information to the Internet of things server (40);
the internet of things server (40) controls the intelligent household equipment (50) according to the target scene information.
2. The smart home scene control method according to claim 1, wherein the preset scene information stored by the voice server (20) includes a preset scene identifier of a preset scene, and a preset scene name and a first user identifier corresponding to the preset scene identifier, the first user identifier is used for representing a user who sets the preset scene, and the step of determining, by the voice server (20), target scene information matched with the scene name to be controlled from the stored preset scene information according to the identifier of the distributed voice device includes:
the voice server (20) acquires a second user identifier binding the distributed voice equipment (10) according to the identifier of the distributed voice equipment (10);
when a target scene identifier with the same first user identifier and second user identifier and the same preset scene name as the to-be-controlled scene name exists in the preset scene identifiers, the voice server (20) determines the target scene identifier as target scene information matched with the to-be-controlled scene name.
3. The smart home scene control method according to claim 2, wherein the internet of things server (40) stores a preset scene identifier, a preset control device related to the preset scene identifier, and a preset control command for controlling the preset control device in advance, the preset control device is pre-selected from the smart home devices (50), and the step of controlling the smart home devices (50) by the internet of things server (40) according to the target scene information includes:
the Internet of things server (40) determines a target preset scene identification consistent with the target scene identification from the preset scene identifications;
the internet of things server (40) acquires target preset control equipment related to the target preset scene identification and a target preset control command for controlling the target preset control equipment;
and the Internet of things server (40) sends the target preset control command to corresponding target preset control equipment so as to control the target preset control equipment.
4. The intelligent home scene control method according to claim 2, wherein the number of the intelligent home devices (50) is multiple, each intelligent home device (50) corresponds to an identifier, a third user identifier for binding the intelligent home device (50), and a binding position, and the step of the voice server (20) determining the target scene information matched with the name of the scene to be controlled from the stored preset scene information according to the identifier of the distributed voice device (10) includes:
when the target scene identification which is the same as the first user identification and the second user identification and is the same as the preset scene name and the scene name to be controlled does not exist in the preset scene identification, the voice server (20) acquires the binding position of the distributed voice equipment (10) according to the identification of the distributed voice equipment (10);
the voice server (20) searches for primarily selected intelligent home equipment of which the binding position is the same as that of the distributed voice equipment (10) and the third user identification is the same as that of the second user identification from the intelligent home equipment (50);
and the voice server (20) determines target scene information matched with the name of the scene to be controlled according to a preset rule and the primarily selected intelligent household equipment and the preset scene information, wherein the target scene information comprises target intelligent household equipment selected from the primarily selected intelligent household equipment and a target control command for controlling the target intelligent household equipment.
5. The intelligent home scene control method according to claim 4, wherein each of the intelligent home devices (50) corresponds to a device type, the voice server (20) stores a preset scene identifier, a preset control device related to the preset scene identifier, and a preset control command for controlling the preset control device in advance, and the step of determining, by the voice server (20), target scene information matched with the name of the scene to be controlled according to the initially selected intelligent home device and the preset scene information according to a preset rule includes:
the voice server (20) acquires the equipment type of the primarily selected intelligent household equipment;
the voice server (20) calculates the similarity between the equipment type of the primarily selected intelligent household equipment and the equipment type of the preset control equipment related to the preset scene identification;
the voice server (20) determines the preset scene identification with the highest similarity as a target preset scene identification, and determines preset control equipment related to the target preset scene identification as target preset control equipment;
the voice server (20) determines a first target device from the primarily selected smart home devices and determines a second target device from the target preset control devices, wherein the first target device and the second target device are the same in device type;
and the voice server (20) takes the first target equipment as target intelligent household equipment, and takes a preset control command for controlling the second target equipment as a target control command for the target intelligent household equipment.
6. The intelligent home scene control method according to claim 3, wherein the step of sending the target preset control command to the corresponding target preset control device by the Internet of things server (40) to control the target preset control device comprises:
the Internet of things server (40) judges whether the target preset control equipment is on line or not;
when the target preset control equipment is on line, the Internet of things server (40) sends the target preset control command to the corresponding target preset control equipment so as to control the target preset control equipment.
7. The smart home scene control method according to claim 1, further comprising:
the voice server (20) receives a control result which is returned by the Internet of things server (40) and used for controlling the intelligent household equipment (50), and broadcasts the control result in an audio mode.
8. The smart home scene control method according to claim 1, further comprising:
the method comprises the steps that an Internet of things server (40) receives an equipment binding command sent by a mobile terminal (60), wherein the equipment binding command comprises an equipment identifier, an equipment name, a position to be bound and a user identifier for binding the equipment to be bound;
the Internet of things server (40) establishes a first corresponding relation between the equipment identifier of the equipment to be bound and the equipment name, the position to be bound and the user identifier binding the equipment to be bound, and stores the first corresponding relation;
the internet of things server (40) sends the first corresponding relation to the voice server (20);
the voice server (20) stores the first correspondence.
9. The smart home scene control method according to claim 8, further comprising:
the internet of things server (40) receives scene configuration information sent by the mobile terminal (60), wherein the scene configuration information comprises a scene name of a scene to be configured, intelligent household equipment to be added to the scene to be configured, a control instruction for controlling the intelligent household equipment to be added and a user identifier for configuring the scene to be configured, and the intelligent household equipment to be added and a user represented by the user identifier for configuring the scene to be configured have a first corresponding relationship;
the Internet of things server (40) generates a corresponding scene identifier for the scene to be configured;
the Internet of things server (40) establishes a second corresponding relation between the scene identifier of the scene to be configured and the intelligent household equipment to be added, the control instruction for controlling the intelligent household equipment to be added and the user identifier for configuring the scene to be configured, and stores the second corresponding relation;
the Internet of things server (40) sends the second corresponding relation to the voice server (20);
the voice server (20) stores the second correspondence.
10. The smart home scene control method according to claim 1, wherein the voice server (20) includes a first voice server (201) and a second voice server (202), and the step of performing voice recognition on the scene control audio by the voice server (20) to obtain a scene name to be controlled corresponding to the scene control audio includes:
the first voice server (201) sending the scene control audio and the identification of the distributed voice device to the second voice server (202);
and the second voice server (202) performs voice recognition on the scene control audio to obtain a name of the scene to be controlled corresponding to the scene control audio.
11. An intelligent home scene control system, characterized by comprising distributed voice equipment (10), a voice server (20), an Internet of things server (40) and intelligent home equipment (50), wherein:
the distributed voice equipment (10) is used for receiving a scene control instruction and sending a scene control audio frequency in the scene control instruction and the identification of the distributed voice equipment (10) to the voice server (20);
the voice server (20) is used for performing voice recognition on the scene control audio to obtain a name of a scene to be controlled corresponding to the scene control audio;
the voice server (20) is further configured to determine target scene information matched with the name of the scene to be controlled from stored preset scene information according to the identifier of the distributed voice device (10), and return the target scene information to the internet of things server (40);
the internet of things server (40) is used for controlling the intelligent household equipment (50) according to the target scene information.
CN202010706256.XA 2020-07-21 2020-07-21 Smart home scene control method and system Active CN111665737B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010706256.XA CN111665737B (en) 2020-07-21 2020-07-21 Smart home scene control method and system

Publications (2)

Publication Number Publication Date
CN111665737A true CN111665737A (en) 2020-09-15
CN111665737B CN111665737B (en) 2023-09-15

Family

ID=72393004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010706256.XA Active CN111665737B (en) 2020-07-21 2020-07-21 Smart home scene control method and system

Country Status (1)

Country Link
CN (1) CN111665737B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170019265A1 (en) * 2015-07-13 2017-01-19 Xiaomi Inc. Method, terminal and server for controlling smart device
CN106569467A (en) * 2016-10-29 2017-04-19 深圳智乐信息科技有限公司 Method for selecting scene based on mobile terminal and system
CN108683574A (en) * 2018-04-13 2018-10-19 青岛海信智慧家居系统股份有限公司 A kind of apparatus control method, server and intelligent domestic system
CN111258224A (en) * 2018-11-30 2020-06-09 西安欧思奇软件有限公司 Intelligent household control method and device, computer equipment and storage medium
CN110070864A (en) * 2019-03-13 2019-07-30 佛山市云米电器科技有限公司 A kind of control system and its method based on voice setting household scene
CN110045621A (en) * 2019-04-12 2019-07-23 深圳康佳电子科技有限公司 Intelligent scene processing method, system, smart home device and storage medium
CN110958142A (en) * 2019-11-26 2020-04-03 华为技术有限公司 Device maintenance method, maintenance device, storage medium, and computer program product

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113093561A (en) * 2021-03-24 2021-07-09 青岛海尔科技有限公司 Door equipment control method and device, storage medium and electronic device
CN113093561B (en) * 2021-03-24 2023-09-19 青岛海尔科技有限公司 Door equipment control method and device, storage medium and electronic device
CN113268020A (en) * 2021-04-15 2021-08-17 珠海荣邦智能科技有限公司 Method for controlling electronic equipment by intelligent gateway, intelligent gateway and control system
CN113472857A (en) * 2021-06-09 2021-10-01 吴伟彤 Control method of Internet of things equipment and Internet of things equipment
CN113593545A (en) * 2021-06-24 2021-11-02 青岛海尔科技有限公司 Linkage scene execution method and device, storage medium and electronic equipment
CN114124597A (en) * 2021-10-28 2022-03-01 青岛海尔科技有限公司 Control method, equipment and system of Internet of things equipment
CN114143359A (en) * 2021-10-28 2022-03-04 青岛海尔科技有限公司 Control method, equipment and system of Internet of things equipment
CN114143359B (en) * 2021-10-28 2023-12-19 青岛海尔科技有限公司 Control method, equipment and system of Internet of things equipment
CN114019815A (en) * 2021-11-10 2022-02-08 宁波迪惟科技有限公司 Intelligent household equipment configuration system and method
CN114019815B (en) * 2021-11-10 2024-03-29 宁波迪惟科技有限公司 Intelligent household equipment configuration system and method
CN114584416A (en) * 2022-02-11 2022-06-03 青岛海尔科技有限公司 Electrical equipment control method, system and storage medium
CN114584416B (en) * 2022-02-11 2023-12-19 青岛海尔科技有限公司 Electrical equipment control method, system and storage medium

Also Published As

Publication number Publication date
CN111665737B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN111665737B (en) Smart home scene control method and system
WO2018039814A1 (en) Smart household control method, apparatus and system
EP3640936B1 (en) Apparatus control device, method, and apparatus having same
CN105471705B (en) Intelligent control method, equipment and system based on instant messaging
CN108683574A (en) A kind of apparatus control method, server and intelligent domestic system
US10405051B2 (en) First-screen navigation with channel surfing, backdrop reviewing and content peeking
CN111447123A (en) Smart home configuration method and device, electronic equipment and medium
CN107204903A (en) Intelligent domestic system and its control method
CN104898629A (en) Intelligent household control end and control method
CN111970180B (en) Networking configuration method and device for intelligent household equipment, electronic equipment and storage medium
CN111487884A (en) Storage medium, and intelligent household scene generation device and method
CN111817936A (en) Control method and device of intelligent household equipment, electronic equipment and storage medium
CN110456755A (en) A kind of smart home long-range control method based on cloud platform
CN113450792A (en) Voice control method of terminal equipment, terminal equipment and server
CN112034725A (en) Remote home control method based on Internet of things
WO2022268136A1 (en) Terminal device and server for voice control
CN112180753A (en) Intelligent home control method, system and server
WO2024016539A1 (en) Device control method and apparatus, and storage medium and electronic apparatus
CN110426965A (en) A kind of smart home long-range control method based on cloud platform
CN113825004A (en) Multi-screen sharing method and device for display content, storage medium and electronic device
CN113300920A (en) Intelligent household appliance control method and control equipment based on household appliance control group
CN113093561A (en) Door equipment control method and device, storage medium and electronic device
CN111818172A (en) Method and device for controlling intelligent equipment by management server of Internet of things
CN113296415A (en) Intelligent household electrical appliance control method, intelligent household electrical appliance control device and system
CN107490980A (en) Infrared forwarding home control method based on speech recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant