CN108683574B - Equipment control method, server and intelligent home system - Google Patents
- Publication number
- CN108683574B (Application No. CN201810332631.1A)
- Authority
- CN
- China
- Prior art keywords
- server
- scene
- list
- control instruction
- voice data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2816—Controlling appliance services of a home automation network by calling their functionalities
- H04L12/282—Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/26—Speech to text systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/02—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
- H04L67/025—Protocols based on web technology, e.g. hypertext transfer protocol [HTTP] for remote control or remote monitoring of applications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/12—Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/14—Session management
- H04L67/146—Markers for unambiguous identification of a particular session, e.g. session cookie or URL-encoding
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Automation & Control Theory (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Telephonic Communication Services (AREA)
- Selective Calling Equipment (AREA)
Abstract
Embodiments of the invention disclose a device control method, a server, and a smart home system. A first server receives and parses voice data sent by a voice input device, generates a control instruction according to a stored device list and the device name contained in the voice data, and sends the control instruction to a second server; the second server then controls the controlled device according to the control instruction. Because the second server controls the controlled device directly, the device list in the second server can include every device name of any controlled device. The first server obtains the device list directly from the second server, so the first server holds the same device list as the second server. When a user speaks the device name that he or she has given a controlled device, the first server can determine the device identifier corresponding to that name from the device list and successfully control the controlled device, which effectively improves the user experience.
Description
Technical Field
The invention relates to the technical field of smart home, and in particular to a device control method, a server, and a smart home system.
Background
With the development of technologies such as smart home, communications, and intelligent voice, voice-controlled smart home devices are gradually entering users' homes. A smart speaker can control devices such as televisions, air conditioners, lights, and curtains, so a user can control household appliances and other smart home devices by voice without looking for a remote controller or a mobile phone, or even standing up, which is convenient and fast.
In an existing smart home system, a smart speaker generally accesses the system as a device terminal. Because the smart home system divides device binding relationships by account and family, a device must be bound under a particular family of a particular account; that is, the account, the family, and the device have a one-to-one correspondence. In a typical application scenario, one family includes several family members, and each family member corresponds to an account. Considering that each family member may have different naming habits for a device, the smart home system allows each family member to give the device his or her own alias so that the user can conveniently operate the device in the APP.
For example, because a smart speaker is bound to the account of one family member (e.g., account A), the smart speaker can only synchronize the device list corresponding to account A; that is, the device list contains only the device aliases given by account A. Therefore, when other family members use the smart speaker to control a smart home device, they must use the alias that account A gave the device in order to control it successfully; if they use the alias that they themselves gave the device, the device cannot be controlled. This is inconsistent with the usage habits of the other family members and significantly reduces the user experience.
In summary, a device control method is needed to solve the technical problem in the prior art that, although the smart home system allows each family member to name a smart home device separately, the device cannot be voice-controlled using the alias named by each family member, which results in a poor user experience.
Disclosure of Invention
The invention provides a device control method, a server, and a smart home system, which are used to solve the technical problem in the prior art that the smart home system allows each family member to name a smart home device separately, but the device cannot be voice-controlled using those named aliases, resulting in a poor user experience.
The equipment control method provided by the embodiment of the invention comprises the following steps:
the method comprises the steps that a first server receives voice data sent by voice input equipment and analyzes the voice data;
the first server generates a control instruction corresponding to the voice data according to the stored device list and the device name of the controlled device contained in the voice data, wherein the control instruction comprises a device identifier of the controlled device; the device list is obtained by the first server from a second server, and the device list comprises a device identifier and at least one device name of a device identified by the device identifier;
and the first server sends the control instruction to the second server, and the second server is used for controlling the controlled equipment according to the control instruction.
Optionally, after the first server parses the voice data, the method further includes:
the first server determines a scene mode triggered by the voice data according to a stored scene list and scene keywords contained in the voice data, and sends a control instruction corresponding to the scene mode to the second server; the control instruction comprises a device identification of the controlled device in the scene mode; the scene list is obtained by the first server from the second server, and the scene list comprises a scene mode and at least one scene keyword corresponding to the scene mode.
Optionally, the control instruction further includes a control operation for the controlled device, where the control operation is used to enable the second server to control the controlled device to execute the control operation.
Optionally, the method further comprises:
the first server periodically updates the device list and/or the scene list.
Based on the same inventive concept, the invention also provides another equipment control method, which comprises the following steps:
a second server receives a control instruction sent by a first server, wherein the control instruction comprises a device identifier of a controlled device, and the control instruction is obtained by the first server after analyzing received voice data according to a device list and/or a scene list sent by the second server;
and the second server controls the controlled equipment according to the control instruction.
Optionally, the control instruction further includes a control operation for the controlled device, where the control operation is used to enable the second server to control the controlled device to execute the control operation.
Optionally, the device list includes a device identifier and at least one device name of a device identified by the device identifier;
the scene list comprises a scene mode and at least one scene keyword corresponding to the scene mode.
Based on the same inventive concept, the present invention also provides a server, comprising:
the receiving and sending module is used for receiving voice data sent by the voice input equipment;
the analysis module is used for analyzing the voice data;
the control module is used for generating a control instruction corresponding to the voice data according to the stored device list and the device name of the controlled device contained in the voice data, wherein the control instruction comprises a device identifier of the controlled device; the device list is obtained by the first server from a second server, and the device list comprises a device identifier and at least one device name of a device identified by the device identifier;
the transceiver module is further configured to send the control instruction to the second server, and the second server is configured to control the controlled device according to the control instruction.
Optionally, the control module is further configured to determine a scene mode triggered by the voice data according to a stored scene list and a scene keyword included in the voice data, and send a control instruction corresponding to the scene mode to the second server; the control instruction comprises a device identification of the controlled device in the scene mode; the scene list is obtained by the first server from the second server, and the scene list comprises a scene mode and at least one scene keyword corresponding to the scene mode.
Optionally, the control instruction further includes a control operation for the controlled device, where the control operation is used to enable the second server to control the controlled device to execute the control operation.
Optionally, the first server periodically updates the device list and/or the scene list.
Based on the same inventive concept, the invention also provides another server, comprising:
the receiving and sending module is used for receiving a control instruction sent by a first server, wherein the control instruction comprises a device identifier of a controlled device, and the control instruction is obtained by the first server after analyzing received voice data according to a device list and/or a scene list sent by a second server;
and the control module is used for controlling the controlled equipment according to the control instruction.
Optionally, the control instruction further includes a control operation for the controlled device;
the control module is specifically configured to control the controlled device to execute the control operation.
Optionally, the device list includes a device identifier and at least one device name of a device identified by the device identifier;
the scene list comprises a scene mode and at least one scene keyword corresponding to the scene mode.
Based on the same inventive concept, the invention also provides an intelligent home system, which comprises:
the system comprises a first server, and a voice input device, a second server and at least one intelligent household device which are connected with the first server;
the voice input equipment is used for receiving voice data input by a user and sending the voice data to the first server;
the first server is used for receiving the voice data sent by the voice input equipment and analyzing the voice data; receiving an equipment list sent by the second server, and generating a control instruction corresponding to the voice data according to the equipment list and the equipment name of the controlled equipment contained in the voice data; receiving a scene list sent by the second server, and generating a control instruction corresponding to the voice data according to the scene list and scene keywords included by the voice data;
the second server is used for sending the equipment list and/or the scene list to the first server, receiving a control instruction determined by the first server according to the equipment list and/or the scene list, and controlling the intelligent home equipment according to the control instruction.
Another embodiment of the present invention provides a control device, which includes a memory for storing program instructions and a processor for calling the program instructions stored in the memory to execute any one of the above methods according to the obtained program.
Another embodiment of the present invention provides a computer storage medium having stored thereon computer-executable instructions for causing a computer to perform any one of the methods described above.
In the embodiment of the invention, after receiving the voice data sent by the voice input device, the first server parses the voice data, generates a control instruction corresponding to the voice data according to the stored device list and the device name of the controlled device contained in the voice data, and sends the control instruction to the second server, and the second server controls the controlled device according to the control instruction. As can be seen, the second server is the server that directly controls the controlled device, so the device list in the second server may include all device names of any controlled device, and there is no binding relationship between device names and user accounts. Because the first server can obtain the device list directly from the second server rather than through the voice input device, the device list stored in the first server may be the same as the device list in the second server and may include all device names of any controlled device. When a user uses the voice input device to control a controlled device, the user speaks the device name that he or she named the controlled device; the first server can determine the device identifier corresponding to that device name from the stored device list and then generate a control instruction to successfully control the controlled device, which effectively improves the user experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a system architecture diagram of an intelligent home system according to an embodiment of the present invention;
fig. 2 is a schematic flowchart corresponding to an apparatus control method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a server according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of another server according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a control device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiments of the present invention will be described in further detail with reference to the drawings attached hereto.
Fig. 1 is an exemplary system architecture diagram of an intelligent home system in an embodiment of the present invention, and as shown in fig. 1, the intelligent home system includes a first server 101, a second server 102 connected to the first server 101, a voice input device 103, and at least one intelligent home device (e.g., an intelligent white light device 104, an intelligent gateway 105, and the like shown in fig. 1).
In this embodiment of the present invention, the voice input device 103 may be a smart speaker or another smart device with a voice input function, which is not limited by the present invention. The user 107 can speak a command such as "turn on the living room lamp" to the voice input device 103.
The first server 101 may be a voice semantic server, which can parse a piece of voice data and recognize the semantics contained in it. The first server is connected to the voice input device, can obtain the voice data input by the user from the voice input device, and recognizes the meaning the user intends to express.
The second server 102 may be an intelligent home cloud server, and the intelligent home cloud server is in communication connection with each intelligent home device added to the intelligent home system, and may send a control instruction to any intelligent home device to control the intelligent home device to execute the specified control operation. The control instruction may be sent to the second server after the first server generates the control instruction according to the voice data input by the user, or sent by the user through other ways, which is not limited in the present invention.
Optionally, the second server may also have a communication connection with client software installed on the user's intelligent terminal, such as the smart home application 106 shown in fig. 1. The user can directly input a control operation for a certain smart home device in the client software, which generates a control instruction and sends it to the second server; the second server then issues the control instruction to the specific smart home device to make it execute the specified control operation.
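As a rough illustration of this optional app-to-cloud path, the following Python sketch shows client software submitting a control instruction for one smart home device to the second server. The endpoint URL, payload field names, and authentication scheme are assumptions made for illustration; the patent does not specify the protocol.

```python
import requests  # assumed HTTP client; the patent does not specify the transport

# Hypothetical control endpoint of the second (smart home cloud) server.
SECOND_SERVER_URL = "https://smart-home-cloud.example.com/api/v1/control"

def send_control_instruction(device_id: str, operation: str, session_token: str) -> bool:
    """Post one control instruction from the client app to the second server."""
    instruction = {
        "device_id": device_id,   # unique device identifier, e.g. "123a4"
        "operation": operation,   # e.g. "turn_on"
        "source": "client_app",   # distinguishes the app path from the voice path
    }
    resp = requests.post(
        SECOND_SERVER_URL,
        json=instruction,
        headers={"Authorization": f"Bearer {session_token}"},
        timeout=5,
    )
    return resp.status_code == 200
```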
Fig. 2 shows a schematic flow chart corresponding to a device control method provided in an embodiment of the present invention, and as shown in fig. 2, the method includes:
step S201, voice input equipment receives voice data input by a user and sends the voice data to a first server;
step S202, the first server receives voice data sent by the voice input equipment and analyzes the voice data;
step S203, the first server generates a control instruction corresponding to the voice data according to the stored equipment list and the equipment name of the controlled equipment contained in the voice data;
step S204, the first server sends the control instruction to a second server;
step S205: and the second server receives the control instruction sent by the first server and controls the controlled equipment according to the control instruction.
The first server does not need to obtain the device list through the voice input device; instead, it can interact directly with the second server to obtain the device list. The device list stored in the first server can therefore include all device names of each smart home device. When a user controls a smart home device through the voice input device, the user can speak whichever device name he or she has given the controlled device, and the first server can still determine the device identifier corresponding to that name, generate a control instruction, and successfully control the controlled device, which effectively improves the user experience.
It should be noted that before the user uses the voice input device to control a smart home device, the user also needs to perform network access configuration on the voice input device, so that the voice input device establishes a communication connection with the first server. In the embodiment of the present invention, the voice input device may be connected to the first server through a wired network or a wireless network, which is not limited by the present invention.
Taking the case where the smart speaker connects wirelessly to the first server as an example, and considering that the smart speaker may not have an operation interface, the network access configuration can be performed in cooperation with a smart terminal (e.g., a smart phone or tablet computer). For example, the Wi-Fi (Wireless Fidelity) module of the smart speaker is turned on to enter AP (Access Point) mode, and the smart terminal connects to the smart speaker's Wi-Fi. After the connection succeeds, the smart terminal sends the Service Set Identifier (SSID) and password of the home wireless network to the smart speaker in a User Datagram Protocol (UDP) message, and the smart speaker joins the network using the received SSID and password and establishes a communication connection with the first server.
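A minimal sketch of this configuration step is shown below: the smart terminal, already joined to the speaker's temporary AP-mode Wi-Fi, sends the home network's SSID and password as a single UDP datagram. The speaker's AP-mode address, port, and JSON message format are assumptions for illustration only.

```python
import json
import socket

# Hypothetical address of the smart speaker while it is in AP mode.
SPEAKER_AP_ADDR = ("192.168.4.1", 7001)

def push_wifi_credentials(ssid: str, password: str) -> None:
    """Send the home Wi-Fi credentials to the speaker as one UDP datagram."""
    message = json.dumps({"ssid": ssid, "password": password}).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, SPEAKER_AP_ADDR)  # UDP: single datagram, no delivery guarantee

# Example: push_wifi_credentials("HomeWiFi", "secret-password")
```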
Specifically, in the implementation of step S201, the voice input device may receive voice data input by a user and send the voice data to the first server. The voice data may include information of the smart home device that the user wants to control (e.g., a device name named by the user for a certain smart home device), and a control operation for the smart home device. For example, the user may enter a speech such as "tune to Hunan satellite television".
In this step, before the user inputs voice in the voice input device, if the voice input device has entered a sleep or offline state, the user may also use a specific wake-up keyword to wake up the voice input device to enter a working state. Also, the wake-up keyword may be set by the user himself, and the present invention is not particularly limited thereto.
In the implementation of step S202, the first server may receive the voice data sent by the voice input device and parse the voice data.
In this embodiment of the present invention, the first server may parse the voice data by converting it into text data and determining, from the text data, the device name of the controlled device and the control operation input by the user. Alternatively, the first server may analyze the voice data through a natural language processing algorithm to determine the device name and the control operation input by the user. It should be noted that the first server may use any natural language processing algorithm for speech and semantic recognition, and the present invention does not specifically limit this.
For example, if the user says "Hunan Satellite TV", the first server may recognize that the semantics the user wants to express are "switch the channel of the television to Hunan Satellite TV", where the controlled device is the television and the control operation is "switch the channel to Hunan Satellite TV".
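The following sketch illustrates this parsing step in the simplest possible way: the recognized text is matched against known device names and a small set of operation keywords. A real deployment would use a proper natural language processing model as described above; the word lists and return format below are illustrative assumptions.

```python
# Illustrative vocabularies; in practice these come from the device list and an NLP model.
KNOWN_DEVICE_NAMES = ["television", "living room ceiling lamp", "living room air conditioner"]
OPERATION_KEYWORDS = {"turn on": "turn_on", "turn off": "turn_off", "switch the channel": "set_channel"}

def parse_utterance(text: str) -> dict:
    """Extract a device name and a control operation from recognized text."""
    text = text.lower()
    device_name = next((name for name in KNOWN_DEVICE_NAMES if name in text), None)
    operation = next((op for keyword, op in OPERATION_KEYWORDS.items() if keyword in text), None)
    return {"device_name": device_name, "operation": operation, "raw_text": text}

# parse_utterance("switch the channel of the television to Hunan Satellite TV")
# -> {"device_name": "television", "operation": "set_channel", ...}
```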
In a specific implementation of step S203, the first server may generate a control instruction corresponding to the voice data according to the device list stored in the first server and the device name of the controlled device included in the voice data. The controlled equipment refers to intelligent household equipment which a user wants to control.
In this embodiment of the present invention, the device list stored in the first server is obtained from the second server. The first server may update its stored device list through information interaction with the second server according to a set period; alternatively, the second server may send the updated device list (or just the changed entries) to the first server each time it determines that its stored device list has changed; the two approaches may also be combined, and the present invention does not limit this. The update period may be set by a person skilled in the art according to actual needs, and the present invention does not limit this either.
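The sketch below combines the two update mechanisms just described: the first server polls the second server for the full device list on a fixed period, and it also accepts pushed updates containing only the changed entries. The URL, period, and payload shape are assumptions for illustration.

```python
import threading
import requests  # assumed HTTP client; the sync protocol is not specified in the patent

DEVICE_LIST_URL = "https://smart-home-cloud.example.com/api/v1/device-list"  # hypothetical
SYNC_PERIOD_SECONDS = 300

device_list_cache: dict = {}  # device_id -> list of device names from all accounts

def sync_device_list() -> None:
    """Periodically pull the complete device list from the second server."""
    resp = requests.get(DEVICE_LIST_URL, timeout=5)
    if resp.status_code == 200:
        device_list_cache.clear()
        device_list_cache.update(resp.json())
    # re-arm the timer so the list is refreshed every SYNC_PERIOD_SECONDS
    threading.Timer(SYNC_PERIOD_SECONDS, sync_device_list).start()

def on_push_update(changed_entries: dict) -> None:
    """Apply an update pushed by the second server when its list changes."""
    device_list_cache.update(changed_entries)
```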
In the embodiment of the present invention, since the second server directly controls each smart home device, the device list stored in the second server may include all device names of any smart home device. The first server obtains the device list from the second server, so a complete device list can be stored in the first server; that is, the device list stored in the first server can be the same as the device list stored in the second server and include all device names of any smart home device, rather than only the device names given under one particular account.
Taking a certain family as an example, the device list of the family stored in the first server may include device identifiers of all smart home devices bound to the family, and at least one device name of the smart home device identified by each device identifier.
Specifically, each smart home device that accesses the smart home system may have a device identifier, which may be a string of characters that uniquely identifies the smart home device. Because device identifiers are inconvenient to remember and use, each family member in the family can also give the smart home device a device name, which may also be called a device nickname or device alias.
In the embodiment of the present invention, each family member can name a smart home device according to his or her own naming habits, and a family member may give a smart home device only one device name or give it multiple device names; the present invention does not specifically limit this.
Table 1 below is an example of a device list stored in the first server in the embodiment of the present invention. As shown in Table 1, the family includes 3 family members, whose accounts are account A, account B, and account C respectively. The family has 3 smart home devices, and for each smart home device the device list records the device names given by all family members (i.e., accounts A, B, and C). Taking the smart home device with device identifier 123a4 as an example: the device is a fluorescent lamp; account A names it "parlor ceiling lamp", account B names it "ceiling lamp for living room", and account C names it "living room headlight". Although it is the same fluorescent lamp and the family members' names for it are similar, the device names are still different. In this scenario, whether the family member says "parlor ceiling lamp", "ceiling lamp for living room", or "living room headlight", the first server can determine that the smart home device the user wants to control is the fluorescent lamp identified by the device identifier "123a4".
Device identifier | Account A | Account B | Account C |
---|---|---|---|
123a4 | Parlor ceiling lamp | Ceiling lamp for living room | Living room headlight |
123a5 | Television receiver | Liquid crystal television | TV set for living room |
123a6 | Air conditioner for living room | Living room air conditioner | First air conditioner |
TABLE 1
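The sketch below shows how the first server can resolve a spoken device name against a Table 1-style device list: every alias from every family member's account maps to the same identifier, so any of the three names for the fluorescent lamp resolves to "123a4". The data values follow Table 1; the lookup function itself is an illustrative assumption.

```python
# Device list mirrored from Table 1: identifier -> aliases from accounts A, B, and C.
DEVICE_LIST = {
    "123a4": ["parlor ceiling lamp", "ceiling lamp for living room", "living room headlight"],
    "123a5": ["television receiver", "liquid crystal television", "tv set for living room"],
    "123a6": ["air conditioner for living room", "living room air conditioner", "first air conditioner"],
}

def resolve_device_id(spoken_name: str) -> str | None:
    """Return the device identifier whose aliases include the spoken name."""
    spoken_name = spoken_name.strip().lower()
    for device_id, aliases in DEVICE_LIST.items():
        if spoken_name in (alias.lower() for alias in aliases):
            return device_id
    return None  # name not found under any account's aliases

# resolve_device_id("living room headlight") -> "123a4"
```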
In the prior art, the first server needs to synchronize the device list through the voice input device. If the voice input device accesses the family through account A, the voice input device is bound to account A, and the device list synchronized by the first server contains only the device names that account A gave the smart home devices. Thus, if the user of account B says "turn on the living room ceiling lamp", the first server cannot match "living room ceiling lamp" to a device identifier, and the control operation "turn on the living room ceiling lamp" cannot be carried out.
In the embodiment of the present invention, the voice data input by the user may further include a control operation for the controlled device, so that after the first server parses the voice data, the control operation for the controlled device may be further determined according to semantics included in the voice data. Therefore, the first server determines the device identifier of the controlled device according to the device list, and can generate the control instruction together with the control operation for the controlled device according to the device identifier.
For example, if the user says "Hunan Satellite TV" and the semantics recognized by the first server are "switch the channel of the television to Hunan Satellite TV", the first server determines from the device list that the device identifier corresponding to the device name "television" is "52578920fd4521", and generates the control instruction from the device identifier together with the string "switch the channel of the television to Hunan Satellite TV" that represents the control operation.
In this embodiment of the present invention, the control instruction may further include the device type of the controlled device, so that the second server can convert the string representing the control operation into a specific control command according to the device type. For example, the device type of the television with device identifier "52578920fd4521" is "tv".
In the specific implementation of step S204 and step S205, the first server sends the control instruction to the second server, and the second server receives the control instruction, analyzes the control instruction to obtain the device identifier and the control operation therein, and then issues the control command to the controlled device corresponding to the device identifier, so that the controlled device executes the specified control operation.
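A minimal sketch of this instruction flow (steps S203 to S205) is given below: the first server builds a control instruction carrying the device identifier, device type, and the string describing the control operation, and the second server parses it and issues a concrete command to the controlled device. The field names and the `issue_command` channel are illustrative assumptions.

```python
def build_control_instruction(device_id: str, device_type: str, operation_text: str) -> dict:
    """First server: package the resolved identifier, type, and operation string."""
    return {
        "device_id": device_id,       # e.g. "52578920fd4521"
        "device_type": device_type,   # e.g. "tv"; lets the second server pick the command set
        "operation": operation_text,  # e.g. "switch the channel of the television to Hunan Satellite TV"
    }

def second_server_handle(instruction: dict, issue_command) -> None:
    """Second server: translate the instruction into a device-level command.

    issue_command(device_id, command) stands in for the cloud's device channel (assumed).
    """
    device_id = instruction["device_id"]
    if instruction["device_type"] == "tv" and "switch the channel" in instruction["operation"]:
        channel = instruction["operation"].rsplit(" to ", 1)[-1].strip()
        issue_command(device_id, {"action": "set_channel", "channel": channel})
```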
It should be noted that a user may control only one smart home device in a single utterance or may control several smart home devices in a single utterance. In the latter case, the first server parses the voice input by the user and determines the device identifier of each smart home device and the control operation for each of them, and a control command is then issued to each smart home device accordingly.
In this embodiment of the present invention, the first server may further store a scene list, where the scene list may be obtained by the first server from the second server in the same manner as the device list is obtained, or may also be obtained in another manner, and this is not limited in this invention.
If the first server also updates the scene list synchronously from the second server according to a certain set period, the set period for updating the scene list by the first server may be the same as or different from the set period for updating the device list, and the present invention is not limited to this specifically.
Specifically, taking a certain family as an example, the scene list stored in the first server may include a scene mode and at least one scene keyword corresponding to that scene mode. As shown in Table 3, the scene mode may be a "home-coming mode", and the scene keywords corresponding to it may be "home", "back home", and the like.
TABLE 3
In the embodiment of the present invention, which scene modes a family's scene list includes, which scene keywords correspond to each scene mode, which controlled devices each scene mode includes, and what control operation each controlled device executes in that scene mode can all be set by a person skilled in the art according to actual needs, and the present invention does not limit this. A scene mode may correspond to one or more scene keywords and may likewise include one or more controlled devices; the present invention does not limit this either.
In an implementation of step S202, after the first server parses the voice data, if it is determined that the voice data contains a scene keyword related to a certain scene mode, it may be determined that the user intends to trigger the scene mode.
Furthermore, in the implementation of step S203, the first server may convert the semantics expressed by the user into a control instruction according to the stored scene list and the scene keywords contained in the voice data, and send the control instruction to the second server in step S204. This control instruction corresponds to the triggered scene mode and specifically includes the device identifier of each controlled device in the scene mode, the control operation for each controlled device, and so on.
Then, the second server can issue a control command to each controlled device according to the control instruction, so that the controlled device executes the designated control operation.
For example, if the controlled devices corresponding to the "home-coming mode" include the living room ceiling lamp, the air conditioner, and the water heater, this indicates that when the home-coming mode is triggered, the user wants the living room ceiling lamp turned on, the air conditioner turned on and adjusted to a comfortable temperature, and the water heater turned on so that hot water is ready for a bath. Therefore, when the first server determines that the voice data sent by the user contains keywords such as "home", it can trigger the home-coming mode and send a control instruction to the second server, so that the second server turns on smart home devices such as the living room ceiling lamp, the air conditioner, and the water heater.
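A minimal sketch of this scene-mode path follows: the first server matches scene keywords in the recognized text against the stored scene list and, when a mode is triggered, emits one control instruction covering every controlled device configured for that mode. The scene definition below is an illustrative assumption loosely based on the "home-coming mode" example above, and the third device identifier is hypothetical.

```python
SCENE_LIST = {
    "home_coming": {
        "keywords": ["home", "back home"],
        "devices": [
            {"device_id": "123a4", "operation": "turn_on"},             # living room ceiling lamp
            {"device_id": "123a6", "operation": "set_temperature_26"},  # air conditioner
            {"device_id": "123b1", "operation": "turn_on"},             # water heater (hypothetical id)
        ],
    },
}

def match_scene(text: str) -> dict | None:
    """Return the triggered scene mode and its per-device operations, if any."""
    text = text.lower()
    for mode, scene in SCENE_LIST.items():
        if any(keyword in text for keyword in scene["keywords"]):
            return {"scene_mode": mode, "devices": scene["devices"]}
    return None  # no scene keyword found; fall back to single-device control
```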
It can be seen that, in the embodiment of the present invention, the voice input device does not access the smart home system as a device terminal or as client software, but accesses it as a public device. Therefore, it does not need to establish a binding relationship with any account in the family, nor does it need to synchronize the device list and scene list from the second server, which simplifies the system architecture of the smart home system and saves the processing resources of the voice input device.
Furthermore, after the first server recognizes the semantics of the voice data input by the user, it does not need to return the semantic recognition result to the voice input device, nor does the voice input device need to generate a control instruction from that result and send it to the second server. Because the first server has a communication connection with the second server and can synchronize and update the device list and scene list from it, the first server can directly convert the recognized semantics into the corresponding control instruction according to the device list and scene list it stores and send the instruction to the second server. This simplifies the processing flow of voice-controlled devices, improves the response speed to the user's voice input, and improves the user experience. Since the voice input device serves only as a sound-pickup device in the smart home system, the hardware requirements on the user's voice input device are reduced, and the user no longer needs to purchase a high-performance voice input device to use the voice control function.
By adopting the technical solution provided by the embodiment of the invention, each family member can give his or her own device names to the smart home devices in the family and upload them to the second server. When any family member uses the voice control function, because the first server stores the same device list as the second server and keeps it synchronized and updated in real time, that family member can successfully control a device using the device name he or she gave it.
Based on the same inventive concept, an embodiment of the present invention further provides a server, and fig. 3 is a schematic structural diagram of the server provided in the embodiment of the present invention, as shown in fig. 3, the server 300 includes:
a transceiving module 301, configured to receive voice data sent by a voice input device;
a parsing module 302, configured to parse the voice data;
the control module 303 is configured to generate a control instruction corresponding to the voice data according to the stored device list and the device name of the controlled device included in the voice data, where the control instruction includes a device identifier of the controlled device; the device list is obtained by the first server from a second server, and the device list comprises a device identifier and at least one device name of a device identified by the device identifier;
the transceiver module 301 is further configured to send the control instruction to the second server, where the second server is configured to control the controlled device according to the control instruction.
Optionally, the control module 303 is further configured to determine a scene mode triggered by the voice data according to a stored scene list and a scene keyword included in the voice data, and send a control instruction corresponding to the scene mode to the second server; the control instruction comprises a device identification of the controlled device in the scene mode; the scene list is obtained by the first server from the second server, and the scene list comprises a scene mode and at least one scene keyword corresponding to the scene mode.
Optionally, the control instruction further includes a control operation for the controlled device, where the control operation is used to enable the second server to control the controlled device to execute the control operation.
Optionally, the first server periodically updates the device list and/or the scene list.
Based on the same inventive concept, an embodiment of the present invention further provides another server, fig. 4 is a schematic structural diagram of a server provided in an embodiment of the present invention, and as shown in fig. 4, the server 400 includes:
a transceiver module 401, configured to receive a control instruction sent by a first server, where the control instruction includes a device identifier of a controlled device, and the control instruction is obtained by the first server analyzing received voice data according to a device list and/or a scene list sent by a second server;
and a control module 402, configured to control the controlled device according to the control instruction.
Optionally, the control instruction further includes a control operation for the controlled device;
the control module is specifically configured to control the controlled device to execute the control operation.
Optionally, the device list includes a device identifier and at least one device name of a device identified by the device identifier;
the scene list comprises a scene mode and at least one scene keyword corresponding to the scene mode.
Another embodiment of the present invention provides a control device, which includes a memory for storing program instructions and a processor for calling the program instructions stored in the memory to execute any one of the above methods according to the obtained program.
Another embodiment of the present invention provides a computer storage medium having stored thereon computer-executable instructions for causing a computer to perform any one of the methods described above.
Based on the same inventive concept, the embodiment of the present invention further provides another control device, which may specifically be a desktop computer, a portable computer, a smart phone, a tablet computer, a Personal Digital Assistant (PDA), or the like. As shown in fig. 5, the control device 500 may include a Central Processing Unit (CPU) 501, a memory 502, an input/output device 503, a bus system 504, and the like. The input device may include a keyboard, a mouse, a touch screen, and the like, and the output device may include a display device such as a Liquid Crystal Display (LCD) or a Cathode Ray Tube (CRT).
The memory may include Read Only Memory (ROM) and Random Access Memory (RAM), and provides the processor with program instructions and data stored in the memory. In the embodiment of the present invention, the memory may be used to store a program of the above-described device control method.
The processor is used for executing the device control method according to the obtained program instructions by calling the program instructions stored in the memory.
Based on the same inventive concept, embodiments of the present invention provide a computer storage medium for storing computer program instructions for the above control device, which include a program for executing the above device control method.
The computer storage media may be any available media or data storage device that can be accessed by a computer, including, but not limited to, magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CDs, DVDs, BDs, HVDs, etc.), and semiconductor memory (e.g., ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs)), etc.
From the above, it can be seen that:
in the embodiment of the invention, after receiving the voice data sent by the voice input device, the first server parses the voice data, generates a control instruction corresponding to the voice data according to the stored device list and the device name of the controlled device contained in the voice data, and sends the control instruction to the second server, and the second server controls the controlled device according to the control instruction. As can be seen, the second server is the server that directly controls the controlled device, so the device list in the second server may include all device names of any controlled device, and there is no binding relationship between device names and user accounts. Because the first server can obtain the device list directly from the second server rather than through the voice input device, the device list stored in the first server may be the same as the device list in the second server, i.e., it may include all device names of any controlled device. When a user uses the voice input device to control a controlled device, the user speaks the device name that he or she named the controlled device; the first server can determine the device identifier corresponding to that device name from the stored device list and then generate a control instruction to successfully control the controlled device, which effectively improves the user experience.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart block or blocks and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While alternative embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following appended claims be interpreted as including alternative embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (9)
1. An apparatus control method characterized by comprising:
the method comprises the steps that a first server receives voice data sent by voice input equipment and analyzes the voice data;
the first server generates a control instruction corresponding to the voice data according to the stored device list and the device name of the controlled device contained in the voice data, wherein the control instruction comprises a device identifier of the controlled device; the device list is obtained by the first server from a second server, and the device list comprises a device identifier and at least one device name of a device identified by the device identifier;
the first server sends the control instruction to the second server, and the second server is used for controlling the controlled equipment according to the control instruction;
after the first server parses the voice data, the method further includes:
the first server determines a scene mode triggered by the voice data according to a stored scene list and scene keywords contained in the voice data, and sends a control instruction corresponding to the scene mode to the second server; the control instruction comprises a device identification of the controlled device in the scene mode; the scene list is obtained by the first server from the second server, and the scene list comprises a scene mode and at least one scene keyword corresponding to the scene mode.
2. The method according to claim 1, wherein the control instruction further includes a control operation for the controlled device, and the control operation is used for causing the second server to control the controlled device to execute the control operation.
3. The method of claim 2, further comprising:
the first server periodically updates the device list and/or the scene list.
4. An apparatus control method characterized by comprising:
a second server receives a control instruction sent by a first server, wherein the control instruction comprises a device identifier of a controlled device, and the control instruction is obtained by the first server after analyzing received voice data according to a device list and/or a scene list sent by the second server; the device list comprises a device identifier and at least one device name of a device identified by the device identifier;
the second server controls the controlled equipment according to the control instruction;
before the second server receives the control instruction, the method further includes:
the second server sends the scene list to the first server, so that the first server determines a scene mode triggered by the voice data according to the scene list and scene keywords contained in the voice data, and generates a control instruction corresponding to the scene mode, wherein the control instruction comprises a device identifier of a controlled device in the scene mode, and the scene list comprises the scene mode and at least one scene keyword corresponding to the scene mode.
5. A server, comprising:
the receiving and sending module is used for receiving voice data sent by the voice input equipment;
the analysis module is used for analyzing the voice data;
the control module is used for generating a control instruction corresponding to the voice data according to the stored device list and the device name of the controlled device contained in the voice data, wherein the control instruction comprises a device identifier of the controlled device; the device list is obtained by the server from a second server, and the device list comprises a device identifier and at least one device name of a device identified by the device identifier;
the transceiver module is further configured to send the control instruction to the second server, where the second server is configured to control the controlled device according to the control instruction;
the control module is further configured to determine a scene mode triggered by the voice data according to a stored scene list and a scene keyword included in the voice data, and send a control instruction corresponding to the scene mode to the second server; the control instruction comprises a device identification of the controlled device in the scene mode; the scene list is obtained by the server from a second server, and the scene list comprises a scene mode and at least one scene keyword corresponding to the scene mode.
6. A server, comprising:
the receiving and sending module is used for receiving a control instruction sent by a first server, wherein the control instruction comprises a device identifier of a controlled device, and the control instruction is obtained by the first server after analyzing received voice data according to a device list and/or a scene list sent by the server; the device list comprises a device identifier and at least one device name of a device identified by the device identifier;
the control module is used for controlling the controlled equipment according to the control instruction;
the transceiver module is further configured to send the scene list to the first server, so that the first server determines a scene mode triggered by the voice data according to the scene list and a scene keyword included in the voice data, and generates a control instruction corresponding to the scene mode, where the control instruction includes a device identifier of a controlled device in the scene mode, and the scene list includes the scene mode and at least one scene keyword corresponding to the scene mode.
7. An intelligent home system, characterized by comprising a first server, and a voice input device, a second server and at least one intelligent home device that are connected to the first server;
the voice input device is configured to receive voice data input by a user and send the voice data to the first server;
the first server is configured to receive the voice data sent by the voice input device and analyze the voice data; receive a device list sent by the second server, and generate a control instruction corresponding to the voice data according to the device list and a device name of a controlled device contained in the voice data; receive a scene list sent by the second server, and generate a control instruction corresponding to the voice data according to the scene list and a scene keyword included in the voice data; the device list comprises a device identifier and at least one device name of a device identified by the device identifier, and the scene list comprises a scene mode and at least one scene keyword corresponding to the scene mode;
the second server is configured to send the device list and/or the scene list to the first server, receive a control instruction determined by the first server according to the device list and/or the scene list, and control the intelligent home device according to the control instruction.
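A compact end-to-end sketch of the message order in the system claim, for illustration only: the second server first shares its lists, the voice input device forwards recognized text, the first server turns it into a control instruction, and the second server executes it. All function names are placeholders; real deployments would exchange these messages over network calls rather than in-process functions.

```python
def second_server_share_lists():
    # Step 1: second server pushes its device list and scene list to the first server.
    return (
        [{"device_id": "lamp-02", "names": ["bedside lamp"]}],
        [{"scene": "go_home", "keywords": ["go home mode"], "device_ids": ["lamp-02"]}],
    )

def first_server_parse(text, device_list, scene_list):
    # Step 3: first server matches the recognized text against the lists.
    lowered = text.lower()
    for s in scene_list:
        if any(k.lower() in lowered for k in s["keywords"]):
            return {"type": "scene", "scene": s["scene"], "device_ids": s["device_ids"]}
    for d in device_list:
        if any(n.lower() in lowered for n in d["names"]):
            return {"type": "device", "device_id": d["device_id"], "action": "toggle"}
    return None

def second_server_execute(instruction):
    # Step 4: second server controls the intelligent home device(s).
    print("executing:", instruction)

device_list, scene_list = second_server_share_lists()
voice_text = "go home mode please"                      # Step 2: voice input device -> first server
instruction = first_server_parse(voice_text, device_list, scene_list)
second_server_execute(instruction)
```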
8. A control apparatus, characterized by comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to execute the method of any one of claims 1 to 3 in accordance with the obtained program.
9. A control apparatus, characterized by comprising:
a memory for storing program instructions;
a processor for calling program instructions stored in said memory to execute the method of claim 4 in accordance with the obtained program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810332631.1A CN108683574B (en) | 2018-04-13 | 2018-04-13 | Equipment control method, server and intelligent home system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810332631.1A CN108683574B (en) | 2018-04-13 | 2018-04-13 | Equipment control method, server and intelligent home system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108683574A CN108683574A (en) | 2018-10-19 |
CN108683574B true CN108683574B (en) | 2020-12-08 |
Family
ID=63799563
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810332631.1A Active CN108683574B (en) | 2018-04-13 | 2018-04-13 | Equipment control method, server and intelligent home system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108683574B (en) |
Families Citing this family (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109407956B (en) * | 2018-10-25 | 2021-01-01 | 三星电子(中国)研发中心 | Equipment control method and system based on Internet of things |
CN109495546B (en) * | 2018-10-26 | 2021-11-23 | 北京车和家信息技术有限公司 | Data processing method, system and server |
CN109525675B (en) * | 2018-11-28 | 2021-08-03 | 广东海格怡创科技有限公司 | Northbound server file downloading method and device, computer equipment and storage medium |
CN109584872A (en) * | 2018-12-10 | 2019-04-05 | 深圳创维-Rgb电子有限公司 | A kind of speech control system, control method, equipment and medium |
CN110070864A (en) * | 2019-03-13 | 2019-07-30 | 佛山市云米电器科技有限公司 | A kind of control system and its method based on voice setting household scene |
CN109887512A (en) * | 2019-03-15 | 2019-06-14 | 深圳市奥迪信科技有限公司 | Wisdom hotel guest room control method and system |
CN110136707B (en) * | 2019-04-22 | 2021-03-02 | 云知声智能科技股份有限公司 | Man-machine interaction system for multi-equipment autonomous decision making |
CN110246495A (en) * | 2019-06-28 | 2019-09-17 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN112256947B (en) * | 2019-07-05 | 2024-01-26 | 北京猎户星空科技有限公司 | Recommendation information determining method, device, system, equipment and medium |
CN110262272B (en) * | 2019-07-12 | 2022-06-28 | 四川虹美智能科技有限公司 | Intelligent household equipment control method, device and system |
CN112241130B (en) * | 2019-07-18 | 2023-02-17 | 上汽通用汽车有限公司 | Vehicle-mounted processing equipment and remote equipment control system |
CN110246499B (en) * | 2019-08-06 | 2021-05-25 | 思必驰科技股份有限公司 | Voice control method and device for household equipment |
CN112448869A (en) * | 2019-08-27 | 2021-03-05 | 深圳Tcl数字技术有限公司 | Naming method and system of intelligent household equipment and computer equipment |
CN110768877B (en) * | 2019-09-27 | 2022-05-27 | 百度在线网络技术(北京)有限公司 | Voice control instruction processing method and device, electronic equipment and readable storage medium |
CN110851221B (en) * | 2019-10-30 | 2023-06-30 | 青岛海信智慧生活科技股份有限公司 | Smart home scene configuration method and device |
CN110768878A (en) * | 2019-10-31 | 2020-02-07 | 广州华凌制冷设备有限公司 | Voice function configuration method, configuration device and readable storage medium |
CN112786022B (en) * | 2019-11-11 | 2023-04-07 | 青岛海信移动通信技术股份有限公司 | Terminal, first voice server, second voice server and voice recognition method |
CN111261158A (en) * | 2020-01-15 | 2020-06-09 | 上海思依暄机器人科技股份有限公司 | Function menu customization method, voice shortcut control method and robot |
CN111583921A (en) * | 2020-04-22 | 2020-08-25 | 珠海格力电器股份有限公司 | Voice control method, device, computer equipment and storage medium |
CN111665737B (en) * | 2020-07-21 | 2023-09-15 | 宁波奥克斯电气股份有限公司 | Smart home scene control method and system |
CN114067792B (en) * | 2020-08-07 | 2024-06-14 | 北京猎户星空科技有限公司 | Control method and device of intelligent equipment |
CN111918110A (en) * | 2020-08-31 | 2020-11-10 | 中移(杭州)信息技术有限公司 | Set top box control method, server, system, electronic device and storage medium |
CN112307460B (en) * | 2020-09-21 | 2024-09-20 | 北京汇钧科技有限公司 | Control method and device of intelligent equipment, equipment and storage medium |
WO2022061293A1 (en) | 2020-09-21 | 2022-03-24 | VIDAA USA, Inc. | Display apparatus and signal transmission method for display apparatus |
CN112153440B (en) * | 2020-10-10 | 2023-04-25 | Vidaa美国公司 | Display equipment and display system |
CN112367229B (en) * | 2020-11-11 | 2022-05-03 | 深圳市欧瑞博科技股份有限公司 | Control method and device of intelligent household equipment, electronic equipment and storage medium |
CN112492023B (en) * | 2020-11-25 | 2023-04-07 | 青岛海尔科技有限公司 | Device control method, device, storage medium, and electronic apparatus |
CN113593545A (en) * | 2021-06-24 | 2021-11-02 | 青岛海尔科技有限公司 | Linkage scene execution method and device, storage medium and electronic equipment |
CN114584416B (en) * | 2022-02-11 | 2023-12-19 | 青岛海尔科技有限公司 | Electrical equipment control method, system and storage medium |
CN114610206A (en) * | 2022-03-17 | 2022-06-10 | 深圳创维-Rgb电子有限公司 | Wireless device distinguishing method, device, equipment and readable storage medium |
CN115562054A (en) * | 2022-09-28 | 2023-01-03 | 北京小米移动软件有限公司 | Equipment control method, device, readable storage medium and chip |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103730116B (en) * | 2014-01-07 | 2016-08-17 | 苏州思必驰信息科技有限公司 | Intelligent watch realizes the system and method that intelligent home device controls |
KR102417682B1 (en) * | 2015-09-09 | 2022-07-07 | 삼성전자주식회사 | Method and apparatus for managing nick name using a voice recognition |
CN106886170A (en) * | 2015-12-16 | 2017-06-23 | 美的集团股份有限公司 | The control method of household electrical appliance, system and audio amplifier |
CN106227055B (en) * | 2016-08-31 | 2020-10-09 | 海信集团有限公司 | Method for controlling intelligent household equipment, server and gateway |
CN107688329B (en) * | 2017-08-21 | 2020-02-14 | 杭州博联智能科技股份有限公司 | Intelligent home control method and intelligent home control system |
CN107577151A (en) * | 2017-08-25 | 2018-01-12 | 谢锋 | A kind of method, apparatus of speech recognition, equipment and storage medium |
- 2018-04-13 CN CN201810332631.1A patent/CN108683574B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN108683574A (en) | 2018-10-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108683574B (en) | Equipment control method, server and intelligent home system | |
US11282520B2 (en) | Method, apparatus and device for interaction of intelligent voice devices, and storage medium | |
CN108831448B (en) | Method and device for controlling intelligent equipment through voice and storage medium | |
US10324707B2 (en) | Method, apparatus, and computer-readable storage medium for upgrading a ZigBee device | |
CN107294793B (en) | Replacement method, device and equipment of intelligent household equipment and storage medium | |
CN106487928B (en) | Message pushing method and device | |
CN106683674A (en) | System and method for controlling intelligent home by aid of voice | |
CN110618613A (en) | Linkage control method and device for intelligent equipment | |
US20170060599A1 (en) | Method and apparatus for awakening electronic device | |
CN111464402B (en) | Control method of intelligent household equipment, terminal equipment and medium | |
CN109473092B (en) | Voice endpoint detection method and device | |
EP2840455A1 (en) | Method, apparatus and system for intelligently controlling device, and plug-and-play device | |
JP2017506772A (en) | Intelligent device scene mode customization method and apparatus | |
AU2015292985A1 (en) | Subscriber identification module management method and electronic device supporting the same | |
KR20160018852A (en) | Method and device for automatically displaying application component on desktop | |
US10957305B2 (en) | Method and device for information processing | |
US10091141B2 (en) | Method and device for providing communication between multi-devices | |
WO2015139468A1 (en) | Method and related device for remotely controlling smart television | |
CN105404161A (en) | Intelligent voice interaction method and device | |
CN106572131B (en) | The method and system that media data is shared in Internet of Things | |
CN105425603A (en) | Method and apparatus for controlling intelligent equipment | |
CN105912358A (en) | Intelligent electronic device and setting method thereof | |
CN113615141B (en) | Account association method, device, system, server and storage medium | |
CN112486031A (en) | Control method of intelligent household equipment, storage medium and intelligent terminal | |
CN104968057A (en) | Intelligent hardware device automatic networking method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CP01 | Change in the name or title of a patent holder | ||
Address after: 266100 Songling Road, Laoshan District, Qingdao, Shandong Province, No. 399
Patentee after: Qingdao Hisense Smart Life Technology Co.,Ltd.
Address before: 266100 Songling Road, Laoshan District, Qingdao, Shandong Province, No. 399
Patentee before: QINGDAO HISENSE SMART HOME SYSTEMS Co.,Ltd.