WO2019034083A1 - Smart home voice control method and smart device - Google Patents

Smart home voice control method and smart device

Info

Publication number: WO2019034083A1
Authority: WO, WIPO (PCT)
Prior art keywords: information, smart, location information, location, action
Application number: PCT/CN2018/100662
Other languages: English (en), French (fr)
Inventors: 李林, 邢栋
Original Assignee: 捷开通讯(深圳)有限公司
Application filed by 捷开通讯(深圳)有限公司
Publication of WO2019034083A1


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803: Home automation networks
    • H04L 12/2816: Controlling appliance services of a home automation network by calling their functionalities
    • H04L 12/282: Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 19/00: Programme-control systems
    • G05B 19/02: Programme-control systems electric
    • G05B 19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B 19/4185: Total factory control characterised by the network communication
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/26: Speech to text systems
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00: Program-control systems
    • G05B 2219/20: Pc systems
    • G05B 2219/26: Pc applications
    • G05B 2219/2642: Domotique, domestic, home control, automation, smart house

Definitions

  • The present invention relates to the field of smart home technology, and in particular to a smart home voice control method and a smart device.
  • For voice control, the device needs to be woken up first; the wake-up word is usually a preset device name, and the user specifies the target device by calling this specific name before performing subsequent voice control.
  • This method of specifying the target device by calling its name is acceptable with a single smart device, but as the number of smart voice devices in the home increases, every device has its own device name, forcing the user to remember a large number of names and degrading the experience. In addition, if there are two identical devices, both may be woken up and controlled simultaneously, resulting in erroneous control.
  • A technical solution adopted by the present invention is to provide a smart device, where the smart device includes a processor, and the processor implements the following steps when executing program data:
  • the voice control instruction includes device information, location information, and action information of the device to be controlled;
  • the target device is found, and the action information is sent to the target device to control the target device to perform a corresponding action.
  • the smart device acquires its corresponding device information and location information
  • the voice control instruction includes device information, location information, and action information of the device to be controlled;
  • another technical solution adopted by the present invention is to provide a smart home voice control method, and the method includes the following steps:
  • the voice control instruction includes device information, location information, and action information of the device to be controlled;
  • the target device is found, and the action information is sent to the target device to control the target device to perform a corresponding action.
  • another technical solution adopted by the present invention is to provide a device having a storage function, in which program data is stored, and when the program data is executed by the processor, the following steps are implemented:
  • the voice control instruction includes device information, location information, and action information of the device to be controlled;
  • the target device is found, and the action information is sent to the target device to control the target device to perform a corresponding action.
  • program data is executed by the processor to implement the following steps:
  • the smart device acquires its corresponding device information and location information
  • The beneficial effects of the invention are as follows. In the smart home voice control method, the device information and corresponding location information of each smart device in the space are first acquired. When the user performs voice control, a voice control instruction is issued that includes the device information, location information, and action information of the device to be controlled, where the device information and location information are used to specify the device to be controlled, and the action information is used to control the target device to perform the corresponding action.
  • The terminal receives the voice control instruction sent by the user, parses it, and extracts the device information, location information, and action information of the device to be controlled; the device information and location information of the device to be controlled are then matched against the device information and location information of each smart device, and the target device is searched for through this match. If the matching is successful, the target device is found, and the action information in the voice control instruction is sent to the target device to control it to perform the corresponding action.
  • The above method determines the target device by combining the device information and location information of the smart devices: the device information and location information of the device to be controlled in the user's voice control instruction are matched against the device information and location information of each smart device acquired in advance, and the target device is found through this match. This avoids the trouble of the user having to remember a large number of device names, and the search is accurate and convenient.
  • FIG. 1 is a schematic flow chart of an embodiment of a smart home voice control method according to the present invention.
  • FIG. 2 is a schematic diagram of partitioning of an embodiment of space division according to a user's current location in the smart home voice control method shown in FIG. 1;
  • FIG. 3 is a schematic diagram of a house arrangement for an application scenario of the smart home voice control method shown in FIG. 1;
  • FIG. 4 is a schematic flow chart of another embodiment of a smart home voice control method according to the present invention.
  • FIG. 5 is a schematic flow chart of still another embodiment of a smart home voice control method according to the present invention.
  • FIG. 6 is a schematic structural diagram of an embodiment of a smart device according to the present invention.
  • FIG. 7 is a schematic structural diagram of an embodiment of an apparatus having a storage function according to the present invention.
  • FIG. 1 is a schematic flowchart of an embodiment of a smart home voice control method according to the present invention. As shown in FIG. 1 , the smart home voice control method of this embodiment includes the following steps:
  • 101: Acquire the device information and corresponding location information of each smart device in the space. The smart devices are devices capable of voice control, including devices that are normally fixed in position (such as lights, televisions, computers, air conditioners, and fans) and mobile devices (such as sweeping robots).
  • The device information and corresponding location information of each smart device may be obtained by real-time monitoring with a depth camera, which may be a standalone camera or a camera built into one of the smart devices.
  • Alternatively, the device information and corresponding location information may be obtained by other smart devices having this capability; the present invention is not limited in this respect.
  • the present embodiment will be described by taking a depth camera as an example.
  • The depth camera is used to identify each smart device in the space, and the device information of each smart device, such as one or more of the device name, device brand, and device model, is then obtained through a network search or from system data. If the device information is obtained in networked mode, the smart devices identified by the depth camera are looked up online to obtain the corresponding device information. For example, a picture of a smart device is captured by the depth camera, a networked picture search is performed on it, and the device information corresponding to that smart device is retrieved.
  • In the non-networked mode, the user needs to manually enter the device information of each smart device into the system in advance; the depth camera's object recognition function then identifies each object and compares it against the pre-recorded device information to retrieve the corresponding entry.
  • The location information corresponding to each smart device is also obtained through the depth camera, such as the absolute location information of each smart device in the space and/or its relative location information with respect to the user's current position.
  • The absolute location information may include spatial coordinate information and/or orientation information relative to fixed-position objects in the same space.
  • For the spatial coordinate information, the space can be three-dimensionally modeled by the depth camera to generate spatial coordinates, and image recognition technology is then used for object recognition and screening to obtain the spatial coordinates of each smart device in the space. For devices with fixed positions, modeling can be done in advance to generate the spatial coordinate data, while for mobile devices, tracking generates real-time spatial coordinate data.
  • For the orientation information, image recognition through the depth camera can be used to identify objects and form the orientation of each smart device relative to fixed-position objects in the space, for example, the television against the living room wall, or the light beside the television in the living room.
  • For the relative location information, the user's real-time position and posture in the space may be acquired through depth camera monitoring, where the posture information includes the user's orientation and limb characteristics (such as hands, feet, and head); the space is then divided according to this real-time position and posture information, and, based on the absolute position information of each smart device, the relative location of each smart device with respect to the user's current position under this division is obtained.
  • FIG. 2 is a schematic diagram of one manner of dividing the space according to the user's current location in the smart home voice control method shown in FIG. 1. As shown in FIG. 2, the space around the user's current location can be divided into front, back, left, right, front-left, front-right, rear-left, rear-right, and above (overhead).
  • Other division manners may also be adopted, such as adding a direction-finding function to the depth camera or another smart device with this capability, allowing the space to be divided according to the user's real-time position and posture into east, south, west, north, northeast, southeast, northwest, southwest, above (overhead), and below.
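The directional division described above can be sketched in code. The following Python snippet is an illustrative assumption (the coordinate conventions, the 45-degree sector width, and the height thresholds for "above" and "below" are all invented for the sketch, not specified by the patent): given the user's position and facing direction and a device's spatial coordinates, it classifies the device into one of the eight horizontal sectors or above/below.

```python
import math

# Eight horizontal sectors, ordered clockwise starting from the user's
# facing direction; names are an assumption matching the division in FIG. 2.
SECTORS = ["front", "front-right", "right", "rear-right",
           "rear", "rear-left", "left", "front-left"]

def relative_direction(user_xy, user_yaw_deg, device_xyz, user_height=1.7):
    """Classify a device's position relative to the user's current location."""
    dx = device_xyz[0] - user_xy[0]
    dy = device_xyz[1] - user_xy[1]
    # Vertical check first: devices well above head height count as "above".
    if device_xyz[2] > user_height + 0.5:
        return "above"
    if device_xyz[2] < 0.0:
        return "below"
    # Clockwise angle from the user's facing direction to the device.
    bearing = math.degrees(math.atan2(dy, dx))
    rel = (user_yaw_deg - bearing) % 360
    # Each sector spans 45 degrees, centred on its axis.
    index = int(((rel + 22.5) % 360) // 45)
    return SECTORS[index]
```

With the user at the origin facing the +y axis (yaw 90 degrees), a device two metres ahead is classified as "front" and one to the +x side as "right".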
  • A corresponding device-location relationship topology chart may be established from the acquired device information and location information of each smart device, in which the specific information of each device corresponds to its location information in the space, so as to facilitate integrated management of the acquired information.
  • For example, the depth camera identifies the smart devices in a certain space and acquires the device information of each smart device, including the device name, brand, and model. The space is three-dimensionally modeled by the depth camera to generate spatial coordinates; image recognition technology then performs object recognition and screening to acquire the spatial coordinates of each smart device in the space, and the orientation of each smart device relative to fixed-position objects is analyzed, thereby acquiring the location information of each smart device. A corresponding device-location relationship topology chart is then established from the device name, brand, model, spatial coordinates, and orientation information acquired above, as shown in Table 1 below:
  • The device information and location information of each smart device may be tabulated directly, or each device may be numbered (either sequentially or according to device type) and the chart organized by number.
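The device-location relationship topology chart described above can be represented as a simple table of records. The snippet below is a minimal sketch; the field names and the two sample entries are illustrative assumptions, not the contents of the patent's Table 1.

```python
# Each record pairs a device's information (name, brand, model) with its
# location information (spatial coordinates and orientation description).
devices = [
    {"id": 1, "name": "television", "brand": "BrandA", "model": "X1",
     "coords": (4.0, 2.0, 1.0), "orientation": "living room, against the wall"},
    {"id": 2, "name": "light", "brand": "BrandB", "model": "L9",
     "coords": (1.5, 5.0, 2.6), "orientation": "dining room, ceiling"},
]

def find_devices(name=None, orientation_contains=None):
    """Filter the chart by device information and/or location information."""
    results = []
    for d in devices:
        if name is not None and d["name"] != name:
            continue
        if orientation_contains is not None and orientation_contains not in d["orientation"]:
            continue
        results.append(d)
    return results
```

Keeping device information and location information in one record is what lets a later query combine both, as the method requires.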
  • 102 Receive a voice control instruction of the user, where the voice control instruction includes device information, location information, and action information of the device to be controlled.
  • When the user wants to control a target device, a corresponding voice control instruction is issued, where the voice control instruction includes the device information, location information, and action information of the device to be controlled.
  • the device information and the location information are used to specify the device to be controlled, and the action information is used to control the target device to perform a corresponding operation.
  • The device information may be one or more of the device name, device brand, and device model of the device to be controlled. For convenience of calling, a single type may be used, such as the device name; to further improve the accuracy of the call designation, a combination of multiple types may be used, such as the device name together with the device brand.
  • the location information may be absolute location information of the device to be controlled in space and/or relative location information relative to the current location of the user.
  • The absolute location information may use spatial coordinate information and/or orientation information relative to fixed-position objects in the same space.
  • The device information type and location information type in the voice control instruction should correspond to the device information type and location information type of each smart device acquired in step 101, or the types acquired in step 101 should contain the types in the voice control instruction, so that a corresponding match can be made to find the target device.
  • After the user sends the above voice control instruction, the instruction receiving end receives it accordingly, and then performs step 103.
  • 103: Parse the voice control instruction. After receiving the voice control instruction sent by the user, the instruction receiving end parses it, extracts the device information, location information, and action information of the device to be controlled, and then performs step 104.
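As a rough illustration of this parsing step, the toy snippet below extracts device information, location information, and action information from the text of an instruction (after speech-to-text) using fixed keyword vocabularies. The vocabularies and the keyword-matching approach are assumptions; the patent does not specify a parsing algorithm.

```python
# Illustrative vocabularies; a real system would use a proper NLU component.
ACTIONS = {"turn on": "ON", "open": "ON", "turn off": "OFF", "close": "OFF"}
DEVICE_NAMES = ["television", "light", "air conditioner", "fan"]
LOCATIONS = ["right hand side", "left hand side", "living room",
             "dining room", "room", "upper left"]

def parse_instruction(text):
    """Extract (device, location, action) slots from an instruction string."""
    text = text.lower()
    action = next((v for k, v in ACTIONS.items() if k in text), None)
    device = next((d for d in DEVICE_NAMES if d in text), None)
    # Prefer the longest matching location phrase (e.g. "dining room" over "room").
    found = [loc for loc in LOCATIONS if loc in text]
    location = max(found, key=len) if found else None
    return {"device": device, "location": location, "action": action}
```

For the instruction from the example below, "Turn on the TV on my right hand side" (with "television" spoken in full), this yields the three slots the matching step needs.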
  • 104: Match the device information and location information of the device to be controlled against the device information and location information of each smart device acquired in step 101. If the device information and location information of some device among the acquired smart device information correspond to or contain the device information and location information of the device to be controlled extracted in step 103, the match succeeds and the target device is found; otherwise the match fails and the target device cannot be found. If the match succeeds, step 105 is performed.
  • Specifically, the device information and location information of the device to be controlled may be matched against each smart device in the established device-location relationship topology chart, as above, to find the target device.
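The "correspond to or contain" matching described above can be sketched as follows. Field names and the sample chart are assumptions; treating the match as successful only when exactly one chart entry remains is a design choice of the sketch, reflecting that the combination of device information and location information should identify the target unambiguously.

```python
def match_target(chart, device_info, location_info):
    """Return the single chart entry matching both kinds of information, else None."""
    matches = []
    for entry in chart:
        if device_info and device_info != entry["name"]:
            continue
        # "Contain" semantics: the stored location may be more detailed
        # than the phrase the user spoke.
        if location_info and location_info not in entry["location"]:
            continue
        matches.append(entry)
    return matches[0] if len(matches) == 1 else None

# Hypothetical chart with two identical lights in different rooms.
chart = [
    {"name": "light", "location": "dining room, ceiling"},
    {"name": "light", "location": "living room, ceiling"},
    {"name": "television", "location": "living room, against the wall"},
]
```

Note that asking for "light" with no location is ambiguous here (two lights match), which is exactly the erroneous-control case the background section describes; adding the location information resolves it.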
  • 105: If the matching is successful, the target device is found, and the action information is sent to the target device to control it to perform the corresponding action.
  • If the information of some smart device acquired in step 101 corresponds to or contains the device information and location information of the device to be controlled extracted in step 103, that device is the target device to be controlled, and the action information extracted from the voice control instruction in step 103 is sent to it to control the target device to perform the corresponding action.
  • If the device information and location information of the device to be controlled parsed from the user's voice control instruction do not match the device information and corresponding location information of any smart device acquired in step 101, the target device is not found: the match fails, and the control operation ends or the user is prompted to issue a new voice control instruction. If a new voice instruction is received from the user, the above steps are repeated.
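Putting steps 102 to 105 together, including the failure branch just described, the overall flow can be sketched as a small loop. The `parse` and `match` callbacks below are toy stand-ins (assumptions) for the parsing and matching steps described above.

```python
def control_loop(instructions, parse, match, send):
    """Process user utterances in turn: parse, match, then dispatch or prompt."""
    for text in instructions:
        device, location, action = parse(text)
        target = match(device, location)
        if target is None:                    # match failed: no target found
            send(("prompt", "target device not found, please repeat"))
            continue                          # wait for a new instruction
        send((target, action))                # match succeeded: dispatch action

# Toy demonstration with stand-in parse/match functions.
sent = []
parse = lambda text: tuple(text.split("|"))
match = lambda d, l: "tv4" if (d, l) == ("television", "living room") else None
control_loop(["television|living room|ON", "fan|kitchen|ON"], parse, match, sent.append)
```

The second utterance has no matching device, so the loop emits a prompt instead of an action, mirroring the retry behaviour in the text.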
  • FIG. 3 is a schematic diagram of a house arrangement for an application scenario of the smart home voice control method shown in FIG. 1.
  • As shown in FIG. 3, user 1 has had dinner in the dining room in the evening and has walked to a position between the living room and the dining room (on user 1's left hand side is the dining room, on the right hand side is the living room). The user wants to turn on the television 4 in the living room and turn off the light 3-1 in the dining room, and this can be done by voice control.
  • First, the smart devices in the space are identified by the depth camera 2 arranged at the upper left of user 1, and the device information of each smart device is acquired, including the device name, brand, and model. The spatial coordinates of each smart device, its orientation relative to fixed-position objects, and its position relative to user 1's current location are acquired through the depth camera 2, thereby obtaining the location information of each smart device. A corresponding device-location relationship topology chart is then established from the device information and location information acquired above; the resulting device-location relationship topology chart is shown in Table 2 below:
  • User 1 first wants to turn on the television 4 in the living room, and issues the voice control instruction "Turn on the TV on my right hand side".
  • The instruction receiving end receives user 1's voice control instruction, parses it, and extracts the device information "television", the location information "right hand side", and the action information "turn on" of the device to be controlled. The extracted device information and location information of the device to be controlled are then matched against the device information and location information of each smart device in Table 2; they are found to match the information of device 4 in Table 2, so device 4 is determined to be the target device, and the action information "ON" is sent to device 4 to control it to perform the corresponding turn-on action.
  • Next, user 1 wants to turn off the light 3-1 in the dining room, and issues the voice control instruction "Turn off the light in the dining room to my upper left". The receiving end receives user 1's voice control instruction, parses it, and extracts the device information "light", the location information "upper left, inside the dining room", and the action information "turn off" of the device to be controlled. These are then matched against the device information and location information of each smart device in Table 2; they are found to match the information of device 3-1 in Table 2, so device 3-1 is determined to be the target device, and the action information "OFF" is sent to device 3-1 to control it to perform the corresponding turn-off action.
  • The smart home voice control method of this embodiment determines the target device by combining the device information and location information of the smart devices: the device information and location information of the device to be controlled in the user's voice control instruction are matched against the pre-acquired device information and location information of each smart device, and the target device is found through this match. This avoids the trouble of the user having to remember a large number of device names, and the search is accurate and convenient.
  • FIG. 4 is a schematic flowchart of another embodiment of a smart home voice control method according to the present invention. As shown in FIG. 4, steps 401 and 404 of this embodiment are the same as steps 101 and 104 of the method shown in FIG. 1, respectively. The differences from the method shown in FIG. 1 are:
  • Step 402 Receive a voice control instruction of the user, where the voice control instruction includes device information, location information, action information, and an action execution time of the device to be controlled.
  • Step 403: Parse the above voice control instruction and extract the device information, location information, action information, and action execution time of the device to be controlled;
  • Step 405 If the matching is successful, the target device is found, and the real-time monitoring determines whether the current time reaches the action execution time. When the action execution time is reached, the action information is sent to the target device to control the target device to perform the corresponding action.
  • In this embodiment, in addition to the device information, location information, and action information of the device to be controlled, the voice control instruction issued by the user includes an action execution time. After receiving the user's voice control instruction, the instruction receiving end parses it and extracts the device information, location information, and action information of the device to be controlled, as well as the action execution time. If the device information and location information of the device to be controlled match the acquired device information and location information of some smart device, the target device is found, and the current time is monitored in real time to determine whether it has reached the action execution time in the voice control instruction. When the action execution time is reached, the action information is sent to the target device to control it to perform the corresponding action, so that actions can be performed at the time scheduled in the user's instruction.
  • If the user's voice control instruction does not include an action execution time, the action may be executed immediately by default; that is, once the target device is found by matching, the action information in the voice control instruction is immediately sent to the target device to control it to perform the corresponding action.
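The timed-execution behaviour of this embodiment, with immediate execution as the default when no time is given, can be sketched as follows. The `send_action` callback and the use of `datetime` objects are assumptions for the sketch; a real implementation would arm a timer or scheduler rather than just compute a delay.

```python
from datetime import datetime

def dispatch(action, execute_at, now, send_action):
    """Return the delay in seconds before the action is due to be sent."""
    if execute_at is None:
        send_action(action)                   # immediate execution by default
        return 0.0
    delay = max(0.0, (execute_at - now).total_seconds())
    # A real system would wait until `execute_at` here; this sketch only
    # computes the delay and then sends, standing in for "when the action
    # execution time is reached".
    send_action(action)
    return delay

sent = []
now = datetime(2017, 8, 11, 1, 0)
at = datetime(2017, 8, 11, 1, 20)   # the "1:20 on August 11, 2017" example
```

With the example time from the text, an instruction received at 1:00 would be held for twenty minutes before the "OFF" action is sent.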
  • For example, the user issues the voice control instruction "Turn on the air conditioner in the room". The receiving end receives the instruction, parses it, and extracts the device information "air conditioner", the location information "inside the room", and the action information "turn on" of the device to be controlled. These are then matched against the pre-acquired device information and location information of each smart device in the space; after the target device is found by matching, the action information "ON" is sent to it, and the target device performs the corresponding turn-on action upon receiving it.
  • As another example, if the instruction includes an action execution time, the extracted device information and location information are matched against the device information and location information of each smart device; after the target device is found by matching, real-time monitoring determines whether the current time has reached the action execution time "1:20 on August 11, 2017". When it is reached, the action information "OFF" is sent to the target device, which performs the corresponding turn-off action upon receiving it.
  • The smart home voice control method of this embodiment likewise determines the target device by combining device information and location information, matching the device information and location information of the device to be controlled in the user's voice control instruction against the pre-acquired information of each smart device; this avoids the trouble of remembering many device names, and the search is accurate and convenient.
  • In addition, monitoring of the action execution time is added: when monitoring determines that the current time has reached the action execution time in the user's voice control instruction, the action information is sent to the target device to control it to perform the corresponding action, so that actions are performed at the time scheduled in the user's instruction.
  • In the two embodiments above, the device information and location information of each smart device are acquired in advance, the device information and location information of the device to be controlled in the user's voice control instruction are matched against the acquired information, and the target device is found by matching; this enables unified management and control of every smart device in the space.
  • The present invention further provides another implementation of the smart home voice control method, in which each smart device itself matches the device information and location information of the device to be controlled in the user's voice control instruction against its own device information and location information to determine whether it is the target device.
  • FIG. 5 is a schematic flowchart diagram of still another embodiment of the smart home voice control method according to the present invention. As shown in FIG. 5, the smart home voice control method of this embodiment includes the following steps:
  • 501: The smart device acquires the device information and location information corresponding to itself.
  • The location information of the smart device includes its absolute location information in the space and/or its relative location information with respect to the user's current position; the absolute location information may include spatial coordinate information and/or orientation information relative to fixed-position objects in the same space.
  • The location information may be acquired by a depth camera configured on the smart device itself, or received from another device; that is, another device acquires the smart device's location information and sends it to the smart device, which receives it and thereby obtains its corresponding location information.
  • 502: The smart device receives the user's voice control instruction, where the instruction includes the device information, location information, and action information of the device to be controlled.
  • The device information type and location information type of the device to be controlled should correspond to the device information type and location information type acquired by the smart device, or the types acquired by the smart device should contain the device information type and location information type of the device to be controlled in the user's voice control instruction.
  • 503 Parse the above voice control instruction, and extract device information, location information, and action information of the device to be controlled.
  • After receiving the user's voice control instruction, the smart device parses it and extracts the device information, location information, and action information of the device to be controlled.
  • 504 Match the extracted device information and location information of the device to be controlled against the device information and location information acquired above, to determine whether the smart device itself is the target device.
  • the device information and location information of the device to be controlled are matched against the device information and location information acquired by the smart device in step 501, to determine whether the smart device itself is the target device. If the information acquired in step 501 corresponds to or includes the device information and location information extracted in step 503, the matching succeeds and the smart device is determined to be the target device; otherwise the matching fails and the smart device is not the target device. If the matching succeeds, step 505 is performed.
  • 505 If the device information and location information acquired in step 501 correspond to or include the device information and location information of the device to be controlled extracted in step 503, the smart device is determined to be the target device and performs the action corresponding to the action information in the user's voice control instruction.
  • in this embodiment, the smart device acquires its own device information and location information; after receiving the user's voice control instruction, it parses out the device information and location information of the device to be controlled, matches them against its own device information and location information to determine whether it is the target device, and performs the corresponding instruction action when the matching determines that it is.
  • the method determines the target device by combining the device information and location information of the smart device, which is convenient, fast, and highly accurate, and spares the user the trouble of remembering a large number of device names.
  • an action-execution-time monitoring function may also be added to support controlled, timed execution of instruction actions.
  • the user issues a corresponding voice control instruction that includes the device information, location information, action information, and the action execution time;
  • after receiving the user's voice control instruction, the terminal extracts the device information, location information, and action information of the device to be controlled, as well as the action execution time in the instruction; if the device information and location information of the device to be controlled match the device information and location information acquired by the smart device, the smart device is determined to be the target device.
  • the smart device then monitors in real time whether the current time has reached the action execution time in the voice control instruction; when it has, the action corresponding to the action information in the user's instruction is performed, so that actions are executed on the schedule given in the user's instruction.
  • if no action execution time is given, the action may be performed immediately; that is, once the matching determines that the smart device itself is the target device, the action corresponding to the action information is executed at once.
  • the method above is applied to a smart device; its logical process can be represented by a computer program and implemented by the smart device.
  • FIG. 6 is a schematic structural diagram of an embodiment of the smart device according to the present invention.
  • the smart device 601 of the present embodiment includes a processor 602, which implements the steps of the smart home voice control method embodiments above when executing program data.
  • by executing the program data, the processor 602 of the smart device 601 of the present embodiment can determine the target device by combining the device information and location information of the smart device, which is convenient, fast, and highly accurate, and spares the user the trouble of remembering a large number of device names.
  • FIG. 7 is a schematic structural diagram of an embodiment of a device having a storage function, wherein the device 701 having a storage function stores program data 702 that can be executed by a processor to implement the smart home voice control method embodiments above.
  • the processor may be a processor of the device 701 having the storage function itself, or a processor of another device.
  • the device 701 having the storage function may include any apparatus capable of carrying the program data 702, such as at least one of a USB flash drive, an optical disc, a device, a server, and the like, which is not limited here.
  • when the program data 702 stored on the device 701 having the storage function is executed by a processor, the target device can be determined by combining the device information and location information of the smart device, which is convenient, fast, and highly accurate, and spares the user the trouble of remembering a large number of device names.

Abstract

The present invention discloses a smart home voice control method and a smart device. The method determines the target device by combining the device information and location information of the smart devices: first, the device information and corresponding location information of each smart device are acquired; after a voice control instruction issued by the user is received, the instruction is parsed to extract the device information, location information, and action information of the device to be controlled; the device information and location information of the device to be controlled are then matched against the acquired device information and location information of each smart device to find the target device; if the matching succeeds and the target device is found, the action information is sent to the target device to control the target device to perform the corresponding action. In this way, the present invention spares the user the trouble of remembering a large number of device names, finds the target device with high accuracy, and is convenient and fast.

Description

Smart Home Voice Control Method and Smart Device [Technical Field]
The present invention relates to the field of smart home technology, and in particular to a smart home voice control method and a smart device.
[Background]
With the improvement of living standards, people place higher demands on their living environment and pay increasing attention to the comfort, safety, and convenience of home life. Smart homes have emerged in response to these needs; they aim to integrate computer, automatic control, artificial intelligence, and network communication technologies, connecting the various terminal devices in a home environment through a home network to achieve intelligent control of the home environment. With advances in technology, human-machine products controllable through speech recognition keep emerging, and voice control technology further enhances the convenience of the smart home.
However, when controlling a device by voice at present, the device must first be woken up; the wake word is usually a preset device name, and the user designates the target device by calling that specific name before issuing subsequent voice commands. This method of designating a target device by calling it works acceptably with a single smart device, but as the number of smart voice devices in a home grows, each device having its own name forces the user to remember a large number of device names, and the experience degrades. Moreover, if two identical devices exist, both may be woken up and controlled at the same time, causing erroneous control.
[Summary of the Invention]
The present invention provides a smart home voice control method and a smart device, to solve the problems of existing smart home voice control methods in which the target device is designated by calling its device name: the user must remember a large number of device names, which is troublesome and degrades the user experience, and the presence of identical devices easily leads to erroneous control.
To solve the above technical problem, one technical solution adopted by the present invention is to provide a smart device. The smart device includes a processor that implements the following steps when executing program data:
acquiring device information and corresponding location information of each smart device, wherein the device information of a smart device includes its device name, brand, and model;
receiving a voice control instruction from a user, the voice control instruction including device information, location information, and action information of a device to be controlled;
parsing the voice control instruction, and extracting the device information, location information, and action information of the device to be controlled;
matching the device information and location information of the device to be controlled against the device information and location information of each smart device, to find a target device;
if the matching succeeds and the target device is found, sending the action information to the target device, to control the target device to perform the corresponding action.
To solve the above technical problem, another technical solution adopted by the present invention is to provide a smart home voice control method comprising the following steps:
a smart device acquiring its own device information and location information;
receiving a voice control instruction from a user, the voice control instruction including device information, location information, and action information of a device to be controlled;
parsing the voice control instruction, and extracting the device information, location information, and action information of the device to be controlled;
matching the device information and location information of the device to be controlled against the acquired device information and location information, to determine whether the smart device is the target device;
if the matching succeeds and the smart device is determined to be the target device, performing the action corresponding to the action information.
To solve the above technical problem, yet another technical solution adopted by the present invention is to provide a smart home voice control method comprising the following steps:
acquiring device information and corresponding location information of each smart device;
receiving a voice control instruction from a user, the voice control instruction including device information, location information, and action information of a device to be controlled;
parsing the voice control instruction, and extracting the device information, location information, and action information of the device to be controlled;
matching the device information and location information of the device to be controlled against the device information and location information of each smart device, to find a target device;
if the matching succeeds and the target device is found, sending the action information to the target device, to control the target device to perform the corresponding action.
To solve the above technical problem, still another technical solution adopted by the present invention is to provide a device having a storage function, the device storing program data that, when executed by a processor, implements the following steps:
acquiring device information and corresponding location information of each smart device;
receiving a voice control instruction from a user, the voice control instruction including device information, location information, and action information of a device to be controlled;
parsing the voice control instruction, and extracting the device information, location information, and action information of the device to be controlled;
matching the device information and location information of the device to be controlled against the device information and location information of each smart device, to find a target device;
if the matching succeeds and the target device is found, sending the action information to the target device, to control the target device to perform the corresponding action.
Alternatively, the program data, when executed by a processor, implements the following steps:
a smart device acquiring its own device information and location information;
receiving a voice control instruction from a user, the voice control instruction including device information, location information, and action information of a device to be controlled;
parsing the voice control instruction, and extracting the device information, location information, and action information of the device to be controlled;
matching the device information and location information of the device to be controlled against the acquired device information and location information, to determine whether the smart device is the target device;
if the matching succeeds and the smart device is determined to be the target device, performing the action corresponding to the action information.
The beneficial effects of the present invention are as follows. Unlike the prior art, the present invention provides a smart home voice control method in which the device information and corresponding location information of each smart device in a space are first acquired. When performing a voice control operation, the user issues a voice control instruction that includes the device information, location information, and action information of the device to be controlled, where the device information and location information designate the device to be controlled and the action information controls the target device to perform the corresponding action. The instruction receiver receives the user's voice control instruction, parses it, and extracts the device information, location information, and action information of the device to be controlled; the device information and location information of the device to be controlled are then matched against those of each smart device, and the target device is found through this matching. If the matching succeeds and the target device is found, the action information in the voice control instruction is sent to the target device to control it to perform the corresponding action. The above method determines the target device by combining device information and location information: the device information and location information of the device to be controlled in the user's voice control instruction are matched against the pre-acquired device information and location information of each smart device. This spares the user the trouble of remembering a large number of device names, finds the target with high accuracy, and is convenient and fast.
[Brief Description of the Drawings]
FIG. 1 is a schematic flowchart of an embodiment of the smart home voice control method of the present invention;
FIG. 2 is a schematic diagram of one embodiment of partitioning the space according to the user's current position in the smart home voice control method shown in FIG. 1;
FIG. 3 is a schematic diagram of a house layout in an application scenario of the smart home voice control method shown in FIG. 1;
FIG. 4 is a schematic flowchart of another embodiment of the smart home voice control method of the present invention;
FIG. 5 is a schematic flowchart of yet another embodiment of the smart home voice control method of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of the smart device of the present invention;
FIG. 7 is a schematic structural diagram of an embodiment of the device having a storage function of the present invention.
[Detailed Description]
The present invention is described in detail below with reference to the drawings and embodiments.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the smart home voice control method of the present invention. As shown in FIG. 1, the smart home voice control method of this embodiment includes the following steps:
101: Acquire the device information and corresponding location information of each smart device.
Here, a smart device is a voice-controllable smart device, including devices whose position is normally fixed (such as lamps, televisions, computers, air conditioners, and fans) and movable devices (such as robot vacuums). The device information and corresponding location information of each smart device can be acquired through real-time monitoring by a depth camera, which may be a stand-alone camera or one configured on a smart device. Of course, the device information and location information may also be acquired by other smart devices having this capability, which the present invention does not limit. For ease of description, this embodiment uses a depth camera as an example.
Specifically, the depth camera identifies each smart device in the space, and the device information of each smart device, such as one or a combination of device name, brand, and model, is then acquired over the network or by searching system data. If network lookup is used, after the depth camera identifies a smart device, the device is found through an online search and comparison, and its device information is thereby acquired. For example, the depth camera takes a picture of a smart device, an online image search is performed to find the device, and the corresponding device information is acquired. If the device information is acquired by searching system data, the user must manually enter the device information of each smart device into the system in advance; after the depth camera identifies an object using its object recognition function, the object is compared against the pre-entered device information in the system to acquire the corresponding device information.
At the same time, the depth camera acquires the location information corresponding to each smart device, such as the absolute location of each smart device in the space and/or its location relative to the user's current position.
The absolute location information may include spatial coordinate information and/or orientation information relative to fixed-position objects in the same space. For spatial coordinates, the depth camera can build a three-dimensional model of the space to generate a spatial coordinate system, and image recognition is then used to identify and filter objects and obtain the spatial coordinates of each smart device in the space; for fixed-position devices, modeling can be completed in advance to generate coordinate data, while movable devices are tracked to generate real-time coordinate data. For orientation relative to fixed-position objects, the depth camera uses image recognition to identify objects and derive each smart device's orientation relative to fixed-position objects in the space, such as "the TV in the living room, against the wall" or "the lamp in the living room, next to the TV".
The location relative to the user's current position can be obtained as follows: the depth camera monitors and acquires the user's real-time position and posture in the space, where the posture information includes the user's facing direction and body features (such as hands, feet, and head); the space is then partitioned according to the acquired real-time position and posture of the user; and the location of each smart device relative to the user's current position under that partition is derived from its absolute location information.
As for the space partitioning, see FIG. 2, which is a schematic diagram of one embodiment of partitioning the space according to the user's current position in the method of FIG. 1. As shown in FIG. 2, the space can be partitioned relative to the user's current position into front, back, left, right, front-left, front-right, back-left, back-right, and above (overhead). In other embodiments, other partitions may be used; for example, a direction-finding function can be added to the depth camera or another smart device with such capability, so that the space is partitioned according to the user's real-time position and posture into east, south, west, north, northeast, southeast, northwest, southwest, above (overhead), and below (underfoot).
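The front/back/left/right partition above reduces to a small geometric computation once the user's position and facing direction are known. The sketch below is a hypothetical illustration of that idea (the function name, the 2-D simplification, and the eight 45° sectors are assumptions of this example, not an implementation specified by the patent):

```python
import math

def relative_direction(user_pos, user_heading_deg, device_pos):
    """Classify a device's position relative to a user into one of eight
    horizontal sectors (front, front-left, left, ...).

    user_pos, device_pos: (x, y) coordinates in the room frame.
    user_heading_deg: direction the user faces, degrees CCW from the +x axis.
    """
    dx = device_pos[0] - user_pos[0]
    dy = device_pos[1] - user_pos[1]
    # Bearing of the device as seen from the user, relative to the heading.
    angle = math.degrees(math.atan2(dy, dx)) - user_heading_deg
    angle = (angle + 180) % 360 - 180  # normalize to (-180, 180]
    # Positive angles are to the user's left (CCW), negative to the right.
    sectors = ["front", "front-left", "left", "back-left",
               "back", "back-right", "right", "front-right"]
    index = int(((angle + 22.5) % 360) // 45)
    return sectors[index]
```

For a user at the origin facing the +y direction, a device at (1, 1) lands in the "front-right" sector, matching the intuition of FIG. 2.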
After the device information and corresponding location information of each smart device have been acquired, a corresponding device-location relationship topology table may also be built from them, associating each device's specific information with its location in the space, to facilitate the consolidated management of the acquired information.
For example, the depth camera identifies each smart device in a space and acquires its device information, including device name, brand, and model; at the same time, the depth camera models the space in three dimensions to generate spatial coordinates, image recognition is used to identify and filter objects and obtain each smart device's spatial coordinates, and each device's orientation relative to fixed-position objects in the space is derived, thereby obtaining each device's location information. A corresponding device-location relationship topology table is then built from the acquired device names, brands, models, spatial coordinates, and orientation information, as shown in Table 1 below:
Table 1
Device name | Brand | Model | Spatial coordinates (x, y, z) | Orientation
TV | TCL | 55H7800A-UD | (1.5, 0.5, 1) | in living room / against wall
Dining room lamp | TCL | TCLSZ-0447 | (2, 1.5, 2.7) | in dining room / on ceiling
Air conditioner | XXX | KFR-XXX | (1, 0, 2.2) | in living room / next to TV
Refrigerator | TCL | BCD-282KR50 | (1.7, 1, 2) | in living room / against wall
To further facilitate the consolidated management of the smart devices, the devices may be numbered directly or numbered by device type, and the table organized by number. The device information and location information of the smart devices may of course also be tabulated directly or by number according to other criteria (such as region).
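One way to picture the device-location topology table is as a list of records carrying device attributes and location descriptors, with lookup as attribute filtering. The data layout and function name below are hypothetical sketches mirroring Table 1, not a structure mandated by the patent:

```python
# Hypothetical device-location topology table, mirroring Table 1.
devices = [
    {"name": "TV", "brand": "TCL", "model": "55H7800A-UD",
     "coord": (1.5, 0.5, 1.0), "orientation": "living room / against wall"},
    {"name": "lamp", "brand": "TCL", "model": "TCLSZ-0447",
     "coord": (2.0, 1.5, 2.7), "orientation": "dining room / ceiling"},
    {"name": "AC", "brand": "XXX", "model": "KFR-XXX",
     "coord": (1.0, 0.0, 2.2), "orientation": "living room / next to TV"},
]

def find_target(table, name=None, orientation=None):
    """Return the devices whose attributes satisfy every given criterion;
    criteria left as None are not checked."""
    hits = []
    for dev in table:
        if name is not None and dev["name"] != name:
            continue
        if orientation is not None and orientation not in dev["orientation"]:
            continue
        hits.append(dev)
    return hits

# "the TV in the living room" resolves to exactly one table entry.
matches = find_target(devices, name="TV", orientation="living room")
```

A real table would also carry the relative-to-user location column, refreshed as the user moves.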
102: Receive a voice control instruction from the user, the voice control instruction including device information, location information, and action information of the device to be controlled.
Specifically, when the user wants to control a smart device by voice to perform an operation, the user issues a corresponding voice control instruction that includes the device information, location information, and action information of the device to be controlled. The device information and location information designate the device to be controlled, and the action information controls the target device to perform the corresponding operation. The device information may be one or more of types such as the device name, brand, and model of the device to be controlled; to make calling easy for the user, a single type may be used, such as the device name, while to further improve the accuracy of designation, combinations may be used, such as device name plus brand. The location information may be the absolute location of the device to be controlled in the space and/or its location relative to the user's current position, and the absolute location may use spatial coordinates and/or orientation relative to fixed-position objects in the same space. Note, however, that the device information type and location information type in the voice control instruction must correspond to the types of the information acquired in step 101, or the types acquired in step 101 must include the types in the voice control instruction, to ensure that matching can be performed to find the target device.
For example, if the device information of each smart device acquired in step 101 consists of device name and brand, and the location information consists of orientation relative to fixed-position objects and position relative to the user's current location, then the device information and location information of the device to be controlled in the user's voice control instruction may respectively be the device name and the orientation relative to a fixed-position object, or (device name + brand) and (orientation relative to a fixed-position object + position relative to the user's current location).
After the user issues the voice control instruction, the instruction receiver receives it and proceeds to step 103.
103: Parse the voice control instruction, and extract the device information, location information, and action information of the device to be controlled.
After receiving the user's voice control instruction in step 102, the instruction receiver parses the instruction, extracts the device information, location information, and action information of the device to be controlled, and then performs step 104.
For example, if the user wants to turn on the television in the living room and issues the voice control instruction "turn on the TV in the living room", the instruction receiver receives and parses this instruction, extracting the device information "TV", the location information "in the living room", and the action information "turn on".
104: Match the device information and location information of the device to be controlled against the acquired device information and location information of each smart device, to find the target device.
After the user's voice control instruction has been parsed in step 103 and the device information, location information, and action information of the device to be controlled have been extracted, the device information and location information of the device to be controlled are matched against the device information and location information of each smart device acquired in step 101. If, among the acquired information, some device's device information and location information corresponds to or includes the device information and location information extracted in step 103, the matching succeeds and the target device is found; otherwise the matching fails and no target device can be found. If the matching succeeds, step 105 is performed.
If a corresponding device-location relationship topology table has already been built from the acquired device information and location information of each smart device, the device information and location information of the device to be controlled may be matched as above against the device information and location information of each smart device in the topology table, to find the target device.
105: If the matching succeeds and the target device is found, send the action information to the target device to control the target device to perform the corresponding action.
If, among the information of each smart device acquired in step 101, a device is matched whose device information and location information corresponds to or includes the device information and location information extracted in step 103, that device is the target device to be controlled, and the action information extracted from the voice control instruction in step 103 is sent to that target device, to control it to perform the corresponding action.
If the device information and location information of the device to be controlled, extracted from the user's voice control instruction, match no device among the device information and corresponding location information acquired in step 101 and no target device is found, the user is notified that the matching failed and no target device was found; the control operation then ends or the system waits for the user to issue a new voice control instruction, and upon receiving a new voice instruction from the user, repeats the steps above.
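The flow of steps 102-105 (receive, parse, match, then dispatch the action or report failure) can be sketched end to end. The tiny keyword parser below stands in for real speech recognition and parsing, and all names and vocabulary are illustrative assumptions of this sketch:

```python
def parse_instruction(text, known_names, known_locations, known_actions):
    """Naive stand-in for voice parsing: pick out the first known device
    name, location phrase, and action word found in the utterance."""
    name = next((n for n in known_names if n in text), None)
    location = next((l for l in known_locations if l in text), None)
    action = next((a for a in known_actions if a in text), None)
    return name, location, action

def handle(text, table):
    """Steps 103-105: parse, match against the acquired table, dispatch."""
    name, location, action = parse_instruction(
        text,
        known_names={d["name"] for d in table},
        known_locations={d["location"] for d in table},
        known_actions={"turn on", "turn off"},
    )
    # Step 104: match device info + location info against the table.
    targets = [d for d in table
               if d["name"] == name and d["location"] == location]
    if not targets:
        # Failure branch of step 105: notify the user, await a new command.
        return "match failed: no target device found"
    return f"send '{action}' to {targets[0]['name']} in {targets[0]['location']}"

table = [{"name": "TV", "location": "living room"},
         {"name": "TV", "location": "bedroom"}]
result = handle("turn on the TV in the living room", table)
```

Note how two TVs coexist without ambiguity: the location phrase disambiguates them, which is the point of combining device and location information.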
The following application scenario illustrates the method; see FIG. 3, a schematic house layout for an application scenario of the smart home voice control method of FIG. 1. As shown in FIG. 3, suppose user 1 has finished dinner in the dining room in the evening and walks to a spot between the living room and the dining room (the dining room is on user 1's left, the living room on the right). The user wants to turn on television 4 in the living room and turn off lamp 3-1 in the dining room, and this can be done by voice control. Specifically, depth camera 2, installed at the wall corner to the upper left of user 1, identifies each smart device in the space and acquires its device information, including device name, brand, and model; at the same time, depth camera 2 acquires each smart device's orientation relative to fixed-position objects in the space and its position relative to user 1's current location, thereby obtaining each device's location information. A corresponding device-location relationship topology table is then built from the acquired device information and location information, as shown in Table 2 below:
Table 2
Figure PCTCN2018100662-appb-000001
User 1 first wants to turn on television 4 in the living room and issues the voice control instruction "turn on the TV on my right". The instruction receiver receives user 1's voice control instruction, parses it, and extracts the device information "TV", the location information "on my right", and the action information "turn on". The extracted device information "TV" and location information "on my right" of the device to be controlled are then matched against the device information and location information of each smart device in Table 2; they match the entry for device 4, so device 4 is determined to be the target device, and the action information "turn on" is sent to device 4 to control it to perform the corresponding turn-on action.
Next, user 1 wants to turn off lamp 3-1 in the dining room and issues the voice control instruction "turn off the lamp in the dining room to my upper left". The instruction receiver receives user 1's voice control instruction, parses it, and extracts the device information "lamp", the location information "upper left, in the dining room", and the action information "turn off"; this is matched against the device information and location information of each smart device in Table 2, matching the entry for device 3-1, so device 3-1 is determined to be the target device, and the action information "turn off" is sent to device 3-1 to control it to perform the corresponding turn-off action.
The smart home voice control method of this embodiment determines the target device by combining device information and location information: the device information and location information of the device to be controlled in the user's voice control instruction are matched against the pre-acquired device information and location information of each smart device, and the target device is found through this matching. This spares the user the trouble of remembering a large number of device names, finds the target with high accuracy, and is convenient and fast.
In the smart home voice control method of the embodiment above, once the target device is found by matching, the action information is sent to it immediately. In some cases, however, the user does not want the device to execute the instruction action right away, but at some other time, such as after a delay. The present invention therefore proposes another embodiment of the smart home voice control method, in which the execution time of the instruction action is controllable. See FIG. 4, a schematic flowchart of another embodiment of the smart home voice control method of the present invention. As shown in FIG. 4, steps 401 and 404 of this embodiment are identical to steps 101 and 104 of the method of FIG. 1, respectively; the differences from the method of FIG. 1 are as follows:
Step 402: Receive a voice control instruction from the user, the voice control instruction including device information, location information, action information, and an action execution time for the device to be controlled;
Step 403: Parse the voice control instruction, extracting the device information, location information, action information, and action execution time of the device to be controlled;
Step 405: If the matching succeeds and the target device is found, monitor in real time whether the current time has reached the action execution time; when the action execution time is reached, send the action information to the target device, to control the target device to perform the corresponding action.
That is, the user's voice control instruction includes, in addition to the device information, location information, and action information of the device to be controlled, an action execution time. After receiving the user's voice control instruction, the instruction receiver parses out the device information, location information, and action information of the device to be controlled, and at the same time extracts the action execution time in the instruction. If the device information and location information of the device to be controlled match the acquired device information and location information of some smart device and the target device is found, the receiver monitors in real time whether the current time has reached the action execution time in the voice control instruction; when it has, the action information is sent to the target device to control it to perform the corresponding action, so that actions are executed on the schedule given in the user's instruction.
Note that if no action execution time appears in the user's voice control instruction, immediate execution may be assumed by default: once the target device is found by matching, the action information in the voice instruction is sent to the target device at once, to control it to perform the corresponding action.
For example, suppose the current time is 23:20 on August 10, 2017, and the user wants to turn on the bedroom air conditioner when going to bed but have it turn off two hours later. The user issues the voice control instruction "turn on the air conditioner in the room"; the instruction receiver receives the instruction, parses out the device information "air conditioner", the location information "in the room", and the action information "turn on", then matches the device information "air conditioner" and location information "in the room" against the pre-acquired device information and location information of each smart device in the space. After the target device is found by matching, the action information "turn on" is sent to the target device, which performs the corresponding turn-on action upon receiving it.
To turn the air conditioner off two hours later, the user issues another voice control instruction, "turn off the air conditioner in the room in 2 hours". The instruction receiver receives this instruction and parses out the device information "air conditioner", the location information "in the room", the action information "turn off", and the action execution time "1:20 on August 11, 2017", then matches the device information "air conditioner" and location information "in the room" against the pre-acquired device information and location information of each smart device in the space. After the target device is found by matching, the receiver monitors in real time whether the current time has reached the action execution time "1:20 on August 11, 2017"; when it has, the action information "turn off" is sent to the target device, which performs the corresponding turn-off action upon receiving it.
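The action-execution-time monitoring amounts to holding a matched command until its scheduled time arrives and only then dispatching it. The polling scheduler below is a minimal sketch of that behavior (its names are assumptions of this example; a production system would more likely use timers or an event loop than a busy-wait):

```python
import time
from datetime import datetime, timedelta

def schedule_action(execute_at, send_action, poll_interval=0.05):
    """Poll the clock until `execute_at`, then dispatch the action.
    A stand-in for 'monitor in real time whether the current time
    has reached the action execution time'."""
    while datetime.now() < execute_at:
        time.sleep(poll_interval)
    return send_action()

# "turn off the air conditioner in 2 hours" would schedule roughly:
#   schedule_action(datetime.now() + timedelta(hours=2),
#                   lambda: send("AC", "off"))     # `send` is hypothetical
# For demonstration, use a near-immediate deadline instead.
log = []
schedule_action(datetime.now() + timedelta(milliseconds=100),
                lambda: log.append("off sent to AC"))
```

An instruction with no execution time simply skips the wait, matching the default of immediate execution described above.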
The smart home voice control method of this embodiment likewise determines the target device by combining device information and location information, matching the information of the device to be controlled in the user's voice control instruction against the pre-acquired device information and location information of each smart device; this spares the user the trouble of remembering a large number of device names, with high accuracy, convenience, and speed. In addition, an action-execution-time monitoring function is added: only when the current time is found to have reached the action execution time in the user's voice control instruction is the action information sent to the target device to control it to perform the corresponding action, so that actions are executed on the user's schedule.
The smart home voice control methods of the embodiments above match the device information and location information of the device to be controlled in the user's voice control instruction against the pre-acquired information of all smart devices to find the target device, enabling unified management and control of the smart devices in a space. The present invention further proposes another embodiment of the smart home voice control method, in which a smart device matches the device information and location information of the device to be controlled in the user's voice control instruction against its own device information and location information, to determine whether it is itself the target device. See FIG. 5, a schematic flowchart of yet another embodiment of the smart home voice control method of the present invention. As shown in FIG. 5, the method of this embodiment includes the following steps:
501: The smart device acquires its own device information and location information.
The smart device acquires the device information and location information corresponding to itself.
The device information may include one or a combination of device name, brand, and model. Specifically, a depth camera or another device may identify the smart device, acquire the smart device's information over the network, and then send the device information to the smart device; or the user may manually enter the smart device's information into the system in advance, from which the smart device acquires its corresponding device information.
The smart device's location information includes its absolute location in the space and/or its location relative to the user's current position, where the absolute location may include spatial coordinates and/or orientation relative to fixed-position objects in the same space. The location information may be acquired by a depth camera configured on the smart device itself, or received from another device: the other device obtains the smart device's location information and sends it to the smart device, which receives it to obtain its corresponding location information.
502: Receive a voice control instruction from the user, the voice control instruction including device information, location information, and action information of the device to be controlled.
The smart device receives the user's voice control instruction, which includes the device information, location information, and action information of the device to be controlled. The device information type and location information type of the device to be controlled correspond to the types previously acquired by the smart device, or the types previously acquired by the smart device include the device information type and location information type of the device to be controlled in the user's voice control instruction.
503: Parse the voice control instruction, and extract the device information, location information, and action information of the device to be controlled.
After receiving the user's voice control instruction in step 502, the smart device parses the instruction and extracts the device information, location information, and action information of the device to be controlled.
504: Match the extracted device information and location information of the device to be controlled against the device information and location information acquired above, to determine whether the smart device itself is the target device.
After the user's voice control instruction has been parsed in step 503, the extracted device information and location information of the device to be controlled are matched against the smart device's own device information and location information acquired in step 501, to determine whether the smart device itself is the target device. If the smart device's own information acquired in step 501 corresponds to or includes the device information and location information extracted in step 503, the matching succeeds and the smart device is determined to be the target device; otherwise the matching fails and the smart device is determined not to be the target device. If the matching succeeds, step 505 is performed.
505: If the matching succeeds and the smart device itself is determined to be the target device, perform the action corresponding to the action information.
If the smart device's own device information and location information acquired in step 501 corresponds to or includes the device information and location information of the device to be controlled extracted in step 503, the smart device can be determined to be the target device, and it performs the action corresponding to the action information in the user's voice control instruction.
In the smart home voice control method of this embodiment, the smart device acquires its own device information and location information; after receiving the user's voice control instruction, it parses out the device information and location information of the device to be controlled and matches them against its own device information and location information, to determine whether it is itself the target device; when the matching determines that it is, it performs the corresponding instruction action. The method determines the target device by combining device information and location information, which is convenient, fast, and highly accurate, and spares the user the trouble of remembering a large number of device names.
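In this embodiment each device decides for itself whether it is the target. A minimal sketch of that decision follows; the class layout, attribute names, and the use of set inclusion to model "corresponds to or includes" are assumptions of this example rather than the patent's prescribed implementation:

```python
class SmartDevice:
    def __init__(self, info, location):
        # info / location: sets of descriptors the device knows about itself,
        # e.g. {"lamp", "TCL"} and {"dining room", "upper left"}.
        self.info = info
        self.location = location

    def is_target(self, wanted_info, wanted_location):
        """Step 504: the device is the target iff its own descriptors
        include everything named in the user's instruction."""
        return wanted_info <= self.info and wanted_location <= self.location

lamp = SmartDevice({"lamp", "TCL"}, {"dining room", "upper left"})
# "turn off the lamp in the dining room on my upper left"
hit = lamp.is_target({"lamp"}, {"dining room", "upper left"})
# "turn off the lamp in the living room" -> not this device
miss = lamp.is_target({"lamp"}, {"living room"})
```

Every device in the room runs this same check against the broadcast instruction; only the one whose descriptors cover the instruction acts, which is how two identical devices in different rooms stay distinguishable.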
In other embodiments, an action-execution-time monitoring function may also be added, to support controlled, timed execution of instruction actions. Specifically, if the user wants a smart device to perform an operation at a scheduled time, the voice control instruction issued includes, in addition to the device information, location information, and action information of the device to be controlled, an action execution time. After receiving the user's voice control instruction, the instruction receiver parses out the device information, location information, and action information of the device to be controlled, and at the same time extracts the action execution time in the instruction. If the device information and location information of the device to be controlled match the smart device's own acquired device information and location information, the smart device determines that it is itself the target device, monitors in real time whether the current time has reached the action execution time in the voice control instruction, and, when it has, performs the action corresponding to the action information in the user's instruction, so that the action is executed on the user's schedule.
If the user's voice control instruction includes no action execution time, immediate execution may be assumed by default: once the matching determines that the smart device itself is the target device, the action corresponding to the action information is performed at once.
The method above is applied to a smart device; its logical process can be represented by a computer program and implemented by the smart device.
For the hardware structure of the smart device, see FIG. 6, a schematic structural diagram of an embodiment of the smart device of the present invention. As shown in FIG. 6, the smart device 601 of this embodiment includes a processor 602, which implements the steps of the smart home voice control method embodiments above when executing program data.
By executing the program data, the processor 602 of the smart device 601 of this embodiment can determine the target device by combining the device information and location information of the smart device, which is convenient, fast, and highly accurate, and spares the user the trouble of remembering a large number of device names.
When implemented as software and sold or used as an independent product, the computer program may be stored in a storage medium readable by an electronic device; that is, the present invention also provides a device having a storage function. See FIG. 7, a schematic structural diagram of an embodiment of the device having a storage function of the present invention. The device 701 having a storage function stores program data 702, which can be executed by a processor to implement the steps of the smart home voice control method embodiments above. The processor may be the processor of the device 701 having the storage function itself, or the processor of another device. The device 701 having the storage function may include any apparatus capable of carrying the program data 702, such as at least one of a USB flash drive, an optical disc, a device, a server, and the like, which is not limited here.
When the program data 702 stored on the device 701 having the storage function of this embodiment is executed by a processor, the target device can be determined by combining the device information and location information of the smart devices, which is convenient, fast, and highly accurate, and spares the user the trouble of remembering a large number of device names.
The above is only embodiments of the present invention and does not thereby limit the patent scope of the present invention; any equivalent structural or process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (15)

  1. A smart device, wherein the smart device comprises a processor that implements the following steps when executing program data:
    acquiring device information and corresponding location information of each smart device, wherein the device information of a smart device comprises a device name, brand, and model;
    receiving a voice control instruction from a user, the voice control instruction comprising device information, location information, and action information of a device to be controlled;
    parsing the voice control instruction, and extracting the device information, location information, and action information of the device to be controlled;
    matching the device information and location information of the device to be controlled against the device information and location information of each smart device, to find a target device;
    if the matching succeeds and the target device is found, sending the action information to the target device, to control the target device to perform a corresponding action.
  2. The smart device according to claim 1, wherein the location information comprises absolute location information of each smart device in a space and/or relative location information with respect to the user's current position.
  3. The smart device according to claim 2, wherein the absolute location information comprises spatial coordinate information and/or orientation information relative to fixed-position objects in the same space.
  4. The smart device according to claim 2, wherein the relative location information with respect to the user's current position is acquired as follows:
    monitoring and acquiring the user's real-time position and posture information in the space, then partitioning the space according to the acquired real-time position and posture information of the user, and obtaining, from the absolute location information of each smart device, the relative location information of each smart device with respect to the user's current position under the space partition.
  5. The smart device according to claim 1, wherein the step, executed by the processor, of acquiring the device information and corresponding location information of each smart device comprises acquiring the device information and corresponding location information of each smart device through real-time monitoring.
  6. The smart device according to claim 1, wherein after the step, executed by the processor, of acquiring the device information and corresponding location information of each smart device, the steps further comprise: building a corresponding device-location relationship topology table from the device information and location information of each smart device;
    and the step, executed by the processor, of matching the device information and location information of the target device against the device information and location information of each smart device to find the target device specifically comprises: matching the device information and location information of the target device against the device information and location information of each smart device in the device-location relationship topology table, to find the target device.
  7. The smart device according to claim 1, wherein the processor further implements the following steps when executing program data:
    the voice control instruction further comprises an action execution time;
    after the parsing of the voice control instruction, the extracted information further comprises the action execution time of the device to be controlled;
    before the sending of the action information to the target device, the steps further comprise: monitoring in real time whether the current time has reached the action execution time; and if the action execution time has been reached, sending the action information to the target device, to control the target device to perform the corresponding action.
  8. A smart home voice control method, wherein the method comprises the following steps:
    acquiring device information and corresponding location information of each smart device;
    receiving a voice control instruction from a user, the voice control instruction comprising device information, location information, and action information of a device to be controlled;
    parsing the voice control instruction, and extracting the device information, location information, and action information of the device to be controlled;
    matching the device information and location information of the device to be controlled against the device information and location information of each smart device, to find a target device;
    if the matching succeeds and the target device is found, sending the action information to the target device, to control the target device to perform a corresponding action.
  9. The method according to claim 8, wherein the location information comprises absolute location information of each smart device in a space and/or relative location information with respect to the user's current position.
  10. The method according to claim 9, wherein the absolute location information comprises spatial coordinate information and/or orientation information relative to fixed-position objects in the same space.
  11. The method according to claim 9, wherein the relative location information with respect to the user's current position is acquired as follows:
    monitoring and acquiring the user's real-time position and posture information in the space, then partitioning the space according to the acquired real-time position and posture information of the user, and obtaining, from the absolute location information of each smart device, the relative location information of each smart device with respect to the user's current position under the space partition.
  12. The method according to claim 8, wherein the step of acquiring the device information and corresponding location information of each smart device comprises acquiring the device information and corresponding location information of each smart device through real-time monitoring.
  13. The method according to claim 8, wherein after the step of acquiring the device information and corresponding location information of each smart device, the method further comprises: building a corresponding device-location relationship topology table from the device information and location information of each smart device;
    and the step of matching the device information and location information of the target device against the device information and location information of each smart device to find the target device specifically comprises: matching the device information and location information of the target device against the device information and location information of each smart device in the device-location relationship topology table, to find the target device.
  14. The method according to claim 8, wherein the method further comprises:
    the voice control instruction further comprises an action execution time;
    after the parsing of the voice control instruction, the extracted information further comprises the action execution time of the device to be controlled;
    before the sending of the action information to the target device, the method further comprises: monitoring in real time whether the current time has reached the action execution time; and if the action execution time has been reached, sending the action information to the target device, to control the target device to perform the corresponding action.
  15. A smart home voice control method, wherein the method comprises the following steps:
    a smart device acquiring its own device information and location information;
    receiving a voice control instruction from a user, the voice control instruction comprising device information, location information, and action information of a device to be controlled;
    parsing the voice control instruction, and extracting the device information, location information, and action information of the device to be controlled;
    matching the device information and location information of the device to be controlled against the acquired device information and location information, to determine whether the smart device is a target device;
    if the matching succeeds and the smart device is determined to be the target device, performing the action corresponding to the action information.
PCT/CN2018/100662 2017-08-16 2018-08-15 Smart home voice control method and smart device WO2019034083A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710706333.XA CN107528753B (zh) 2017-08-16 2017-08-16 智能家居语音控制方法、智能设备及具有存储功能的装置
CN201710706333.X 2017-08-16

Publications (1)

Publication Number Publication Date
WO2019034083A1 true WO2019034083A1 (zh) 2019-02-21

Family

ID=60681238

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/100662 WO2019034083A1 (zh) 2017-08-16 2018-08-15 智能家居语音控制方法、智能设备

Country Status (2)

Country Link
CN (1) CN107528753B (zh)
WO (1) WO2019034083A1 (zh)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10827028B1 (en) 2019-09-05 2020-11-03 Spotify Ab Systems and methods for playing media content on a target device
CN113014633A (zh) * 2021-02-20 2021-06-22 杭州云深科技有限公司 预置设备的定位方法、装置、计算机设备及存储介质
US11094319B2 (en) 2019-08-30 2021-08-17 Spotify Ab Systems and methods for generating a cleaned version of ambient sound
CN113608449A (zh) * 2021-08-18 2021-11-05 四川启睿克科技有限公司 一种智慧家庭场景下语音设备定位系统及自动定位方法
US11308959B2 (en) 2020-02-11 2022-04-19 Spotify Ab Dynamic adjustment of wake word acceptance tolerance thresholds in voice-controlled devices
US11328722B2 (en) 2020-02-11 2022-05-10 Spotify Ab Systems and methods for generating a singular voice audio stream
US20220170656A1 (en) * 2019-08-20 2022-06-02 Gd Midea Air-Conditioning Equipment Co., Ltd. Air-conditioning instruction detection method, control device and air-conditioning system
US11822601B2 (en) 2019-03-15 2023-11-21 Spotify Ab Ensemble-based data comparison

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107528753B (zh) * 2017-08-16 2021-02-26 捷开通讯(深圳)有限公司 智能家居语音控制方法、智能设备及具有存储功能的装置
CN108304497B (zh) * 2018-01-12 2020-06-30 深圳壹账通智能科技有限公司 终端控制方法、装置、计算机设备和存储介质
CN108459510A (zh) * 2018-02-08 2018-08-28 百度在线网络技术(北京)有限公司 智能家电的控制方法、设备、系统及计算机可读介质
CN108538290A (zh) * 2018-04-06 2018-09-14 东莞市华睿电子科技有限公司 一种基于音频信号检测的智能家居控制方法
CN108735214A (zh) * 2018-05-30 2018-11-02 出门问问信息科技有限公司 设备的语音控制方法及装置
WO2019239738A1 (ja) * 2018-06-12 2019-12-19 ソニー株式会社 情報処理装置、情報処理方法
CN110619739A (zh) * 2018-06-20 2019-12-27 深圳市领芯者科技有限公司 基于人工智能的蓝牙控制方法、装置和移动设备
CN109347709B (zh) * 2018-10-26 2021-02-23 北京蓦然认知科技有限公司 一种智能设备控制方法、装置及系统
CN109979449A (zh) * 2019-02-15 2019-07-05 江门市汉的电气科技有限公司 一种智能灯具的语音控制方法、装置、设备和存储介质
CN110278291B (zh) * 2019-03-19 2022-02-11 新华三技术有限公司 无线设备命名方法、存储介质及系统
CN111833862B (zh) * 2019-04-19 2023-10-20 佛山市顺德区美的电热电器制造有限公司 一种设备的控制方法、控制设备及存储介质
CN112053683A (zh) * 2019-06-06 2020-12-08 阿里巴巴集团控股有限公司 一种语音指令的处理方法、设备及控制系统
CN110708220A (zh) * 2019-09-27 2020-01-17 恒大智慧科技有限公司 一种智能家居控制方法及系统、计算机可读存储介质
CN110687815B (zh) * 2019-10-29 2023-07-14 北京小米智能科技有限公司 设备控制方法、装置、终端设备及存储介质
CN112987580B (zh) * 2019-12-12 2022-10-11 华为技术有限公司 一种设备的控制方法、装置、服务器以及存储介质
CN111243588A (zh) * 2020-01-13 2020-06-05 北京声智科技有限公司 一种控制设备的方法、电子设备及计算机可读存储介质
CN113823280A (zh) * 2020-06-19 2021-12-21 华为技术有限公司 智能设备控制方法,电子设备及系统
CN112270924A (zh) * 2020-09-18 2021-01-26 青岛海尔空调器有限总公司 空调器的语音控制方法与装置
CN112468377B (zh) * 2020-10-23 2023-02-24 和美(深圳)信息技术股份有限公司 智能语音设备的控制方法及系统
CN113359501A (zh) * 2021-06-29 2021-09-07 前海沃乐家(深圳)智能生活科技有限公司 基于智能开关的远程控制系统及方法
WO2023284562A1 (zh) * 2021-07-14 2023-01-19 海信视像科技股份有限公司 控制设备、家电设备以及控制方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105700375A (zh) * 2016-03-24 2016-06-22 四川邮科通信技术有限公司 一种智能家居控制系统
CN105739320A (zh) * 2016-04-29 2016-07-06 四川邮科通信技术有限公司 一种基于应用场景的智能家居控制方法
CN106448658A (zh) * 2016-11-17 2017-02-22 海信集团有限公司 智能家居设备的语音控制方法及智能家居网关
CN107528753A (zh) * 2017-08-16 2017-12-29 捷开通讯(深圳)有限公司 智能家居语音控制方法、智能设备及具有存储功能的装置

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014190496A1 (en) * 2013-05-28 2014-12-04 Thomson Licensing Method and system for identifying location associated with voice command to control home appliance
CN105700389B (zh) * 2014-11-27 2020-08-11 青岛海尔智能技术研发有限公司 一种智能家庭自然语言控制方法
CN105629750A (zh) * 2015-10-29 2016-06-01 东莞酷派软件技术有限公司 一种智能家居控制方法及系统
CN106847269A (zh) * 2017-01-20 2017-06-13 浙江小尤鱼智能技术有限公司 一种智能家居系统的语音控制方法及装置
CN106707788B (zh) * 2017-03-09 2019-05-28 上海电器科学研究院 一种智能家居语音控制识别系统与方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105700375A (zh) * 2016-03-24 2016-06-22 四川邮科通信技术有限公司 一种智能家居控制系统
CN105739320A (zh) * 2016-04-29 2016-07-06 四川邮科通信技术有限公司 一种基于应用场景的智能家居控制方法
CN106448658A (zh) * 2016-11-17 2017-02-22 海信集团有限公司 智能家居设备的语音控制方法及智能家居网关
CN107528753A (zh) * 2017-08-16 2017-12-29 捷开通讯(深圳)有限公司 智能家居语音控制方法、智能设备及具有存储功能的装置

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11822601B2 (en) 2019-03-15 2023-11-21 Spotify Ab Ensemble-based data comparison
US20220170656A1 (en) * 2019-08-20 2022-06-02 Gd Midea Air-Conditioning Equipment Co., Ltd. Air-conditioning instruction detection method, control device and air-conditioning system
US11094319B2 (en) 2019-08-30 2021-08-17 Spotify Ab Systems and methods for generating a cleaned version of ambient sound
US11551678B2 (en) 2019-08-30 2023-01-10 Spotify Ab Systems and methods for generating a cleaned version of ambient sound
US10827028B1 (en) 2019-09-05 2020-11-03 Spotify Ab Systems and methods for playing media content on a target device
US11308959B2 (en) 2020-02-11 2022-04-19 Spotify Ab Dynamic adjustment of wake word acceptance tolerance thresholds in voice-controlled devices
US11328722B2 (en) 2020-02-11 2022-05-10 Spotify Ab Systems and methods for generating a singular voice audio stream
US11810564B2 (en) 2020-02-11 2023-11-07 Spotify Ab Dynamic adjustment of wake word acceptance tolerance thresholds in voice-controlled devices
CN113014633A (zh) * 2021-02-20 2021-06-22 杭州云深科技有限公司 预置设备的定位方法、装置、计算机设备及存储介质
CN113608449A (zh) * 2021-08-18 2021-11-05 四川启睿克科技有限公司 一种智慧家庭场景下语音设备定位系统及自动定位方法
CN113608449B (zh) * 2021-08-18 2023-09-15 四川启睿克科技有限公司 一种智慧家庭场景下语音设备定位系统及自动定位方法

Also Published As

Publication number Publication date
CN107528753B (zh) 2021-02-26
CN107528753A (zh) 2017-12-29

Similar Documents

Publication Publication Date Title
WO2019034083A1 (zh) Smart home voice control method and smart device
US20200286482A1 (en) Processing voice commands based on device topology
CN108683574B (zh) 一种设备控制方法、服务器和智能家居系统
CN105471705B (zh) 一种基于即时通讯的智能控制方法、设备及系统
WO2017071645A1 (zh) 语音控制方法、装置及系统
US10511550B2 (en) Systems and methods for instant messaging
CN104852975B (zh) 一种家居设备调用方法及装置
CN105045140B (zh) 智能控制受控设备的方法和装置
WO2017206312A1 (zh) 控制、获取智能家居设备上传数据的方法及装置
CN105116859B (zh) 一种利用无人飞行器实现的智能家居系统及方法
WO2016065813A1 (zh) 自定义智能设备场景模式的方法和装置
WO2020244573A1 (zh) 一种语音指令的处理方法、设备及控制系统
WO2016065812A1 (zh) 基于设定场景模式的智能设备控制方法和装置
TW201805744A (zh) 控制系統、控制處理方法及裝置
WO2016065825A1 (zh) 针对智能设备的场景模式推荐方法和装置
WO2020168571A1 (zh) 设备控制方法、装置、系统、电子设备以及云服务器
CN204903983U (zh) 一种智能家居系统及其无人飞行器、智能中枢设备
WO2020133495A1 (zh) 一种智能设备管理方法、移动终端及系统
EP3769168A1 (en) Processing a command
JP2021043936A (ja) インタラクション方法、機器、システム、電子機器及び記憶媒体
CN110661888A (zh) 家电设备的语音控制方法、控制装置及可读存储介质
CN112533070B (zh) 视频声音和画面的调整方法、终端和计算机可读存储介质
CN114120996A (zh) 语音交互方法及装置
WO2023236848A1 (zh) 设备控制方法、装置、系统、电子设备及可读存储介质
CN108415572B (zh) 应用于移动终端的模块控制方法、装置及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18846894

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18846894

Country of ref document: EP

Kind code of ref document: A1