WO2022183936A1 - A smart home device selection method and terminal - Google Patents

A smart home device selection method and terminal

Info

Publication number
WO2022183936A1
WO2022183936A1 (PCT/CN2022/077290)
Authority
WO
WIPO (PCT)
Prior art keywords
smart home
terminal
room
user
distribution map
Prior art date
Application number
PCT/CN2022/077290
Other languages
English (en)
French (fr)
Inventor
黄益贵
乔登龙
夏潘斌
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022183936A1

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 - Systems controlled by a computer
    • G05B15/02 - Systems controlled by a computer electric
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 - Program-control systems
    • G05B2219/20 - Pc systems
    • G05B2219/26 - Pc applications
    • G05B2219/2642 - Domotique, domestic, home control, automation, smart house
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present application relates to the technical field of terminals, and in particular, to a method and terminal for selecting a smart home device.
  • the smart home cloud can extract the user's intent from the user's voice to determine the corresponding controlled smart home device.
  • when multiple devices match, the voice assistant prompts the user to clarify which smart home device is to be controlled, until the corresponding controlled smart home device is determined.
  • the process of multiple voice interactions between the user and the terminal results in poor user experience.
  • the purpose of this application is to provide a smart home device selection method and terminal, so that in a scenario where smart home devices are controlled by voice and there are multiple candidate target smart home devices, the smart home device that the user intends to control can be determined according to the user's voice instruction, reducing the number of voice interactions between the user and the terminal and improving the user experience.
  • a first aspect of the embodiments of the present application provides a method for selecting a smart home device.
  • the method is applied to a smart home system.
  • the smart home system includes a smart home cloud, a plurality of smart home devices and a terminal, at least some of the plurality of smart home devices are located in different rooms, and the smart home cloud is connected to the plurality of smart home devices and the terminal for communication. The method includes: the terminal determines, using pedestrian dead reckoning (PDR) technology and according to a room distribution map, the room where the terminal is currently located, wherein the room distribution map includes location information and/or room information of the multiple smart home devices; the smart home cloud determines the controlled smart home device according to the room where the terminal is currently located and the user's intention, wherein the user's intention is obtained based on the user's voice command, and the smart home cloud stores the room distribution map, or the smart home cloud stores the room information of the multiple smart home devices.
  • in this way, the controlled smart home device can be determined from the room where the terminal held by the user is located, reducing the number of voice interactions between the user and the terminal and improving the user experience; no additional chip support is required, which reduces hardware cost.
  • the smart home cloud determines the controlled smart home device according to the room where the terminal is currently located and the user's intention, which specifically includes: the smart home cloud determines a list of smart home devices according to the user's intention, and determines the controlled smart home device from the smart home device list according to the room where the terminal is currently located, wherein the smart home device list includes smart home devices located in different rooms.
  • the controlled smart home device can be determined by determining the list of smart home devices.
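  • The following Python sketch illustrates the selection step described above: the candidate device list matching the user's intention is filtered by the room where the terminal is located. It is an illustrative assumption of how such a filter could look; the class and function names are not taken from the patent.

```python
# Hypothetical sketch, not the patent's actual cloud code: narrow the candidate
# device list (matching the user's intent) to devices in the terminal's room.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class SmartHomeDevice:
    device_id: str
    device_type: str   # e.g. "light", "air_conditioner"
    room: str          # e.g. "master bedroom E"


def select_controlled_device(candidates: List[SmartHomeDevice],
                             terminal_room: str) -> Optional[SmartHomeDevice]:
    """Return the single candidate located in the terminal's room, if any."""
    in_room = [d for d in candidates if d.room == terminal_room]
    if len(in_room) == 1:
        return in_room[0]   # unique controlled device found
    return None             # still ambiguous: fall back to asking the user
```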
  • the terminal uses the pedestrian dead reckoning (PDR) technology to determine the room where the terminal is currently located according to the room distribution map, which specifically includes:
  • the terminal acquires the acceleration information collected by the acceleration sensor, the angular velocity information collected by the gyro sensor, the direction information collected by the direction sensor and/or the air pressure information collected by the air pressure sensor;
  • the terminal calculates the terminal position by using the PDR technology according to the acceleration information, the angular velocity information, the direction information and/or the air pressure information;
  • the terminal determines the room where the terminal is currently located according to the terminal position and the room distribution map.
  • in this way, when the terminal enters the smart home environment, the room where the terminal is currently located can be obtained, preparing for the subsequent determination of the controlled smart home device from the room where the terminal is located, so that when the user sends a voice instruction, the smart home device that the user intends to control can be determined immediately according to the room where the terminal is located.
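  • A common way to map a PDR-computed position onto a room in the distribution map is a point-in-polygon test against each room's boundary. The sketch below is illustrative only; the data layout and function names are assumptions, not the patent's implementation.

```python
# Illustrative room lookup: ray-casting point-in-polygon test against the
# boundary polygons stored in the room distribution map (names assumed).

from typing import Dict, List, Optional, Tuple

Point = Tuple[float, float]


def point_in_polygon(p: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: is point p inside the polygon?"""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside


def room_of_terminal(position: Point,
                     room_boundaries: Dict[str, List[Point]]) -> Optional[str]:
    """Return the name of the room whose boundary contains the PDR position."""
    for room_name, boundary in room_boundaries.items():
        if point_in_polygon(position, boundary):
            return room_name
    return None
```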
  • the room distribution map further includes a PDR beacon, and before the terminal determines the room where the terminal is currently located according to the terminal location and the room distribution map, the method further includes:
  • the terminal corrects the terminal position according to the PDR beacon.
  • the PDR beacon is obtained by marking on the room distribution map by the user, and the PDR beacon includes a door, a wall corner of a room and/or a corridor.
  • in this way, the user's position can be corrected when the user passes through places where the direction of travel may change significantly, avoiding positioning deviation caused by the cumulative error of PDR and improving the accuracy of the user's positioning in the distribution map.
  • the room distribution map is obtained by the user walking along the rooms with the terminal and drawing using the PDR technology; the location information and/or the room information of the plurality of smart home devices is obtained by the user marking the room layout plan.
  • in this way, the distribution map of smart home devices in each room can be obtained, preparing for the subsequent use of the PDR technology to determine the room where the terminal is located and then determine the controlled smart home device, so as to enhance the user experience.
  • the method further includes:
  • the smart home cloud determines a control instruction according to the controlled smart home device, and sends the control instruction to the controlled smart home device, where the control instruction is used to control the controlled smart home device.
  • the method further includes: acquiring, by the terminal, the user's voice instruction.
  • a second aspect of the embodiments of the present application provides a method for determining the room where a terminal is located, including: the terminal determines, using pedestrian dead reckoning (PDR) technology and according to a room distribution map, the room where the terminal is currently located, wherein the room distribution map includes location information and/or room information of multiple smart home devices; and the terminal sends information about the room where the terminal is currently located.
  • in this way, when the terminal enters the smart home environment, the room where the terminal is currently located can be obtained, preparing for the subsequent determination of the controlled smart home device from the room where the terminal is located, so that when the user sends a voice instruction, the smart home device that the user intends to control can be determined immediately according to the room where the terminal is located.
  • the terminal uses the PDR technology to determine the room where the terminal is currently located according to a room distribution map, which specifically includes: the terminal acquires acceleration information collected by an acceleration sensor, angular velocity information collected by a gyroscope sensor, direction information collected by a direction sensor and/or air pressure information collected by an air pressure sensor; the terminal calculates, by using the PDR technology and according to the acceleration information, the angular velocity information, the direction information and/or the air pressure information, the position of the terminal in the room distribution map; and the terminal determines the room where the terminal is currently located according to the position of the terminal in the room distribution map and the room distribution map.
  • in this way, the room where the terminal is currently located can be obtained according to the user's walking information obtained by the sensors on the terminal and the distribution map, preparing for the subsequent determination of the controlled smart home device from the room where the terminal is located, so that when the user sends a voice instruction, the smart home device that the user intends to control can be determined immediately according to the room where the terminal is located, improving the user experience.
  • in a possible implementation, the room distribution map further includes a PDR beacon; before the terminal determines the room where the terminal is currently located according to the position of the terminal in the room distribution map and the room distribution map, the method further includes: the terminal correcting the terminal position according to the PDR beacon.
  • the PDR beacon is obtained by marking on the room distribution map by the user, and the PDR beacon includes a door, a wall corner of a room and/or a corridor.
  • in this way, the user's position can be corrected when the user passes through places where the direction of travel may change significantly, avoiding positioning deviation caused by the cumulative error of PDR and improving the accuracy of the user's positioning in the distribution map.
  • the room distribution map is obtained by the user walking along the rooms with the terminal and drawing using the PDR technology; the location information and/or the room information of the multiple smart home devices is obtained by the user marking the room layout plan.
  • in this way, the distribution map of smart home devices in each room can be obtained, preparing for the subsequent use of the PDR technology to determine the room where the terminal is located and then determine the controlled smart home device, so as to enhance the user experience.
  • a third aspect of the embodiments of the present application provides a method for selecting a smart home device.
  • the method is applied to a smart home system.
  • the smart home system includes a smart home cloud and a plurality of smart home devices, at least some of which are located in different rooms, and the smart home cloud is connected to the plurality of smart home devices for communication. The method includes: the smart home cloud obtains information about the room where the terminal is currently located, wherein the information about the room where the terminal is currently located is determined by the terminal using pedestrian dead reckoning (PDR) technology and according to a room distribution map, and the room distribution map includes location information and/or room information of multiple smart home devices.
  • the smart home cloud determines the controlled smart home device according to the current room of the terminal and the user's intention, wherein the user's intention is obtained based on the user's voice command, and the smart home cloud stores the room distribution map , or, the smart home cloud stores room information of the plurality of smart home devices.
  • in this way, the controlled smart home device can be determined from the room where the terminal held by the user is located, reducing the number of voice interactions between the user and the terminal and improving the user experience; no additional chip support is required, which reduces hardware cost.
  • the method further includes: the smart home cloud determines a control instruction according to the controlled smart home device, and sends the control instruction to the controlled smart home device, wherein the The control instruction is used to control the controlled smart home device.
  • a fourth aspect of the embodiments of the present application provides a smart home system. The smart home system includes a smart home cloud and a terminal; the smart home cloud and the terminal each include a memory and a processor, the memory stores instructions, and when the instructions are invoked and executed by the processor, the smart home cloud and the terminal are caused to execute the method described in any one of the first aspect and its possible implementation manners of the embodiments of this application.
  • a fifth aspect of the embodiments of the present application provides a terminal, including: a processor, a memory, a display screen, a speaker, a microphone, a direction sensor, a gyroscope sensor, an acceleration sensor, and a computer program, where the computer program is stored in the memory.
  • the computer program includes instructions; the display screen is used to display a user interface; the speaker is used to broadcast voice; the microphone is used to obtain the user's voice; the acceleration sensor is used to collect the movement acceleration of the terminal; the direction sensor is used to determine the direction of the terminal; the gyroscope sensor is used to collect the angular velocity of the terminal's rotation; and when the instructions are invoked and executed by the processor, the terminal is caused to execute the method described in any one of the second aspect and its possible implementation manners of the embodiments of this application.
  • a sixth aspect of the embodiments of the present application provides a computer-readable storage medium or a non-volatile computer-readable storage medium, which includes a computer program; when the computer program runs on an electronic device, the electronic device is caused to execute the method described in any one of the second aspect and its possible implementation manners of the embodiments of the present application, or the electronic device is caused to execute the method described in any one of the first aspect or the third aspect and their possible implementation manners.
  • FIG. 1 is a flowchart of a method for selecting a smart home device provided by an embodiment of the present application
  • FIG. 2 is a schematic structural diagram of a smart home equipment system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a hardware structure of a terminal provided by an embodiment of the present application.
  • FIG. 4 is a flowchart of a method for selecting a smart home device provided by an embodiment of the present application
  • FIG. 5 is a flowchart of a method for obtaining a distribution map of smart home devices in various rooms provided by an embodiment of the application;
  • FIG. 6 is a flowchart of a method for obtaining a room where a terminal is located according to an embodiment of the application
  • FIG. 7 is a flowchart of a method for selecting a smart home device provided by an embodiment of the application.
  • FIG. 8 is a distribution diagram of a smart home device in each room provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a motion trajectory of a terminal in a room provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a motion trajectory of another terminal in a room provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a user position correction using a PDR beacon provided by an embodiment of the present application.
  • FIG. 12 shows a schematic diagram of using PDR to calculate the position of a user in the distribution diagram shown in FIG. 9 provided by an embodiment of the present application;
  • FIG. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • Pedestrian dead reckoning (PDR) is a technology that locates people based on walking data obtained by inertial sensors. The principle is to determine a person's walking direction and step length from the angular motion data and linear motion data obtained by the inertial sensors, and then calculate the pedestrian's position. A pedestrian's track can be obtained by connecting the successive positions.
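  • As a minimal illustration of the dead-reckoning principle just described, each detected step advances the position by the step length along the heading. This is the generic textbook update, stated here as a sketch rather than the patent's own algorithm; the coordinate convention (x east, y north, heading measured clockwise from north) is an assumption.

```python
# Minimal PDR dead-reckoning update for one detected step (generic formulation,
# not code from the patent). Heading is measured clockwise from north.

import math


def pdr_step(x: float, y: float, step_length: float, heading_deg: float):
    """Advance the pedestrian position by one step of given length and heading."""
    theta = math.radians(heading_deg)
    x_next = x + step_length * math.sin(theta)   # east component
    y_next = y + step_length * math.cos(theta)   # north component
    return x_next, y_next
```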
  • ASR: automatic speech recognition, a technology that converts a user's speech into text content.
  • Natural language understanding is a technology that enables computers to understand human natural language, which can obtain human intentions based on human natural language text.
  • Dialog Manager is a technology that manages the context in the process of dialogue between people and computers, and arranges different services involved in the dialogue process.
  • Text to speech is a technology that allows computers to broadcast text in human's natural language.
  • Application refers to a computer program that accomplishes one or more tasks.
  • API: application program interface.
  • Ultra-wide band is a wireless carrier communication technology that transmits data using nanosecond to microsecond non-sinusoidal narrow pulses.
  • Received signal strength indication (RSSI) is an optional part of the wireless transmission layer, used to determine link quality and whether to increase the broadcast transmission strength. RSSI-based positioning determines the distance between the transmitting point and the receiving point from the strength of the received signal, and then performs positioning calculation based on the corresponding data.
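  • The distance estimation mentioned above is commonly done with a log-distance path-loss model. The sketch below shows that standard model; the reference power at one metre and the path-loss exponent are assumed example values, and none of this is taken from the patent itself.

```python
# Standard log-distance path-loss model often used for RSSI ranging
# (illustrative only):  rssi = rssi_at_1m - 10 * n * log10(d)

def rssi_to_distance(rssi_dbm: float,
                     rssi_at_1m_dbm: float = -40.0,
                     path_loss_exponent: float = 2.5) -> float:
    """Estimate distance in metres from a received signal strength value."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10.0 * path_loss_exponent))
```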
  • ZigBee is a short-range, low-power wireless communication technology.
  • When the user asks to turn on a certain smart home device, for example says "turn on the lights", the smart home cloud executes step S01 according to the user's intention: judging the number of matching smart home devices. When the number of smart home devices is equal to 1, the only smart home device is turned on, and step S02 is performed: the voice assistant broadcasts "OK, it has been turned on". When the number of smart home devices is greater than 1, the smart home cloud performs step S03: judging the number of rooms. When the number of rooms is equal to 1, all matching smart home devices in that room are turned on, and step S05 is performed: the voice assistant broadcasts "All smart home devices have been turned on". When the number of rooms is greater than 1, the smart home cloud executes step S04: judging the number of rooms further. When the number of rooms is less than or equal to 3, step S06 is performed: a room is selected from the room list, that is, the voice assistant of the mobile phone broadcasts "Do you want to turn on the lights in the living room, or the ...".
  • The user then issues a voice command for the second time (for example, asking to turn on the bedroom lights), and the smart home cloud executes step S01 again according to this voice command: judging the number of smart home devices.
  • At this time, the number of matching lights is 2 (the smart light in the master bedroom and the smart light in the secondary bedroom), which is greater than 1, so step S03 is performed: judging the number of rooms. The number of bedrooms is 2, which is greater than 1, so step S04 is performed: judging the number of rooms further.
  • Step S06 is then executed to make the voice assistant broadcast: "Do you want to turn on the lights in the master bedroom or the lights in the secondary bedroom?".
  • The user therefore needs to issue a voice command for the third time, for example "turn on the light in the master bedroom"; the smart home cloud executes step S01 for the third time according to this voice command: judging the number of smart home devices. At this time, the number of matching devices (the smart light in the master bedroom) is 1, so step S02 is executed and the voice assistant broadcasts: "OK, it has been turned on".
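  • To make the branching of steps S01 to S06 described above explicit, the following sketch restates the flow in Python. The structure is inferred from the text; the data class, the `turn_on` stub and the `say` callback are placeholders, not the patent's code, and the behaviour for more than three rooms is not described in the excerpt.

```python
# Sketch of the flow S01-S06 described above (structure inferred from the text).

from dataclasses import dataclass
from typing import List


@dataclass
class Device:
    name: str
    room: str


def turn_on(device: Device) -> None:
    print(f"turning on {device.name} in {device.room}")


def handle_intent(devices: List[Device], say) -> None:
    """devices: devices matching the user's intent; say: voice-assistant TTS callback."""
    rooms = {d.room for d in devices}
    if len(devices) == 1:                                       # S01: single matching device
        turn_on(devices[0])
        say("OK, it has been turned on")                        # S02
    elif len(rooms) == 1:                                       # S03: all candidates in one room
        for d in devices:
            turn_on(d)
        say("All smart home devices have been turned on")       # S05
    elif len(rooms) <= 3:                                       # S04: up to three rooms
        say("Which room do you mean: " + ", ".join(sorted(rooms)) + "?")   # S06
    # The behaviour for more than three rooms is not described in the excerpt.


# Example: third command already names the master bedroom light
handle_intent([Device("smart light 5a", "master bedroom E")], print)
```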
  • a possible implementation provides a method for controlling smart home devices based on UWB technology.
  • the user uses a terminal equipped with a UWB transceiver chip to perform a "pointing" operation, that is, the user points the terminal at a controlled smart home device that is also equipped with a UWB transceiver chip, so that the controlled smart home device can be operated.
  • UWB technology is a wireless positioning technology. Different from the global positioning system (GPS), it has higher positioning accuracy, and is especially suitable for indoor places where GPS signals are weak.
  • after receiving the control signal from the terminal pointed at it, the controlled smart home device feeds back a signal to the terminal, and the corresponding operation interface pops up instantly on the screen of the terminal, thus completing the control of the controlled smart home device.
  • in this possible implementation, various operations on the controlled smart home device can then be performed.
  • Another possible implementation provides a smart home device selection method and terminal.
  • the user points the terminal at the smart home device he wants to control, the direction collected by the direction sensor on the terminal is used to determine the target smart home device pointed at by the terminal, and the control interface of the target smart home device is then displayed on the display screen of the terminal, so as to realize control of the target smart home device.
  • This implementation has the following defects: 1. It determines the coordinates of the target smart home device based on Wi-Fi and RSSI technology; because Wi-Fi signal attenuation fluctuates greatly, the positioning accuracy is low and the positioning error is large in indoor positioning scenarios. 2. It requires the terminal to be pointed at the controlled smart home device, and is therefore also not applicable to the scenario where the target smart home device is controlled by voice.
  • In view of this, the embodiments of the present application provide a smart home control method: when a user issues a voice command, the room where the user is currently located is determined using PDR technology according to the distribution map of smart home devices in the user's rooms, and the only controlled smart home device is determined according to the room where the user is located. This reduces the number of voice interactions between the user and the terminal and improves the user experience.
  • the following describes the smart home system 1000 involved in the smart home control method provided by the embodiment of the present application with reference to FIG. 2 .
  • FIG. 2 shows a schematic structural diagram of a smart home system 1000 .
  • the smart home system 1000 includes smart home devices 410 , 420 , and 430 , a terminal 100 , a voice assistant cloud 210 , a smart home cloud 220 , and a smart home gateway 300 .
  • the terminal 100 is installed with sensors and application programs such as a smart home APP, a voice assistant APP, and a perception service APP.
  • the user can use the application programs on the terminal 100 in conjunction with the cloud servers (voice assistant cloud 210, smart home cloud 220) to realize control of the various smart home devices 410, 420, 430.
  • The smart home devices 410, 420, 430, the terminal 100, the applications installed on the terminal 100 (smart home APP, voice assistant APP, perception service APP, etc.), the voice assistant cloud 210, the smart home cloud 220 and the smart home gateway 300 are described below.
  • the smart home devices 410, 420, and 430 are hardware devices that are connected to the smart home gateway 300 through wireless communication technologies such as Wi-Fi, ZigBee, and Bluetooth, and perform corresponding operations by receiving control commands issued by the user through the smart home APP or through the voice assistant APP.
  • the smart home devices include, for example, smart lighting 410, smart TV 420, smart air conditioner 430, smart home gateway 300, smart speakers, smart security equipment, smart projection, and the like.
  • Smart home gateway 300: also known as a router, a hardware device used to connect two or more networks and act as a gateway between them; it is a dedicated intelligent network device that reads the address of each data packet and then decides how to forward it.
  • the router can facilitate users to easily control various smart home devices through wireless connection with terminals such as mobile phones or tablet computers.
  • General routers provide Wi-Fi hotspots.
  • Smart home devices 410, 420, 430 and terminal 100 access the Wi-Fi network by accessing the Wi-Fi hotspot of the router.
  • The routers accessed by the smart home devices 410, 420, 430 and the terminal 100 can be the same or different.
  • Sensor service: may include sensors installed on the terminal 100 for obtaining the user's walking information and/or a compass APP capable of displaying the direction of the user's terminal.
  • the sensors used to obtain the user's walking information may include inertial sensors, such as an acceleration sensor 142, a gyro sensor 143, etc., and may also include a direction sensor, an air pressure sensor 144, and the like.
  • the acceleration sensor 142 is used to determine the acceleration of the terminal on the X-axis, the Y-axis and the Z-axis in the three-axis coordinate system.
  • the gyro sensor 143 is used to determine the angular velocity of the terminal rotation.
  • the direction sensor can obtain the direction of the terminal, and the compass APP can display the direction of the terminal on the display screen 132 according to the direction of the terminal obtained by the direction sensor.
  • the air pressure sensor 144 can obtain air pressure. According to the acceleration, angular velocity and/or the direction of the terminal obtained by the sensor, the position of the terminal at any moment can be calculated by using the PDR technology. According to the air pressure obtained by the sensor, the altitude of the terminal at any time can be calculated by using the corresponding relationship between the air pressure and the altitude.
  • Smart home APP: a software program installed on the terminal that the user uses to select and control various smart home devices.
  • the smart home APP may have an operation interface, and the user can control the corresponding smart home device by operating the operation interface.
  • the smart home APP can also have the function of drawing the distribution map of smart home devices in each room.
  • in some embodiments, the smart home APP can access the smart home gateway through Wi-Fi; the user holds the terminal and walks one lap along the walls of each room to obtain a layout plan of the user's rooms, and then marks the locations of the smart home devices in each room and the locations of the doors on the room layout plan to obtain the distribution map of smart home devices in each room.
  • the smart home APP may also receive a drawn room layout plan and/or a distribution map of smart home devices in each room from other devices, through a network, or uploaded by a user.
  • the distribution map of the smart home devices in each room can be saved in the smart home APP, or uploaded to the smart home cloud 220 .
  • the smart home APP referred to below may be an application installed when the terminal leaves the factory, or may be an application downloaded by a user from the network or obtained from other devices during the use of the terminal.
  • Voice assistant APP: an APP installed on the terminal that provides the voice control function for the user. It can use the sound pickup function provided by the terminal's microphone to obtain the user's voice command, convert the voice command input by the user into text content through ASR, and send it to the voice assistant cloud 210. The voice assistant cloud 210 can also generate a text statement based on the result of the controlled smart home device executing the instruction, and the text statement is broadcast in natural human language through TTS. In some embodiments, the voice assistant APP can also send the user's voice command to the voice assistant cloud 210, and the voice assistant cloud converts the user's voice command into text content.
  • Voice assistant cloud 210: used to provide cloud-side functions for the voice assistant APP. It performs semantic analysis on the user's text content through NLU to obtain the user's intention and slot; through DM, it performs context management on the user's text content and, according to the user's intention and slot, calls the voice assistant APP through an API to execute the corresponding operation.
  • the voice assistant APP may also have NLU and DM functions.
  • Smart home cloud 220: a remote server used to provide cloud-side functions for the smart home APP and the smart home devices; alternatively, it may be a smart home central control device installed in the user's home, including a transceiver, a processor and a memory.
  • the user can operate the operation interface of the smart home APP, and the smart home APP sends the user's operation instruction to the smart home cloud 220 according to the user's operation, and the smart home cloud 220 sends the control instruction to the corresponding smart home device, thereby realizing the Control of the corresponding smart home equipment;
  • alternatively, the user can send a voice command to the voice assistant APP; the voice assistant APP converts the user's voice command into text content and sends it to the voice assistant cloud 210; the voice assistant cloud 210 performs semantic analysis on the user's text content to obtain the user's intention and slot, and sends the user's intention and slot to the smart home cloud 220; the smart home cloud 220 generates corresponding control instructions according to the user's intention and slot, and sends them to the corresponding smart home device, thereby realizing control of the corresponding smart home device.
  • Perception service APP: a software program installed on the terminal, used by the user, that is different from the smart home APP.
  • the perception service APP may have an operation interface or may not have an operation interface, it may be a resident APP of the terminal system, or it may be an application installed when the terminal leaves the factory.
  • After the perception service APP is connected to the smart home gateway through Wi-Fi, it calls the functions of the inertial sensors and the direction sensor through the API to obtain the user's walking information (such as the user's walking direction, step length, and the deviation angle of the walking direction between two adjacent steps), and uses PDR combined with the distribution map of smart home devices in each room to determine the room where the terminal is located.
  • After the user sends a voice control command for a smart home device through the voice assistant APP at home, the voice assistant APP works with the voice assistant cloud 210 to determine the user's intention and slot; when the slot lacks information about the room where the user is located, the voice assistant APP calls, through the API, the room determined by the perception service APP, and sends the information about the user's room to the voice assistant cloud.
  • the voice assistant cloud 210 sends the user's intention and the room where the user is located to the smart home cloud 220; the smart home cloud 220 generates a corresponding control instruction according to the user's intention and the room where the user is located, and sends the control instruction to the corresponding smart home device, thereby realizing control of the corresponding smart home device.
  • the perception service APP can also be called by the smart home APP through an API.
  • the "slot" in this embodiment of the present application may also be expressed as a "slot value".
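  • The end-to-end chain described above can be pictured as follows: the voice assistant cloud yields an intent and slots, a missing room slot is filled with the room reported by the perception service, and the smart home cloud then selects the controlled device. The sketch below is a hypothetical illustration; all names, dictionary keys and data are placeholders, not the patent's interfaces.

```python
# Hypothetical sketch of the control chain: fill a missing room slot with the
# perception-service result, then pick the controlled device by type and room.

from typing import Dict, List, Optional


def resolve_room_slot(slots: Dict[str, str],
                      perceived_room: Optional[str]) -> Dict[str, str]:
    """Fill the missing room slot with the room reported by the perception service."""
    if "room" not in slots and perceived_room is not None:
        slots = {**slots, "room": perceived_room}
    return slots


def build_control_instruction(intent: str, slots: Dict[str, str],
                              devices: List[dict]) -> Optional[dict]:
    """Select the controlled device by device type and room, then build an instruction."""
    candidates = [d for d in devices
                  if d["type"] == slots.get("device_type")
                  and d["room"] == slots.get("room")]
    if len(candidates) == 1:
        return {"device_id": candidates[0]["id"], "action": intent}
    return None   # still ambiguous: the voice assistant has to ask the user


# Example: "turn on the air conditioner" said while the terminal is in master bedroom E
devices = [{"id": "3b", "type": "air_conditioner", "room": "master bedroom E"},
           {"id": "4b", "type": "air_conditioner", "room": "children's room F"}]
slots = resolve_room_slot({"device_type": "air_conditioner"}, "master bedroom E")
print(build_control_instruction("turn_on", slots, devices))
```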
  • Terminal 100: refers to a device used to control smart home devices, for example a portable device such as a mobile phone, a tablet computer, an artificial intelligence (AI) smart voice terminal, a wearable device (such as a smart watch or a smart bracelet), an augmented reality (AR)/virtual reality (VR) device, etc.
  • Smart home APP, voice assistant APP, perception service APP, compass APP and sensors can be installed on the terminal.
  • The portable device includes, but is not limited to, the devices listed above. An exemplary hardware structure of a terminal according to an embodiment of the present application is shown in FIG. 3.
  • the terminal 100 includes a processor 110 , an internal memory 121 , an external memory interface 122 , a camera 131 , a display screen 132 , a sensor module 140 , a button 151 , and a universal serial bus (USB) interface 152 , a charging management module 160 , a power management module 161 , a battery 162 , a mobile communication module 171 and a wireless communication module 172 .
  • the terminal 100 may further include a subscriber identification module (SIM) card interface, an audio module, a speaker 153, a receiver, a microphone 154, an earphone interface, a motor, an indicator, a button, and the like.
  • the terminal 100 in this embodiment of the present application may have more or less components than the terminal 100 shown in the figure, may combine two or more components, or may have different component configurations.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, Digital signal processor (digital signal processor, DSP), baseband processor, and/or neural-network processing unit (neural-network processing unit, NPU), etc.
  • a buffer may also be provided in the processor 110 for storing instructions and/or data.
  • the buffer in the processor 110 may be a cache memory.
  • the cache may be used to hold instructions and/or data that have just been used, generated, or recycled by the processor 110 . If the processor 110 needs to use the instruction or data, it can be called directly from the buffer. This helps to reduce the time for the processor 110 to obtain instructions or data, thereby helping to improve the efficiency of the system.
  • Internal memory 121 may be used to store programs and/or data.
  • the internal memory 121 includes a stored program area and a stored data area.
  • the storage program area may be used to store an operating system (such as Android, IOS, etc.), a computer program required for at least one function (such as a voice wake-up function, a sound playback function), and the like.
  • the storage data area may be used to store data (such as audio data) created and/or collected during the use of the terminal 100, and the like.
  • the processor 110 may cause the terminal 100 to execute a corresponding method by calling programs and/or data stored in the internal memory 121, thereby implementing one or more functions.
  • the processor 110 invokes certain programs and/or data in the internal memory, so that the terminal 100 executes the speech recognition method provided in the embodiments of the present application, thereby realizing the speech recognition function.
  • the internal memory 121 may adopt a high-speed random access memory, and/or a non-volatile memory, or the like.
  • the non-volatile memory may include at least one of one or more magnetic disk storage devices, flash memory devices, and/or universal flash storage (UFS), among others.
  • the external memory interface 122 can be used to connect an external memory card (eg, a Micro SD card), so as to expand the storage capacity of the terminal 100 .
  • the external memory card communicates with the processor 110 through the external memory interface 122 to realize the data storage function.
  • the terminal 100 may save images, music, videos and other files in the external memory card through the external memory interface 122 .
  • the camera 131 may be used to capture moving and still images, and the like.
  • the camera 131 includes a lens and an image sensor.
  • the optical image generated by the object through the lens is projected onto the image sensor, and then converted into an electrical signal for subsequent processing.
  • the image sensor may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the image sensor converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • the terminal 100 may include one or N cameras 131 , where N is a positive integer greater than one.
  • Display screen 132 may include a display panel for displaying a user interface.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and so on.
  • the terminal 100 may include one or M display screens 132 , where M is a positive integer greater than one.
  • the terminal 100 may implement a display function through a GPU, a display screen 132, an application processor, and the like.
  • the terminal 100 may display the user interface of the smart home APP through the display screen 132, such as the main interface of the smart home APP, the control interface of the smart home device, and the like.
  • the microphone 154 can be used to acquire voice.
  • the microphone can acquire the user's voice commands
  • the speaker 153 can broadcast computer-generated speech in natural human language; for example, the result of executing the user's voice command is broadcast in natural language, such as "The air conditioner has been turned on for you."
  • Sensor module 140 may include one or more sensors.
  • the inertial sensors may include, for example, an acceleration sensor 142, a gyro sensor 143, etc.; the sensor module 140 may further include an ambient light sensor, a distance sensor, a proximity light sensor, a bone conduction sensor, a temperature sensor, and the like.
  • the direction sensor 141 is used to determine the direction in which the terminal 100 is located.
  • the acceleration sensor 142 is used to determine the acceleration of the terminal on the X-axis, the Y-axis and the Z-axis in the three-axis coordinate system.
  • the gyroscope sensor 143 is used to determine the angular velocity of the terminal's rotation, and judge the motion state of the terminal through the acceleration and the angular velocity.
  • the air pressure sensor 144 is used to measure the air pressure at the location of the terminal. When the air pressure decreases, the terminal is going upstairs; when the air pressure increases, the terminal is going downstairs; in this way, the floor where the terminal is located can be determined.
  • the fingerprint sensor 145 is used to collect fingerprints.
  • the terminal 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking photos with fingerprints, answering incoming calls with fingerprints, and the like.
  • the pressure sensor 146 is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 146 may be provided on the display screen 132. Touch operations acting on the same touch position but with different touch intensities may correspond to different operation instructions.
  • the touch sensor 147 may also be referred to as a "touch panel”.
  • the touch sensor 147 may be disposed on the display screen 132 , and the touch sensor 147 and the display screen 132 form a touch screen, also referred to as a “touch screen”.
  • the touch sensor 147 is used to detect a touch operation on or near it.
  • the touch sensor 147 may communicate the detected touch operation to the application processor to determine the type of touch event.
  • the terminal 100 may provide visual output and the like related to touch operations through the display screen 132 .
  • the touch sensor 147 may also be disposed on the surface of the terminal 100 , which is different from the position where the display screen 132 is located.
  • the USB interface 152 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 152 can be used to connect a charger to charge the terminal 100, and can also be used to transmit data between the terminal 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the USB interface 152 can be used to connect other devices, such as AR devices, computers, and the like, in addition to being an earphone interface.
  • the charging management module 160 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 160 may receive charging input from the wired charger through the USB interface 152 . In some wireless charging embodiments, the charging management module 160 may receive wireless charging input through the wireless charging coil of the terminal 100 . While the charging management module 160 charges the battery 162 , the terminal 100 can also be powered by the power management module 161 .
  • the power management module 161 is used for connecting the battery 162 , the charging management module 160 and the processor 110 .
  • the power management module 161 receives input from the battery 162 and/or the charge management module 160, and supplies power to the processor 110, the internal memory 121, the display screen 132, the camera 131, and the like.
  • the power management module 161 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 161 may also be provided in the processor 110 .
  • the power management module 161 and the charging management module 160 may also be provided in the same device.
  • the mobile communication module 171 may provide a wireless communication solution including 2G/3G/4G/5G, etc. applied on the terminal 100 .
  • the mobile communication module 171 may include a filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 171 can receive the electromagnetic wave signal from the antenna 11, filter, amplify, etc. the received electromagnetic wave signal, and transmit it to the modulation and demodulation processor for demodulation.
  • the mobile communication module 171 can also amplify the signal modulated by the modem processor, and then convert it into an electromagnetic wave signal and radiate it out through the antenna 11 .
  • at least part of the functional modules of the mobile communication module 171 may be provided in the processor 110 .
  • At least part of the functional modules of the mobile communication module 171 may be provided in the same device as at least part of the modules of the processor 110 .
  • the mobile communication module 171 can send voice to other devices, and can also receive voices sent by other devices.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to speakers, receivers, etc.), or displays images or videos through the display screen 132 .
  • the modem processor may be a stand-alone device.
  • the modulation and demodulation processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 171 or other functional modules.
  • the wireless communication module 172 may provide wireless communication solutions applied on the terminal 100, including WLAN (such as a Wi-Fi network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 172 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 172 receives the electromagnetic wave signal via the antenna 12 , frequency modulates and filters the electromagnetic wave signal, and sends the processed signal to the processor 110 .
  • the wireless communication module 172 can also receive the signal to be sent from the processor 110 , perform frequency modulation and amplification on the signal, and then convert it into an electromagnetic wave signal and radiate it through the antenna 12 .
  • the terminal 100 can connect to a router to access a Wi-Fi network through the wireless communication module 172 .
  • the antenna 11 of the terminal 100 is coupled with the mobile communication module 171, and the antenna 12 is coupled with the wireless communication module 172, so that the terminal 100 can communicate with other devices.
  • the mobile communication module 171 can communicate with other devices through the antenna 11
  • the wireless communication module 172 can communicate with other devices through the antenna 12 .
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code Division Multiple Access (WCDMA), Time Division Code Division Multiple Access (TD-SCDMA), Long Term Evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include GPS, global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (QZSS) and/or Satellite based augmentation systems (SBAS).
  • the terminal 100 may also be connected to the smart home device through the mobile communication module 171 or the wireless communication module 172 based on wireless signal transmission. For example, the terminal 100 sends an input operation based on a wireless signal to the smart home device through the mobile communication module 171 or the wireless communication module 172, and the smart home device may return information to the terminal 100, for example in the form of status data.
  • the interface connection relationship between the modules illustrated in the embodiments of the present application is only a schematic illustration, and does not constitute a structural limitation of the terminal 100 .
  • the terminal 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the hardware structure shown in FIG. 3 is only an example.
  • the terminals of the embodiments of the present application may have more or less components than those shown in the figures, may combine two or more components, or may have different component configurations.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • Embodiment 1:
  • the terminal is a movable terminal
  • the movable terminal may be a mobile phone, a tablet computer, a smart wearable device, or other devices.
  • the user can walk around the room with the mobile terminal.
  • With reference to FIG. 9, the smart home device selection method is described in detail by taking as an example a user who enters living room D with a mobile terminal, walks from living room D into master bedroom E, and issues the voice command "turn on the air conditioner" in master bedroom E.
  • the smart home device selection method may include the following steps:
  • Step S1000 Obtain a distribution map of smart home devices in each room.
  • the room distribution map may include, for example, location information and/or room information of the plurality of smart home devices.
  • the location information includes the coordinates of the smart home device in the distribution map; the room information includes the room corresponding to the smart home device, for example, the smart air conditioner 3b is located in the master bedroom E; the smart air conditioner 4b is located in the children's room F.
  • The distribution map of smart home devices in each room can be obtained in various ways. For example, the user can directly upload the distribution map of smart home devices in each room to the smart home APP or the smart home cloud, or the map can be acquired through steps S1001 to S1006 shown in FIG. 5.
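  • For concreteness, the sketch below shows one possible data layout for such a distribution map, holding room boundaries, device positions and rooms, and PDR beacons. The field names and all coordinates are illustrative assumptions; only the device and room names are taken from the example described later in this embodiment.

```python
# Illustrative data layout for the room distribution map (step S1000).
# Coordinates are placeholders; field names are assumptions.

room_distribution_map = {
    "rooms": {
        "master bedroom E":  {"boundary": [(8.0, 2.0), (12.0, 2.0), (12.0, 6.0), (8.0, 6.0)]},
        "children's room F": {"boundary": [(12.0, 2.0), (15.0, 2.0), (15.0, 6.0), (12.0, 6.0)]},
    },
    "devices": {
        "smart air conditioner 3b": {"room": "master bedroom E",  "position": (11.5, 5.5)},
        "smart air conditioner 4b": {"room": "children's room F", "position": (14.5, 5.5)},
    },
    "pdr_beacons": {
        "door 4 of master bedroom E": {"position": (9.0, 2.0)},
    },
}
```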
  • Step S1001 The user opens the smart home App, and enables the function of drawing the distribution map of the smart home in each room (for the specific implementation principle of this function, please refer to the description after step S1005).
  • Step S1002 The user selects the location where the smart home gateway is located as the starting location.
  • the smart home gateway can be a router and is located at door 0 of the room as shown in Figure 8. It should be clear that the user can select any position in the room as the starting position, which is not limited in this application.
  • Step S1003: The user holds the terminal and walks a first lap along the rooms, and the terminal obtains the room layout plan according to the user's walking trajectory.
  • the user walks along the inner peripheral side of each room (ie, the side near the wall) with the terminal.
  • the user can keep the terminal in the same posture, for example, can keep the terminal in a horizontal forward posture.
  • the user's walking trajectory line (the line connecting the user's position calculated by the PDR) constitutes the boundary line of each room.
  • Movable terminals such as sweeping robots can also be used to walk along the inner peripheral side of each room (that is, the side close to the wall), as long as the boundary line of the room can be drawn; this is not limited in this application.
  • Step S1004 The user marks the location where the smart home device is located in the room layout plan.
  • the smart home device can be marked on the boundary line of the room where it is located or inside that boundary, as close to the actual position of the smart home device as possible.
  • Step S1005 The user marks the location of the room door in the room layout plan as a PDR beacon.
  • the acceleration sensor is used to obtain the acceleration data of the user while walking
  • the gyroscope sensor is used to obtain the rotational angular velocity data of the user while walking
  • the direction sensor is used to obtain the walking direction of the user.
  • the perception service APP can obtain the data recorded by the acceleration sensor, the gyroscope sensor and the direction sensor through the API, and use PDR to calculate the user's coordinates in the room layout plan at each step.
  • When PDR technology is used to calculate the user's step length and walking direction, significant accumulated errors occur over time.
  • As a result, there is a deviation between the user's step determined by the PDR technology and the user's actual step.
  • The coordinates of the user in the room layout plan calculated by the perception service APP using PDR will therefore deviate, resulting in inaccurate positioning of the user. Some PDR beacons are thus needed to correct the coordinates of the user in the room layout plan. Since the coordinates of the PDR beacons in the room layout plan are known and fixed, when the user passes near these PDR beacons, the coordinates of the PDR beacons are used to correct the coordinates of the user.
  • When the user passes through a room door, the user's movement direction usually changes relatively greatly. For example, as shown in the figure, the movement direction angles of adjacent steps change relatively obviously (by 90 degrees or more). Similarly, when the user walks from position X into kitchen A, bathroom B, study C and other rooms, the movement direction angles of the user's several steps near the doors of these rooms also change relatively obviously. Therefore, the location of a room door can be used as a PDR beacon.
  • the location of the PDR beacon is not limited to the location of the door of the room, but can also be any other location in the room.
  • a PDR beacon may be any location where the user's movement direction may change significantly, including but not limited to doors, wall corners of rooms, corridors, corridor corners, and the like.
  • In the example of the distribution map shown in FIG. 8, the PDR beacons may include: door 1 of kitchen A, door 2 of bathroom B, door 3 of study C, door 0 of living room D, door 4 of master bedroom E, and door 5 of children's room F.
  • In some embodiments, as shown in FIG. 11, when a PDR beacon is used to correct the user's trajectory, the correction can be applied when the absolute changes of the heading angles θn+1, θn+2, θn+3 and θn+4 over four adjacent steps all lie within the range 90°±α (where α is an angle error threshold, α≥0°; the heading angle can be the angle between the user's walking direction and the N direction in the figure), and the distances between the positions Sn+1, Sn+2, Sn+3, Sn+4 and the position of the PDR beacon are all less than the first distance. In that case, the position of the step closest to the PDR beacon is corrected to the position of the PDR beacon. Of course, the heading angle may also be the angle between the user's walking direction and another direction, and other algorithms may be used to correct the user's trajectory, which is not limited in this application.
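  • As a minimal sketch of the beacon-correction rule described above (not the claimed implementation), the check could look as follows; the numeric values of the first distance and of the angle error threshold α, and the way the heading-angle changes are supplied, are assumptions for illustration only.

```python
import math

# Illustrative thresholds only; the application does not fix their values.
FIRST_DISTANCE = 1.0   # the "first distance" (metres), assumed
ALPHA = 15.0           # angle error threshold alpha (degrees), assumed

def correct_with_beacon(steps, heading_changes, beacon):
    """steps: positions (E, N) of four adjacent steps Sn+1..Sn+4;
    heading_changes: absolute heading-angle changes of those steps (degrees);
    beacon: (E, N) coordinates of the PDR beacon.
    Returns the step list with the step nearest the beacon snapped onto it,
    or the original list if the correction condition is not met."""
    # every heading-angle change must lie within 90 degrees +/- ALPHA
    if not all(abs(change - 90.0) <= ALPHA for change in heading_changes):
        return steps
    # every step must be closer to the beacon than the first distance
    distances = [math.dist(p, beacon) for p in steps]
    if not all(d < FIRST_DISTANCE for d in distances):
        return steps
    corrected = list(steps)
    corrected[distances.index(min(distances))] = beacon
    return corrected
```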
  • In some embodiments, when the user's rooms are small, the influence of the inertial-sensor error on the calculated user position is not significant, and PDR beacons may be omitted.
  • Step S1006: The user, holding the terminal, walks a second lap around the rooms; the terminal corrects the room layout plan according to the trajectories of the first and second laps, and obtains the distribution map of smart home devices in each room.
  • In some embodiments, the walking direction of the second lap may be opposite to that of the first lap: if the user walked the first lap clockwise, the second lap can be walked counterclockwise.
  • When the user's home has multiple floors, the distribution maps of smart home devices in the rooms on the other floors can likewise be obtained by repeating steps S1001 to S1006.
  • The user's altitude can be determined from the air pressure collected by the air pressure sensor (using the correspondence between air pressure and altitude), and combined with the floor height to infer the floor where the user is located, or whether that floor has changed.
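  • A minimal sketch of this pressure-to-floor inference is given below; the international barometric formula, the reference pressure and the 3 m floor height are illustrative assumptions, since the text only states that altitude is derived from the air pressure and combined with the floor height.

```python
def altitude_from_pressure(p_hpa, p0_hpa=1013.25):
    # standard international barometric approximation (assumed here)
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))

def floor_change(p_now_hpa, p_ref_hpa, floor_height_m=3.0):
    """Number of floors the terminal has moved relative to the reference
    pressure sample; a pressure decrease means the terminal has gone up."""
    delta_h = altitude_from_pressure(p_now_hpa) - altitude_from_pressure(p_ref_hpa)
    return round(delta_h / floor_height_m)
```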
  • In some embodiments, as shown in FIG. 8, the obtained distribution map includes: kitchen A, bathroom B, study C, living room D, master bedroom E, and children's room F. The smart home devices in kitchen A include smart light 1a and smart refrigerator 1d; the smart home devices in bathroom B include smart light 2a; the smart home devices in study C include smart light 3a and smart air conditioner 1b; the smart home devices in living room D include smart TV 1c, smart air conditioner 2b, smart light 4a, smart door lock 1e, smart home gateway 1f, and smart security monitoring 1g; the smart home devices in master bedroom E include smart TV 2c, smart air conditioner 3b, and smart light 5a; the smart home devices in children's room F include smart light 6a and smart air conditioner 4b.
  • the distribution map may be stored on the terminal, or the distribution map may be uploaded to the smart home cloud, which is not limited in this application.
  • Step S2000 The perception service APP determines the room where the terminal is located according to the distribution map and the walking information of the user.
  • step S2000 may include the following sub-steps:
  • Step S2001 the perception service APP acquires the distribution map of the user's smart home devices in the room from the smart home cloud.
  • After the perception service APP connects to the smart home gateway through Wi-Fi, it calls the functions of the smart home APP through the API, and the smart home APP queries the smart home cloud for the distribution map of the user's smart home devices in the rooms.
  • the perception service APP can also directly call the distribution map stored on the terminal through the API.
  • The perception service APP can be resident software of the system. To reduce its load on the terminal's processor, the perception service APP can start the PDR function only after connecting to the smart home gateway through Wi-Fi, and turn the PDR function off after the connection to the smart home gateway is disconnected.
  • Step S2002 The perception service APP uses the PDR technology to determine the room where the terminal is located according to the distribution map and the user's walking information.
  • After connecting to the smart home gateway through Wi-Fi, the perception service APP calls the functions of the inertial sensors, the direction sensor and/or the air pressure sensor 144 through the API to obtain the user's walking acceleration, rotational angular velocity, walking direction and/or the air pressure; it obtains the distribution map stored on the smart home cloud or in the smart home APP through the API; and it uses PDR to determine the room where the user is located.
  • inertial sensors may include: acceleration sensors and gyroscope sensors.
  • the acceleration sensor is used to determine the user's acceleration while walking; the gyroscope sensor is used to determine the rotational angular velocity.
  • the direction sensor obtains the user's walking direction.
  • Optionally, the perception service APP can determine from the acceleration whether the user has taken a step and then calculate the user's step length d; calculate the offset angle between the walking directions of two adjacent steps from the rotational angular velocity and, optionally, predict the direction of the user's next step from this offset angle; calculate the heading angle θ of each step from the walking direction and/or the offset angle; and, from the air pressure obtained by the air pressure sensor 144, determine that the terminal is going upstairs when the air pressure decreases and going downstairs when the air pressure rises, and thereby determine the floor where the terminal is located.
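  • A minimal sketch of such step detection and heading estimation is given below; the peak-detection rule, the acceleration threshold and the Weinberg-style step-length coefficient K_STEP are assumptions for illustration and are not specified in this application.

```python
K_STEP = 0.5               # Weinberg-style step-length coefficient, assumed
ACC_PEAK_THRESHOLD = 11.0  # acceleration-magnitude peak threshold (m/s^2), assumed

def is_step(acc_mags):
    """Declare a step when the middle of the last three acceleration-magnitude
    samples is a local peak above the threshold (simplified detector)."""
    if len(acc_mags) < 3:
        return False
    a_prev, a_mid, a_next = acc_mags[-3], acc_mags[-2], acc_mags[-1]
    return a_mid > ACC_PEAK_THRESHOLD and a_mid >= a_prev and a_mid >= a_next

def step_length(acc_mags_in_step):
    # step length d estimated from the peak-to-valley acceleration difference
    return K_STEP * (max(acc_mags_in_step) - min(acc_mags_in_step)) ** 0.25

def heading_angle(direction_sensor_deg, gyro_offset_deg=0.0):
    # heading angle theta relative to the N axis, from the direction sensor,
    # optionally refined by the gyro-derived offset between adjacent steps
    return (direction_sensor_deg + gyro_offset_deg) % 360.0
```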
  • The user's position (coordinates) in the distribution map is calculated according to formula (1), where (E0, N0) are the coordinates of the user's initial position in the distribution map (E0 being the coordinate of the initial position in the E direction and N0 the coordinate in the N direction), n denotes the user's n-th step, dn is the step length of the n-th step, θn is the heading angle of the n-th step, Ek is the coordinate in the E direction after the user has walked k steps, and Nk is the coordinate in the N direction after the user has walked k steps.
  • The above step S2002 is only an example. It should be noted that, depending on the types of sensors mounted on the terminal, the acquired walking data may differ. This application does not limit the types of sensors, as long as the user's position can be calculated from the walking data obtained by the sensors according to the principle of formula (1).
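  • A minimal sketch of the position accumulation of formula (1) is given below, reconstructed from the description that the projected length of each step in the N direction and the E direction is accumulated onto the initial position, with the heading angle measured from the N direction; it is an illustration rather than the claimed implementation.

```python
import math

def pdr_positions(e0, n0, step_lengths, headings_deg):
    """Formula (1): starting from (E0, N0), accumulate the projection of each
    step length onto the E and N axes, with heading angle theta_n measured
    from the N direction."""
    e, n = e0, n0
    track = [(e, n)]
    for d_n, theta_n in zip(step_lengths, headings_deg):
        t = math.radians(theta_n)
        e += d_n * math.sin(t)   # projection of step n on the E axis
        n += d_n * math.cos(t)   # projection of step n on the N axis
        track.append((e, n))
    return track                 # track[k] = (E_k, N_k)
```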
  • FIG. 12 shows a schematic diagram of calculating a user's position in the distribution diagram shown in FIG. 9 by using PDR according to an embodiment of the present application.
  • When the user connects to the smart home gateway through Wi-Fi before arriving home (before passing through home door 0), this indicates that the terminal is being carried by the user into the smart home environment.
  • When the user opens door 0, the direction sensor detects the angle change as door 0 swings open, indicating that the user has opened and passed through door 0. At this point, the coordinates of door 0 on the distribution map are used as the initial position S0 (the coordinates of door 0 are already known from the distribution map of smart home devices in each room drawn in steps S1001-S1006).
  • Based on the coordinates of the initial position S0, the step length d of each of the user's steps (for clarity, the individual dn are not labeled; the line segments between S0-S1, S1-S2, … in the figure represent the user's step lengths) and the heading angle θ of each step, the projected lengths of each step in the N direction and the E direction are accumulated onto the initial position, and the coordinates in the distribution map after the user has walked k steps are thus calculated. The user's step length can be calculated from the relationship between the acceleration obtained by the acceleration sensor and time; the heading angle can be calculated from the relationship between the angular acceleration obtained by the gyroscope sensor and time and/or from the angle change detected by the direction sensor.
  • The N direction and the E direction are the directions of the coordinate axes (N axis, E axis) marked on the floor plan of the user's rooms (the distribution map of smart home devices in each room).
  • For convenience of description, the distribution maps shown in FIGS. 8-9 and FIG. 12 take the mutually perpendicular N direction and E direction as the directions of the reference coordinate axes. In practice, depending on the rooms and the user's preference, the directions of the reference axes may differ: they can be any two mutually perpendicular directions among N, S, W and E, or other directions; this application does not limit this.
  • In some embodiments, the user may not perform the action of opening door 0; in that case, when the terminal connects to the smart home gateway through Wi-Fi, the perception service APP can determine an approximate location according to the detected Wi-Fi strength.
  • the perception service APP uses the PDR technology to determine the user's location based on the coordinates of the approximate location, the user's walking data acquired by the inertial sensor, and/or changes in Wi-Fi strength.
  • When the user's position approaches door 0 and the distance from door 0 is less than the first distance, the user's position is corrected to the position of door 0 (for this correction, refer to step S1005).
  • When the user has already arrived home (has already passed through home door 0) before the terminal connects to the smart home gateway through Wi-Fi, for example, as shown in FIG. 10, when the perception service APP connects to the smart home gateway only after the user has entered the home and walked to master bedroom E, the perception service APP cannot obtain the user's current location and can only determine an approximate location based on the Wi-Fi strength detected by the terminal. When the user leaves the master bedroom through door 4 and enters the living room, the perception service APP can determine the user's location using PDR based on the coordinates of that approximate location, the user's walking data acquired by the inertial sensors, and/or changes in Wi-Fi strength.
  • When the user's position approaches door 4 and the distance from door 4 is less than the first distance, the coordinates of door 4 on the distribution map are used as the initial position. In the same way as in the embodiment shown in FIG. 12, the projected lengths of each of the user's steps in the N direction and the E direction are accumulated onto this initial position, and the coordinates in the distribution map after the user walks k steps are then calculated.
  • When the user's home has multiple floors, the floor where the user is located can be determined from the air pressure obtained by the air pressure sensor 144, and the room where the user is located is then determined, using the distribution map of smart home devices corresponding to that floor, in the same way as in step S2002.
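  • As an illustration of how the room could be looked up once the terminal's coordinates in the distribution map are known, a minimal point-in-polygon sketch is given below; representing the room boundaries as vertex lists and using a ray-casting test are assumptions, since this application does not prescribe a particular lookup method.

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is point=(E, N) inside the closed polygon given as a
    list of (E, N) vertices (a room boundary drawn in steps S1001-S1006)?"""
    x, y = point
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def room_of(terminal_position, distribution_map):
    """distribution_map: {room name: boundary polygon}; returns the room that
    contains the terminal's (E, N) position, or None if no room contains it."""
    for room, boundary in distribution_map.items():
        if point_in_polygon(terminal_position, boundary):
            return room
    return None
```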
  • Step S3000: Determine, according to the user's voice command and the room where the terminal is located, the smart home device that the user intends to control, the smart home device being located in that room.
  • Step S3000 may include, for example, the following sub-steps, as shown in FIG. 7 ,
  • Step S3001 The user says “turn on the air conditioner” to the voice assistant APP on the terminal.
  • the user walks from the door 0 to the position X of the living room D, then walks from the living room D to the position Y of the master bedroom E, and at the position Y makes a voice "turn on the air conditioner".
  • Step S3002 The voice assistant APP converts the voice content "turn on the air conditioner" into text content.
  • In some embodiments, the user's voice content is captured using the audio pickup function of the hardware microphone on the terminal; the voice assistant APP can use ASR to recognize the user's voice, convert the original voice content into text content, and send the text content to the voice assistant cloud.
  • Step S3003 The voice assistant cloud performs semantic analysis on the text content to obtain the user's intention and slot.
  • The voice assistant cloud can use NLU to perform semantic analysis on the text of the user's voice and obtain the user's intent and slot; here, the user's intent is the intent to turn on the air conditioner, and the slot is missing the information about the room where the terminal is located.
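  • As a purely illustrative sketch (the field names below are assumptions, not a format defined by the voice assistant cloud), the intent-and-slot result for this utterance might be represented as follows:

```python
# Hypothetical representation of the NLU result for "turn on the air conditioner".
nlu_result = {
    "intent": "turn_on_air_conditioner",
    "slots": {
        "device_type": "air_conditioner",
        "room": None,   # missing slot: the room where the terminal is located
    },
}

def room_slot_missing(result):
    # when True, the room information is collected from the perception service APP
    return result["slots"].get("room") is None
```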
  • In some embodiments, the semantic analysis can also be performed on the terminal to obtain the user's intent and slot. For example, if the voice assistant APP has the semantic analysis capability, then after obtaining the user's intent and slot and finding that the slot lacks the information of the room where the terminal is located, it calls the function of the perception service APP through the API to obtain the room where the terminal is located (refer to step S2001 and step S2002), and then executes step S3006.
  • Step S3004 The voice assistant cloud sends an instruction to collect the information of the room where the terminal is located to the voice assistant APP.
  • Step S3005 The voice assistant APP calls the function of the perception service APP through the API to determine the room where the terminal is located.
  • In step S3005, the voice assistant APP invokes the function of the perception service APP according to the instruction from the voice assistant cloud, and the perception service APP determines the room where the terminal is located through steps S2001-S2002.
  • Step S3006 The voice assistant APP feeds back the information of the room where the terminal is located to the voice assistant cloud.
  • Step S3007 The voice assistant cloud sends the user's intention, the slot, and the room where the terminal is located to the smart home cloud.
  • Step S3008 The smart home cloud determines a list of rooms where the specified smart home device is located according to the user's intention and slot, and filters out a unique smart home device according to the room where the terminal is located.
  • In some embodiments, when the smart home cloud does not store the user's distribution map, the smart home cloud can filter out a unique smart home device according to the room where the terminal is located, based on the information about the rooms to which the multiple smart home devices belong.
  • In some embodiments, as shown in FIG. 9, when the user walks from living room D into master bedroom E and issues the command "turn on the air conditioner", the list of rooms containing the specified smart home devices obtained in step S3008 includes: smart air conditioner 2b in living room D, smart air conditioner 4b in children's room F, smart air conditioner 3b in master bedroom E, and smart air conditioner 1b in study C; since the terminal is in master bedroom E, smart air conditioner 3b in master bedroom E is filtered out as the unique controlled smart home device.
  • Step S3009 The smart home cloud finds the control instruction of the controlled smart home device according to the unique smart home device, and sends the control instruction to the smart home device.
  • In some embodiments, the smart home cloud finds the control instruction for turning on smart air conditioner 3b in master bedroom E.
  • In some embodiments, if the room where the terminal is located contains multiple smart home devices of the same type, for example multiple air conditioners, the smart home cloud reports to the voice assistant cloud that there are multiple identical controlled smart home devices in that room; the voice assistant cloud feeds back to the voice assistant APP a request for the user to specify the controlled smart home device, and a new round of voice interaction is carried out until the user indicates a unique smart home device.
  • In some embodiments, if the room where the terminal is located contains multiple smart home devices of the same type, the smart home cloud can also send the corresponding control instruction to all of those devices, or the smart home cloud can send the control instruction to the smart home device that the user controlled last time; this application does not limit this.
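  • A minimal sketch of this room-based filtering (step S3008, together with the fallbacks just described) is given below; the device identifiers and the data layout are illustrative assumptions.

```python
def select_controlled_device(candidates, terminal_room):
    """candidates: (device_id, room) pairs matching the user's intent, e.g. all
    smart air conditioners found in step S3008; terminal_room: the room where
    the terminal currently is."""
    in_room = [dev for dev, room in candidates if room == terminal_room]
    if len(in_room) == 1:
        return in_room[0]   # the unique controlled smart home device
    if not in_room:
        return None         # nothing matches in this room: ask the user to clarify
    # several devices of the same type in the room: per the text, either start a
    # new round of voice interaction, control all of them, or reuse the device
    # the user controlled last time
    return in_room

# usage with the example of FIG. 9 (terminal in master bedroom E):
candidates = [("smart_ac_2b", "living room D"), ("smart_ac_4b", "children's room F"),
              ("smart_ac_3b", "master bedroom E"), ("smart_ac_1b", "study C")]
select_controlled_device(candidates, "master bedroom E")  # -> "smart_ac_3b"
```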
  • Step S4001 The smart home device feeds back the execution result of the instruction to the smart home cloud.
  • Step S4002 The smart home cloud feeds back the execution result of the instruction to the voice assistant cloud.
  • Step S4003 The voice assistant cloud can construct the screen display content and broadcast statement according to the result of the execution of the instruction, and send it to the voice assistant APP.
  • Step S4004 The voice assistant APP of the mobile phone displays according to the constructed screen display content: "It has been opened for you", and at the same time performs a voice broadcast according to the broadcast sentence: "It has been opened for you”.
  • Embodiment 2:
  • Embodiment 2 differs from Embodiment 1 in that the terminal is a terminal that does not move frequently.
  • Terminals that do not move frequently may be, for example, smart TVs, smart large screens, speakers with screens, speakers without screens, and the like.
  • infrequently moving terminals usually cannot be carried around and do not have inertial sensors, so they cannot be carried by users and cannot be used to calculate the user's position in the room through PDR technology.
  • the smart home cloud can determine the controlled smart home device matching the voice command according to the smart home devices in the room where the infrequently moving terminal is located.
  • Since these infrequently moving terminals have already been marked in the corresponding rooms when the distribution map was drawn, when the user initiates a voice command through the voice assistant APP on such a terminal, the voice assistant cloud determines the user's intent and slot; if the slot is missing from the user's voice command, the room where the infrequently moving terminal is located can be determined directly from the distribution map on the smart home cloud, and the smart home device that the user intends to control can then be further determined.
  • The method for selecting a smart home device in this embodiment will be described in detail by taking as an example the case in which the infrequently moving terminal is smart TV 1c located in living room D and the user issues the voice command "turn on the air conditioner" in living room D.
  • As in Embodiment 1, the user can obtain the distribution map of smart home devices in each room through steps S1001 to S1006. Since the position of smart TV 1c has already been marked in the corresponding room in step S1004, when the user initiates a voice command through the voice assistant APP on smart TV 1c, the voice assistant APP and/or the voice assistant cloud can execute steps S3002-S3003 to determine the user's intent and slot; if the slot is missing from the user's voice command, the function of the perception service APP can be called through the API, and in step S2001 the perception service APP can determine directly from the distribution map the room where smart TV 1c is located (i.e., living room D). Steps S3006 to S4004 are then executed, so that smart air conditioner 2b, which like smart TV 1c is located in living room D, is turned on.
  • Each embodiment of the present application, and each step in each embodiment, may be used in combination with the others or alone, and the steps may be performed in the same or a different order from that described in the embodiments of the present application, to achieve different technical effects.
  • the methods provided by the embodiments of the present application are introduced from the perspective of an electronic device as an execution subject.
  • the electronic device may include a hardware structure and/or software modules, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is performed in the form of a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • the embodiments of the present application provide an electronic device, and the electronic device is used to implement the smart home device selection method in the above figures.
  • Referring to FIG. 13, the electronic device 1500 may include one or more processors 1510, one or more memories (not shown in FIG. 13), a display screen 1520, an inertial sensor 1530, a speaker 153, a microphone 154, a transceiver 1550, and one or more computer programs stored in the memory, the one or more computer programs including instructions. The display screen 1520 is used to display a user interface; the speaker 153 can be used to broadcast voice; the microphone 154 can be used to obtain the user's voice commands; the inertial sensor 1530 can be used to collect the walking information of the terminal in the natural coordinate system; the transceiver 1550 can be used to receive data from the cloud and send data to the cloud. When the instructions are invoked and executed by the one or more processors 1510, the terminal is caused to execute the method embodiments shown in FIGS. 4-7.
  • In the embodiments of the present application, the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and it can implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in connection with the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), etc., or may also be a volatile memory (volatile memory), for example Random-access memory (RAM).
  • The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory in this embodiment of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
  • The present application also provides a computer storage medium, in which a computer program is stored; when the computer program is executed by a computer, the computer is caused to execute the method embodiments shown in FIGS. 4-7 above.
  • Embodiments of the present application further provide a computer-readable storage medium or a computer non-volatile readable storage medium, on which a computer program is stored; when the program is executed by a processor, it is used to execute a method for generating diversified problems, the method including at least one of the solutions described in the above embodiments.
  • the computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media.
  • the computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium.
  • the computer readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above.
  • a computer-readable storage medium can be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a propagated data signal in baseband or as part of a carrier wave, with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • Program code embodied on a computer readable medium may be transmitted using any suitable medium including, but not limited to, wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for performing the operations of the present application may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or wide area network (WAN), or may be connected to an external computer (eg, through the Internet using an Internet service provider) connect).
  • Embodiments of the present application further provide a computer program product, including instructions, which, when executed on a computer, cause the computer to execute the various method embodiments shown in FIG. 4 to FIG. 7 above.
  • the methods provided by the embodiments of the present application are introduced from the perspective of an electronic device as an execution subject.
  • The electronic device may include a hardware structure and/or software modules, and implement the above functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether one of the above functions is performed in a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.
  • The processors involved in each of the above embodiments may be general-purpose processors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, or discrete hardware components.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • Software modules can be located in a random access memory (RAM), a flash memory, a read-only memory (ROM), a programmable read-only memory or an electrically erasable programmable memory, a register, or another storage medium well established in the art.
  • the storage medium is located in the memory, and the processor reads the instructions in the memory, and completes the steps of the above method in combination with its hardware.
  • Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; that is, they may be located in one place, or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium.
  • In essence, the technical solution of the present application, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, an optical disc, or another medium that can store program code.

Abstract

一种智能家居设备选择方法及终端,应用于智能家居环境中,智能家居系统(1000)包括智能家居云(220)、多个智能家居设备(410,420,430)和终端(100),多个智能家居设备(410,420,430)中的至少部分智能家居设备位于不同房间,智能家居云(220)与多个智能家居设备(410,420,430)以及终端(100)连接以进行通信,包括:利用行人航迹推算PDR技术、根据房间分布图确定终端当前所在房间;根据终端当前所在房间以及用户意图确定被控智能家居设备(410,420,430),使得在语音控制智能家居设备(410,420,430)的场景下,在存在多个目标智能家居设备(410,420,430)备选时,根据用户的语音指示,确定出用户意图控制的智能家居设备(410,420,430),减少用户与终端(100)的语音进行语音交互的次数,提升用户体验。

Description

一种智能家居设备选择方法及终端 技术领域
本申请涉及终端技术领域,特别涉及一种智能家居设备选择方法及终端。
背景技术
随着智能家居技术的发展,用户能够通过向终端上的语音助手发送语音来控制智能家居设备。当用户家中存在多个智能家居设备时,需要用户在语音中明确唯一的被控智能家居设备,如:打开主卧的灯、调高儿童房的空调等,智能家居云才能根据从用户语音提取的意图来确定相应的被控智能家居设备。但是,如果用户在语音中没有明确唯一的被控智能家居设备,如:开灯、调高温度等,智能家居云无法根据该意图确定相应的被控智能家居设备,因此,智能家居云会向语音助手反馈让用户明确唯一被控智能家居设备的指令,直至确定相应的被控智能家居设备。而用户与终端进行多次语音交互的过程造成用户体的体验不佳。
发明内容
鉴于现有技术的以上问题,本申请的目的在于提供一种智能家居设备选择方法及终端,使得在语音控制智能家居设备的场景下,存在多个目标智能家居设备备选的时候,根据用户的语音指示,确定出用户意图控制的智能家居设备,减少用户与终端的语音进行语音交互的次数,提升用户的体验。
本申请实施例的第一方面,提供了一种智能家居设备选择方法,所述方法应用于智能家居系统中,所述智能家居系统包括智能家居云、多个智能家居设备和终端,所述多个智能家居设备中的至少部分智能家居设备位于不同房间,所述智能家居云与所述多个智能家居设备以及所述终端连接以进行通信,所述方法包括:所述终端利用行人航迹推算PDR技术、根据房间分布图确定所述终端当前所在房间,其中,所述房间分布图包括所述多个智能家居设备的所在位置信息和/或所属房间信息;所述智能家居云根据所述终端当前所在房间以及用户意图确定被控智能家居设备,其中,所述用户意图是基于用户语音指令获得的,所述智能家居云存储有所述房间分布图,或者,所述智能家居云存储有所多个智能家居设备的所属房间信息。
通过上述设置,能够在用户的语音指令未包含房间信息时,从用户所持终端所在的房间中确定被控智能家居设备,减少用户与终端进行语音交互的次数,提升用户的体验;无需额外的芯片支持,降低硬件成本。
在一种可能的实现方式中,所述智能家居云根据所述终端当前所在房间和用户意图确定被控智能家居设备,具体包括:所述智能家居云根据所述用户意图确定智能家居设备列表,根据所述终端当前所在房间从所述智能家居设备列表中确定所述被控智能家居设备,其中,所述智能家居设备列表包括位于不同房间的智能家居设备。
通过上述设置,能够在智能家居云未存储有分布图的情况下,通过确定智能家居设备列表被控智能家居设备。
在一种可能的实现方式中,所述终端利用行人航迹推算PDR技术、根据房间分布图确定所述终端当前所在房间,具体包括:
所述终端获取加速度传感器采集的加速度信息、陀螺仪传感器采集的角速度信息、方向传感器采集的方向信息和/或气压传感器采集的气压信息;
所述终端根据所述加速度信息、所述角速度信息、所述方向信息和/或所述气压信息,利用所述PDR技术计算所述终端位置;
所述终端根据所述终端位置和所述房间分布图确定所述终端当前所在房间。
通过上述设置,能够在终端进入智能家居环境中时,获取终端当前所在的房间,为后续从终端所在的房间中确定被控智能家居设备做准备,使得在用户发出语音指示时,能够立即根据终端所在的房间确定用户意图控制的智能家居设备,提升用户体验。
在一种可能的实现方式中,所述房间分布图还包括PDR信标,在所述终端根据所述终端位置和所述房间分布图确定所述终端当前所在房间之前,所述方法还包括:
所述终端根据所述PDR信标校正所述终端位置。
通过上述设置,能够避免因PDR累计误差而造成的对用户定位的偏差,提升对用户在分布图中定位的准确性。
在一种可能的实现方式中,所述PDR信标是由用户在所述房间分布图中进行标注得到的,所述PDR信标包括门、房间的墙壁拐角和/或楼道。
通过上述设置,能够在用户经过这些其行进方向角有可能会产生较大方向变化的地方时,对用户的位置进行校正,避免因PDR累计误差而造成的对用户定位的偏差,提升对用户在分布图中定位的准确性。
在一种可能的实现方式中,所述房间分布图是通过用户携带所述终端沿房间行走、利用所述PDR技术绘制得到的;
所述多个智能家居设备的所在位置信息和/或所属房间信息是由用户在所述房间布局图中进行标注得到的。
通过上述设置,能够获取到智能家居设备在房间的分布图,为后续利用PDR技术确定终端所在的房间中做准备,使得在用户发出语音指示时,能够立即根据终端所在的房间确定用户意图控制的智能家居设备,提升用户体验。
在一种可能的实现方式中,所述方法还包括:
所述智能家居云根据所述被控智能家居设备确定控制指令,并向所述被控智能家居设备发送所述控制指令,其中,所述控制指令用于控制所述被控智能家居设备。
通过上述设置,能够控制用户当前所在房间中的智能家居设备,提升用户体验。
在一种可能的实现方式中,所述方法还包括:所述终端获取所述用户语音指令。
本申请实施例的第二方面,提供了一种确定终端所在房间的方法,包括:所述终端利用行人航迹推算PDR技术、根据房间分布图确定所述终端当前所在房间,其中,所述房间分布图包括多个智能家居设备的所在位置信息和/或所属房间信息;所述终端发送所述终端当前所在房间的信息。
通过上述设置,能够在终端进入智能家居环境中时,获取终端当前所在的房间,为后续从终端所在的房间中确定被控智能家居设备做准备,使得在用户发出语音指示时,能够立即根据终端所在的房间确定用户意图控制的智能家居设备,提升用户体验。
在一种可能的实现方式中,所述终端利用PDR技术、根据房间分布图确定所述终端当前所在房间,具体包括:所述终端获取加速度传感器采集的加速度信息、陀螺仪传感器采集的角速度信息、方向传感器采集的方向信息和/或气压传感器采集的气压信息;所述终端根据所述加速度信息、所述角速度信息、所述方向信息和/或所述气压信息,利用所述PDR技术计算所述终端在所述房间分布图中的位置;所述终端根据所述终端在所述房间分布图中的位置和所述房间分布图确定所述终端当前所在房间。
通过上述设置,能够在终端进入智能家居环境中时,根据终端上的传感器获得的用户的步行信息来和分布图获取终端当前所在的房间,为后续从终端所在的房间中确定被控智能家居设备做准备,使得在用户发出语音指示时,能够立即根据终端所在的房间确定用户意图控制的智能家居设备,提升用户体验。
在一种可能的实现方式中,所述房间分布图还包括PDR信标,在所述终端根据所述终端在所述房间分布图中的位置和所述房间分布图确定所述终端当前所在房间之前,所述方法还包括:所述终端根据所述PDR信标校正所述终端位置。
通过上述设置,能够避免因PDR累计误差而造成的对用户定位的偏差,提升对用户在分布图中定位的准确性。
在一种可能的实现方式中,所述PDR信标是由用户在所述房间分布图中进行标注得到的,所述PDR信标包括门、房间的墙壁拐角和/或楼道。
通过上述设置,能够在用户经过这些其行进方向角有可能会产生较大方向变化的地方时,对用户的位置进行校正,避免因PDR累计误差而造成的对用户定位的偏差,提升对用户在分布图中定位的准确性。
在一种可能的实现方式中,所述房间分布图是通过用户携带所述终端沿房间行走、利用所述PDR技术绘制得到的;所述多个智能家居设备的所在位置信息和/或所属房间信息是由用户在所述房间布局图中进行标注得到的。
通过上述设置,能够获取到智能家居设备在房间的分布图,为后续利用PDR技术确定终端所在的房间中做准备,使得在用户发出语音指示时,能够立即根据终端所在的房间确定用户意图控制的智能家居设备,提升用户体验。
本申请实施例的第三方面,提供了一种智能家居设备选择方法,所述方法应用于智能家居系统中,所述智能家居系统包括智能家居云、多个智能家居设备,所述多个智能家居设备中的至少部分智能家居设备位于不同房间,所述智能家居云与所述多个智能家居设备连接以进行通信,所述方法包括:所述智能家居云获取终端当前所在房间的信息,其中,所述终端当前所在的房间的信息是通过所述终端利用行人航迹推算PDR技术、根据房间分布图确定的,所述房间分布图包括多个智能家居设备的所在位置信息和/或所属房间信息;所述智能家居云根据所述终端当前所在房间以及用户意图确定被控智能家居设备,其中,所述用户意图是基于用户语音指令获得的,所述智能家居云存储有所述房间分布图,或者,所述智能家居云存储有所述多个智能家居设备的所属房间信息。
通过上述设置,能够在用户的语音指令未包含房间信息时,从用户所持终端所在的房间中确定被控智能家居设备,减少用户与终端进行语音交互的次数,提升用户的体验;无需额外的芯片支持,降低硬件成本。
在一种可能的实现方式中,所述方法还包括:所述智能家居云根据所述被控智能家居设备确定控制指令,并向所述被控智能家居设备发送所述控制指令,其中,所述控制指令用于控制所述被控智能家居设备。
通过上述设置,能够控制用户当前所在房间中的智能家居设备,提升用户体验。
本申请实施例的第四方面,提供了一种智能家居系统,所述智能家居系统包括智能家居云和终端,所述智能家居云、所述终端包括存储器、处理器,所述存储器存储有指令,当所述指令被所述处理器调用执行时,使得所述智能家居云、所述终端执行如本申请实施例第一方面及其可能的实现方式中任一项所述的方法。
本申请实施例的第五方面,提供了一种终端,包括:处理器、存储器、显示屏、扬声器、麦克风、方向传感器、陀螺仪传感器、加速度传感器以及计算机程序,所述计算机程序被存储在所述存储器中,所述计算机程序包括指令;所述显示屏,用于显示用户界面;所述扬声器,用于播报用户语音;所述麦克风,用于获取用于语音;所述加速度传感器,用于采集所述终端的移动加速度;所述方向传感器,用于确定终端的方向;所述陀螺仪传感器,用于采集所述终端旋转的角速度;当所述指令被所述处理器调用执行时,使得所述终端执行如本申请实施例第二方面及其可能的实现方式中任一项所述的方法。
本申请实施例的第六方面提供了一种计算机可读存储介质或者非易失性计算机可读存储介质,所述计算机可读存储介质或者非易失性计算机可读存储介质包括计算机程序,当计算机程序在电子设备上运行时,使得所述电子设备执行如本申请实施例第二方面及其可能的实现方式中任一项所述的方法,或者,使得所述电子设备执行如本申请第三方面及其可能的实现方式中任一项所述的方法。
本申请实施例的第四至第六方面所带来的技术效果与本申请第一至第三方面及其可能的实现方式所带来的技术效果相同,为了简洁起见,在此不再赘述。
本申请的这些和其它方面在以下(多个)实施例的描述中会更加简明易懂。
附图说明
以下参照附图来进一步说明本申请的各个特征和各个特征之间的联系。附图均为示例性的,一些特征并不以实际比例示出,并且一些附图中可能省略了本申请所涉及领域的惯常的且对于本申请非必要的特征,或是额外示出了对于本申请非必要的特征,附图所示的各个特征的组合并不用以限制本申请。另外,在本说明书全文中,相同的附图标记所指代的内容也是相同的。具体的附图说明如下:
图1是本申请实施例提供的一种选择智能家居设备的方法流程图;
图2是本申请实施例提供的一种智能家居设备系统的结构示意图;
图3是本申请实施例提供的一种终端的硬件结构示意图;
图4是本申请实施例提供的一种智能家居设备的选择方法的流程图;
图5是申请实施例提供的一种获取智能家居设备在各个房间的分布图的方法流程图;
图6是申请实施例提供的一种获取终端所在房间的方法的流程图;
图7是申请实施例提供的一种智能家居设备的选择方法的流程图;
图8是本申请实施例提供的一种智能家居设备在各个房间的分布图;
图9是本申请实施例提供的一种终端在房间中的运动轨迹示意图;
图10是本申请实施例提供的另一种终端在房间中的运动轨迹示意图;
图11是本申请实施例提供的一种利用PDR信标校正用户位置的原理图;
图12示出了本申请实施例提供的一种利用PDR计算用户在图9示出的分布图中的位置的原理图;
图13为本申请实施例的一种电子设备的结构示意图。
具体实施方式
下面结合实施方式中的附图,对本申请的具体实施方式所涉及的技术方案进行描述。在对技术方案的具体内容进行描述前,先简单说明一下本申请中所使用的术语。
说明书和权利要求书中的词语“第一、第二、第三等”或模块A、模块B、模块C等类似用语,仅用于区别类似的对象,不代表针对对象的特定排序,可以理解地,在允许的情况下可以互换特定的顺序或先后次序,以使这里描述的本申请实施例能够以除了在这里图示或描述的以外的顺序实施。
说明书和权利要求书中使用的术语“包括”不应解释为限制于其后列出的内容;它不排除其它的元件或步骤。因此,其应当诠释为指定所提到的所述特征、整体、步骤或部件的存在,但并不排除存在或添加一个或更多其它特征、整体、步骤或部件及其组群。因此,表述“包括装置A和B的设备”不应局限为仅由部件A和B组成的设备。
本说明书中提到的“一个实施例”或“实施例”意味着与该实施例结合描述的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在本说明书各处出现的用语“在一个实施例中”或“在实施例中”并不一定都指同一实施例,但可以指同一实施例。此外,在一个或多个实施例中,能够以任何适当的方式组合各特定特征、结构或特性,如从本公开对本领域的普通技术人员显而易见的那样。
关键术语的定义:
行人航迹推算(pedestrians dead reckoning,PDR),是一种基于惯性传感器获得人的步行数据,对人进行定位的技术。其原理为根据惯性传感器获得的人的角运动数据和线运动数据,确定的人的行走方向、人的步长,进而计算出行人的位置。基于行人的位置的连线即可以获得行人航迹。
自动语音识别(automatic speech recognition,ASR),是一种将语音转换为文本的技术。
自然语言理解(natural language understanding,NLU),是一种使计算机理解人的自然语言的技术,其能够根据人的自然语言文本获得人的意图。
对话管理(Dialog Manager,DM),是一种管理人与计算机进行对话过程中的上下文,对对话过程中涉及到的不同服务进行编排的技术。
从文本到语音(text to speech,TTS),是一种让计算机将文本用人的自然语言播报出来的技术。
应用程序(application,App),指为完成某项或多项工作的计算机程序。
应用程序接口(application program interface,API),是一种实现应用程序之间 的相互通信的技术。应用程序通过API而使操作系统去执行该应用程序的指令。
超宽带(ultra-wide band,UWB),是一种无线载波通信技术,其利用纳秒至微秒级的非正弦波窄脉冲传输数据。
接收的信号强度指示(received signal strength indication,RSSI),是无线发送层的可选部分,用来判定链接质量,以及是否增大广播发送强度。其通过接收到的信号强弱测定信号点与接收点的距离,进而根据相应数据进行定位计算的一种定位技术。
紫蜂(ZigBee),是一种短距离、低功耗的无线通信技术。
除非另有定义,本文所使用的所有的技术和科学术语与属于本申请的技术领域的技术人员通常理解的含义相同。如有不一致,以本说明书中所说明的含义或者根据本说明书中记载的内容得出的含义为准。另外,本文中所使用的术语只是为了描述本申请实施例的目的,不是旨在限制本申请。
在语音控制智能家居设备的场景下,当用户家中存在多个房间和多个相同类型的智能家居设备时,用户需要在语音中明确出其意在控制的唯一智能家居设备,否则,用户与机器需要进行多轮语音交互,直至机器确定出唯一的被控智能家居设备。
如图1所示,当用户说打开某智能家居设备,例如“打开灯”时,智能家居云根据用户的意图执行步骤S01:判断智能家居设备的数量,当智能家居设备的数量等于1时,打开这一唯一的智能家居设备,然后执行步骤S02:语音助手语音播报:“好的,已经打开”;当智能家居设备的数量大于1时,智能家居云执行步骤S03:判断房间的数量,当房间的数量等于1时,控制打开该房间内的所有智能家居设备,然后执行步骤S05:语音助手语音播报:“所有智能家居设备已经打开”;当房间数量大于1时,智能家居云执行步骤S04:判断房间的数量,当房间的数量小于等于3时,执行步骤S06:从房间列表中选择房间,即手机的语音助手播报“请问您想要打开客厅的灯,还是卧室的灯?”和/或终端设备的显示屏上显示房间列表,用户选择确认开启哪个房间的智能设备;当房间数量大于3时,例如,用户家中有客厅、主卧、次卧、厨房、卫生间5个房间,执行步骤S07:控制语音助手语音播报:“我发现有多个智能家居设备,试试按照房间名或者智能家居设备名进行选择吧”。此时,用户需要第二次发出语音指令,例如,用户说“打开卧室的灯”,智能家居云根据用户再次发出的语音指令再次执行步骤S01:判断智能家居设备的数量;此时,确定智能灯的数量为2(主卧的智能灯和次卧的智能灯)且大于1;执行步骤S03:判断房间的数量,此时卧室的数量为2大于1,则执行步骤S04:判断房间的数量;此时房间的数量小于3,则执行步骤S06控制语音助手语音播报:“请问您想要打开主卧的灯,还是次卧的灯?”。此时,用户需要第三次发出语音指令,例如,用户说:“打开主卧的灯”,智能家居设备根据用户第三次发出的语音指令第三次执行步骤S01:判断智能家居设备的数量,此时,主卧中的智能灯的数量为1,则执行步骤S02,语音助手语音播报:“好的,已经打开”。
从上述描述中能够看出,在通过语音控制智能灯打开的这一过程中,用户与语音助手进行了多轮的语音交互,反复地执行步骤S01-S07才确定出需要打开的智能灯,这一过程无疑降低了用户的体验。
为了提升用户体验,一种可能的实施方式提供了一种基于UWB技术控制智能家 居设备的方法,用户使用安装有UWB收发芯片的终端,通过“指一指”操作,即用户使用安装有UWB收发芯片的终端指向同样安装有UWB收发芯片的被控智能家居设备,就可实现对该被控智能家居设备的操作。UWB技术是一种无线定位技术,跟全球定位系统(global positioning system,GPS)不同的是,它的定位精度更高,尤其适用于室内这种GPS信号弱的地方。被控智能家居设备会在收到指向该智能家居设备的终端的控制信号后反馈给终端一个信号,终端的屏幕上就会瞬间弹出相对应的操作界面,从而完成对该被控智能家居设备的各种操作。
这一实施方式存在以下缺陷:1、终端和被控智能家居设备都需要安装有专用的UWB芯片,其成本较高。2、用户在操作时需要将终端对准被控智能家居设备,不适用于通过语音来控制被控智能家居设备的场景。
另一种可能的实施方式提供了一种智能家居设备选择方法及终端,用户利用终端来指向其意要控制的智能家居设备,通过终端上的方向传感器采集到的方向,确定终端指向的目标智能家居设备,进而在终端的显示屏上显示被该目标智能家居设备的控制界面,从而实现对该目标智能家居设备的控制。
这一实施方式存在以下缺陷:1、这一实施方式是基于Wi-Fi和RSSI技术判定目标智能家居的坐标,由于Wi-Fi信号衰减波动幅度大,在室内定位场景下,定位精确度较低,定位误差较大。2、这一实施方式需要将终端对准被控智能家居设备,同样不适用于通过语音来控制目标智能家居设备的场景。
有鉴于此,为了在语音控制智能家居设备场景下,避免用户与终端进行多次语音交互,提升用户体验,本申请的实施例提供了一种智能家居控制方法,其可以在用户发出语音指令时,根据智能家居在用户的房间的分布图,利用PDR技术确定用户当前所在的房间,并根据用户所在的房间确定唯一的被控智能家居设备。减少用户与终端进行语音交互的次数,提升用户体验。
下面参照图2对本申请实施例提供的智能家居控制方法涉及的智能家居系统1000的进行说明。
图2示出了智能家居系统1000的结构示意图,智能家居系统1000包括智能家居设备410、420、430、终端100、语音助手云210、智能家居云220以及智能家居网关300。终端100上安装有传感器以及智能家居APP、语音助手APP、感知服务APP等应用程序。在智能家居设备410、420、430通过Wi-Fi连接至智能家居网关300后,用户可以经由终端100上的应用程序结合云端服务器(语音助手云210、智能家居云220)实现对各种智能家居设备410、420、430的控制。
下面,结合附图2,分别对智能家居设备410、420、430、终端100、安装于终端100上的应用程序(智能家居APP、语音助手APP、感知服务APP等)、语音助手云210、智能家居云220以及智能家居网关300进行说明。
智能家居设备410、420、430是通过Wi-Fi、ZigBee、蓝牙等无线通信技术接入智能家居网关300,通过接收用户通过智能家居APP或者通过语音助手APP发出的控制指令,来执行相应操作的硬件设备。智能家居设备例如包括:智能照明410、智能电视420、智能空调430、智能家居网关300、智能音箱、智能安防设备、智能投影等。
智能家居网关300:也被称为路由器,用于连接两个或多个网络的硬件设备,在 网络间起到网关的作用,是读取每一个数据包的地址然后决定如何传送的专用智能性的网络设备。路由器通过与手机或平板电脑等终端的无线连接,可以方便用户对各智能家居设备的轻松控制。一般路由器提供Wi-Fi热点,智能家居设备410、420、430和终端100通过接入路由器的Wi-Fi热点来接入Wi-Fi网络,智能家居设备410、420、430和终端100接入的路由器可以相同也可以不同。
传感器服务:例如可以包括安装在终端100上的用于获得用户的步行信息的传感器和/或能够显示用户的终端方向的指南针APP。用于获得用户的步行信息的传感器可以包括惯性传感器,例如:加速度传感器142、陀螺仪传感器143等,还可以包括方向传感器和气压传感器144等。在一些实施例中,如图3所示,加速度传感器142用于确定终端在三轴坐标系下在X轴、Y轴和Z轴上的加速度。陀螺仪传感器143用于确定终端旋转的角速度。方向传感器可以获得终端的方向,指南针APP能够根据方向传感器获得的终端的方向在显示屏132上显示终端的方向。气压传感器144可以获得气压。根据传感器获得的加速度、角速度和/或终端的方向,利用PDR技术即可计算出终端在任一时刻下的位置。还可以根据传感器获得的气压,利用气压和海拔高度的对应关系即可计算出终端在任一时刻下的海拔高度。
智能家居APP:是安装在用户使用的终端上的对各种智能家居设备进行选择和控制的软件程序。智能家居APP可以具有操作界面,用户可以通过操作该操作界面来实现对相应智能家居设备的控制。智能家居APP还可以具有绘制智能家居设备在各个房间的分布图的功能,当绘制智能家居设备在各个房间的分布图时,智能家居APP可以通过Wi-Fi接入智能家居网关,用户手持终端沿各个房间的墙壁行走一周,获得用户的房间布局平面图;用户在房间布局平面图中标注智能家居设备在各个房间的位置和房门的位置,获得智能家居设备在各个房间的分布图。在一些实施例中,智能家居APP也可以从其他设备接收、通过网络接收或者由用户上传绘制好的房间布局平面图和/或智能家居设备在各个房间的分布图。智能家居设备在各个房间的分布图可以保存在智能家居APP中,也可以上传至智能家居云220中。下文所指的智能家居APP,可以是终端出厂时已安装的应用,也可以是用户在使用终端的过程中从网络下载或从其他设备获取的应用。
语音助手APP:是安装在用户使用的终端上提供语音控制功能的APP。其可以利用终端上的硬件麦克风提供的收音功能获取用户的语音指令,通过ASR将用户输入的语音指令转成文本内容,并发送给语音助手云210;语音助手云210还可以根据被控智能家居设备的指令执行的结果生成文本语句,通过TTS将文本语句用人的自然语言播报出来。在一些实施例中,语音助手APP也可以将用户的语音指令发送给语音助手云210,由语音助手云将用户语音指令转换成文本内容。
语音助手云210:用于给语音助手APP提供云侧功能。其通过NLU对用户的文本内容进行语义分析,得到用户的意图和槽位;通过DM对用户的文本内容进行上下文管理,并根据用户的意图和槽位,通过API而使语音助手APP执行相应的操作。在一些实施例中,语音助手APP也可以具有NLU和DM功能。
智能家居云220:是一种远端的服务器,用于给智能家居APP、智能家居设备提供云侧功能;或者,是一种安装于用户家中的智能家居中央控制设备,包含收发器、 处理器、存储器。一方面,用户可以操作智能家居APP的操作界面,智能家居APP根据用户的操作,将用户的操作指令发送至智能家居云220,智能家居云220向相应的智能家居设备发送控制指令,进而实现对相应智能家居设备的控制;另一方面,用户可以向语音助手APP发送语音指令,语音助手APP将用户的语音指令转换成文本内容,并发送给语音助手云210,语音助手云210根据用户的文本内容进行语义分析,得到用户的意图和槽位,并将用户的意图和槽位发送至智能家居云220,智能家居云220根据用户的意图和槽位生成相应的控制指令,并向相应的智能家居设备发送控制指令,进而实现对相应智能家居设备的控制。
感知服务APP:是安装在用户使用的终端上的不同于智能家居APP的软件程序。感知服务APP可以具备操作界面,也可以不具备操作界面,可以是终端系统的常驻APP,也可是终端出厂时已安装的应用。感知服务APP在通过Wi-Fi接入智能家居网关后,通过API调用惯性传感器和方向传感器的功能,进而获得的用户的步行信息(如用户的步行方向、步长以及相邻两步步行方向偏移角),利用PDR结合智能家居设备在各个房间的分布图确定终端所在房间。
当用户在家中通过语音助手APP发送对某智能家居设备的语音控制指令后,语音助手APP结合语音助手云210确定用户的意图和槽位,并在槽位中缺少用户所在房间的信息时,通过API调用感知服务APP确定的用户所在房间,并将用户所在的房间的信息发送给语音助手云。语音助手云210将用户的意图、用户所在的房间发送至智能家居云220,智能家居云220根据用户的意图结合用户所在的房间生成相应的控制指令,并向相应的智能家居设备发送该控制指令,进而实现对相应智能家居设备的控制。在一些实施例中,感知服务APP还可以被智能家居APP通过API调用。可选地,本申请实施例中所述“槽位”也可表述为“槽位值”。
终端100:指用于对智能家居设备进行控制的设备,比如可以为便携式设备,诸如手机、平板电脑、人工智能(artificial intelligence,AI)智能语音终端、可穿戴设备(例如智能手表、智能手环)、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备等。终端上可以安装有智能家居APP、语音助手APP、感知服务APP、指南针APP及传感器。便携式设备包括但不限于搭载示例性的,如图3所示,为本申请实施例的一种终端的硬件结构示意图。具体的如图所示,终端100包括处理器110、内部存储器121、外部存储器接口122、摄像头131、显示屏132、传感器模块140、按键151、通用串行总线(universal serial bus,USB)接口152、充电管理模块160、电源管理模块161、电池162、移动通信模块171和无线通信模块172。在另一些实施例中,终端100还可以包括用户标识模块(subscriber identification module,SIM)卡接口、音频模块、扬声器153、受话器、麦克风154、耳机接口、马达、指示器、按键等。
应理解,图3所示的硬件结构仅是一个示例。本申请实施例的终端100可以具有比图中所示终端100更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
其中,处理器110可以包括一个或多个处理单元。例如:处理器110可以包括应用处理器(application processor,AP)、调制解调器、图形处理器(graphics processing unit, GPU)、图像信号处理器(image signal processor,ISP)、控制器、视频编解码器、数字信号处理器(digital signal processor,DSP)、基带处理器、和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
在一些实施例中,处理器110中还可以设置缓存器,用于存储指令和/或数据。示例的,处理器110中的缓存器可以为高速缓冲存储器。该缓存器可以用于保存处理器110刚用过的、生成的、或循环使用的指令和/或数据。如果处理器110需要使用该指令或数据,可从该缓存器中直接调用。有助于减少了处理器110获取指令或数据的时间,从而有助于提高系统的效率。
内部存储器121可以用于存储程序和/或数据。在一些实施例中,内部存储器121包括存储程序区和存储数据区。其中,存储程序区可以用于存储操作系统(如Android、IOS等操作系统)、至少一个功能所需的计算机程序(比如语音唤醒功能、声音播放功能)等。存储数据区可以用于存储终端100使用过程中所创建、和/或采集的数据(比如音频数据)等。示例的,处理器110可以通过调用内部存储器121中存储的程序和/或数据,使得终端100执行相应的方法,从而实现一种或多种功能。例如,处理器110调用内部存储器中的某些程序和/或数据,使得终端100执行本申请实施例中所提供的语音识别方法、从而实现语音识别功能。其中,内部存储器121可以采用高速随机存取存储器、和/或非易失性存储器等。例如,非易失性存储器可以包括一个或多个磁盘存储器件、闪存器件、和/或通用闪存存储器(universal flash storage,UFS)等中的至少一个。
外部存储器接口122可以用于连接外部存储卡(例如,Micro SD卡),实现扩展终端100的存储能力。外部存储卡通过外部存储器接口122与处理器110通信,实现数据存储功能。例如终端100可以通过外部存储器接口122将图像、音乐、视频等文件保存在外部存储卡中。
摄像头131可以用于捕获动、静态图像等。通常情况下,摄像头131包括镜头和图像传感器。其中,物体通过镜头生成的光学图像投射到图像传感器上,然后转换为电信号,在进行后续处理。示例的,图像传感器可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。
图像传感器把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。需要说明的是,终端100可以包括1个或N个摄像头131,其中,N为大于1的正整数。
显示屏132可以包括显示面板,用于显示用户界面。显示面板可以采用液晶显示屏(liquid crystal display,LCD)、有机发光二极管(organic light-emitting diode,OLED)、有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organiclight emitting diode,AMOLED)、柔性发光二极管(flex light-emitting diode,FLED)、Miniled、MicroLed、Micro-oLed、量子点发光二极管(quantum dot light emittingdiodes,QLED)等。需要说明的是,终端100可以包括1个或M个显示屏132,M为大于1的正整数。示例的,终端100可以通过GPU、显 示屏132、应用处理器等实现显示功能。本发明实施例中,终端100可以通过显示屏132显示智能家居APP的用户界面,如智能家居APP的主界面,智能家居设备的控制界面等。
麦克风154可以用于获取语音,在一些实施例中,麦克风可以获得用户的语音指令,扬声器153可以将计算机语言用人的自然语言播报出来,在一些实施例中,扬声器153可以将智能家居设备的执行结果用人的自然语言播报出来,例如,“已经为您打开空调”。
传感器模块140可以包括一个或多个传感器。例如,惯性传感器14、方向传感器141、气压传感器144、指纹传感器145、压力传感器146以及触摸传感器147等。在一些实施例中,惯性传感器14例如可以包括:加速度传感器142、陀螺仪传感器143等;传感器模块140还可以包括环境光传感器、距离传感器、接近光传感器、骨传导传感器、温度传感器等。
方向传感器141用于确定终端100所在的方向。
加速度传感器142用于确定终端在三轴坐标系下在X轴、Y轴和Z轴上的加速度。陀螺仪传感器143用于确定终端旋转的角速度,通过加速度和角速度判断终端的运动状态。
气压传感器144,用于测量终端所处位置的气压,在气压降低的情况下,终端处于上楼状态,在气压升高时,终端处于下楼状态,进而确定终端所在的楼层。
指纹传感器145用于采集指纹。终端100可以利用采集的指纹特性实现指纹解锁、访问应用锁、指纹拍照、指纹接听来电等。
压力传感器146用于感受压力信号,可以将压力信号转换成电信号。示例的,压力传感器146可以设置于显示屏132。其中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。
触摸传感器147,也可称为“触控面板”。触摸传感器147可以设置于显示屏132,由触摸传感器147与显示屏132组成触摸屏,也称“触控屏”。触摸传感器147用于检测作用于其上或附近的触摸操作。触摸传感器147可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。终端100可以通过显示屏132提供与触摸操作相关的视觉输出等。在另一些实施例中,触摸传感器147也可以设置于终端100的表面,与显示屏132所处的位置不同。
USB接口152是符合USB标准规范的接口,具体可以是Mini USB接口、Micro USB接口、USB Type C接口等。USB接口152可以用于连接充电器为终端100充电,也可以用于终端100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。示例的,USB接口152除了可以为耳机接口以外,还可以用于连接其他设备,例如AR设备、计算机等。
充电管理模块160用于从充电器接收充电输入。其中,充电器可以是无线充电器,
也可以是有线充电器。在一些有线充电的实施例中,充电管理模块160可以通过USB接口152接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块160可以通过终端100的无线充电线圈接收无线充电输入。充电管理模块160为电池162充电的同时,还可以通过电源管理模块161为终端100供电。
电源管理模块161用于连接电池162、充电管理模块160与处理器110。电源管理模块161接收电池162和/或充电管理模块160的输入,为处理器110、内部存储器121、显示屏132、摄像头131等供电。电源管理模块161还可以用于监测电池容量、电池循环次数、电池健康状态(漏电、阻抗)等参数。在其他一些实施例中,电源管理模块161也可以设置于处理器110中。在另一些实施例中,电源管理模块161和充电管理模块160也可以设置于同一个器件中。
移动通信模块171可以提供应用在终端100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块171可以包括滤波器、开关、功率放大器、低噪声放大器(low noise amplifier,LNA)等。移动通信模块171可以由天线11接收电磁波信号,并对接收的电磁波信号进行滤波、放大等处理,传送至调制解调处理器进行解调。移动通信模块171还可以对经调制解调处理器调制后的信号放大,经天线11转为电磁波信号辐射出去。在一些实施例中,移动通信模块171的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块171的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。例如,移动通信模块171可以向其它设备发送语音,也可以接收其它设备发送的语音。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器、受话器等)输出声音信号,或通过显示屏132显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块171或其他功能模块设置在同一个器件中。
无线通信模块172可以提供应用在用户终端100上的包括WLAN(如Wi-Fi网络)、蓝牙(Bluetooth,BT)、全球导航卫星系统(global navigation satellite system,GNSS)、调频(frequency modulation,FM)、近距离无线通信技术(near field communication,NFC)、红外技术(infrared,IR)等无线通信的解决方案。无线通信模块172可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块172经由天线12接收电磁波信号,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块172还可以从处理器110接收待发送的信号,对其进行调频、放大,经天线12转为电磁波信号辐射出去。在一些实施例中,终端100可以通过无线通信模块172连接路由器接入Wi-Fi网络。
在一些实施例中,终端100的天线11和移动通信模块171耦合,天线12和无线通信模块172耦合,使得终端100可以与其他设备通信。具体的,移动通信模块171可以通过天线11与其它设备通信,无线通信模块172可以通过天线12与其它设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM)、通用分组无线服务(general packet radio service,GPRS)、码分多址接入(code division multiple access,CDMA)、宽带码分多址(wideband code division multiple access,WCDMA)、时分码分多址(time-division code division multiple access,TD-SCDMA)、长期演进(long  term evolution,LTE)、BT、GNSS、WLAN、NFC、FM、和/或IR技术等。所述GNSS可以包括GPS、全球导航卫星系统(global navigation satellite system,GLONASS)、北斗卫星导航系统(beidou navigation satellite system,BDS)、准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
在本申请实施例中,终端100也可以通过移动通信模块171或者无线通信模快172基于无线信号传输的方式与智能家居设备连接。比如,终端100通过移动通信模块171或者无线通信模块172向智能家居设备发送基于无线信号形式的输入操作;或者,终端100通过移动通信模块171或者无线通信模块172接收智能家居设备发送的基于无线信号形式的状态数据等。
可以理解的是,本申请实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对终端100的结构限定。在本申请另一些实施例中,终端100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。应理解,图3所示的硬件结构仅是一个示例。本申请实施例的终端可以具有比图中所示出的更多的或者更少的部件,可以组合两个或更多的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
下面结合图4-12对本申请实施例提供的智能家居设备的选择方法的具体实施方式进行说明。
实施方式一:
在实施方式一中,终端为可移动的终端,例如,可移动的终端可以为手机、平板电脑、智能穿戴设备等设备。用户可以持可移动的终端在房间内行走。下面,请参阅图9,以用户手持可移动的终端进入客厅D并从客厅D步行进入主卧E,在主卧E发出“开空调”的语音指令为例,对智能家居设备选择方法进行详细说明。
如图4所示,本申请实施方式提供的智能家居设备选择方法可以包括以下步骤:
步骤S1000:获取智能家居设备在各个房间的分布图。
所述房间分布图例如可以包括所述多个智能家居设备的所在位置信息和/或所属房间信息。所述位置信息包括智能家居设备在分布图中的坐标;所述房间信息包括智能家居设备所对应的房间,例如:智能空调3b位于主卧E;智能空调4b位于儿童房F。
智能家居设备在各个房间的分布图可以通过多种方式获取,例如,用户可以直接将智能家居在各个房间的分布图上传至智能家居APP或智能家居云,也可以通过如图5所示的步骤S1001-步骤S1006获取。
步骤S1001:用户打开智能家居App,启用绘制智能家居在各个房间的分布图的功能(该功能具体的实现原理参见后文步骤S1005后的描述)。
步骤S1002:用户选择智能家居网关所在的位置作为起始位置。
智能家居网关可以是路由器,并位于如图8所示的房间的门0处。应明确,用户可以选择房间的任何位置作为起始位置,本申请对此不做限制。
步骤S1003:用户手持终端沿房间行走第一周,终端根据用户行走的轨迹获得房 间布局平面图。
在一些实施例中,用户持终端沿各个房间内周侧(即靠近墙体位置侧)行走一周。在一些实施例中,用户在手持终端行走时,可以使终端保持同一姿态,例如可以使终端保持水平向前的姿态。用户的行走轨迹线(通过PDR计算出的用户的位置的连线)构成每个房间的边界线。
当然,也可以使用可移动的其他终端(例如扫地机器人等)沿各个房间内周侧(即靠近墙体位置侧)行走一周,本申请对此不做限制,只要能够绘制出房间的边界线即可。
步骤S1004:用户在房间布局平面图中标注智能家居设备所在的位置。
在一些情况中,由于部分智能家居设备,如智能冰箱、智能洗衣机等,占地面积较大,不能被用户的行走轨迹覆盖,因此,可以将该智能家居设备标注在其位于的房间的边界线内,且尽可能地靠近该智能家居设备的附近。
步骤S1005:用户在房间布局平面图中标注房间门所在的位置,作为PDR信标。
在用户手持终端在房间内运动时,加速度传感器用于获得用户在行走时的加速度数据,陀螺仪传感器用于获得用户在行走时的旋转角加速度数据,方向传感器用于获得用户的行走方向。感知服务APP可以通过API获得加速度传感器、陀螺仪传感器和方向传感器记录的这些数据,并利用PDR计算用户每运动一步时在房间布局平面图中的坐标。然而,在利用PDR技术来计算用户的步长和用户行走方向时,会随着时间的增长而产生显著的积累误差,例如,在基于加速度来确定用户的步长时,利用PDR技术来确定的用户的一步与实际用户的一步存在偏差,在此基础上,感知服务APP利用PDR计算出的用户在房间布局平面图中的坐标则会出现偏差,导致对用户的定位不准。因此,需要一些PDR信标来校正用户在房间布局平面图中的坐标。由于PDR信标在房间布局平面图中的坐标是已知且固定不变的,当用户经过这些PDR信标附近时,利用PDR信标的坐标来校正用户的坐标。
由于用户在进入某个房间时,用户的运动方向通常会发生相对大的变化,例如,如图9所示,用户在从位置X行走到位置Y这一过程中,用户在门4附近的相邻几步的运动方向角存在相对明显的变化(大于或等于90度)。同理,在用户从X位置行走到厨房A、卫生间B、书房C等房间内的过程中,用户在这些房间的门附近的几步的运动方向角也会存在相对明显的角度变化。因此,可以将房间门所在的位置作为PDR信标。当然,PDR信标的位置不限于房间门所在的位置,还可以是房间内的其他任何位置。例如,PDR信标可以是用户的运动方向可能发生明显变化的任何位置,包括但不限于门、房间的墙壁拐角、楼道、楼道拐角等。
在图8示出的智能家居设备在各个房间的分布图的示例中,PDR信标可以包括:厨房A的门1、卫生间B的门2、书房C的门3、客厅D的门0、主卧E的门4、儿童房F的门5。
在一些实施例中,如图11所示,在利用PDR信标校正用户的轨迹时,可以在用户在相邻4步的航向角θn+1、θn+2、θn+3、θn+4(航向角可以为用户行走方向与图中N方向的夹角)变化的绝对值都在90°±α的范围内,α为角度误差阈值,α≥0°、用户在相邻4步的位置(Sn+1、Sn+2、Sn+3以及Sn+4)与PDR信标(图中 以黑色三角形示出)的位置之间的距离是都小于第一距离时,将在相邻4步中与PDR信标位置最近的一步的位置校正为PDR信标的位置。当然,航向角也可以是用户行走方向与其他方向的夹角,同时还可以用其他算法来校正用户的轨迹,本申请对此不做限制。
在一些实施例中,当用户房间的面积较小时,惯性传感器的误差对计算用户的位置的影响不明显,也可以不设置PDR信标。
步骤S1006:用户手持终端沿房间行走第二周,终端根据用户行走的第一周和第二周的轨迹,校正房间布局平面图,获得智能家居设备在各个房间的分布图。
在一些实施例中,用户在行走第二周时的行走方向可以与其在行走第一周的方向相反,如果用户在行走第一周时是沿顺时针方向行走,那么用户在行走第二周时可以沿逆时针方向行走。
在一些实施例中,当用户的家中存在多个楼层时,同样可以按照步骤S1001-步骤S1006来分别获取智能家居设备在其他楼层的房间的分布图。可根据气压传感器采集的气压和气压与海拔高度的对应关系,确定用户所处的海拔高度,进而结合层高推算用户所在的楼层,或者用户所在楼层是否发生变化。
在一些实施例中,如图8所示,获得的分布图中包括:厨房A、卫生间B、书房C、客厅D、主卧E以及儿童房F,其中,厨房A的智能家居设备包括智能灯1a、智能冰箱1d;卫生间B的智能家居设备包括智能灯2a;书房C的智能家居设备包括:智能灯3a和智能空调1b;客厅D的智能家居设备包括:智能电视1c、智能空调2b、智能灯4a、智能门锁1e、智能家居网关1f以及智能安防监控1g;主卧E的智能家居设备包括:智能电视2c、智能空调3b以及智能灯5a;儿童房F的智能家居设备包括:智能灯6a和智能空调4b。
在一些实施例中,可以将分布图存储在终端上,也可以将分布图上传至智能家居云中,本申请对此不做限制。
步骤S2000:感知服务APP根据所述分布图、用户的步行信息确定终端所在的房间。
如图6所示,步骤S2000可以包括以下子步骤:
步骤S2001:感知服务APP从智能家居云中获取用户的智能家居设备在房间的分布图。
感知服务APP在通过Wi-Fi接入智能家居网关后,通过API调用智能家居APP的功能,智能家居APP从智能家居云上查询用户的智能家居设备在房间内的分布图。当然,感知服务APP也可以通过API直接调用存储在终端上的分布图。
感知服务APP可以是系统常驻软件,为了降低其对终端的处理器的占用,感知服务APP可以在通过Wi-Fi连接到智能家居网关后才启动PDR功能,在断开与智能家居网关的连接后再关闭PDR功能。
步骤S2002:感知服务APP根据分布图和用户的步行信息,利用PDR技术确定终端所在的房间。
感知服务APP在通过Wi-Fi接入智能家居网关后,通过API调用惯性传感器、方向传感器和/或气压传感器144的功能,获得用户的行走加速度、旋转角速度、用户 的行走方向和/或气压;以及通过API获得存储在智能家居云或智能家居APP上的分布图;利用PDR技术确定用户所在的房间。
在一些实施例中,惯性传感器可以包括:加速度传感器和陀螺仪传感器。加速度传感器用于确定用户在行走时的加速度;陀螺仪传感器用于确定旋转角速度。方向传感器获得用户的行走方向。可选地,感知服务APP可以根据加速度来判断用户是否走了一步,进而计算用户的步长d;根据旋转角速度计算用户相邻两步的步行方向的偏移角,并且可选地,根据该偏移角预测用户在下一步的方向;根据行走方向和/或偏移角进而计算用户每一步的航向角θ;根据气压传感器144用于获取气压,在气压降低的情况下,终端处于上楼状态,在气压升高时,终端处于下楼状态,进而确定终端所在的楼层。
根据公式(1)计算用户在分布图中的位置(坐标),其中,(E 0,N 0)为用户在分布图中的初始位置的坐标(其中E 0为用户初始位置在E方向上的坐标、N 0为用户初始位置在N方向上的坐标),n为用户的行走的第n步,d n为用户的行走的第n步的步长,θ n为用户的行走的第n步的航向角,E k为用户行走k步时在E方向上的坐标;N k为用户行走k步时在N方向上的坐标。
E_k = E_0 + ∑_{n=1}^{k} d_n·sinθ_n ；N_k = N_0 + ∑_{n=1}^{k} d_n·cosθ_n　　（1）
以上步骤S2002仅是示例,需要说明的是,根据装配在终端上的传感器的种类不同,获取到的用户的步行数据可能也会不相同,本申请对传感器的种类不做限制,只要能够根据传感器获得的用户的步行数据和公式(1)的原理来计算得到用户的位置即可。
图12示出了本申请实施例提供的一种利用PDR计算用户在图9示出的分布图中的位置的原理图。如图12所示,在用户尚未到家(尚未经过家门0)就通过Wi-Fi连接至智能家居网关时,表示终端被用户携带进入智能家居环境,用户打开家门0时,随着门0的开启时角度的变化,方向传感器也检测到角度变化,表示用户打开并经过门0,此时,将门0在分布图上的坐标作为初始位置S0(门0的坐标在步骤S1001-步骤S1006绘制智能家居设备在各个房间的分布图中已经知晓)。
基于该初始位置S0的坐标(E 0,N 0)、用户每一步的步长d(为了清晰起见,没有标注的dn,图中S0-S1、S1-S2……之间的连线代表用户的步长)、这一步的航向角θ,在初始位置上累加用户每一步步长在N方向和E方向的投影长度,进而计算出用户行走k步后在分布图中的坐标。其中,用户的步长可以基于加速度传感器获得的加速度与时间的关系,通过计算得到;航向角可以基于陀螺仪传感器获得的角加速度与时间的关系通过计算得到和/或基于方向传感器检测到得的相邻两步的角度变化通过计算得到;上述N方向和E方向是用户的房间的平面图(智能家居在各个房间内的分布图)上标注的坐标轴(N轴、E轴)的方向。需要说明的是,为了方便描述,图8-9和图12示出的分布图以相互垂直的N方向和E方向作为参考的坐标轴的方向。实际上,根据房间的不同、以及用户的喜好,参考坐标轴的方向也可以各不相同,可以是N、S、W、E方向中的任意两个互相垂直的方向,也可以是其他方向,本申请对此不 做限定。
在一些实施例中,可能用户并没有开启门0的动作,但在终端通过Wi-Fi连接至智能家居网关时,感知服务APP可以根据其检测到的Wi-Fi强度确定一个大概位置。感知服务APP基于该大概位置的坐标、惯性传感器获取的用户的步行数据和/或Wi-Fi强度的变化,利用PDR技术确定用户的位置。在用户的位置接近门0,并与门0的距离小于第一距离时,将用户的位置校正为门0的位置(将用户的位置校正为门0的位置可以参考步骤S1005)。
在一些实施例中,在用户已经到家(已经经过家门0)才通过Wi-Fi连接至智能家居网关时,例如,如图10所示,在用户在进入家门直至行走至主卧E,感知服务APP才通过Wi-Fi连接至智能家居网关时,感知服务APP无法获得用户当前的位置,只能根据终端检测到的Wi-Fi强度确定一大概位置。在用户从主卧出来经过门4并进入客厅时,感知服务APP可以基于该大概位置的坐标、惯性传感器获取的用户的步行数据和/或Wi-Fi强度的变化,利用PDR技术确定用户的位置。在用户经的位置接近门4并与门4的距离小于第一距离时,将门4在分布图上的坐标作为初始位置。以与图12所示的实施例相同的方法,基于该初始位置的坐标、用户每一步的步长、这一步的航向角,在初始位置上累加用户每一步步长在N方向和E方向的投影长度,进而计算出用户行走k步后在分布图中的坐标。
在本申请提供的实施例中,不需要精确的确定用户在房间内的位置,仅需要使用上述步骤S1000-S2000及其子步骤来判断终端(用户)从一个房间运动到另一个房间即可。
在一些实施例中,当用户家中存在多个楼层时,可以根据气压传感器144获取的气压来判断用户所在的楼层,根据用户所在楼层对应的智能家居在各个房间的分布图,以与步骤S2002相同的方法来确定用户所在的房间。
步骤S3000:根据用户的语音指令、所述终端所在的房间,确定出用户意在控制的智能家居设备,所述家居设备位于所述房间。
步骤S3000可以包括例如以下子步骤,如图7所示,
步骤S3001:用户对终端上的语音助手APP说“开空调”。
如图9所示,用户从门0位置步行至客厅D的位置X,然后从客厅D步行至主卧E的位置Y,在位置Y处发出语音“开空调”。
步骤S3002:语音助手APP将语音内容“开空调”转换为文本内容。
在一些实施例中,利用终端上的硬件麦克风提供的收音功能获取用户的语音内容,语音助手APP可以利用ASR对用户的语音内容进行语音识别,将原始语音内容转成文本内容,并将文本内容发送至语音助手云。
步骤S3003:语音助手云对文本内容进行语义分析,得到用户的意图和槽位。
语音助手云可以利用NLU对用户语音的文本内容进行语义分析,得到用户的意图和槽位,其中,用户的意图为:开空调意图,槽位中缺少终端所在的房间的信息。
在一些实施例中,也可以在终端处进行语义分析,得到的用户的意图和槽位,例如,语音助手APP具备语义分析能力,在获得了用户的意图和槽位后,在槽位中缺少终端所在的房间的信息后,通过API调用感知服务APP的功能,进而获得的终端所 在房间(请参见步骤S2001和步骤S2002),并执行步骤S3006。
步骤S3004:语音助手云向语音助手APP下发收集终端所在房间的信息的指令。
步骤S3005:语音助手APP通过API调用感知服务APP的功能,确定的终端所在的房间。
在步骤S3005中,语音助手APP根据来自语音助手云的指令,调用感知服务APP的功能。感知服务APP通过步骤S2001-S2002确定终端所在的房间。
步骤S3006:语音助手APP将终端所在的房间的信息反馈给语音助手云。
步骤S3007:语音助手云将用户的意图、槽位、以及终端所在的房间发送给智能家居云。
步骤S3008:智能家居云根据用户的意图和槽位确定指定智能家居设备所在房间的列表,根据终端所在房间,筛选出唯一的智能家居设备。
在一些实施例中,在智能家居云不存储用户的分布图的情况下,智能家居云可以根据多个智能家居设备所属的房间的信息,根据终端所在的房间,筛选出唯一的智能家居设备。
在一些实施例中,如图9所示,当用户从客厅D走进主卧E后发出“开空调”的指令,在步骤S3008获取的指定智能家居设备所在房间的列表包括:位于客厅D的智能空调2b、位于儿童房F的智能空调4b、位于主卧E的智能空调3b、位于书房C的智能空调1b;根据终端所在主卧E,筛选出主卧E的智能空调3b作为唯一被控智能家居设备。
步骤S3009:智能家居云根据唯一智能家居设备,找到该被控智能家居设备的控制指令,并将控制指令发送给该智能家居设备。
在一些实施例中,智能家具云找出主卧E的智能空调3b的打开控制指令。
在一些实施例中,如果终端所在的房间存在多个相同类型的智能家居设备,如存在多个空调,智能家居云向语音助手云反馈终端所在房间存在多个相同的被控智能家居设备,语音助手云向语音助手APP反馈请用户明确被控智能家居设备的指令,进行新一轮的语音交互,直至用户指示唯一的智能家居设备。
在一些实施例中,如果终端所在的房间存在多个相同类型的智能家居设备,智能家居云也可以向多个相同类型的智能家居设备发送相应的控制指令,或者,智能家居云对用户上一次控制的智能家居设备发送控制指令,本申请对此不做限制。
步骤S4001:智能家居设备将指令的执行结果反馈给智能家居云。
步骤S4002:智能家居云将指令的执行结果反馈给语音助手云。
步骤S4003:语音助手云可以根据指令执行的结果,构造屏幕显示内容和播报语句,并发送给语音助手APP。
步骤S4004:手机的语音助手APP根据构造的屏幕显示内容显示:“已经为你打开了”,同时根据播报语句进行语音播报:“已经为你打开了”。
实施方式二:
实施方式二与实施方式一的不同在于:终端为不经常移动的终端。不经常移动的终端例如可以为智能电视、智能大屏、有屏音箱、无屏音箱等。
这些不常移动的终端通常无法随身携带,并且不具备惯性传感器,因此也无法被 用户携带、无法通过PDR技术计算出用户在房间内的位置。当用户对这些不常移动的终端发出语音指令时,智能家居云可以根据该不常移动的终端所在的房间内的智能家居设备确定与所述语音指令匹配的被控智能家居设备。
由于在绘制智能家居设备在各个房间的分布图时,已经将这些不常移动的终端标注在相应的房间内,因此,在用户通过该终端上的语音助手APP发起语音指令时,语音助手云在确定用户的意图和槽位,并在用户的语音指令中缺少槽位的情况下,可以直接通过智能家居云的分布图来确定该不常移动的终端所在的房间,进一步确定出用户所要控制的智能家居设备。
下面,请参阅图8,以不常移动的终端为位于客厅D的智能电视1c,用户在客厅D发出“开空调”的语音指令为例,对本实施方式中的智能家居设备选择方法进行详细说明。
与实施方式一相同,用户可以通过步骤S1001-步骤S1006获得智能家居设备在各个房间的分布图。由于在步骤S1004中,已经在相应房间中标注该智能电视1c的位置,因此,当用户通过该智能电视1c上的语音助手APP发起语音指令时,语音助手APP和/或语音助手云可以通过执行S3002-步骤S3003来确定用户的意图和槽位,并在用户的语音指令中缺少槽位的情况下可以通过API调用感知服务APP的功能,感知服务APP在步骤S2001中能够直接根据分布图确定该智能电视1c所在的房间(即客厅D),然后执行步骤S3006-步骤S4004,从而与智能电视1c同样位于客厅D的智能空调2b被打开。
本申请实施例中各个实施例及各个实施例中的各个步骤可以相互结合使用,也可以单独使用,各步骤可以按照与本申请实施例相同或不同的顺序执行,以实现不同的技术效果。
上述本申请提供的实施例中,从电子设备作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,电子设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于技术方案的特定应用和设计约束条件。
基于以上实施例,本申请实施例提供了一种电子设备,所述电子设备用于实现以上各图中的智能家居设备选择方法。参阅图13所示,所述电子设备1500可以包括一个或多个处理器1510和一个或多个存储器(图13中未示出)、显示屏1520、惯性传感器1530、扬声器153、麦克风154、收发器1550,以及一个或多个计算机程序,所述一个或多个计算机程序被存储在所述存储器中,所述一个或多个计算机程序包括指令;所述显示屏1520,用于显示用户界面;所述扬声器153可以用于播报语音,麦克风154可以用于获取用户的语音指令、所述惯性传感器1530,用于采集所述终端在自然坐标系下的步行信息、收发器1550用于接收云端的数据和向云端发送数据;当所述指令被所述一个或多个处理器1510调用执行时,使得所述终端执行可以执行上述图4-图7所示的各个方法实施例。在本申请实施例中,处理器可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中的公开的各 方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
在本申请实施例中,存储器可以是非易失性存储器,比如硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD)等,还可以是易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM)。存储器是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。本申请实施例中的存储器还可以是电路或者其它任意能够实现存储功能的装置,用于存储程序指令和/或数据。
应理解,该电子设备可以用于实现本申请实施例的图4-图7所示的方法,相关特征可以参照上文,此处不再赘述。
基于以上实施例,本申请还提供了一种计算机存储介质,所述计算机存储介质中存储有计算机程序,所述计算机程序被计算机执行时,使得所述计算机执行以上图4-图7所示的各个方法实施例。本申请实施例还提供了一种计算机可读存储介质或计算机非易失性可读存储介质,其上存储有计算机程序,该程序被处理器执行时用于执行一种多样化问题生成方法,该方法包括上述各个实施例所描述的方案中的至少之一。
本申请实施例的计算机存储介质,可以采用一个或多个计算机可读的介质的任意组合。计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质。计算机可读存储介质例如可以是,但不限于,电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子(非穷举的列表)包括:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本文件中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读的信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读的信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。
计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括、但不限于无线、电线、光缆、RF等等,或者上述的任意合适的组合。
可以以一种或多种程序设计语言或其组合来编写用于执行本申请操作的计算机程序代码,所述程序设计语言包括面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络,包括局域网(LAN)或广域网(WAN),连接到用户计算机,或者,可以连接到外 部计算机(例如利用因特网服务提供商来通过因特网连接)。
本申请实施例中还提供一种计算机程序产品,包括指令,当其在计算机上运行时,使得计算机执行以上图4-图7所示的各个方法实施例。
上述本申请提供的实施例中,从电子设备作为执行主体的角度对本申请实施例提供的方法进行了介绍。为了实现上述本申请实施例提供的方法中的各功能,电子设备可以包括硬件结构和/或软件模块,以硬件结构、软件模块、或硬件结构加软件模块的形式来实现上述各功能。上述各功能中的某个功能以硬件结构、软件模块、还是硬件结构加软件模块的方式来执行,取决于计数方案的特定应用和设计约束条件。
上述各个实施例中涉及处理器可以是通用处理器、数字信号处理器(digital signal processor,DSP)、专用集成电路(application specific integrated circuit,ASIC)、现成可编程门阵列(field programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存取存储器(random access memory,RAM)、闪存、只读存储器(read-only memory,ROM)、可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的指令,结合其硬件完成上述方法的步骤。
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or in software depends on the particular application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application.
A person skilled in the art will clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here. In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into units is merely a division by logical function, and there may be other ways of division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application, in essence, or the part contributing to the prior art, or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to perform all or some of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific embodiments of this application; note that the above are only preferred embodiments of this application and the technical principles employed. A person skilled in the art will understand that this application is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made without departing from the scope of protection of this application. Therefore, although this application has been described in some detail through the above embodiments, this application is not limited to the above embodiments and, without departing from the concept of this application, may further include other equivalent embodiments, all of which fall within the scope of protection of this application.

Claims (17)

  1. A smart home device selection method, wherein the method is applied to a smart home system, the smart home system comprises a smart home cloud, a plurality of smart home devices, and a terminal, at least some of the plurality of smart home devices are located in different rooms, and the smart home cloud is connected to the plurality of smart home devices and the terminal for communication;
    the method comprises:
    determining, by the terminal, the room in which the terminal is currently located by using pedestrian dead reckoning (PDR) technology and according to a room distribution map, wherein the room distribution map comprises location information and/or room information of the plurality of smart home devices;
    determining, by the smart home cloud, a controlled smart home device according to the room in which the terminal is currently located and a user intent, wherein the user intent is obtained based on a user voice instruction, and the smart home cloud stores the room distribution map, or the smart home cloud stores the room information of the plurality of smart home devices.
  2. The method according to claim 1, wherein the determining, by the smart home cloud, a controlled smart home device according to the room in which the terminal is currently located and a user intent specifically comprises:
    determining, by the smart home cloud, a smart home device list according to the user intent, and determining the controlled smart home device from the smart home device list according to the room in which the terminal is currently located, wherein the smart home device list comprises smart home devices located in different rooms.
  3. The method according to claim 1 or 2, wherein the determining, by the terminal, the room in which the terminal is currently located by using PDR technology and according to a room distribution map specifically comprises:
    obtaining, by the terminal, acceleration information collected by an acceleration sensor, angular velocity information collected by a gyroscope sensor, orientation information collected by an orientation sensor, and/or air pressure information collected by a barometric pressure sensor;
    calculating, by the terminal, the terminal position by using the PDR technology according to the acceleration information, the angular velocity information, the orientation information, and/or the air pressure information;
    determining, by the terminal, the room in which the terminal is currently located according to the terminal position and the room distribution map.
  4. The method according to claim 3, wherein the room distribution map further comprises a PDR beacon, and before the terminal determines the room in which the terminal is currently located according to the terminal position and the room distribution map, the method further comprises:
    correcting, by the terminal, the terminal position according to the PDR beacon.
  5. The method according to claim 4, wherein the PDR beacon is obtained by the user annotating the room distribution map, and the PDR beacon comprises a door, a wall corner of a room, and/or a corridor.
  6. The method according to any one of claims 1 to 5, wherein the room distribution map is drawn by using the PDR technology while the user walks through the rooms carrying the terminal;
    the location information and/or room information of the plurality of smart home devices is obtained by the user annotating the room distribution map.
  7. The method according to any one of claims 1 to 6, wherein the method further comprises:
    determining, by the smart home cloud, a control instruction according to the controlled smart home device, and sending the control instruction to the controlled smart home device, wherein the control instruction is used to control the controlled smart home device.
  8. A method for determining the room in which a terminal is located, comprising:
    determining, by the terminal, the room in which the terminal is currently located by using pedestrian dead reckoning (PDR) technology and according to a room distribution map, wherein the room distribution map comprises location information and/or room information of a plurality of smart home devices;
    sending, by the terminal, information about the room in which the terminal is currently located.
  9. The method according to claim 8, wherein the determining, by the terminal, the room in which the terminal is currently located by using PDR technology and according to a room distribution map specifically comprises:
    obtaining, by the terminal, acceleration information collected by an acceleration sensor, angular velocity information collected by a gyroscope sensor, orientation information collected by an orientation sensor, and/or air pressure information collected by a barometric pressure sensor;
    calculating, by the terminal, the position of the terminal in the room distribution map by using the PDR technology according to the acceleration information, the angular velocity information, the orientation information, and/or the air pressure information;
    determining, by the terminal, the room in which the terminal is currently located according to the position of the terminal in the room distribution map and the room distribution map.
  10. The method according to claim 9, wherein the room distribution map further comprises a PDR beacon, and before the terminal determines the room in which the terminal is currently located according to the position of the terminal in the room distribution map and the room distribution map, the method further comprises:
    correcting, by the terminal, the terminal position according to the PDR beacon.
  11. The method according to claim 10, wherein the PDR beacon is obtained by the user annotating the room distribution map, and the PDR beacon comprises a door, a wall corner of a room, and/or a corridor.
  12. The method according to any one of claims 8 to 11, wherein the room distribution map is drawn by using the PDR technology while the user walks through the rooms carrying the terminal;
    the location information and/or room information of the plurality of smart home devices is obtained by the user annotating the room distribution map.
  13. A smart home device selection method, wherein the method is applied to a smart home system, the smart home system comprises a smart home cloud and a plurality of smart home devices, at least some of the plurality of smart home devices are located in different rooms, and the smart home cloud is connected to the plurality of smart home devices for communication;
    the method comprises:
    obtaining, by the smart home cloud, information about the room in which a terminal is currently located, wherein the information about the room in which the terminal is currently located is determined by the terminal by using pedestrian dead reckoning (PDR) technology and according to a room distribution map, and the room distribution map comprises location information and/or room information of a plurality of smart home devices;
    determining, by the smart home cloud, a controlled smart home device according to the room in which the terminal is currently located and a user intent, wherein the user intent is obtained based on a user voice instruction, and the smart home cloud stores the room distribution map, or the smart home cloud stores the room information of the plurality of smart home devices.
  14. The method according to claim 13, wherein the method further comprises:
    determining, by the smart home cloud, a control instruction according to the controlled smart home device, and sending the control instruction to the controlled smart home device, wherein the control instruction is used to control the controlled smart home device.
  15. A smart home system, wherein the smart home system comprises a smart home cloud and a terminal, the smart home cloud and the terminal each comprise a memory and a processor, the memory stores instructions, and when the instructions are invoked and executed by the processor, the smart home cloud and the terminal are caused to perform the method according to any one of claims 1 to 7.
  16. A terminal, comprising: a processor, a memory, a display screen, a speaker, a microphone, an orientation sensor, a gyroscope sensor, an acceleration sensor, and a computer program, wherein the computer program is stored in the memory and comprises instructions;
    the display screen is configured to display a user interface;
    the speaker is configured to play voice announcements to the user;
    the microphone is configured to capture the user's voice;
    the acceleration sensor is configured to collect the movement acceleration of the terminal;
    the orientation sensor is configured to determine the orientation of the terminal;
    the gyroscope sensor is configured to collect the angular velocity of rotation of the terminal;
    when the instructions are invoked and executed by the processor, the terminal is caused to perform the method according to any one of claims 8 to 12.
  17. A computer-readable storage medium, wherein the computer-readable storage medium comprises a computer program which, when run on an electronic device, causes the electronic device to perform the method according to any one of claims 8 to 12, or causes the electronic device to perform the method according to claim 13 or 14.
PCT/CN2022/077290 2021-03-04 2022-02-22 一种智能家居设备选择方法及终端 WO2022183936A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110238952.7A CN115016298A (zh) 2021-03-04 2021-03-04 一种智能家居设备选择方法及终端
CN202110238952.7 2021-03-04

Publications (1)

Publication Number Publication Date
WO2022183936A1 true WO2022183936A1 (zh) 2022-09-09

Family

ID=83064188

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/077290 WO2022183936A1 (zh) 2021-03-04 2022-02-22 一种智能家居设备选择方法及终端

Country Status (2)

Country Link
CN (1) CN115016298A (zh)
WO (1) WO2022183936A1 (zh)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150195365A1 (en) * 2014-01-07 2015-07-09 Korea Advanced Institute Of Science And Technology Smart Access Point and Method for Controlling Internet of Things Apparatus Using the Smart Access Point Apparatus
US20160366635A1 (en) * 2015-06-15 2016-12-15 At&T Mobility Ii Llc Consumer Service Cloud for Implementing Location-Based Services to Control Smart Devices
CN106681282A (zh) * 2015-11-05 2017-05-17 丰唐物联技术(深圳)有限公司 智能家居的控制方法及系统
CN106289282A (zh) * 2016-07-18 2017-01-04 北京方位捷讯科技有限公司 一种室内地图行人航迹匹配方法
CN110262274A (zh) * 2019-07-22 2019-09-20 青岛海尔科技有限公司 基于物联网操作系统的智能家居设备控制显示方法及系统
CN110456755A (zh) * 2019-09-17 2019-11-15 苏州百宝箱科技有限公司 一种基于云平台的智能家居远程控制方法
CN110738994A (zh) * 2019-09-25 2020-01-31 北京爱接力科技发展有限公司 一种智能家居的控制方法、装置、机器人及系统
CN111174778A (zh) * 2019-11-26 2020-05-19 广东小天才科技有限公司 一种基于行人航迹推算的建筑入口确定方法及装置
CN111475212A (zh) * 2020-04-02 2020-07-31 深圳创维-Rgb电子有限公司 一种设备驱动方法及装置

Also Published As

Publication number Publication date
CN115016298A (zh) 2022-09-06

Similar Documents

Publication Publication Date Title
US20220223150A1 (en) Voice wakeup method and device
CN109891934B (zh) 一种定位方法及装置
TWI442081B (zh) 多裝置間轉移工作的方法及手持通訊裝置
WO2021017836A1 (zh) 控制大屏设备显示的方法、移动终端及第一系统
US20200344661A1 (en) Unmanned aerial vehicle control method and apparatus
US20220191668A1 (en) Short-Distance Information Transmission Method and Electronic Device
WO2022028537A1 (zh) 一种设备识别方法及相关装置
EP4171135A1 (en) Device control method, and related apparatus
WO2022116930A1 (zh) 内容共享方法、电子设备及存储介质
CN106231559A (zh) 网络访问方法、装置及终端
WO2019052450A1 (zh) 基于移动终端的照片拍摄控制方法、系统及存储介质
US20230379408A1 (en) Positioning Method and Electronic Device
WO2021197354A1 (zh) 一种设备的定位方法及相关装置
WO2022100219A1 (zh) 数据转移方法及相关装置
WO2021170129A1 (zh) 一种位姿确定方法以及相关设备
WO2021147419A1 (zh) 一种数据传输方法、电子设备及存储介质
CN111176338B (zh) 导航方法、电子设备及存储介质
WO2022183936A1 (zh) 一种智能家居设备选择方法及终端
WO2022166461A1 (zh) 确定设备位置的方法、装置及系统
US20230400592A1 (en) Positioning method and related apparatus
WO2022068670A1 (zh) 设备间触碰建立无线连接的方法、电子设备及芯片
WO2022037575A1 (zh) 一种低功耗定位方法及相关装置
CN114079691B (zh) 一种设备识别方法及相关装置
CN115119135A (zh) 一种数据发送方法、接收方法和装置
WO2022237396A1 (zh) 终端设备、定位方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22762402

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22762402

Country of ref document: EP

Kind code of ref document: A1