CN115016298A - Intelligent household equipment selection method and terminal - Google Patents

Intelligent household equipment selection method and terminal

Info

Publication number
CN115016298A
CN115016298A
Authority
CN
China
Prior art keywords
terminal
room
user
intelligent
intelligent home
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110238952.7A
Other languages
Chinese (zh)
Inventor
黄益贵
乔登龙
夏潘斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110238952.7A
Priority to PCT/CN2022/077290 (published as WO2022183936A1)
Publication of CN115016298A


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2642 Domotique, domestic, home control, automation, smart house
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The present application provides a smart home device selection method and a terminal, relates to the field of terminal technologies, and is applied to a smart home environment. The smart home system includes a smart home cloud, a plurality of smart home devices, and a terminal; at least some of the plurality of smart home devices are located in different rooms, and the smart home cloud is connected to the plurality of smart home devices and the terminal for communication. The method includes: determining the room in which the terminal is currently located according to a room distribution map by using the pedestrian dead reckoning (PDR) technique; and determining the controlled smart home device according to the room in which the terminal is currently located and the user intention. In a scenario in which smart home devices are controlled by voice, when there are multiple candidate target smart home devices, the smart home device that the user intends to control is determined according to the user's voice instruction, reducing the number of rounds of voice interaction between the user and the terminal and improving user experience.

Description

Intelligent household equipment selection method and terminal
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a smart home device selection method and a terminal.
Background
With the development of smart home technologies, a user can control smart home devices by issuing voice commands to a voice assistant on a terminal. When there are multiple smart home devices in the user's home, the user must uniquely identify the controlled smart home device in the voice command, for example, "turn on the light in the master bedroom" or "raise the temperature of the air conditioner in the children's room"; the smart home cloud can then determine the corresponding controlled smart home device according to the intention extracted from the user's voice. However, if the user's voice command does not specify a unique controlled smart home device, for example, "turn on the light" or "raise the temperature", the smart home cloud cannot determine the corresponding controlled smart home device from the intention alone. In that case, the smart home cloud feeds back, through the voice assistant, prompts asking the user to uniquely identify the controlled smart home device, until the corresponding device is determined. This process of multiple rounds of voice interaction between the user and the terminal leads to poor user experience.
Disclosure of Invention
In view of the above problems in the prior art, an object of the present application is to provide a smart home device selection method and a terminal, so that, in a scenario in which smart home devices are controlled by voice, when there are multiple candidate target smart home devices, the smart home device that the user intends to control is determined according to the user's voice instruction, reducing the number of rounds of voice interaction between the user and the terminal and improving user experience.
In a first aspect, an embodiment of the present application provides a smart home device selection method, applied to a smart home system, where the smart home system includes a smart home cloud, a plurality of smart home devices, and a terminal, at least some of the plurality of smart home devices are located in different rooms, and the smart home cloud is connected to the plurality of smart home devices and the terminal for communication. The method includes: the terminal determines, by using the pedestrian dead reckoning (PDR) technique and according to a room distribution map, the room in which the terminal is currently located, where the room distribution map includes the position information and/or room information of the smart home devices; and the smart home cloud determines the controlled smart home device according to the room in which the terminal is currently located and the user intention, where the user intention is obtained based on a user voice instruction, and the room distribution map, or the room information of the smart home devices, is stored in the smart home cloud.
With this arrangement, when the user's voice command contains no room information, the controlled smart home device can be determined from the room in which the terminal held by the user is located, reducing the number of rounds of voice interaction between the user and the terminal and improving user experience; no additional chip support is needed, which reduces hardware cost.
In a possible implementation, the smart home cloud determining the controlled smart home device according to the room in which the terminal is currently located and the user intention specifically includes: the smart home cloud determines a smart home device list according to the user intention, and determines the controlled smart home device from the smart home device list according to the room in which the terminal is currently located, where the smart home device list includes smart home devices located in different rooms.
With this arrangement, the controlled smart home device can be determined through the smart home device list even when the distribution map is not stored in the smart home cloud.
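As an illustration only (the patent provides no code; the function name and data layout here are hypothetical), the list-then-room filtering described above might be sketched as:

```python
# Sketch of the described filtering: first build the candidate list from the
# user intention (e.g. device type "light"), then narrow by the room in which
# the terminal is currently located. All names are illustrative.

def select_controlled_device(devices, intent_type, current_room):
    """devices: list of dicts with 'name', 'type', and 'room' keys."""
    # Step 1: the smart home device list determined from the user intention.
    candidates = [d for d in devices if d["type"] == intent_type]
    # Step 2: narrow by the room in which the terminal is located.
    in_room = [d for d in candidates if d["room"] == current_room]
    if len(in_room) == 1:
        return in_room[0]          # unique controlled device found
    return in_room or candidates   # otherwise: remaining candidates

devices = [
    {"name": "master bedroom light", "type": "light", "room": "master bedroom"},
    {"name": "second bedroom light", "type": "light", "room": "second bedroom"},
    {"name": "living room AC", "type": "ac", "room": "living room"},
]
print(select_controlled_device(devices, "light", "master bedroom")["name"])
# prints "master bedroom light"
```

If the room narrows the list to exactly one device, that device is the controlled smart home device; otherwise the remaining candidates would go through the disambiguation dialog described in the background.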
In a possible implementation, the terminal determining, by using the pedestrian dead reckoning (PDR) technique and according to a room distribution map, the room in which the terminal is currently located specifically includes:
the terminal obtains acceleration information collected by an acceleration sensor, angular velocity information collected by a gyroscope sensor, direction information collected by a direction sensor, and/or air pressure information collected by an air pressure sensor;
the terminal calculates the terminal position by using the PDR technique according to the acceleration information, the angular velocity information, the direction information, and/or the air pressure information; and
the terminal determines the room in which the terminal is currently located according to the terminal position and the room distribution map.
With this arrangement, when the terminal enters the smart home environment, the room in which the terminal is currently located can be obtained, preparing for subsequently determining the controlled smart home device in that room; thus, when the user issues a voice instruction, the smart home device that the user intends to control can be determined immediately according to the room in which the terminal is located, improving user experience.
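A minimal sketch of the last step above, determining the current room from the reckoned terminal position and the room distribution map. The patent does not specify the map representation; axis-aligned rectangles per room are assumed here purely for illustration:

```python
# Determine the room containing the terminal position (x, y). The room
# distribution map is simplified to axis-aligned rectangles
# (x_min, y_min, x_max, y_max) per room; this representation is an
# assumption, not taken from the patent.

def room_of(position, room_map):
    x, y = position
    for name, (x0, y0, x1, y1) in room_map.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None  # terminal is outside every mapped room

room_map = {
    "living room":    (0.0, 0.0, 6.0, 4.0),
    "master bedroom": (6.0, 0.0, 10.0, 4.0),
}
print(room_of((7.2, 1.5), room_map))  # prints "master bedroom"
```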
In a possible implementation, the room distribution map further includes PDR beacons, and before the terminal determines the room in which the terminal is currently located according to the terminal position and the room distribution map, the method further includes:
the terminal corrects the terminal position according to the PDR beacons.
With this arrangement, positioning deviations caused by accumulated PDR error can be avoided, improving the accuracy of locating the user in the distribution map.
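One simple realization of the correction, sketched under the assumption (not stated in the patent) that each beacon is a labeled coordinate in the distribution map: when the dead-reckoned position comes within a threshold of a beacon such as a door or wall corner, the position is snapped to the beacon, discarding the accumulated drift:

```python
import math

# Sketch: snap the dead-reckoned position to the nearest labeled PDR beacon
# (door, wall corner, corridor point) when within a threshold distance,
# resetting the accumulated PDR error. The threshold value is illustrative.

def correct_with_beacons(position, beacons, threshold=0.5):
    x, y = position
    nearest = min(beacons, key=lambda b: math.hypot(b[0] - x, b[1] - y))
    if math.hypot(nearest[0] - x, nearest[1] - y) <= threshold:
        return nearest        # drift removed: position pinned to the beacon
    return position           # too far from any beacon: keep PDR estimate

beacons = [(6.0, 2.0), (0.0, 0.0)]   # e.g. a door and a wall corner
print(correct_with_beacons((6.3, 1.9), beacons))  # prints (6.0, 2.0)
```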
In a possible implementation, the PDR beacons are labeled by the user in the room distribution map, and include doors, wall corners of rooms, and/or corridors.
With this arrangement, when the user passes through places where the user's direction of travel is likely to change sharply, the user's position can be corrected, avoiding positioning deviations caused by accumulated PDR error and improving the accuracy of locating the user in the distribution map.
In a possible implementation, the room distribution map is drawn by using the PDR technique while the user walks through the rooms carrying the terminal;
the position information and/or room information of the smart home devices is obtained by the user labeling the room distribution map.
With this arrangement, a distribution map of the smart home devices in the rooms can be obtained, preparing for subsequently determining the room in which the terminal is located by using the PDR technique; thus, when the user issues a voice instruction, the smart home device that the user intends to control can be determined immediately according to the room in which the terminal is located, improving user experience.
In one possible implementation, the method further includes:
and the intelligent home cloud determines a control instruction according to the controlled intelligent home equipment and sends the control instruction to the controlled intelligent home equipment, wherein the control instruction is used for controlling the controlled intelligent home equipment.
With this arrangement, the smart home device in the room in which the user is currently located can be controlled, improving user experience.
In a possible implementation, the method further includes: the terminal acquires the user voice instruction.
In a second aspect, an embodiment of the present application provides a method for determining the room in which a terminal is located, including: the terminal determines, by using the pedestrian dead reckoning (PDR) technique and according to a room distribution map, the room in which the terminal is currently located, where the room distribution map includes the position information and/or room information of a plurality of smart home devices; and the terminal sends the information of the room in which the terminal is currently located.
With this arrangement, when the terminal enters the smart home environment, the room in which the terminal is currently located can be obtained, preparing for subsequently determining the controlled smart home device in that room; thus, when the user issues a voice instruction, the smart home device that the user intends to control can be determined immediately according to the room in which the terminal is located, improving user experience.
In a possible implementation, the terminal determining, by using the PDR technique and according to a room distribution map, the room in which the terminal is currently located specifically includes: the terminal obtains acceleration information collected by an acceleration sensor, angular velocity information collected by a gyroscope sensor, direction information collected by a direction sensor, and/or air pressure information collected by an air pressure sensor; the terminal calculates, by using the PDR technique and according to the acceleration information, the angular velocity information, the direction information, and/or the air pressure information, the position of the terminal in the room distribution map; and the terminal determines the room in which the terminal is currently located according to the position of the terminal in the room distribution map and the room distribution map.
With this arrangement, when the terminal enters the smart home environment, the room in which the terminal is currently located can be obtained from the user's walking information collected by the sensors on the terminal and from the distribution map, preparing for subsequently determining the controlled smart home device in that room; thus, when the user issues a voice instruction, the smart home device that the user intends to control can be determined immediately according to the room in which the terminal is located, improving user experience.
In a possible implementation, the room distribution map further includes PDR beacons, and before the terminal determines the room in which the terminal is currently located according to the position of the terminal in the room distribution map and the room distribution map, the method further includes: the terminal corrects the terminal position according to the PDR beacons.
With this arrangement, positioning deviations caused by accumulated PDR error can be avoided, improving the accuracy of locating the user in the distribution map.
In a possible implementation, the PDR beacons are labeled by the user in the room distribution map, and include doors, wall corners of rooms, and/or corridors.
With this arrangement, when the user passes through places where the user's direction of travel is likely to change sharply, the user's position can be corrected, avoiding positioning deviations caused by accumulated PDR error and improving the accuracy of locating the user in the distribution map.
In a possible implementation, the room distribution map is drawn by using the PDR technique while the user walks through the rooms carrying the terminal; the position information and/or room information of the smart home devices is obtained by the user labeling the room distribution map.
With this arrangement, a distribution map of the smart home devices in the rooms can be obtained, preparing for subsequently determining the room in which the terminal is located by using the PDR technique; thus, when the user issues a voice instruction, the smart home device that the user intends to control can be determined immediately according to the room in which the terminal is located, improving user experience.
In a third aspect, an embodiment of the present application provides a smart home device selection method, applied to a smart home system, where the smart home system includes a smart home cloud and a plurality of smart home devices, at least some of the smart home devices are located in different rooms, and the smart home cloud is connected to the smart home devices for communication. The method includes: the smart home cloud acquires the information of the room in which the terminal is currently located, where the information is determined by the terminal by using the pedestrian dead reckoning (PDR) technique and according to a room distribution map, and the room distribution map includes the position information and/or room information of the plurality of smart home devices; and the smart home cloud determines the controlled smart home device according to the room in which the terminal is currently located and the user intention, where the user intention is obtained based on a user voice instruction, and the room distribution map, or the room information of the smart home devices, is stored in the smart home cloud.
With this arrangement, when the user's voice instruction contains no room information, the controlled smart home device can be determined from the room in which the terminal held by the user is located, reducing the number of rounds of voice interaction between the user and the terminal and improving user experience; no additional chip support is needed, which reduces hardware cost.
In one possible implementation, the method further includes: the intelligent home cloud determines a control instruction according to the controlled intelligent home equipment and sends the control instruction to the controlled intelligent home equipment, wherein the control instruction is used for controlling the controlled intelligent home equipment.
With this arrangement, the smart home device in the room in which the user is currently located can be controlled, improving user experience.
In a fourth aspect, an embodiment of the present application provides a smart home system, where the smart home system includes a smart home cloud and a terminal, the smart home cloud and the terminal each include a memory and a processor, and the memory stores instructions that, when invoked and executed by the processor, cause the smart home cloud and the terminal to perform the method according to the first aspect of the embodiments of the present application and any one of its possible implementations.
In a fifth aspect, an embodiment of the present application provides a terminal, including: a processor, a memory, a display screen, a speaker, a microphone, a direction sensor, a gyroscope sensor, an acceleration sensor, and a computer program, where the computer program is stored in the memory and includes instructions. The display screen is used to display a user interface; the speaker is used to broadcast voice announcements; the microphone is used to capture voice; the acceleration sensor is used to obtain the movement acceleration of the terminal; the direction sensor is used to determine the direction of the terminal; and the gyroscope sensor is used to obtain the angular velocity of the terminal's rotation. When the instructions are invoked and executed by the processor, the terminal is caused to perform the method according to the second aspect of the embodiments of the present application and any one of its possible implementations.
A sixth aspect of embodiments of the present application provides a computer-readable storage medium or a non-volatile computer-readable storage medium, which includes a computer program, which, when run on an electronic device, causes the electronic device to perform the method according to any one of the second aspect and possible implementations of embodiments of the present application, or causes the electronic device to perform the method according to any one of the third aspect and possible implementations of embodiments of the present application.
The technical effects of the fourth to sixth aspects of the embodiments of the present application are the same as those of the first to third aspects and possible implementations thereof, and for the sake of brevity, no further description is provided here.
These and other aspects of the present application will be more readily apparent in the following description of the embodiment(s).
Drawings
The various features of the present application, and the connections between them, are further described below with reference to the drawings. The figures are exemplary; some features are not shown to scale, and some figures may omit features that are customary in the art to which the application relates and not essential to the application, or may show additional features that are not essential to the application; the combinations of features shown in the figures are not intended to limit the application. In addition, the same reference numerals designate the same components throughout the specification. The specific drawings are described below:
fig. 1 is a flowchart of a method for selecting smart home devices according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an intelligent home device system provided in an embodiment of the present application;
fig. 3 is a schematic hardware structure diagram of a terminal according to an embodiment of the present application;
fig. 4 is a flowchart of a method for selecting smart home devices according to an embodiment of the present application;
fig. 5 is a flowchart of a method for obtaining a distribution map of smart home devices in each room according to an embodiment of the application;
fig. 6 is a flowchart of a method for acquiring a room where a terminal is located according to an embodiment of the present application;
fig. 7 is a flowchart of a method for selecting smart home devices according to an embodiment of the present application;
fig. 8 is a distribution diagram of smart home devices in each room according to an embodiment of the present application;
fig. 9 is a schematic diagram of a motion trajectory of a terminal in a room according to an embodiment of the present application;
fig. 10 is a schematic diagram of a motion trajectory of another terminal in a room according to an embodiment of the present application;
fig. 11 is a schematic diagram of correcting a user position by using PDR beacons according to an embodiment of the present application;
fig. 12 is a schematic diagram of calculating the position of a user in the distribution map shown in fig. 9 by using PDR according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Technical solutions according to embodiments of the present application will be described below with reference to the drawings in the embodiments. Before describing the details of the technical solution, the terms used in the present application will be briefly described.
The terms "first", "second", "third", and the like in the description and the claims are merely used to distinguish between similar objects and are not intended to imply a particular order of the objects; it should be understood that such terms are interchangeable where appropriate, so that the embodiments of the present application can be practiced in orders other than those illustrated or described herein.
The term "comprising", as used in the specification and claims, should not be construed as being limited to the items listed thereafter; it does not exclude other elements or steps. It should therefore be interpreted as specifying the presence of the stated features, integers, steps, or components, but not precluding the presence or addition of one or more other features, integers, steps, components, or groups thereof. Thus, the expression "an apparatus comprising devices A and B" should not be limited to an apparatus consisting only of components A and B.
Reference in the specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the application. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may be. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments, as would be apparent to one of ordinary skill in the art from this disclosure.
Definition of key terms:
Pedestrian dead reckoning (PDR) is a technique for locating a person by obtaining the person's walking data from inertial sensors. Its principle is to determine the person's walking direction and step length from the angular-motion and linear-motion data provided by the inertial sensors, and then to reckon the person's position. The pedestrian's trajectory can be obtained by connecting the reckoned positions.
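The reckoning described in this definition can be sketched as follows; step detection and heading estimation (which in practice come from the accelerometer, gyroscope, and direction sensor) are abstracted here into precomputed (step length, heading) pairs, an illustrative simplification:

```python
import math

# Dead-reckoning core: the position after each step is the previous position
# advanced by step_length along the heading angle (radians, 0 = "north"/+y).
def reckon(start, steps):
    """steps: iterable of (step_length_m, heading_rad) pairs."""
    x, y = start
    track = [(x, y)]
    for length, heading in steps:
        x += length * math.sin(heading)
        y += length * math.cos(heading)
        track.append((x, y))
    return track

# Two 0.7 m steps north, then two 0.7 m steps east.
track = reckon((0.0, 0.0), [(0.7, 0.0), (0.7, 0.0),
                            (0.7, math.pi / 2), (0.7, math.pi / 2)])
print(track[-1])  # final position, approximately (1.4, 1.4)
```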
Automatic Speech Recognition (ASR) is a technology that converts speech into text.
Natural language understanding (NLU) is a technique for enabling a computer to understand human natural language; it can obtain a person's intention from the person's natural-language text.
Dialog management (DM) is a technology for managing the context of a dialog between a person and a computer and for orchestrating the different services involved in the dialog.
Text to speech (TTS) is a technology by which a computer broadcasts text as human natural-language speech.
An application (App) refers to a computer program that performs one or more tasks.
An application programming interface (API) is a mechanism through which application programs communicate with one another. Through an API, an application causes the operating system to execute the application's instructions.
Ultra-wideband (UWB) is a wireless carrier communication technology that transmits data using non-sinusoidal narrow pulses on the order of nanoseconds to microseconds.
Received signal strength indication (RSSI) is an optional part of the radio transmission layer, used to determine link quality and whether to increase the broadcast transmission strength. The distance between the transmitting point and the receiving point is estimated from the strength of the received signal, and positioning is then calculated from the corresponding data.
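The distance estimate mentioned here is commonly obtained with a log-distance path-loss model; the sketch below uses illustrative constants that are not taken from this document:

```python
import math

# Log-distance path-loss model commonly used for RSSI ranging:
#   rssi(d) = rssi_at_1m - 10 * n * log10(d)
# Inverting it gives a distance estimate. Constants are illustrative.

def rssi_to_distance(rssi_dbm, rssi_at_1m=-45.0, n=2.5):
    """n is the path-loss exponent (~2 in free space, higher indoors)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * n))

print(round(rssi_to_distance(-70.0), 2))  # roughly 10 m with these constants
```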
ZigBee is a short-range, low-power wireless communication technology.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. In the case of inconsistency, the meaning described in the present specification or the meaning derived from the content described in the present specification shall control. In addition, the terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
In a scenario in which smart home devices are controlled by voice, when there are multiple rooms and multiple smart home devices of the same type in the user's home, the user must uniquely specify in the voice command the smart home device the user intends to control; otherwise, multiple rounds of voice interaction between the user and the machine are needed until the machine determines the unique controlled smart home device.
As shown in fig. 1, when the user asks to turn on a certain smart home device, for example, says "turn on the light", the smart home cloud performs step S01 according to the user's intention: judging the number of matching smart home devices. When the number equals 1, the unique smart home device is turned on, and step S02 is performed: the voice assistant announces "OK, turned on". When the number is greater than 1, the smart home cloud performs step S03: judging the number of rooms. When the number of rooms equals 1, all matching smart home devices in that room are turned on, and step S05 is performed: the voice assistant announces "all smart home devices have been turned on". When the number of rooms is greater than 1, the smart home cloud performs step S04: judging whether the number of rooms exceeds 3. When the number of rooms is less than or equal to 3, step S06 is performed: a room is chosen from the room list, that is, the voice assistant of the mobile phone announces "would you like to turn on the light in the living room, or the light in the bedroom?", the room list is displayed on the display screen of the terminal device, and the user selects and confirms which room's smart device to turn on. When the number of rooms is greater than 3, for example, the user's home has five rooms (living room, master bedroom, second bedroom, kitchen, and bathroom), step S07 is performed: the voice assistant is controlled to announce "I found multiple smart home devices; please try selecting by room name or device name".
At this point the user needs to issue a voice instruction a second time, for example, "turn on the light in the bedroom", and the smart home cloud performs step S01 again according to the reissued voice instruction: judging the number of smart home devices. This time the number of smart lights is determined to be 2 (the master bedroom light and the second bedroom light), which is greater than 1, so step S03 is performed: judging the number of rooms; the number of bedrooms is 2, which is greater than 1, so step S04 is performed. Since the number of rooms is less than or equal to 3, step S06 is performed and the voice assistant announces: "would you like to turn on the light in the master bedroom, or the light in the second bedroom?". The user then needs to issue a voice instruction a third time, for example, "turn on the light in the master bedroom", and the smart home cloud performs step S01 a third time: judging the number of smart home devices. This time, the number of master bedroom smart lights is 1, so step S02 is performed and the voice assistant announces: "OK, turned on".
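The branching of steps S01-S07 above can be sketched as follows (an illustrative rendering of the flow of fig. 1; the announcement texts are paraphrased and the function name is hypothetical):

```python
# Sketch of the prior-art flow of fig. 1 (steps S01-S07): count matching
# devices, then rooms, and either act or ask the user to disambiguate.

def handle_intent(devices):
    """devices: candidate smart home devices matching the user intent,
    as (device_name, room_name) pairs. Returns the assistant's action."""
    if len(devices) == 1:                       # S01: one device -> act
        return "turn on; announce: OK, turned on"               # S02
    rooms = sorted({room for _, room in devices})
    if len(rooms) == 1:                         # S03: one room -> act on all
        return "turn on all; announce: all devices turned on"   # S05
    if len(rooms) <= 3:                         # S04: few rooms -> offer list
        return "ask user to choose a room from: " + ", ".join(rooms)  # S06
    return ("announce: multiple devices found, please specify "
            "a room or device name")            # S07

print(handle_intent([("master bedroom light", "master bedroom"),
                     ("second bedroom light", "second bedroom")]))
```

Each user utterance re-enters this flow, which is why the dialog can take three rounds before a unique device is found.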
As can be seen from the above description, in the process of turning on a smart light by voice, the user performs multiple rounds of voice interaction with the voice assistant, and steps S01-S07 are repeatedly executed to determine the smart light to be turned on, which undoubtedly degrades the user experience.
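The disambiguation flow of steps S01-S07 described above can be sketched in code as follows (a minimal illustration; the function name, prompt strings, and data layout are hypothetical and chosen for readability, not taken from the patent):

```python
def select_device(devices):
    """Sketch of the S01-S07 disambiguation flow for one voice command.

    `devices` is a list of (device_name, room_name) pairs matching the
    user's intent, e.g. every smart light in the home. Returns the
    voice-assistant broadcast string; all strings are illustrative only.
    """
    if len(devices) == 1:                      # S01: exactly one match
        return "OK, turned on"                 # S02
    rooms = sorted({room for _, room in devices})
    if len(rooms) == 1:                        # S03: all matches in one room
        return "All smart home devices have been turned on"   # S05
    if len(rooms) <= 3:                        # S04: few rooms -> list them
        return "Which room: " + ", or ".join(rooms) + "?"     # S06
    # S07: too many rooms -> ask the user to narrow down by name
    return ("I found several smart home devices; "
            "try selecting by room name or device name")

# A home with lights in five rooms triggers the S07 prompt and forces the
# user into a second (and possibly third) round of voice interaction:
five_rooms = [("light", r) for r in
              ["living room", "master bedroom", "secondary bedroom",
               "kitchen", "toilet"]]
print(select_device(five_rooms))
```

Each additional round of interaction simply re-enters `select_device` with a narrower device list, which is exactly why the flow above must run up to three times before a single light is turned on.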
In order to improve user experience, one possible implementation provides a method for controlling smart home devices based on UWB (ultra-wideband) technology: using a terminal provided with a UWB transceiver chip, the user can operate a controlled smart home device by a "point-to-control" operation, that is, the user points the terminal, provided with the UWB transceiver chip, at the controlled smart home device, which is also provided with a UWB transceiver chip. UWB is a wireless positioning technology; unlike the Global Positioning System (GPS), its positioning accuracy is higher, and it is particularly suitable for indoor locations where GPS signals are weak. After receiving the control signal from the terminal pointed at it, the controlled smart home device feeds a signal back to the terminal, and a corresponding operation interface can pop up instantly on the screen of the terminal, so that various operations on the controlled smart home device can be completed.
This implementation has the following drawbacks: 1. Both the terminal and the controlled smart home device must be provided with a dedicated UWB chip, which is costly. 2. The user needs to aim the terminal at the controlled smart home device during operation, so the method is not suitable for scenarios in which the controlled smart home device is controlled by voice.
Another possible implementation provides a smart home device selection method and a terminal: the user points the terminal at the smart home device that the user intends to control, the target smart home device pointed at by the terminal is determined according to the direction obtained by a direction sensor on the terminal, and a control interface of the target smart home device is displayed on the display screen of the terminal, so that the target smart home device can be controlled.
This implementation has the following drawbacks: 1. It determines the coordinates of the target smart home device based on Wi-Fi and RSSI (received signal strength indication) technology; because Wi-Fi signal attenuation fluctuates over a wide range, the positioning accuracy is low and the positioning error is large in indoor positioning scenarios. 2. It also requires aiming the terminal at the controlled smart home device, and is likewise not suitable for controlling the target smart home device by voice.
In view of this, in order to avoid multiple voice interactions between a user and a terminal and to improve user experience in scenarios of controlling smart home devices by voice, an embodiment of the present application provides a smart home control method, which, based on a distribution map of the smart home devices in the user's rooms, determines the room where the user is currently located through pedestrian dead reckoning (PDR) technology, and determines the unique controlled smart home device according to the room where the user is located when the user issues a voice instruction. The number of voice interactions between the user and the terminal is thereby reduced, and the user experience is improved.
An intelligent home system 1000 related to the intelligent home control method provided by the embodiment of the present application is described below with reference to fig. 2.
Fig. 2 shows a schematic structural diagram of a smart home system 1000, where the smart home system 1000 includes smart home devices 410, 420, and 430, a terminal 100, a voice assistant cloud 210, a smart home cloud 220, and a smart home gateway 300. The terminal 100 is provided with sensors and with applications such as a smart home APP, a voice assistant APP, and a perception service APP. After the smart home devices 410, 420, and 430 are connected to the smart home gateway 300 through Wi-Fi, the user may control the smart home devices 410, 420, and 430 using the applications on the terminal 100 together with the cloud servers (the voice assistant cloud 210 and the smart home cloud 220).
With reference to fig. 2, the smart home devices 410, 420, and 430, the terminal 100, the applications installed on the terminal 100 (the smart home APP, the voice assistant APP, the perception service APP, and the like), the voice assistant cloud 210, the smart home cloud 220, and the smart home gateway 300 are described respectively below.
The smart home devices 410, 420, and 430 are hardware devices that access the smart home gateway 300 through wireless communication technologies such as Wi-Fi, ZigBee, and Bluetooth, and execute corresponding operations upon receiving control instructions sent by the user through the smart home APP or through the voice assistant APP. The smart home devices include, for example: the smart light 410, the smart television 420, the smart air conditioner 430, the smart home gateway 300, a smart speaker, smart security equipment, a smart projector, and the like.
The smart home gateway 300: also known as a router, is a hardware device used to connect two or more networks, acting as a gateway between them; it is a specialized intelligent network device that reads the address of each packet and then decides how to forward it. The router connects wirelessly with terminals such as mobile phones or tablet computers, so that the user can conveniently control the smart home devices. A typical router provides a Wi-Fi hotspot; the smart home devices 410, 420, and 430 and the terminal 100 access the Wi-Fi network through the Wi-Fi hotspot of the router, and the routers accessed by the smart home devices 410, 420, and 430 and by the terminal 100 may be the same or different.
Sensor service: may include, for example, a sensor mounted on the terminal 100 for obtaining the user's walking information and/or a compass APP capable of displaying the direction of the user's terminal. The sensors for obtaining the user's walking information may comprise inertial sensors, for example the acceleration sensor 142, the gyro sensor 143, and the like, and may further include a direction sensor, the air pressure sensor 144, and the like. In some embodiments, as shown in fig. 3, the acceleration sensor 142 is used to determine the acceleration of the terminal on the X, Y, and Z axes of a three-axis coordinate system. The gyro sensor 143 is used to determine the angular velocity of the terminal's rotation. The direction sensor can obtain the direction of the terminal, and the compass APP can display that direction on the display screen 132 according to the direction obtained by the direction sensor. The air pressure sensor 144 can obtain the air pressure. The position of the terminal at any moment can be calculated with PDR technology from the acceleration, angular velocity, and/or direction obtained by the sensors. The altitude of the terminal at any moment can be calculated from the air pressure obtained by the sensor, using the correspondence between air pressure and altitude.
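The per-step PDR position calculation described above can be sketched as follows (a deliberately simplified model: it assumes a step length and a heading angle have already been derived from the inertial and direction sensors, and uses a compass convention; all values are illustrative):

```python
import math

def pdr_update(x, y, step_length, heading_deg):
    """One pedestrian-dead-reckoning step: advance the previous position
    (x, y) by `step_length` metres along `heading_deg`, where 0 degrees
    points along +Y (north) and angles grow clockwise, as on a compass."""
    theta = math.radians(heading_deg)
    return (x + step_length * math.sin(theta),
            y + step_length * math.cos(theta))

# Two 0.7 m steps heading east (90 degrees) move the terminal ~1.4 m along +X:
x, y = pdr_update(0.0, 0.0, 0.7, 90.0)
x, y = pdr_update(x, y, 0.7, 90.0)
print(round(x, 2), round(y, 2))
```

In practice the step length and heading would themselves be estimated from the acceleration and angular-velocity readings (step detection plus heading integration); the update rule above is only the final accumulation stage.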
Smart home APP: a software program installed on the terminal used by the user for selecting and controlling various smart home devices. The smart home APP may provide an operation interface, and the user can control the corresponding smart home device by operating this interface. The smart home APP may also have a function of drawing a distribution map of the smart home devices in each room: when drawing the distribution map, the smart home APP accesses the smart home gateway through Wi-Fi, and the user, holding the terminal, walks one lap along the walls of each room to obtain a layout plan of the user's rooms; the user then marks the positions of the smart home devices in each room and the positions of the doors on the room layout plan, obtaining the distribution map of the smart home devices in each room. In some embodiments, the smart home APP may also receive an already-drawn room layout plan and/or distribution map of the smart home devices from other devices, over a network, or uploaded by the user. The distribution map of the smart home devices in each room can be stored in the smart home APP or uploaded to the smart home cloud 220. The smart home APP referred to below may be an application installed when the terminal leaves the factory, or an application downloaded from a network or acquired from another device while the user is using the terminal.
Voice assistant APP: an APP installed on the terminal used by the user to provide a voice control function. It can acquire the user's voice instruction using the audio-capture function provided by the hardware microphone on the terminal, convert the voice instruction input by the user into text content through ASR (automatic speech recognition), and send the text content to the voice assistant cloud 210. A text sentence can also be generated according to the result of instruction execution by the controlled smart home device, and broadcast as natural human speech through TTS (text-to-speech). In some embodiments, the voice assistant APP may instead send the user's voice instruction to the voice assistant cloud 210, which converts the voice instruction into text content.
Voice assistant cloud 210: used to provide cloud-side functions for the voice assistant APP. It performs semantic analysis on the user's text content through NLU (natural language understanding) to obtain the user's intention and slots; performs context management on the user's text content through DM (dialog management); and, according to the user's intention and slots, causes the voice assistant APP to execute corresponding operations through an API. In some embodiments, the voice assistant APP itself may also have the NLU and DM functions.
Smart home cloud 220: a remote server used to provide cloud-side functions for the smart home APP and the smart home devices; alternatively, it may be a smart home central control device installed in the user's home, comprising a transceiver, a processor, and a memory. On the one hand, the user can operate the operation interface of the smart home APP: the smart home APP sends the user's operation instruction to the smart home cloud 220 according to the user's operation, and the smart home cloud 220 sends a control instruction to the corresponding smart home device, thereby controlling that device. On the other hand, the user can issue a voice instruction to the voice assistant APP: the voice assistant APP converts the user's voice instruction into text content and sends it to the voice assistant cloud 210; the voice assistant cloud 210 performs semantic analysis on the user's text content to obtain the user's intention and slots and sends them to the smart home cloud 220; the smart home cloud 220 generates a corresponding control instruction according to the user's intention and slots and sends it to the corresponding smart home device, thereby controlling that device.
Perception service APP: a software program, distinct from the smart home APP, installed on the terminal used by the user. The perception service APP may or may not have an operation interface, and may be a resident APP of the terminal system or an application installed when the terminal leaves the factory. After the perception service APP accesses the smart home gateway through Wi-Fi, it calls the functions of the inertial sensors and the direction sensor through APIs to obtain the user's walking information (such as the user's walking direction and step length, and the deviation angle between the walking directions of two adjacent steps), and uses PDR (pedestrian dead reckoning) combined with the distribution map of the smart home devices in each room to determine the room where the terminal is located.
After the user issues a voice control instruction for a certain smart home device at home through the voice assistant APP, the voice assistant APP determines the user's intention and slots together with the voice assistant cloud 210; when the slots lack the information of the room where the user is located, the room determined by the perception service APP is obtained through an API, and the information of the room where the user is located is sent to the voice assistant cloud. The voice assistant cloud 210 sends the user's intention and the room where the user is located to the smart home cloud 220; the smart home cloud 220 generates a corresponding control instruction according to the user's intention and the room where the user is located, and sends the control instruction to the corresponding smart home device, thereby controlling that device. In some embodiments, the perception service APP may also be called by the smart home APP through an API. Alternatively, the "slot" described in the embodiments of the present application may also be expressed as a "slot value".
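One way the perception service could map a PDR position onto a room of the distribution map is a point-in-polygon test against each room's boundary line. The patent does not specify an algorithm; the ray-casting test below is one common choice, and the room names and coordinates are purely illustrative:

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: does point (px, py) fall inside `polygon`,
    given as a list of (x, y) vertices in order?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):
            # x-coordinate where this edge crosses the horizontal ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

def room_of(px, py, rooms):
    """Return the name of the first room whose boundary contains the point,
    or None if the point lies outside every room."""
    for name, boundary in rooms.items():
        if point_in_polygon(px, py, boundary):
            return name
    return None

# Illustrative layout: two 4 m x 3 m rooms side by side.
rooms = {"living room D":     [(0, 0), (4, 0), (4, 3), (0, 3)],
         "master bedroom E":  [(4, 0), (8, 0), (8, 3), (4, 3)]}
print(room_of(5.0, 1.5, rooms))   # master bedroom E
```

The room name returned here is exactly the value that would fill the missing "room" slot before the intention is forwarded to the smart home cloud.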
The terminal 100: a device for controlling the smart home devices, which may be, for example, a portable device such as a mobile phone, a tablet computer, an artificial intelligence (AI) smart voice terminal, a wearable device (e.g., a smart watch or a smart bracelet), or an augmented reality (AR)/virtual reality (VR) device. The terminal may be provided with the smart home APP, the voice assistant APP, the perception service APP, the compass APP, and sensors. The terminal includes, but is not limited to, a portable device. Fig. 3 is a schematic diagram of the hardware structure of a terminal according to an embodiment of the present application. Specifically, as shown in the figure, the terminal 100 includes a processor 110, an internal memory 121, an external memory interface 122, a camera 131, a display screen 132, a sensor module 140, a key 151, a universal serial bus (USB) interface 152, a charging management module 160, a power management module 161, a battery 162, a mobile communication module 171, and a wireless communication module 172. In other embodiments, the terminal 100 may further include a subscriber identification module (SIM) card interface, an audio module, a speaker 153, a receiver, a microphone 154, an earphone interface, a motor, an indicator, keys, and the like.
It should be understood that the hardware configuration shown in fig. 3 is only one example. The terminal 100 of embodiments of the present application may have more or fewer components than the terminal 100 shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
Processor 110 may include one or more processing units, among others. For example: the processor 110 may include an Application Processor (AP), a modem, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), and the like. The different processing units may be separate devices or may be integrated into one or more processors.
In some embodiments, a buffer may also be provided in the processor 110 for storing instructions and/or data. As an example, the buffer in the processor 110 may be a cache memory. The buffer may be used to hold instructions and/or data that the processor 110 has just used, generated, or recycled. If the processor 110 needs to use such an instruction or data again, it can be called directly from the buffer. This helps reduce the time for the processor 110 to fetch instructions or data, and thus helps improve the efficiency of the system.
The internal memory 121 may be used to store programs and/or data. In some embodiments, the internal memory 121 includes a program storage area and a data storage area. The program storage area may be used to store an operating system (e.g., Android or iOS), a computer program required by at least one function (e.g., a voice wake-up function or a sound playing function), and the like. The data storage area may be used to store data (e.g., audio data) created and/or collected during use of the terminal 100, and the like. For example, the processor 110 may implement one or more functions by calling programs and/or data stored in the internal memory 121, causing the terminal 100 to perform the corresponding methods. For example, the processor 110 calls certain programs and/or data in the internal memory so that the terminal 100 performs the voice recognition method provided in the embodiments of the present application, thereby implementing the voice recognition function. The internal memory 121 may be a high-speed random access memory, a non-volatile memory, or the like. For example, the non-volatile memory may include at least one of one or more magnetic disk storage devices, flash memory devices, and/or universal flash storage (UFS), among others.
The external memory interface 122 may be used to connect an external memory card (e.g., a Micro SD card) to extend the memory capability of the terminal 100. The external memory card communicates with the processor 110 through the external memory interface 122 to implement a data storage function. For example, the terminal 100 can save files such as images, music, videos, and the like in the external memory card through the external memory interface 122.
The camera 131 may be used to capture motion, still images, and the like. Typically, the camera 131 includes a lens and an image sensor. The optical image generated by the object through the lens is projected on the image sensor, and then is converted into an electric signal for subsequent processing. For example, the image sensor may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
The image sensor converts the optical signal into an electrical signal and then transmits the electrical signal to the ISP to be converted into a digital image signal. It should be noted that the terminal 100 may include 1 or N cameras 131, where N is a positive integer greater than 1.
Display screen 132 may include a display panel for displaying a user interface. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. It should be noted that the terminal 100 may include 1 or M display screens 132, where M is a positive integer greater than 1. Illustratively, the terminal 100 may implement the display function via the GPU, the display screen 132, the application processor, and the like. In this embodiment of the present application, the terminal 100 may display a user interface of the smart home APP, such as the main interface of the smart home APP and the control interface of a smart home device, through the display screen 132.
The microphone 154 may be used to obtain voice; in some embodiments, the microphone may obtain the user's voice instructions. The speaker 153 may broadcast computer output as natural human speech; in some embodiments, the speaker 153 may broadcast the execution result of the smart home device, for example, "The air conditioner has been turned on for you".
The sensor module 140 may include one or more sensors. For example, the inertial sensor 14, the direction sensor 141, the air pressure sensor 144, the fingerprint sensor 145, the pressure sensor 146, and the touch sensor 147 are exemplified. In some embodiments, the inertial sensors 14 may include, for example: an acceleration sensor 142, a gyro sensor 143, and the like; the sensor module 140 may also include an ambient light sensor, a distance sensor, a proximity light sensor, a bone conduction sensor, a temperature sensor, and the like.
The direction sensor 141 is used to determine the direction in which the terminal 100 is located.
The acceleration sensor 142 is used to determine the acceleration of the terminal in the X, Y and Z axes in a three-axis coordinate system. The gyro sensor 143 is used to determine the angular velocity of the rotation of the terminal, and the motion state of the terminal is judged by the acceleration and the angular velocity.
The air pressure sensor 144 is used to measure the air pressure at the terminal's position: when the air pressure decreases, the terminal is going upstairs, and when the air pressure increases, the terminal is going downstairs, so the floor where the terminal is located can be determined.
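The correspondence between air pressure and altitude used for floor determination can be sketched with the international barometric formula (a simplification assuming a standard atmosphere and a sea-level reference of 1013.25 hPa; a real implementation would calibrate against a local reference pressure rather than assume these constants):

```python
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """International barometric formula: altitude in metres from pressure.
    Assumes a standard atmosphere; in practice only the *relative* change
    in altitude matters when deciding whether the terminal changed floor."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

# A pressure drop of ~0.4 hPa corresponds to a climb of roughly one 3 m floor:
delta = altitude_m(1012.85) - altitude_m(1013.25)
print(round(delta, 1))
```

Comparing successive altitude estimates against a per-floor height threshold then yields the upstairs/downstairs decision described above.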
The fingerprint sensor 145 is used to collect a fingerprint. The terminal 100 can implement fingerprint unlocking, access to an application lock, fingerprint photographing, fingerprint incoming call answering, etc. using the collected fingerprint characteristics.
The pressure sensor 146 is used for sensing a pressure signal and converting the pressure signal into an electrical signal. For example, the pressure sensor 146 may be disposed on the display screen 132. The touch operations which act on the same touch position but have different touch operation intensities can correspond to different operation instructions.
The touch sensor 147 may also be referred to as a "touch panel". The touch sensor 147 may be disposed on the display screen 132, and the touch sensor 147 and the display screen 132 form a touch screen, which is also called a "touch screen". The touch sensor 147 is used to detect a touch operation applied thereto or nearby. The touch sensor 147 can communicate the detected touch operation to the application processor to determine the touch event type. The terminal 100 may provide visual output related to touch operations, etc. through the display screen 132. In other embodiments, the touch sensor 147 can also be disposed on a surface of the terminal 100 at a location different from the display screen 132.
The USB interface 152 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 152 may be used to connect a charger to charge the terminal 100, and may also be used to transmit data between the terminal 100 and peripheral devices. It can also be used to connect earphones and play audio through them. For example, in addition to serving as an earphone interface, the USB interface 152 may be used to connect other devices, such as AR devices and computers.
The charging management module 160 is configured to receive charging input from a charger, where the charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 160 may receive charging input from a wired charger via the USB interface 152. In some wireless charging embodiments, the charging management module 160 may receive a wireless charging input through a wireless charging coil of the terminal 100. The charging management module 160 may also supply power to the terminal 100 through the power management module 161 while charging the battery 162.
The power management module 161 is used to connect the battery 162, the charging management module 160 and the processor 110. The power management module 161 receives input from the battery 162 and/or the charging management module 160, and supplies power to the processor 110, the internal memory 121, the display 132, the camera 131, and the like. The power management module 161 may also be used to monitor parameters such as battery capacity, battery cycle count, battery state of health (leakage, impedance), etc. In other embodiments, the power management module 161 may also be disposed in the processor 110. In other embodiments, the power management module 161 and the charging management module 160 may be disposed in the same device.
The mobile communication module 171 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the terminal 100. The mobile communication module 171 may include a filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 171 may receive the electromagnetic wave signal from the antenna 11, perform filtering, amplification, and other processing on the received electromagnetic wave signal, and transmit the electromagnetic wave signal to the modem processor for demodulation. The mobile communication module 171 can also amplify the signal modulated by the modem processor, and convert the signal into an electromagnetic wave signal through the antenna 11 for radiation. In some embodiments, at least some of the functional modules of the mobile communication module 171 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 171 may be disposed in the same device as at least some of the modules of the processor 110. For example, the mobile communication module 171 may transmit and receive voice transmitted from other devices.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs sound signals through an audio device (not limited to speakers, headphones, etc.) or displays images or video through the display screen 132. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 171 or other functional modules, independent of the processor 110.
The wireless communication module 172 may provide solutions for wireless communication applied to the user terminal 100, including WLAN (e.g., Wi-Fi network), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 172 may be one or more devices integrating at least one communication processing module. The wireless communication module 172 receives an electromagnetic wave signal via the antenna 12, performs frequency modulation and filtering processing on the electromagnetic wave signal, and transmits the processed signal to the processor 110. The wireless communication module 172 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave signal via the antenna 12 to radiate the signal. In some embodiments, the terminal 100 may connect to a router through the wireless communication module 172 to access a Wi-Fi network.
In some embodiments, the antenna 11 of the terminal 100 is coupled to the mobile communication module 171 and the antenna 12 is coupled to the wireless communication module 172, so that the terminal 100 can communicate with other devices. Specifically, the mobile communication module 171 may communicate with other devices through the antenna 11, and the wireless communication module 172 may communicate with other devices through the antenna 12. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, and the like. The GNSS may include GPS, the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
In this embodiment, the terminal 100 may also be connected to the smart home device through the mobile communication module 171 or the wireless communication module 172 based on a wireless signal transmission mode. For example, the terminal 100 sends an input operation based on a wireless signal form to the smart home device through the mobile communication module 171 or the wireless communication module 172; alternatively, the terminal 100 receives status data or the like in the form of a wireless signal transmitted from the smart home device through the mobile communication module 171 or the wireless communication module 172.
It should be understood that the connection relationship between the modules illustrated in the embodiment of the present application is only an exemplary illustration, and does not limit the structure of the terminal 100. In other embodiments of the present application, the terminal 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments. It should be understood that the hardware configuration shown in fig. 3 is only one example. A terminal of an embodiment of the application may have more or fewer components than shown, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
A specific implementation of the method for selecting smart home devices provided in the embodiment of the present application is described below with reference to fig. 4 to 12.
The first embodiment:
in the first embodiment, the terminal is a mobile terminal, which may be, for example, a mobile phone, a tablet computer, or a smart wearable device. The user can walk around the rooms carrying the mobile terminal. Next, with reference to fig. 9, the method for selecting smart home devices is described in detail, taking as an example a user who, holding the mobile terminal, enters living room D, walks from living room D to master bedroom E, and issues a voice instruction "turn on the air conditioner" in master bedroom E.
As shown in fig. 4, the method for selecting smart home devices provided in the embodiments of the present application may include the following steps:
step S1000: and acquiring a distribution map of the intelligent household equipment in each room.
The room distribution map may include, for example, location information and/or room information of the plurality of smart home devices. The location information comprises the coordinates of each smart home device in the distribution map; the room information identifies the room each smart home device belongs to, for example: the intelligent air conditioner 3b is located in the master bedroom E; the intelligent air conditioner 4b is located in the child room F.
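As an illustration only (the patent does not specify a storage format), a distribution map entry combining the location and room information described above might be represented as follows; all identifiers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DeviceEntry:
    device_id: str  # e.g. "smart_air_conditioner_3b" (illustrative name)
    room: str       # room the device belongs to, e.g. "master_bedroom_E"
    x: float        # coordinates of the device in the distribution map
    y: float

distribution_map = [
    DeviceEntry("smart_air_conditioner_3b", "master_bedroom_E", 7.2, 3.5),
    DeviceEntry("smart_air_conditioner_4b", "child_room_F", 9.1, 3.4),
]

def devices_in_room(dist_map, room):
    """Return the ids of devices located in the given room."""
    return [d.device_id for d in dist_map if d.room == room]
```

Either the terminal or the smart home cloud could hold such a structure; only the room lookup is needed for the selection method below.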
The distribution map of the smart home devices in each room may be obtained in a variety of ways; for example, the user may directly upload it to the smart home APP or the smart home cloud, or it may be obtained through steps S1001 to S1006 shown in fig. 5.
Step S1001: the user opens the smart home App and starts a function of drawing a distribution diagram of the smart home in each room (for a specific implementation principle of the function, see the description after step S1005).
Step S1002: and the user selects the position of the intelligent home gateway as the initial position.
The smart home gateway may be a router and is located at door 0 of the room as shown in fig. 8. It should be clear that the user can select any location of the room as the starting location, which is not limited by the present application.
Step S1003: the user holds the terminal in hand and walks for the first circle along the room, and the terminal obtains a room layout plan according to the walking track of the user.
In some embodiments, the user carries the terminal one lap along the inner periphery of each room (i.e., the side close to the walls). In some embodiments, the user may hold the terminal in the same pose throughout the walk, e.g., held flat and pointing forward. The user's walking trajectory (the line connecting the user positions calculated by PDR) then forms the boundary line of each room.
Of course, other movable terminals (e.g., a sweeping robot) may instead travel one lap along the inner periphery of each room (i.e., the side close to the walls).
Step S1004: and the user marks the position of the intelligent household equipment in the room layout plan.
In some cases, some smart home devices, such as a smart refrigerator or a smart washing machine, have a large floor area and cannot be reached by the user's walking track; such devices may be marked inside the boundary line of the room where they are located, as close as possible to their actual positions.
Step S1005: the user marks the location of the room door in the room layout plan as a PDR beacon.
When the user carries the terminal through the rooms, the acceleration sensor records the acceleration while walking, the gyroscope sensor records the angular velocity of rotation, and the direction sensor records the walking direction. The perception service APP obtains these data through the API and uses PDR to calculate the coordinates of each of the user's steps in the room layout plan. However, PDR estimates of step length and walking direction accumulate significant error over time; for example, a step length determined from acceleration may deviate from the user's actual step, so the coordinates calculated by the perception service APP drift and the user's position becomes inaccurate. Therefore, PDR beacons are needed to correct the user's coordinates in the room layout plan. Since the coordinates of the PDR beacons in the plan are known and fixed, the user's coordinates are corrected to the beacon coordinates whenever the user passes near a beacon.
The user's direction of movement usually changes sharply when entering a room; for example, as shown in fig. 9, while walking from position X to position Y the angle of the user's movement direction near the door 4 changes significantly (by 90 degrees or more). Similarly, when the user walks from position X into the kitchen A, bathroom B, or study C, there is a pronounced angular change in the movement direction in the steps near the doors of these rooms. Therefore, the location of a room door can serve as a PDR beacon. Of course, the PDR beacon is not limited to room doors; it can be any location where the user's direction of movement changes significantly, including but not limited to a door, a wall corner of a room, a corridor corner, and the like.
In the example of the profile of smart home devices in various rooms shown in fig. 8, the PDR beacon may include: a door 1 of a kitchen A, a door 2 of a toilet B, a door 3 of a study room C, a door 0 of a living room D, a door 4 of a master bedroom E and a door 5 of a child room F.
In some embodiments, as shown in fig. 11, the user's track is corrected with a PDR beacon as follows: when the absolute values of the changes of the heading angles θn+1, θn+2, θn+3, θn+4 of the user over 4 adjacent steps (the heading angle may be the angle between the user's walking direction and the N direction in the figure) are all within 90° ± α, where α ≥ 0° is an angle error threshold, and the distances between the user positions of the 4 adjacent steps (Sn+1, Sn+2, Sn+3, Sn+4) and the position of the PDR beacon (the black triangle in the figure) are all smaller than a first distance, the position of the step closest to the PDR beacon among the 4 adjacent steps is corrected to the position of the PDR beacon. Of course, the heading angle may also be measured against other directions, and other algorithms may be used to correct the user's trajectory, which is not limited in this application.
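The beacon correction rule just described can be sketched as follows. This is a minimal illustration under our own assumptions: the heading-angle changes of the 4 adjacent steps are supplied directly, and the parameter names (`alpha`, `first_distance`) are ours, not the patent's:

```python
import math

def correct_with_beacon(steps, heading_changes, beacon, alpha=15.0, first_distance=1.0):
    """Snap the step nearest the beacon onto the beacon when both conditions hold:
    every heading-angle change is within 90 deg +/- alpha, and every step position
    lies within first_distance of the beacon. Otherwise return the steps unchanged."""
    if not all(90.0 - alpha <= abs(c) <= 90.0 + alpha for c in heading_changes):
        return list(steps)
    dists = [math.hypot(x - beacon[0], y - beacon[1]) for x, y in steps]
    if not all(d < first_distance for d in dists):
        return list(steps)
    corrected = list(steps)
    corrected[dists.index(min(dists))] = beacon  # snap the closest step to the beacon
    return corrected
```

The snap resets accumulated PDR drift each time the user turns through a door, which is what makes doors effective beacons.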
In some embodiments, when the area of the user's rooms is small, the effect of inertial sensor error on the calculated position is insignificant, and the PDR beacons may be omitted.
Step S1006: the user holds the terminal by hand to walk along the room for the second week, and the terminal corrects the room layout plan according to the track of the first week and the track of the second week of walking of the user to obtain the distribution map of the intelligent household equipment in each room.
In some embodiments, the walking direction of the second lap may be opposite to that of the first lap: if the user walked clockwise on the first lap, the user may walk counter-clockwise on the second lap.
In some embodiments, when the user's home has multiple floors, the distribution maps of the smart home devices in the rooms of the other floors may also be obtained according to steps S1001 to S1006. The user's altitude can be determined from the air pressure measured by the air pressure sensor and the known relationship between air pressure and altitude; combining this with the floor height then gives the floor the user is on, or whether that floor has changed.
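As a sketch of the pressure-to-floor idea, one could use the international barometric formula; the constants and the 3 m floor height below are common illustrative values, not specified by the patent:

```python
def altitude_m(pressure_hpa, p0_hpa=1013.25):
    """International barometric formula: altitude (m) from pressure (hPa)."""
    return 44330.0 * (1.0 - (pressure_hpa / p0_hpa) ** (1.0 / 5.255))

def floors_moved(p_ref_hpa, p_now_hpa, floor_height_m=3.0):
    """Floors moved since the reference reading; positive means the user went up
    (pressure dropped), negative means the user went down (pressure rose)."""
    dh = altitude_m(p_now_hpa) - altitude_m(p_ref_hpa)
    return round(dh / floor_height_m)
```

Near sea level a 3 m floor corresponds to a pressure change of roughly 0.36 hPa, well within the resolution of typical phone barometers.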
In some embodiments, as shown in fig. 8, the obtained distribution map includes: kitchen A, toilet B, study C, living room D, master bedroom E, and child room F. The smart home devices of kitchen A comprise the intelligent lamp 1a and the intelligent refrigerator 1d; the toilet B comprises the intelligent lamp 2a; the study C comprises the intelligent lamp 3a and the intelligent air conditioner 1b; the living room D comprises the intelligent television 1c, the intelligent air conditioner 2b, the intelligent lamp 4a, the intelligent door lock 1e, the smart home gateway 1f, and the intelligent security monitor 1g; the master bedroom E comprises the intelligent television 2c, the intelligent air conditioner 3b, and the intelligent lamp 5a; the child room F comprises the intelligent lamp 6a and the intelligent air conditioner 4b.
In some embodiments, the profile may be stored on the terminal, or the profile may be uploaded to the smart home cloud, which is not limited in this application.
Step S2000: and the sensing service APP determines the room where the terminal is located according to the distribution graph and the walking information of the user.
As shown in fig. 6, step S2000 may include the following sub-steps:
step S2001: the sensing service APP obtains a distribution graph of the intelligent household equipment of the user in the room from the intelligent household cloud.
After the perception service APP accesses the smart home gateway through Wi-Fi, it calls the function of the smart home APP through the API, and the smart home APP queries the smart home cloud for the distribution map of the user's smart home devices in each room. Of course, the perception service APP may also directly read, through the API, a distribution map stored on the terminal.
The perception service APP can be system-resident software. To reduce its load on the terminal's processor, the perception service APP may start the PDR function only after connecting to the smart home gateway through Wi-Fi, and close the PDR function after that connection is disconnected.
Step S2002: and the perception service APP determines the room where the terminal is located by utilizing a PDR technology according to the distribution graph and the walking information of the user.
After the perception service APP accesses the smart home gateway through Wi-Fi, it calls the functions of the inertial sensor, the direction sensor, and/or the air pressure sensor 144 through the API to obtain the user's walking acceleration, angular velocity of rotation, walking direction, and/or the air pressure; obtains the distribution map stored on the smart home cloud or in the smart home APP through the API; and determines the room where the user is located using the PDR technique.
In some embodiments, the inertial sensors may include an acceleration sensor and a gyroscope sensor. The acceleration sensor measures the user's acceleration while walking; the gyroscope sensor measures the angular velocity of rotation; the direction sensor obtains the walking direction. Optionally, the perception service APP determines from the acceleration whether the user has taken a step and then calculates the user's step length d; calculates the deviation angle between the walking directions of two adjacent steps from the angular velocity, and optionally predicts the direction of the next step from this deviation angle; and calculates the heading angle θ of each step from the walking direction and/or the deviation angle. From the air pressure sensor 144, a decreasing pressure indicates the terminal is going upstairs and an increasing pressure indicates it is going downstairs, which is used to determine the floor where the terminal is located.
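As one illustration of the step-detection idea (determining from the acceleration whether the user has taken a step), a deliberately simplified peak detector over the acceleration magnitude might look as follows; the threshold and the peak rule are our illustrative assumptions, not the patent's algorithm:

```python
def count_steps(accel_magnitude):
    """Count steps as local peaks of the total acceleration magnitude (m/s^2)
    above a threshold slightly over gravity; a deliberately simple detector."""
    threshold = 10.8  # just above 9.8 m/s^2 gravity baseline (illustrative)
    steps = 0
    for i in range(1, len(accel_magnitude) - 1):
        prev, cur, nxt = accel_magnitude[i - 1], accel_magnitude[i], accel_magnitude[i + 1]
        if cur > threshold and cur > prev and cur >= nxt:
            steps += 1
    return steps
```

Real pedometers add low-pass filtering and minimum inter-step intervals, but the peak-over-baseline idea is the same.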
The position (coordinates) of the user in the distribution map is calculated according to formula (1), where (E_0, N_0) are the coordinates of the user's initial position in the distribution map (E_0 in the E direction, N_0 in the N direction), n denotes the n-th step of the user's walk, d_n is the step length of the n-th step, θ_n is the heading angle of the n-th step, and E_k and N_k are the coordinates in the E and N directions after the user has walked k steps:

E_k = E_0 + Σ_{n=1}^{k} d_n · sin θ_n
N_k = N_0 + Σ_{n=1}^{k} d_n · cos θ_n        (1)
The above step S2002 is only an example. Note that the walking data obtained may differ depending on the types of sensors mounted on the terminal; the application does not limit the sensor types, as long as the user's position can be calculated from the sensor data according to the principle of formula (1).
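The accumulation in formula (1) can be sketched in a few lines of Python; the function name and the degree-valued heading input are our own choices, as the patent does not prescribe an implementation:

```python
import math

def pdr_position(e0, n0, steps):
    """Accumulate formula (1): each step contributes its length projected onto
    the E axis (d * sin(theta)) and the N axis (d * cos(theta)), with the
    heading angle theta measured from the N direction.
    steps: iterable of (step_length_m, heading_angle_degrees)."""
    e, n = e0, n0
    for d, theta_deg in steps:
        theta = math.radians(theta_deg)
        e += d * math.sin(theta)
        n += d * math.cos(theta)
    return e, n
```

A heading of 0° (due N) moves only the N coordinate; 90° moves only the E coordinate, matching the axis convention of the distribution maps.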
Fig. 12 is a schematic diagram of calculating the user's position in the distribution map of fig. 9 by PDR according to an embodiment of the present application. As shown in fig. 12, suppose the user was not yet at home (had not passed through the home door 0) and the terminal is connected to the smart home gateway through Wi-Fi. The terminal enters the smart home environment carried by the user; when the user opens the home door 0, the direction sensor detects the angle change as door 0 opens, indicating that the user has opened and passed through door 0. At this time, the coordinates of door 0 in the distribution map are taken as the initial position S0 (the coordinates of door 0 are known from the distribution map obtained in steps S1001-S1006).
Based on the coordinates (E_0, N_0) of the initial position S0, the step length d_n of each of the user's steps (for clarity, the step lengths are not labeled in the figure; they are the line segments S0-S1, S1-S2, ...), and the heading angle θ_n of each step, the projections of each step length onto the N and E directions are accumulated from the initial position, giving the user's coordinates in the distribution map after walking k steps. The step length can be calculated from the relationship between the acceleration obtained by the acceleration sensor and time; the heading angle can be calculated from the relationship between the angular velocity obtained by the gyroscope sensor and time, and/or from the angle change of two adjacent steps detected by the direction sensor. The N direction and the E direction are the directions of the coordinate axes (N axis and E axis) marked on the user's room plan (the distribution map of the smart home devices in each room). Note that, for convenience of description, the distribution maps shown in figs. 8-9 and fig. 12 use the mutually perpendicular N and E directions as the reference coordinate axes. In fact, the reference axes may differ by room and user preference; they may be any two mutually perpendicular directions among N, S, W, E, or other directions, which is not limited in the present application.
In some embodiments, the user may not perform the action of opening door 0; in that case, when the terminal connects to the smart home gateway through Wi-Fi, the perception service APP determines an approximate location from the Wi-Fi intensity detected by the terminal. Starting from the coordinates of this approximate location, the perception service APP determines the user's position with the PDR technique, using the walking data obtained by the inertial sensors and/or the change in Wi-Fi intensity. When the user's position approaches door 0 and the distance to door 0 is less than the first distance, the user's position is corrected to the position of door 0 (refer to step S1005).
In some embodiments, the user may already be at home (having passed through the home door 0) without the terminal being connected to the smart home gateway through Wi-Fi. For example, as shown in fig. 10, the user enters the home and walks to the master bedroom E while the perception service APP is not yet connected to the smart home gateway; the perception service APP then cannot obtain the user's current position and can only determine an approximate location from the Wi-Fi intensity detected by the terminal. As the user leaves the master bedroom, passes through door 4, and enters the living room, the perception service APP determines the user's position with the PDR technique based on the coordinates of this approximate location, the walking data obtained by the inertial sensors, and/or the change in Wi-Fi intensity. When the user passes near door 4 and the distance to door 4 is less than the first distance, the coordinates of door 4 in the distribution map are taken as the initial position. In the same way as the embodiment of fig. 12, the projections of each step length onto the N and E directions are accumulated from the initial position using the step lengths and heading angles, giving the user's coordinates in the distribution map after walking k steps.
In the embodiments provided herein, the user's position within a room need not be determined precisely; steps S1000-S2000 and their sub-steps only need to detect the movement of the terminal (user) from one room to another.
In some embodiments, when there are multiple floors in the home of the user, the floor where the user is located may be determined according to the air pressure obtained by the air pressure sensor 144, and the room where the user is located is determined in the same method as that in step S2002 according to the distribution map of the smart home in each room corresponding to the floor where the user is located.
Step S3000: and determining the intelligent household equipment which is intended to be controlled by the user according to the voice instruction of the user and the room where the terminal is located, wherein the household equipment is located in the room.
Step S3000 may include, for example, the following sub-steps, as shown in figure 7,
step S3001: the user says "turn on the air conditioner" to the voice assistant APP on the terminal.
As shown in fig. 9, the user walks from the door 0 position to the position X of the living room D, then walks from the living room D to the position Y of the main bed E, and utters the voice "turn on the air conditioner" at the position Y.
Step S3002: the voice assistant APP converts the voice content 'on air conditioner' into text content.
In some embodiments, the voice content of the user is acquired by using a radio reception function provided by a hardware microphone on the terminal, and the voice assistant APP may perform voice recognition on the voice content of the user by using ASR, convert the original voice content into text content, and send the text content to the voice assistant cloud.
Step S3003: and the voice assistant cloud carries out semantic analysis on the text content to obtain the intention and the slot position of the user.
The voice assistant cloud can perform semantic analysis on the text of the user's voice using NLU to obtain the user's intention and slots. Here the user's intention is: turn on the air conditioner; and the slot lacks the information of the room where the terminal is located.
In some embodiments, semantic analysis may also be performed on the terminal to obtain the user's intention and slots. For example, if the voice assistant APP has semantic analysis capability, then after obtaining the intention and slots and finding that the slot lacks the information of the room where the terminal is located, it calls the function of the perception service APP through the API to obtain the room where the terminal is located (refer to steps S2001 and S2002), and then performs step S3006.
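As an illustration of the intention/slot result and the room-slot fill-in described above, a hypothetical sketch follows; the field names are ours, and real NLU output formats differ:

```python
# Hypothetical NLU output for "turn on the air conditioner":
nlu_result = {
    "intent": "turn_on",
    "slots": {"device_type": "air_conditioner", "room": None},  # room slot absent
}

def fill_missing_room(result, terminal_room):
    """Fill the absent room slot with the room the terminal is currently in."""
    if result["slots"].get("room") is None:
        result["slots"]["room"] = terminal_room
    return result
```

With the slot filled, the downstream device screening no longer needs a follow-up question to the user.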
Step S3004: and the voice assistant cloud issues an instruction for collecting the information of the room where the terminal is located to the voice assistant APP.
Step S3005: and the voice assistant APP calls the functions of the perception service APP through the API, and the room where the terminal is located is determined.
In step S3005, the voice assistant APP calls the function of the sensing service APP according to the instruction from the voice assistant cloud. The awareness service APP determines the room where the terminal is located through steps S2001-S2002.
Step S3006: and the voice assistant APP feeds back the information of the room where the terminal is located to the voice assistant cloud.
Step S3007: the voice assistant cloud sends the intention, the slot position and the room where the terminal is located of the user to the intelligent home cloud.
Step S3008: the intelligent home cloud determines a list of rooms where the designated intelligent home devices are located according to the intention and the slot positions of the user, and screens out the unique intelligent home devices according to the rooms where the terminals are located.
In some embodiments, under the condition that the smart home cloud does not store the distribution map of the user, the smart home cloud may screen out the unique smart home device according to the information of the room to which the plurality of smart home devices belong and according to the room in which the terminal is located.
In some embodiments, as shown in fig. 9, when the user enters the master bedroom E from the living room D and then issues the instruction to turn on the air conditioner, the list of rooms of matching smart home devices obtained in step S3008 includes: the intelligent air conditioner 2b in the living room D, the intelligent air conditioner 4b in the child room F, the intelligent air conditioner 3b in the master bedroom E, and the intelligent air conditioner 1b in the study C; since the terminal is in the master bedroom E, the intelligent air conditioner 3b of the master bedroom E is screened out as the unique controlled smart home device.
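The screening in step S3008 can be sketched as a simple filter over the candidate list; the device and room identifiers below are illustrative:

```python
def screen_unique_device(candidates, terminal_room):
    """candidates: (device_id, room) pairs matching the user's intent.
    Returns the ids of candidates in the terminal's room; exactly one match
    means that device is controlled, more than one needs disambiguation."""
    return [dev for dev, room in candidates if room == terminal_room]

candidates = [
    ("smart_air_conditioner_2b", "living_room_D"),
    ("smart_air_conditioner_4b", "child_room_F"),
    ("smart_air_conditioner_3b", "master_bedroom_E"),
    ("smart_air_conditioner_1b", "study_C"),
]
```

When the filter returns more than one id (several air conditioners in one room), the flow falls back to the disambiguation round described below.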
Step S3009: and the intelligent home cloud finds the control instruction of the controlled intelligent home equipment according to the unique intelligent home equipment, and sends the control instruction to the intelligent home equipment.
In some embodiments, the smart home cloud finds the turn-on control instruction for the intelligent air conditioner 3b of the master bedroom E.
In some embodiments, if multiple smart home devices of the same type exist in the room where the terminal is located, e.g., several air conditioners, the smart home cloud feeds back the multiple identical candidate devices in that room to the voice assistant cloud, and the voice assistant cloud feeds back an instruction to the voice assistant APP asking the user to specify the controlled device, starting a new round of voice interaction until the user indicates a unique device.
In some embodiments, if a plurality of smart home devices of the same type exist in a room where the terminal is located, the smart home cloud may also send corresponding control instructions to the plurality of smart home devices of the same type, or the smart home cloud sends control instructions to the smart home devices that the user controls last time, which is not limited in this application.
Step S4001: and the intelligent household equipment feeds back the execution result of the instruction to the intelligent household cloud.
Step S4002: and the intelligent home cloud feeds back the execution result of the instruction to the voice assistant cloud.
Step S4003: the voice assistant cloud can construct screen display contents and broadcast sentences according to the instruction execution result, and send the screen display contents and the broadcast sentences to the voice assistant APP.
Step S4004: and displaying the voice assistant APP of the mobile phone according to the constructed screen display content: "opened for you", carry out the voice broadcast according to broadcasting the statement simultaneously: "has been opened for you".
The second embodiment:
the second embodiment is different from the first embodiment in that the terminal is one that is not moved frequently. Such infrequently-moved terminals may be smart televisions, smart large screens, smart speakers with screens, smart speakers without screens, and the like.
These infrequently-moved terminals are generally not portable and have no inertial sensors; they are not carried by the user, so the user's position in the room cannot be calculated by the PDR technique. When the user issues a voice instruction to such a terminal, the smart home cloud can determine the controlled smart home device matching the voice instruction from the smart home devices in the room where the infrequently-moved terminal is located.
When the distribution map of the smart home devices in each room is drawn, these infrequently-moved terminals are marked in their corresponding rooms. Thus, when the user initiates a voice instruction through the voice assistant APP on such a terminal and the voice assistant cloud determines the user's intention and slots, if the room slot is absent from the voice instruction, the room where the infrequently-moved terminal is located can be determined directly from the distribution map on the smart home cloud, and the smart home device the user intends to control is determined accordingly.
Next, referring to fig. 8, the method for selecting smart home devices in this embodiment is described in detail, taking as an example the infrequently-moved terminal being the smart television 1c located in the living room D, with the user issuing the voice command "turn on the air conditioner" in the living room D.
As in the first embodiment, the user may obtain the distribution map of the smart home devices in each room through steps S1001 to S1006. Since the position of the smart TV 1c was already marked in its room in step S1004, when the user initiates a voice instruction through the voice assistant APP on the smart TV 1c, the voice assistant APP and/or the voice assistant cloud determine the user's intention and slots by performing steps S3002-S3003. If the room slot is absent from the voice instruction, the function of the perception service APP can be called through the API; the perception service APP determines the room where the smart TV 1c is located (i.e., the living room D) directly from the distribution map as in step S2001, and steps S3006-S4004 are then performed, so that the intelligent air conditioner 2b located in the living room D together with the smart TV 1c is turned on.
Each embodiment and each step in each embodiment in the present application may be used in combination with each other, or may be used separately, and each step may be executed according to the same order as or a different order from that in the embodiment of the present application, so as to achieve different technical effects.
In the embodiments provided in the present application, the method provided in the embodiments of the present application is introduced from the perspective of using the electronic device as an execution subject. In order to implement the functions in the methods provided by the embodiments of the present application, the electronic device may include a hardware structure and/or a software module, and the functions are implemented in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether any of the above-described functions is implemented as a hardware structure, a software module, or a hardware structure plus a software module depends upon the particular application and design constraints imposed on the technical solution.
Based on the above embodiments, an embodiment of the application provides an electronic device for implementing the smart home device selection method in the above figures. Referring to fig. 13, the electronic device 1500 may include one or more processors 1510, one or more memories (not shown in fig. 13) storing one or more computer programs, a display 1520, an inertial sensor 1530, a speaker 153, a microphone 154, and a transceiver 1550, the one or more computer programs comprising instructions. The display 1520 displays a user interface; the speaker 153 may be used to broadcast voice; the microphone 154 may be used to obtain the user's voice instruction; the inertial sensor 1530 collects the walking information of the terminal in a natural coordinate system; and the transceiver 1550 receives data from the cloud and sends data to the cloud. When the instructions are invoked for execution by the one or more processors 1510, they cause the terminal to perform the method embodiments described above in figs. 4-7. In the embodiments of the present application, the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor or any conventional processor. The steps of a method disclosed in connection with the embodiments of the present application may be implemented directly by a hardware processor, or by a combination of hardware and software modules in a processor.
In the embodiment of the present application, the memory may be a nonvolatile memory, such as a Hard Disk Drive (HDD) or a solid-state drive (SSD), and may also be a volatile memory, for example, a random-access memory (RAM). The memory is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to such. The memory in the embodiments of the present application may also be circuitry or any other device capable of performing a storage function for storing program instructions and/or data.
It should be understood that the electronic device may be used to implement the methods shown in fig. 4 to 7 according to the embodiments of the present application, and reference may be made to the above for related features, which are not described herein again.
Based on the above embodiments, the present application also provides a computer storage medium in which a computer program is stored; when the computer program is executed by a computer, it causes the computer to execute the method embodiments shown in figs. 4 to 7. The embodiments of the present application also provide a computer-readable storage medium or a computer non-volatile readable storage medium, on which a computer program is stored, which when executed by a processor is configured to perform the smart home device selection method, the method comprising at least one of the solutions described in the above embodiments.
The computer storage media of the embodiments of the present application may take any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Also provided in embodiments of the present application is a computer program product comprising instructions which, when executed on a computer, cause the computer to perform the various method embodiments illustrated in fig. 4-7 above.
In the embodiments provided in the present application, the method is described from the perspective of an electronic device as the execution subject. To implement the functions in the method provided by the embodiments of the present application, the electronic device may include a hardware structure and/or a software module, so that the functions are implemented as a hardware structure, a software module, or a combination of the two. Whether a given function is implemented as a hardware structure, a software module, or a combination of both depends on the particular application and the design constraints imposed on the technical solution.
The processors referred to in the various embodiments above may be general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present application may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in a Random Access Memory (RAM), a flash memory, a read-only memory (ROM), a programmable ROM, an electrically erasable programmable memory, a register, or other storage media that are well known in the art. The storage medium is located in a memory, and a processor reads instructions in the memory and combines hardware thereof to complete the steps of the method.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The elements described as separate parts may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solutions of the present application, in essence, or the part contributing to the prior art, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disk.
The foregoing is merely a description of embodiments of the present application. Those skilled in the art will appreciate that the present application is not limited to the particular embodiments described herein, and that various obvious modifications, rearrangements, and substitutions can be made without departing from the scope of the application. Therefore, although the present application has been described in detail with reference to the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from its spirit.

Claims (17)

1. A method for selecting intelligent household equipment, characterized in that the method is applied to an intelligent home system, the intelligent home system comprises an intelligent home cloud, a plurality of intelligent home devices and a terminal, at least some of the intelligent home devices are located in different rooms, and the intelligent home cloud is in communication connection with the intelligent home devices and the terminal;
the method comprises the following steps:
the terminal determines a current room of the terminal according to a room distribution map by using a Pedestrian Dead Reckoning (PDR) technology, wherein the room distribution map comprises the position information and/or the room information of the intelligent household equipment;
the intelligent home cloud determines the controlled intelligent home equipment according to the current room of the terminal and the user intention, wherein the user intention is obtained based on a user voice instruction, the room distribution map is stored in the intelligent home cloud, or the room information of the intelligent home equipment is stored in the intelligent home cloud.
2. The method according to claim 1, wherein the intelligent home cloud determines the controlled intelligent home device according to a room where the terminal is currently located and a user intention, and specifically comprises:
the intelligent home cloud determines an intelligent home device list according to the user intention, and determines the controlled intelligent home devices from the intelligent home device list according to the current room where the terminal is located, wherein the intelligent home device list comprises intelligent home devices located in different rooms.
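The two-step selection in claims 1 and 2 — build a candidate list from the user intent, then narrow it by the terminal's current room — can be sketched as follows. This is an illustrative sketch only; the device records, intent fields, and room names are hypothetical, as the claims do not prescribe any data structures:

```python
# Hypothetical sketch of the cloud-side device selection (claims 1-2).
# Device records, intents, and room names are assumed for illustration.

def select_controlled_device(devices, intent, current_room):
    """Pick the device matching the user's intent in the terminal's current room.

    devices: list of dicts like {"id": ..., "type": ..., "room": ...}
    intent:  parsed user intent, e.g. {"action": "turn_on", "device_type": "light"}
    current_room: room the terminal is currently located in
    """
    # Step 1 (claim 2): determine the device list according to the user intent.
    candidates = [d for d in devices if d["type"] == intent["device_type"]]
    # Step 2 (claim 1): determine the controlled device from that list
    # according to the room the terminal is currently in.
    in_room = [d for d in candidates if d["room"] == current_room]
    return in_room[0] if in_room else None

devices = [
    {"id": "lamp-1", "type": "light", "room": "bedroom"},
    {"id": "lamp-2", "type": "light", "room": "living_room"},
]
intent = {"action": "turn_on", "device_type": "light"}
print(select_controlled_device(devices, intent, "living_room")["id"])  # lamp-2
```

When two lights sit in different rooms, the room filter resolves the ambiguity that the voice instruction alone cannot.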
3. The method according to claim 1 or 2, wherein the terminal determines the current room of the terminal according to a room distribution map by using a PDR technique, specifically comprising:
the terminal acquires acceleration information acquired by an acceleration sensor, angular velocity information acquired by a gyroscope sensor, direction information acquired by a direction sensor and/or air pressure information acquired by an air pressure sensor;
the terminal calculates the position of the terminal by using the PDR technology according to the acceleration information, the angular velocity information, the direction information and/or the air pressure information;
and the terminal determines the current room of the terminal according to the terminal position and the room distribution map.
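The dead-reckoning step of claim 3 can be illustrated with a minimal position update: each detected step advances the estimated position by one stride length along the current heading. Step detection from raw accelerometer samples and heading fusion from the gyroscope/direction sensor are simplified away here; the stride length and headings are assumed values, not taken from the claims:

```python
import math

# Minimal PDR position update: advance a 2-D position in the room-map frame
# by one stride length along the current heading. Real step detection and
# heading estimation from the sensors listed in claim 3 are assumed done.

def pdr_update(position, step_length, heading_rad):
    """Dead-reckon the next position from the current one.

    position: (x, y) in metres in the room-map frame
    step_length: estimated stride length in metres
    heading_rad: walking direction in radians (0 = +x axis)
    """
    x, y = position
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))

pos = (0.0, 0.0)
for heading in [0.0, 0.0, math.pi / 2]:  # two steps east, one step north
    pos = pdr_update(pos, 0.7, heading)
print(pos)  # approximately (1.4, 0.7), up to floating-point rounding
```

The resulting position is then looked up in the room distribution map to decide which room the terminal is in.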
4. The method of claim 3, wherein the room profile further comprises a PDR beacon, and wherein before the terminal determines the room in which the terminal is currently located according to the terminal location and the room profile, the method further comprises:
and the terminal corrects the position of the terminal according to the PDR beacon.
5. The method of claim 4, wherein the PDR beacons are labeled by a user in the room profile, the PDR beacons including doors, wall corners of a room, and/or corridors.
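A hypothetical sketch of the beacon correction in claims 4 and 5: when the dead-reckoned position passes close to a user-labelled PDR beacon (a door, wall corner, or corridor), the position is snapped to the beacon to cancel accumulated drift. The snap radius and beacon records are assumptions for illustration, not specified by the claims:

```python
# Illustrative beacon correction (claims 4-5): snap the dead-reckoned
# position to the nearest user-labelled beacon when it is close enough.
# snap_radius and the beacon record format are assumed values.

def correct_with_beacons(position, beacons, snap_radius=0.5):
    """Return the nearest beacon position if within snap_radius, else the position unchanged."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    nearest = min(beacons, key=lambda b: dist2(position, b["pos"]))
    if dist2(position, nearest["pos"]) <= snap_radius ** 2:
        return nearest["pos"]
    return position

beacons = [{"name": "bedroom_door", "pos": (3.0, 1.0)}]
print(correct_with_beacons((3.2, 1.1), beacons))  # (3.0, 1.0) — snapped to the door
print(correct_with_beacons((6.0, 6.0), beacons))  # (6.0, 6.0) — too far, unchanged
```

Because PDR error grows with every step, periodically re-anchoring at known landmarks keeps the room estimate usable over long walks.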
6. The method according to any one of claims 1-5, wherein the room distribution map is drawn using the PDR technique by a user walking through the rooms while carrying the terminal;
the position information and/or the room information of the intelligent household devices are obtained by the user labeling the room distribution map.
7. The method according to any one of claims 1-6, further comprising:
the intelligent home cloud determines a control instruction according to the controlled intelligent home equipment and sends the control instruction to the controlled intelligent home equipment, wherein the control instruction is used for controlling the controlled intelligent home equipment.
8. A method for determining a room in which a terminal is located, comprising:
the terminal determines a current room of the terminal according to a room distribution map by using a Pedestrian Dead Reckoning (PDR) technology, wherein the room distribution map comprises the position information and/or the room information of a plurality of intelligent household devices;
and the terminal sends the information of the current room of the terminal.
9. The method according to claim 8, wherein the determining, by the terminal, the current room of the terminal according to the room distribution map by using the PDR technique specifically includes:
the terminal acquires acceleration information acquired by an acceleration sensor, angular velocity information acquired by a gyroscope sensor, direction information acquired by a direction sensor and/or air pressure information acquired by an air pressure sensor;
the terminal calculates the position of the terminal in the room distribution map by using the PDR technology according to the acceleration information, the angular velocity information, the direction information and/or the air pressure information;
and the terminal determines the current room of the terminal according to the position of the terminal in the room distribution diagram and the room distribution diagram.
10. The method of claim 9, wherein the room profile further comprises a PDR beacon, and wherein before the terminal determines the room in which the terminal is currently located according to the terminal's position in the room profile and the room profile, the method further comprises:
and the terminal corrects the position of the terminal according to the PDR beacon.
11. The method of claim 10, wherein the PDR beacons are labeled by a user in the room profile, the PDR beacons including doors, wall corners of a room, and/or corridors.
12. The method according to any one of claims 8-11, wherein the room distribution map is drawn using the PDR technique by a user walking through the rooms while carrying the terminal;
the position information and/or the room information of the intelligent household devices are obtained by the user labeling the room distribution map.
13. A method for selecting intelligent household equipment is characterized in that the method is applied to an intelligent household system, the intelligent household system comprises an intelligent household cloud and a plurality of intelligent household equipment, at least part of the intelligent household equipment in the intelligent household equipment is located in different rooms, the intelligent household cloud is connected with the intelligent household equipment for communication,
the method comprises the following steps:
the intelligent home cloud acquires information of a room where the terminal is located currently, wherein the information of the room where the terminal is located currently is determined by the terminal through a Pedestrian Dead Reckoning (PDR) technology according to a room distribution map, and the room distribution map comprises location information of a plurality of intelligent home devices and/or room information of the intelligent home devices;
the intelligent home cloud determines the controlled intelligent home equipment according to the current room of the terminal and the user intention, wherein the user intention is obtained based on a user voice instruction, the room distribution map is stored in the intelligent home cloud, or the room information of the intelligent home equipment is stored in the intelligent home cloud.
14. The method of claim 13, further comprising:
and the intelligent home cloud determines a control instruction according to the controlled intelligent home equipment and sends the control instruction to the controlled intelligent home equipment, wherein the control instruction is used for controlling the controlled intelligent home equipment.
15. An intelligent home system, comprising an intelligent home cloud and a terminal, wherein the intelligent home cloud and the terminal comprise a memory and a processor, and the memory stores instructions that, when executed by the processor, cause the intelligent home cloud and the terminal to perform the method according to any one of claims 1-7.
16. A terminal, comprising: a processor, a memory, a display screen, a speaker, a microphone, an orientation sensor, a gyroscope sensor, an acceleration sensor, and a computer program, the computer program stored in the memory, the computer program comprising instructions;
the display screen is used for displaying a user interface;
the loudspeaker is used for broadcasting user voice;
the microphone is used for acquiring user voice;
the acceleration sensor is used for acquiring the movement acceleration of the terminal;
the direction sensor is used for determining the direction of the terminal;
the gyroscope sensor is used for acquiring the angular speed of the rotation of the terminal;
the instructions, when executed by the processor, cause the terminal to perform the method of any of claims 8-12.
17. A computer-readable storage medium, comprising a computer program which, when run on an electronic device, causes the electronic device to perform the method of any of claims 8-12 or causes the electronic device to perform the method of claim 13 or 14.
CN202110238952.7A 2021-03-04 2021-03-04 Intelligent household equipment selection method and terminal Pending CN115016298A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110238952.7A CN115016298A (en) 2021-03-04 2021-03-04 Intelligent household equipment selection method and terminal
PCT/CN2022/077290 WO2022183936A1 (en) 2021-03-04 2022-02-22 Smart home device selection method and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110238952.7A CN115016298A (en) 2021-03-04 2021-03-04 Intelligent household equipment selection method and terminal

Publications (1)

Publication Number Publication Date
CN115016298A true CN115016298A (en) 2022-09-06

Family

ID=83064188

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110238952.7A Pending CN115016298A (en) 2021-03-04 2021-03-04 Intelligent household equipment selection method and terminal

Country Status (2)

Country Link
CN (1) CN115016298A (en)
WO (1) WO2022183936A1 (en)

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101560470B1 (en) * 2014-01-07 2015-10-16 한국과학기술원 Smart access point apparatus and method for controlling internet of things apparatus using the smart access point apparatus
US9843987B2 (en) * 2015-06-15 2017-12-12 At&T Intellectual Property I, L.P. Consumer service cloud for implementing location-based services to control smart devices
CN106681282A (en) * 2015-11-05 2017-05-17 丰唐物联技术(深圳)有限公司 Control method and system for smart home
CN106289282A (en) * 2016-07-18 2017-01-04 北京方位捷讯科技有限公司 A kind of indoor map pedestrian's track matching method
CN110262274B (en) * 2019-07-22 2022-07-05 青岛海尔科技有限公司 Intelligent household equipment control display method and system based on Internet of things operating system
CN110456755A (en) * 2019-09-17 2019-11-15 苏州百宝箱科技有限公司 A kind of smart home long-range control method based on cloud platform
CN110738994A (en) * 2019-09-25 2020-01-31 北京爱接力科技发展有限公司 Control method, device, robot and system for smart homes
CN111174778B (en) * 2019-11-26 2022-03-01 广东小天才科技有限公司 Building entrance determining method and device based on pedestrian dead reckoning
CN111475212B (en) * 2020-04-02 2021-11-23 深圳创维-Rgb电子有限公司 Equipment driving method and device

Also Published As

Publication number Publication date
WO2022183936A1 (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN110495819B (en) Robot control method, robot, terminal, server and control system
CN106254914A (en) The operational approach of portable set, the operational approach of content reproducing device, portable set and content reproducing device
KR20160049759A (en) Method for scanning neighboring devices and electronic apparatus thereof
WO2020259542A1 (en) Control method for display apparatus, and related device
CN102724396A (en) Wireless real-time display, control and cloud storage image pickup system based on WIFI (wireless fidelity)
CN112134995A (en) Method, terminal and computer readable storage medium for searching application object
WO2022007944A1 (en) Device control method, and related apparatus
EP4216577A1 (en) Positioning method and electronic device
WO2021197354A1 (en) Device positioning method and relevant apparatus
CN107484138A (en) Micro-base station localization method and device
WO2021170129A1 (en) Pose determination method and related device
CN116074143A (en) Scene synchronization method and device, electronic equipment and readable storage medium
CN115734303A (en) Method and related device for switching network
CN116546132A (en) Network identification method, device, mobile terminal and computer readable storage medium
CN115016298A (en) Intelligent household equipment selection method and terminal
WO2022166461A1 (en) Method, apparatus and system for determining device position
WO2022037575A1 (en) Low-power consumption positioning method and related apparatus
CN115701032A (en) Device control method, electronic device, and storage medium
CN115119135A (en) Data sending method, receiving method and device
CN114691064A (en) Double-path screen projection method and electronic equipment
WO2023237061A1 (en) Network searching method, electronic device and medium
WO2022237396A1 (en) Terminal device, and positioning method and apparatus
WO2023185687A1 (en) Method for acquiring location of vehicle, and electronic device
WO2024022288A1 (en) Method for installing smart device, and electronic device
EP4369046A1 (en) Display method, electronic device, and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination