CN116794989A - User state determining method and device based on scene equipment linkage - Google Patents

User state determining method and device based on scene equipment linkage

Info

Publication number
CN116794989A
Authority
CN
China
Prior art keywords
target
user
state
scene
parameters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210256922.3A
Other languages
Chinese (zh)
Inventor
陈小平
林勇进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Linkage All Things Technology Co Ltd
Original Assignee
Guangzhou Linkage All Things Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Linkage All Things Technology Co Ltd filed Critical Guangzhou Linkage All Things Technology Co Ltd
Priority to CN202210256922.3A
Publication of CN116794989A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00: Systems controlled by a computer
    • G05B15/02: Systems controlled by a computer, electric
    • G05B19/00: Programme-control systems
    • G05B19/02: Programme-control systems, electric
    • G05B19/418: Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G05B2219/00: Program-control systems
    • G05B2219/20: Pc systems
    • G05B2219/26: Pc applications
    • G05B2219/2642: Domotique, domestic, home control, automation, smart house

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manufacturing & Machinery (AREA)
  • Quality & Reliability (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a user state determining method and device based on scene device linkage. The method comprises: detecting whether any target intelligent device in an operating state exists in a target scene and, if so, determining a time-varying parameter corresponding to each target intelligent device according to a monitoring list; determining a working area corresponding to each target intelligent device according to the time-varying parameters, and determining scene coordinates of a target user in the target scene according to all the working areas; and controlling the target user state acquisition device corresponding to the scene coordinates to acquire state parameters of the target user, and analyzing the state parameters and the time-varying parameters corresponding to the target intelligent devices based on a preset user state analysis model to obtain state information of the target user. The method and device can therefore determine the state information of the target user intelligently, improving both the efficiency of determining the user's state information and the reliability and accuracy of the result.

Description

User state determining method and device based on scene equipment linkage
Technical Field
The invention relates to the technical field of smart homes, and in particular to a user state determining method and device based on scene device linkage.
Background
With the improvement of living standards and the spread of intelligent technology, intelligent living has become a major trend, and its most common form is the smart home.
As smart homes develop, users' expectations of their intelligence rise accordingly. The main direction of this development is improving the user experience: the operation of smart home devices should adapt to the user's real-time needs. A common requirement is to reduce the interaction between the user and the smart home, that is, to reduce the number of manual operations the user must perform, while at the same time raising the service level of the smart home and meeting the user's needs. This requires the smart home to collect the user's state so that it can adjust its own operating mode according to that state.
In the prior art, however, the usual way for a smart home to determine the user's state is to collect the user's physical data through a wearable article such as a bracelet and then infer the state from the collected data. This approach is limited by the requirement that the user wear the article at all times; when the user is not wearing it, the user's state cannot be determined. How to improve the accuracy of user state determination while solving this problem is therefore particularly important.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a user state determining method and device based on scene device linkage, which can intelligently determine a user's real-time state information, improve the efficiency of determining the user's state information, and improve the reliability and accuracy of the determined state information.
To solve this technical problem, a first aspect of the invention discloses a user state determining method based on scene device linkage, the method comprising:
detecting whether at least one target intelligent device in an operating state exists in a target scene, wherein every target intelligent device is an intelligent device in a preset monitoring list, and the intelligent devices included in the monitoring list are associated with a user state acquisition device;
when at least one target intelligent device exists in the target scene, determining a time-varying parameter corresponding to each target intelligent device according to the monitoring list, wherein the time-varying parameter comprises the device type of the target intelligent device;
determining a working area corresponding to each target intelligent device according to the time-varying parameters, and determining scene coordinates of a target user in the target scene according to all the working areas; and
controlling a target user state acquisition device corresponding to the scene coordinates to acquire state parameters of the target user, and analyzing the state parameters and the time-varying parameters corresponding to the target intelligent devices based on a preset user state analysis model to obtain state information of the target user, wherein the state parameters comprise body movement amplitude and/or body temperature change data of the target user, and the state information of the target user comprises the state type of the target user.
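Purely as a reading aid (not part of the patent disclosure), the overall flow of the first aspect can be sketched in Python as follows; the device, collector and model interfaces (scene, is_running, working_area, covers, collect, analyze) are names invented for this sketch:

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TimeVaryingParam:
    device_type: str      # e.g. "living_room"
    startup_time: float   # epoch seconds at which the device entered the running state

def determine_user_state(scene, monitoring_list, collectors, state_model) -> Optional[dict]:
    """Locate the user through the devices that are already running, then collect
    and analyze the user's state parameters (first-aspect flow)."""
    # Step 1: target smart devices = devices on the monitoring list that are running in the scene.
    running = [d for d in monitoring_list if d.scene == scene and d.is_running()]
    if not running:
        return None                                   # nothing to link against

    # Step 2: time-varying parameter of each running device (device type + start-up time).
    params = [TimeVaryingParam(d.device_type, d.startup_time) for d in running]

    # Step 3: working area per device, reduced to the user's scene coordinates
    # (here simply the centre of the overlap of all rectangular working areas).
    areas: List[Tuple[float, float, float, float]] = [d.working_area() for d in running]
    x0 = max(a[0] for a in areas); y0 = max(a[1] for a in areas)
    x1 = min(a[2] for a in areas); y1 = min(a[3] for a in areas)
    if x0 > x1 or y0 > y1:
        return None                                   # areas do not overlap: user not localized
    coords = ((x0 + x1) / 2, (y0 + y1) / 2)

    # Step 4: the collector whose coverage contains the coordinates gathers the state
    # parameters (body movement amplitude and/or body temperature change data).
    collector = next((c for c in collectors if c.covers(coords)), None)
    if collector is None:
        return None
    state_params = collector.collect(coords)
    return state_model.analyze(state_params, params)  # -> state information incl. state type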
As an alternative embodiment, in the first aspect of the present invention, the method further includes:
determining a target scene template from a plurality of predetermined scene templates according to the time-varying parameters, wherein the target scene template is the scene template whose preset standard time-varying parameters match the time-varying parameters with a matching degree greater than a preset matching-degree threshold;
determining a first state corresponding to the target user according to the target scene template, wherein the first state is the user state preset in the target scene template;
and the analyzing of the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain the state information corresponding to the target user comprises:
analyzing the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain a temporary state corresponding to the target user;
analyzing the state level of the temporary state and the state level of the first state based on the user state analysis model to obtain an analysis result, and determining the state with the higher state level in the analysis result as the state information corresponding to the target user;
wherein, when the state level of the temporary state is less than or equal to the state level of the first state, the state information corresponding to the target user is the first state.
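Purely as an illustration (the names match_degree and state_level are assumptions of this sketch, not terms defined by the patent), the template matching and state-level comparison could look like:

def pick_scene_template(time_varying_params, templates, threshold, match_degree):
    """Return the template whose standard time-varying parameters best match the
    observed ones, provided the matching degree exceeds the threshold."""
    best, best_score = None, threshold
    for tpl in templates:
        score = match_degree(time_varying_params, tpl.standard_params)
        if score > best_score:
            best, best_score = tpl, score
    return best  # None if no template clears the threshold

def resolve_state(temporary_state, first_state, state_level):
    """Keep whichever state has the higher level; ties go to the first (template) state."""
    if state_level(temporary_state) > state_level(first_state):
        return temporary_state
    return first_state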
As an optional implementation manner, in the first aspect of the present invention, before determining the working area corresponding to each target intelligent device according to the time-varying parameters, the method further includes:
judging whether a voice control instruction for setting a target scene has been received, and, when it is judged that the voice control instruction has not been received, executing the operation of determining the working area corresponding to each target intelligent device according to the time-varying parameters;
when it is judged that the voice control instruction has been received, parsing the voice control instruction to obtain a target field for setting the target scene;
judging whether the target field includes a user state field representing the current state of the user, and, when the target field includes the user state field, determining the user state corresponding to the user state field as the state information corresponding to the target user;
when it is judged that the target field does not include the user state field, judging, during execution of the scene setting operation matched with the target field, whether a post-level instruction associated with the scene setting operation includes the user state field, the post-level instruction being an instruction whose execution order comes after the scene setting operation;
when it is judged that no post-level instruction associated with the scene setting operation includes the user state field during execution of the scene setting operation matched with the target field, executing the operation of determining the working area corresponding to each target intelligent device according to the time-varying parameters;
and when it is judged that a post-level instruction associated with the scene setting operation includes the user state field during execution of the scene setting operation matched with the target field, determining the user state field included in the post-level instruction as the state information corresponding to the target user.
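A hedged sketch of this branch (the field names and the parse_voice_command helper are invented here for illustration; they are not defined by the patent):

def state_from_voice_or_linkage(voice_command, parse_voice_command, later_instructions,
                                determine_by_linkage):
    """Prefer a user state explicitly carried by a voice scene-setting command;
    otherwise fall back to the device-linkage determination."""
    if voice_command is None:                           # no voice control instruction received
        return determine_by_linkage()

    target_fields = parse_voice_command(voice_command)  # fields that set the target scene
    if "user_state" in target_fields:                   # the command states the user's state directly
        return target_fields["user_state"]

    # Scene setting runs; meanwhile inspect the instructions scheduled after it.
    for instr in later_instructions:                    # post-level instructions
        if "user_state" in instr:
            return instr["user_state"]
    return determine_by_linkage()                       # nothing explicit: use the linkage method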
As an optional implementation manner, in the first aspect of the present invention, the time-varying parameter further includes the start-up time of each target intelligent device; and determining the working area corresponding to each target intelligent device according to the time-varying parameters and determining the scene coordinates of the target user in the target scene according to all the working areas comprises:
determining the working area and the running duration corresponding to each target intelligent device according to the device type and the start-up time of the target intelligent device included in the time-varying parameters;
screening out, from all the running durations, the target running durations that are shorter than a preset duration threshold to obtain the first intelligent devices corresponding to the target running durations;
determining the working area corresponding to each first intelligent device as a scanning area, and controlling the user state acquisition device corresponding to the scanning area to scan the scanning area to obtain a scanning result, wherein the scanning area is an area in which the probability that the target user is present is greater than a preset probability threshold; and
when the scanning result indicates that the target user is in the scanning area, determining the coordinates, included in the scanning result, that correspond to the moving range of the target user within the scanning area as the scene coordinates of the target user in the target scene.
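An illustrative sketch of this screening step, assuming a simple rectangular area model and hypothetical device and scanner attributes (startup_time, working_area, covers, scan):

import time

def candidate_scan_areas(running_devices, duration_threshold_s, now=None):
    """Devices that started running only recently (duration below the threshold) are the
    'first intelligent devices'; their working areas become the scanning areas."""
    now = now if now is not None else time.time()
    scan_areas = []
    for device in running_devices:
        run_duration = now - device.startup_time         # derived from the start-up time
        if run_duration < duration_threshold_s:           # recently switched on: user likely nearby
            scan_areas.append(device.working_area())      # e.g. (x0, y0, x1, y1) of the room
    return scan_areas

def scene_coordinates(scan_areas, scanners):
    """Let the collector covering each scanning area scan it; return the coordinates of the
    user's moving range from the first positive scan result, or None."""
    for area in scan_areas:
        scanner = next((s for s in scanners if s.covers(area)), None)
        if scanner is None:
            continue
        result = scanner.scan(area)                       # e.g. radar or infrared detector
        if result and result.user_present:
            return result.moving_range_coords             # scene coordinates of the target user
    return None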
As an optional implementation manner, in the first aspect of the present invention, before analyzing the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain the state of the target user, the method further includes:
analyzing the state parameters acquired by the target user state acquisition device to obtain a pending user type of the target user;
judging whether a target type matching the pending user type exists among all the preset user-defined types stored in a database;
when it is judged that no target type matching the pending user type exists among all the user-defined types stored in the database, determining the standard type among all the user-defined types as the current user type of the target user;
when it is judged that a target type matching the pending user type exists among all the user-defined types stored in the database, determining the target type as the current user type corresponding to the target user;
and the analyzing of the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain the temporary state corresponding to the target user comprises:
analyzing the state parameters, the time-varying parameters corresponding to the plurality of target intelligent devices and the current user type based on the preset user state analysis model to obtain the temporary state corresponding to the target user.
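A minimal sketch of the type-matching fallback, assuming a simple similarity callback and a reserved "standard" type (both are assumptions of this sketch):

def current_user_type(pending_type, user_defined_types, matches, standard_type="standard"):
    """Match the pending type inferred from the collected state parameters against the
    user-defined types stored in the database; fall back to the standard type if none match."""
    for stored_type in user_defined_types:
        if matches(pending_type, stored_type):      # e.g. feature similarity above a threshold
            return stored_type                      # target type found
    return standard_type                            # fault-tolerant default

# The analysis model then receives (state_params, time_varying_params, current_user_type).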
As an optional implementation manner, in the first aspect of the present invention, the state type includes a steady state type or a fluctuating state type, wherein the steady state type indicates that the body temperature variation amplitude of the target user is less than or equal to a preset temperature variation threshold, and the fluctuating state type indicates that the body temperature variation amplitude of the target user is greater than the temperature variation threshold;
and, after analyzing the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model and obtaining the state information corresponding to the target user, the method further includes:
when the state type of the target user is the fluctuating state type, acquiring body temperature data corresponding to the target user at preset time intervals, and analyzing the body temperature data to obtain a body temperature change trend corresponding to the target user;
estimating, based on the body temperature change trend and the user state analysis model, the target environment parameters and the estimated duration required for the target user to change from the fluctuating state type to the steady state type, wherein the target environment parameters include at least one of the temperature, humidity, air flow rate and light brightness corresponding to the target scene; and
generating, according to the target environment parameters and the estimated duration, control parameters of a second intelligent device in the target scene that adjusts the environment parameters of the target scene, and controlling the second intelligent device to adjust the environment parameters of the target scene according to those control parameters, so as to adjust the somatosensory temperature corresponding to the target user into the somatosensory comfort temperature range determined by a predetermined user somatosensory model.
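The patent does not specify the internal form of the user state analysis model; purely to make the data flow concrete, the following sketch substitutes a naive linear trend fit and placeholder target values (all numbers and the recovery rule are assumptions of this sketch):

from statistics import mean

def temperature_trend(samples):
    """Least-squares slope of (time, body_temperature) samples, in deg C per second."""
    t_mean = mean(t for t, _ in samples)
    v_mean = mean(v for _, v in samples)
    num = sum((t - t_mean) * (v - v_mean) for t, v in samples)
    den = sum((t - t_mean) ** 2 for t, _ in samples) or 1.0
    return num / den

def estimate_recovery(samples, steady_amplitude=0.2, target_temp_c=25.0):
    """Estimate how long the user may need to return to the steady state type and which
    environment parameters to aim for (illustrative values only)."""
    slope = temperature_trend(samples)
    latest = samples[-1][1]
    baseline = samples[0][1]
    excess = abs(latest - baseline) - steady_amplitude   # how far outside the steady band
    if excess <= 0 or slope == 0:
        return {"duration_s": 0, "target_env": {}}
    # Naive assumption: the amplitude re-enters the band at the current rate of change.
    duration = abs(excess / slope)
    target_env = {"temperature_c": target_temp_c,        # at least one of temperature, humidity,
                  "air_flow": "medium"}                  # air flow rate and light brightness
    return {"duration_s": duration, "target_env": target_env}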
As an optional implementation manner, in the first aspect of the present invention, the generating, according to the target environment parameters and the estimated duration, of the control parameters of the second intelligent device in the target scene that adjusts the environment parameters of the target scene comprises:
analyzing the scene coordinates according to a predetermined user somatosensory model to obtain the distance between the target user and the target intelligent device;
analyzing, according to the predetermined user somatosensory model, the target environment parameters, the estimated duration, the state information corresponding to the target user and the distance to obtain somatosensory comfort parameters matched with the target user, wherein the somatosensory comfort parameters matched with the target user include the somatosensory comfort temperature range; and
generating, according to the somatosensory comfort parameters matched with the target user, the control parameters of the second intelligent device in the target scene that adjusts the environment parameters of the target scene;
and the method further comprises:
judging, while the second intelligent device is controlled to adjust the environment parameters of the target scene, whether the data fluctuation of the body temperature data corresponding to the target user within the estimated duration matches the fluctuation of the fluctuation curve estimated by the user state analysis model; and
when it is judged that the data fluctuation of the body temperature data corresponding to the target user within the estimated duration does not match the fluctuation of the fluctuation curve estimated by the user state analysis model, updating the body temperature change trend corresponding to the target user according to the data fluctuation, and re-executing the operation of estimating, based on the body temperature change trend and the user state analysis model, the target environment parameters and the estimated duration required for the target user to change from the fluctuating state type to the steady state type.
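As an illustration of the closed loop only (the comfort-parameter calculation and the match test below are placeholders, not the patent's somatosensory model):

import time

def regulate_until_steady(read_body_temp, expected_curve, apply_control, make_control,
                          estimate, interval_s, tolerance=0.3, max_rounds=10):
    """Apply generated control parameters, compare the observed body-temperature fluctuation
    with the fluctuation curve estimated by the model, and re-estimate on mismatch."""
    observed = []                                   # most recent body-temperature samples
    for _ in range(max_rounds):
        plan = estimate(observed)                   # re-estimation uses the updated trend
        apply_control(make_control(plan["target_env"]))
        steps = max(1, int(plan["duration_s"] // interval_s))
        observed = []
        for _ in range(steps):
            time.sleep(interval_s)                  # body temperature sampled at preset intervals
            observed.append(read_body_temp())
        expected = expected_curve(plan, steps)      # fluctuation curve predicted by the model
        if all(abs(o - e) <= tolerance for o, e in zip(observed, expected)):
            return True                             # fluctuation matches: heading to steady state
    return False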
A second aspect of the invention discloses a user state determining device based on scene device linkage, the device comprising:
a detection module, configured to detect whether at least one target intelligent device in an operating state exists in a target scene, wherein every target intelligent device is an intelligent device in a preset monitoring list, and the intelligent devices included in the monitoring list are associated with a user state acquisition device;
a determining module, configured to determine, when the detection module detects that at least one target intelligent device exists in the target scene, a time-varying parameter corresponding to each target intelligent device according to the monitoring list, wherein the time-varying parameter comprises the device type of the target intelligent device;
the determining module being further configured to determine a working area corresponding to each target intelligent device according to the time-varying parameters, and to determine scene coordinates of a target user in the target scene according to all the working areas;
a control module, configured to control the target user state acquisition device corresponding to the scene coordinates to acquire state parameters of the target user, wherein the state parameters comprise body movement amplitude and/or body temperature change data of the target user; and
a first analysis module, configured to analyze, based on a preset user state analysis model, the state parameters obtained by the control module and the time-varying parameters, corresponding to the plurality of target intelligent devices, obtained by the determining module, to obtain the state information of the target user, wherein the state information of the target user comprises the state type of the target user.
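A structural sketch only (the patent describes functional modules, not code; the module interfaces below are invented for illustration):

class UserStateDeterminingDevice:
    """Second-aspect apparatus: functional modules wired together."""

    def __init__(self, detection, determining, control, first_analysis):
        self.detection = detection            # finds running target smart devices
        self.determining = determining        # time-varying params, working areas, coordinates
        self.control = control                # drives the user state acquisition device
        self.first_analysis = first_analysis  # applies the preset user state analysis model

    def run(self, scene, monitoring_list):
        running = self.detection.detect(scene, monitoring_list)
        if not running:
            return None
        params = self.determining.time_varying_params(running, monitoring_list)
        coords = self.determining.scene_coordinates(params)
        state_params = self.control.collect(coords)
        return self.first_analysis.analyze(state_params, params)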
As an alternative embodiment, in the second aspect of the present invention, the determining module is further configured to determine, according to the time-varying parameters, a target scene template from a plurality of predetermined scene templates, wherein the target scene template is the scene template whose preset standard time-varying parameters match the time-varying parameters with a matching degree greater than a preset matching-degree threshold;
the determining module is further configured to determine a first state corresponding to the target user according to the target scene template, wherein the first state is the user state preset in the target scene template;
and the first analysis module comprises:
a first analysis submodule, configured to analyze the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain a temporary state corresponding to the target user;
a second analysis submodule, configured to analyze the state level of the temporary state and the state level of the first state based on the user state analysis model to obtain an analysis result; and
a determining submodule, configured to determine the state with the higher state level in the analysis result obtained by the second analysis submodule as the state information corresponding to the target user;
wherein, when the state level of the temporary state is less than or equal to the state level of the first state, the state information corresponding to the target user is the first state.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further includes:
a first judging module, configured to judge, before the determining module determines the working area corresponding to each target intelligent device according to the time-varying parameters, whether a voice control instruction for setting a target scene has been received, and to trigger the determining module to execute the operation of determining the working area corresponding to each target intelligent device according to the time-varying parameters when it is judged that the voice control instruction has not been received;
a second analysis module, configured to parse the voice control instruction to obtain a target field for setting the target scene when the first judging module judges that the voice control instruction has been received;
the first judging module being further configured to judge whether the target field includes a user state field representing the current state of the user;
the determining module being further configured to determine, when the first judging module judges that the target field includes the user state field, the user state corresponding to the user state field as the state information corresponding to the target user;
a second judging module, configured to judge, when the first judging module judges that the target field does not include the user state field, whether a post-level instruction associated with the scene setting operation includes the user state field during execution of the scene setting operation matched with the target field, the post-level instruction being an instruction whose execution order comes after the scene setting operation;
the second judging module being further configured to trigger the determining module to execute the operation of determining the working area corresponding to each target intelligent device according to the time-varying parameters when it is judged that no post-level instruction associated with the scene setting operation includes the user state field during execution of the scene setting operation matched with the target field;
and the determining module being further configured to determine, when the second judging module judges that a post-level instruction associated with the scene setting operation includes the user state field during execution of the scene setting operation matched with the target field, the user state field included in the post-level instruction as the state information corresponding to the target user.
As an optional implementation manner, in the second aspect of the present invention, the time-varying parameter further includes the start-up time of each target intelligent device;
and the determining module determining the working area corresponding to each target intelligent device according to the time-varying parameters and determining the scene coordinates of the target user in the target scene according to all the working areas specifically comprises:
determining the working area and the running duration corresponding to each target intelligent device according to the device type and the start-up time of the target intelligent device included in the time-varying parameters;
screening out, from all the running durations, the target running durations that are shorter than a preset duration threshold to obtain the first intelligent devices corresponding to the target running durations;
determining the working area corresponding to each first intelligent device as a scanning area, and controlling the user state acquisition device corresponding to the scanning area to scan the scanning area to obtain a scanning result, wherein the scanning area is an area in which the probability that the target user is present is greater than a preset probability threshold; and
when the scanning result indicates that the target user is in the scanning area, determining the coordinates, included in the scanning result, that correspond to the moving range of the target user within the scanning area as the scene coordinates of the target user in the target scene.
As an alternative embodiment, in the second aspect of the present invention, the apparatus further includes:
a third analysis module, configured to analyze, before the first analysis module analyzes the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain the state of the target user, the state parameters acquired by the target user state acquisition device to obtain a pending user type of the target user;
a third judging module, configured to judge whether a target type matching the pending user type exists among all the preset user-defined types stored in the database;
the determining module being further configured to determine, when the third judging module judges that no target type matching the pending user type exists among all the user-defined types stored in the database, the standard type among all the user-defined types as the current user type of the target user;
the determining module being further configured to determine, when the third judging module judges that a target type matching the pending user type exists among all the user-defined types stored in the database, the target type as the current user type corresponding to the target user;
and the first analysis submodule analyzing the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain the temporary state corresponding to the target user specifically comprises:
analyzing the state parameters, the time-varying parameters corresponding to the plurality of target intelligent devices and the current user type based on the preset user state analysis model to obtain the temporary state corresponding to the target user.
As an optional implementation manner, in the second aspect of the present invention, the state type includes a steady state type or a fluctuating state type, wherein the steady state type indicates that the body temperature variation amplitude of the target user is less than or equal to a preset temperature variation threshold, and the fluctuating state type indicates that the body temperature variation amplitude of the target user is greater than the temperature variation threshold;
and the apparatus further comprises:
an acquisition module, configured to acquire, when the state type of the target user is the fluctuating state type, body temperature data corresponding to the target user at preset time intervals after the first analysis module analyzes the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain the state information corresponding to the target user;
a fourth analysis module, configured to analyze the body temperature data acquired by the acquisition module to obtain a body temperature change trend corresponding to the target user;
an estimating module, configured to estimate, based on the body temperature change trend and the user state analysis model, the target environment parameters and the estimated duration required for the target user to change from the fluctuating state type to the steady state type, wherein the target environment parameters include at least one of the temperature, humidity, air flow rate and light brightness corresponding to the target scene;
a generation module, configured to generate, according to the target environment parameters and the estimated duration obtained by the estimating module, control parameters of a second intelligent device in the target scene that adjusts the environment parameters of the target scene; and
the control module being further configured to control the second intelligent device to adjust the environment parameters of the target scene according to the control parameters of the second intelligent device, so as to adjust the somatosensory temperature corresponding to the target user into the somatosensory comfort temperature range determined by a predetermined user somatosensory model.
As an optional implementation manner, in the second aspect of the present invention, the generation module generating, according to the target environment parameters and the estimated duration, the control parameters of the second intelligent device in the target scene that adjusts the environment parameters of the target scene specifically comprises:
analyzing the scene coordinates according to a predetermined user somatosensory model to obtain the distance between the target user and the target intelligent device;
analyzing, according to the predetermined user somatosensory model, the target environment parameters, the estimated duration, the state information corresponding to the target user and the distance to obtain somatosensory comfort parameters matched with the target user, wherein the somatosensory comfort parameters matched with the target user include the somatosensory comfort temperature range;
generating, according to the somatosensory comfort parameters matched with the target user, the control parameters of the second intelligent device in the target scene that adjusts the environment parameters of the target scene;
and the apparatus further comprises:
a fourth judging module, configured to judge, while the second intelligent device is controlled to adjust the environment parameters of the target scene, whether the data fluctuation of the body temperature data corresponding to the target user within the estimated duration matches the fluctuation of the fluctuation curve estimated by the user state analysis model; and
an update processing module, configured to update, when the fourth judging module judges that the data fluctuation of the body temperature data corresponding to the target user within the estimated duration does not match the fluctuation of the fluctuation curve estimated by the user state analysis model, the body temperature change trend corresponding to the target user according to the data fluctuation, and to trigger the estimating module to re-execute the operation of estimating, based on the body temperature change trend and the user state analysis model, the target environment parameters and the estimated duration required for the target user to change from the fluctuating state type to the steady state type.
A third aspect of the invention discloses another user state determining device based on scene device linkage, the device comprising:
a memory storing executable program code; and
a processor coupled to the memory;
wherein the processor invokes the executable program code stored in the memory to execute the scene device linkage-based user state determining method disclosed in the first aspect of the invention.
A fourth aspect of the invention discloses a computer storage medium storing computer instructions which, when invoked, are used to execute the scene device linkage-based user state determining method disclosed in the first aspect of the invention.
Compared with the prior art, the embodiments of the invention have the following beneficial effects.
The embodiment of the invention provides a user state determining method based on scene device linkage, the method comprising: detecting whether at least one target intelligent device in an operating state exists in a target scene, wherein every target intelligent device is an intelligent device in a preset monitoring list and the intelligent devices in the monitoring list are associated with a user state acquisition device; when at least one target intelligent device exists in the target scene, determining a time-varying parameter corresponding to each target intelligent device according to the monitoring list, the time-varying parameter comprising the device type of the target intelligent device; determining a working area corresponding to each target intelligent device according to the time-varying parameters, and determining scene coordinates of a target user in the target scene according to all the working areas; and controlling the target user state acquisition device corresponding to the scene coordinates to acquire state parameters of the target user, and analyzing the state parameters and the time-varying parameters corresponding to the target intelligent devices based on a preset user state analysis model to obtain state information of the target user, the state parameters comprising body movement amplitude and/or body temperature change data of the target user and the state information comprising the state type of the target user. The method can therefore automatically detect which devices in the monitoring list are running and intelligently determine the time-varying parameter of each of them, which helps to improve the efficiency of determining the time-varying parameters. Further, once the time-varying parameters are determined, the working area corresponding to each target intelligent device can be determined from them, and the scene coordinates of the target user in the target scene can then be determined from all the working areas; analyzing the working areas of the devices that are running yields the user's exact real-time coordinates, improving both the efficiency and the accuracy of locating the user. Furthermore, once the scene coordinates are determined, the user state acquisition device at those coordinates can be controlled precisely to collect the target user's state parameters, improving both the efficiency of the collection and the accuracy of the collected result. In this way, through information sharing among the devices running in the target scene, the target user is located accurately, the user state acquisition device is linked to collect the user's state parameters accurately and efficiently, and the state information of the target user is finally determined intelligently from the collected state parameters combined with the time-varying parameters of the target intelligent devices, improving the efficiency of determining the user's state information as well as the reliability and accuracy of the result.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the invention, the drawings required for describing the embodiments are briefly introduced below. The drawings described below represent only some embodiments of the invention; other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a schematic diagram of a scene to which a user state determining method based on scene device linkage according to an embodiment of the present invention is applied;
Fig. 2 is a schematic flow chart of a user state determining method based on scene device linkage according to an embodiment of the present invention;
Fig. 3 is a schematic flow chart of another user state determining method based on scene device linkage according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a user state determining device based on scene device linkage according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a first analysis module in a user state determining device based on scene device linkage according to an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another user state determining device based on scene device linkage according to an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of yet another user state determining device based on scene device linkage according to an embodiment of the present invention.
Detailed Description
To enable those skilled in the art to better understand the invention, the technical solutions in the embodiments of the invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
The terms "first", "second" and the like in the description, the claims and the drawings are used to distinguish different objects and not necessarily to describe a particular order or sequence. Furthermore, the terms "comprise" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, apparatus or product that comprises a list of steps or elements is not limited to the listed steps or elements, but may optionally also include steps or elements that are not listed or that are inherent to such a process, method, apparatus or product.
Reference herein to "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor do they refer to separate or alternative embodiments that are mutually exclusive of other embodiments. Those skilled in the art will understand, both explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
The invention discloses a user state determining method and device based on scene device linkage, which can automatically detect which devices in the monitoring list are running and intelligently determine the time-varying parameter of each target intelligent device, helping to improve the efficiency of determining the time-varying parameters. Further, once the time-varying parameters are determined, the working area corresponding to each target intelligent device can be determined from them, and the scene coordinates of the target user in the target scene can then be determined from all the working areas; analyzing the working areas of the devices that are running yields the user's exact real-time coordinates, improving both the efficiency and the accuracy of locating the user. Furthermore, once the scene coordinates are determined, the user state acquisition device at those coordinates can be controlled precisely to collect the target user's state parameters, improving both the efficiency of the collection and the accuracy of the collected result. In this way, through information sharing among the devices running in the target scene, the target user is located accurately, the user state acquisition device is linked to collect the user's state parameters accurately and efficiently, and the state information of the target user is finally determined intelligently from the collected state parameters combined with the time-varying parameters of the target intelligent devices, improving the efficiency of determining the user's state information as well as the reliability and accuracy of the result. This is described in detail below.
To better understand the scene device linkage-based user state determining method and device described in the invention, the scene architecture to which the method applies is described first. As shown in fig. 1, a scene to which the method applies may include a plurality of intelligent devices, such as an intelligent air conditioner, an intelligent door lock, an intelligent smoke extractor, an intelligent refrigerator, an intelligent sweeper and an intelligent washing machine. Each intelligent device is associated with the other intelligent devices, that is, each device can share information with the others, so that when a device is in an operating state it can generate an instruction for sensing the user, which triggers the user state acquisition device in the application scene to collect the user's state parameters. For example, when the intelligent air conditioner is in an operating state, it is assumed by default that the user is in the working area of the air conditioner; the air conditioner then generates a user-sensing instruction so that the user state acquisition device mounted on the intelligent sweeper collects the user's state parameters.
It should be noted that the scene shown in fig. 1 only illustrates a scene to which the scene device linkage-based user state determining method is applicable; each intelligent device is shown only schematically, and the specific structure, size, shape, location and installation manner of the devices, as well as the communication manner between them, may be adapted to the actual scene and are not limited to what fig. 1 shows.
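An illustrative sketch of the linkage just described (the event hub and the method names are assumptions made for this sketch only, not part of the patent):

class LinkageBus:
    """Very small publish/subscribe hub: a device that enters the running state publishes a
    'sense user' instruction; subscribed state collectors react to it."""

    def __init__(self):
        self.collectors = []

    def register_collector(self, collector):
        self.collectors.append(collector)

    def publish_sense_user(self, source_device, working_area):
        for collector in self.collectors:
            collector.on_sense_user(source_device, working_area)

class SweeperCollector:
    """User state acquisition device mounted on the intelligent sweeper."""

    def on_sense_user(self, source_device, working_area):
        # As in the fig. 1 example: the air conditioner starts running, so the user is
        # assumed to be in its working area and the sweeper's collector scans that area.
        print(f"scanning {working_area} because {source_device} started running")

bus = LinkageBus()
bus.register_collector(SweeperCollector())
bus.publish_sense_user("intelligent air conditioner", "living room")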
Embodiment One
Referring to fig. 2, fig. 2 is a schematic flow chart of a user state determining method based on scene device linkage according to an embodiment of the invention. The method described in fig. 2 may be applied to a user state determining device based on scene device linkage; the embodiment of the invention places no limitation on that device. As shown in fig. 2, the method may include the following operations:
101. Detect whether at least one target intelligent device in an operating state exists in the target scene.
In the embodiment of the invention, every target intelligent device is an intelligent device in a preset monitoring list, and the intelligent devices included in the monitoring list are associated with user state acquisition devices, where a user state acquisition device may be a radar, an infrared detector or the like.
In the embodiment of the invention, it should be noted that the intelligent devices in the monitoring list are those that can be used to determine whether the user is in the target scene, such as an intelligent television or an intelligent air conditioner: when the user is in the target scene they are in an operating state, and when the user is not in the target scene they are switched off, so whether the target scene contains the target user can be determined preliminarily from their operating states. By contrast, an intelligent device such as an intelligent refrigerator, which runs for long periods and whose operating state remains essentially unchanged, is not placed on the monitoring list. The embodiment of the invention does not limit the types of intelligent device included in the monitoring list in practical applications.
102. When at least one target intelligent device exists in the target scene, determine the time-varying parameter corresponding to each target intelligent device according to the monitoring list.
In the embodiment of the invention, the time-varying parameter includes the device type of each target intelligent device, and device types may be classified according to the application scene, for example living-room devices: intelligent television, intelligent sweeper, intelligent air conditioner and intelligent router; kitchen devices: intelligent smoke extractor, intelligent refrigerator and intelligent dishwasher; bathroom devices: intelligent bathtub, intelligent water heater, intelligent toilet and intelligent washing machine. The embodiment of the invention is not limited to these examples.
103. Determine the working area corresponding to each target intelligent device according to the time-varying parameters, and determine the scene coordinates of the target user in the target scene according to all the working areas.
In the embodiment of the invention, the time-varying parameter may further include the start-up time of each target intelligent device; and determining the working area corresponding to each target intelligent device according to the time-varying parameters and determining the scene coordinates of the target user in the target scene according to all the working areas includes:
determining the working area and the running duration corresponding to each target intelligent device according to the device type and the start-up time of the target intelligent device included in the time-varying parameters;
screening out, from all the running durations, the target running durations that are shorter than a preset duration threshold to obtain the first intelligent devices corresponding to the target running durations;
determining the working area corresponding to each first intelligent device as a scanning area, and controlling the user state acquisition device corresponding to the scanning area to scan the scanning area to obtain a scanning result, wherein the scanning area is an area in which the probability that the target user is present is greater than a preset probability threshold; and
when the scanning result indicates that the target user is in the scanning area, determining the coordinates, included in the scanning result, that correspond to the moving range of the target user within the scanning area as the scene coordinates of the target user in the target scene.
For ease of understanding, the embodiment of the invention gives an example. Assume that the device type of a target intelligent device is the living-room type and its start-up time is 15:30, so its working area is the living room, and its running duration can be known from the monitoring of the device through the monitoring list. Assume that the first intelligent devices are living-room devices (for example an intelligent television and an intelligent sweeper) and that the intelligent television has been running for only 3 minutes; by default the target user is assumed to be watching media content on the intelligent television, so the living room where the intelligent television and the intelligent sweeper are located can be determined as the scanning area. If the working area of the intelligent sweeper includes both the living room and a bedroom, the living room, which is the working area shared by the intelligent television and the intelligent sweeper, is taken as the scanning area, and the user state acquisition device equipped in the living room is triggered to scan it.
It can be seen that, in the embodiment of the invention, the working area and running duration of each target intelligent device can be determined from the device type and start-up time included in the time-varying parameters, so the area in which the user is located is determined preliminarily; after that area is scanned and the target user is found, the user's scene coordinates in the target scene are narrowed down further, which improves the reliability and accuracy of the determined scene coordinates of the target user.
104. Control the target user state acquisition device corresponding to the scene coordinates to acquire the state parameters of the target user.
In the embodiment of the invention, the state parameters comprise body movement amplitude and/or body temperature change data of the target user.
105. Analyze the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on a preset user state analysis model to obtain the state information of the target user.
In the embodiment of the invention, the state information of the target user includes the state type of the target user. The state information may also include the user type of the target user, such as elderly person, child, woman, man or average, where the average type serves as an intermediate, fault-tolerant type: when the user type of the target user cannot be identified accurately, it is automatically set to the average type, which reduces the chance that subsequent processing cannot run because the user type is unknown.
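A minimal sketch of the steady/fluctuating state-type classification mentioned above (the threshold value is an arbitrary placeholder, not a value given by the patent):

def classify_state_type(body_temps, temp_change_threshold=0.5):
    """Steady state type if the body-temperature variation amplitude stays within the
    preset threshold; otherwise fluctuating state type."""
    amplitude = max(body_temps) - min(body_temps)
    return "steady" if amplitude <= temp_change_threshold else "fluctuating"

print(classify_state_type([36.5, 36.6, 36.7]))   # steady
print(classify_state_type([36.5, 37.3, 36.4]))   # fluctuating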
Therefore, by implementing the user state determining method based on scene device linkage described in fig. 2, the target intelligent devices running in the monitoring list can be detected automatically and the time-varying parameter of each of them determined intelligently, which helps to improve the efficiency of determining the time-varying parameters. Further, once the time-varying parameters are determined, the working area corresponding to each target intelligent device can be determined from them, and the scene coordinates of the target user in the target scene can then be determined from all the working areas; analyzing the working areas of the devices that are running yields the user's exact real-time coordinates, improving both the efficiency and the accuracy of locating the user. Furthermore, once the scene coordinates are determined, the user state acquisition device at those coordinates can be controlled precisely to collect the target user's state parameters, improving both the efficiency of the collection and the accuracy of the collected result. In this way, through information sharing among the devices running in the target scene, the target user is located accurately, the user state acquisition device is linked to collect the user's state parameters accurately and efficiently, and the state information of the target user is finally determined intelligently from the collected state parameters combined with the time-varying parameters of the target intelligent devices, improving the efficiency of determining the user's state information as well as the reliability and accuracy of the result.
In an alternative embodiment, the scene device linkage-based user state determining method may further include the steps of:
determining a target scene template from a plurality of predetermined scene templates according to the time-varying parameters, wherein the target scene template is the scene template whose preset standard time-varying parameters match the time-varying parameters with a matching degree greater than a preset matching-degree threshold;
determining a first state corresponding to the target user according to the target scene template, wherein the first state is the user state preset in the target scene template;
and the analyzing of the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain the state information corresponding to the target user includes:
analyzing the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain a temporary state corresponding to the target user;
analyzing the state level of the temporary state and the state level of the first state based on the user state analysis model to obtain an analysis result, and determining the state with the higher state level in the analysis result as the state information corresponding to the target user;
wherein, when the state level of the temporary state is less than or equal to the state level of the first state, the state information corresponding to the target user is the first state.
In this optional embodiment, it should be noted that a plurality of scene templates may be preset by the user, each template recording the operating states of the intelligent devices in a different scene. For example, in an entertainment and leisure scene the intelligent television, the intelligent air conditioner and the intelligent router may be in the running state; in a bathing scene the intelligent water heater, the intelligent washing machine and the intelligent exhaust device may be in the running state; and in a sports scene the intelligent air conditioner, the intelligent treadmill and the intelligent sound box may be in the running state. Each scene template may further correspond to a normal user state: the entertainment and leisure scene corresponds to a resting state, the bathing scene corresponds to an accelerated blood-flow state, and the sports scene corresponds to a state in which the body temperature rises rapidly and the user sweats.
In this optional embodiment, a special case should be noted: when an elderly person and/or a child is present in the target scene together with a young person who is exercising, and the distance between the temperature control device and the young person (for example, 5 distance units) is greater than the distance between the temperature control device and the elderly person and/or the child (for example, 2 distance units), the priority of the motion state of the young person may be raised above the priority of the resting state of the elderly person and/or the child. The specific handling is adjusted adaptively for different scenes, which is not limited by the embodiment of the present invention.
In this optional embodiment, the target scene template can be determined intelligently according to the time-varying parameters, and the first state of the target user is determined accordingly; after the state level of the first state and the state level of the temporary state are analyzed by the user state analysis model, the state information of the target user is determined, which improves the accuracy and reliability of the determined state information of the target user.
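For illustration only (this sketch is not part of the patent disclosure), the following Python code shows one way the template matching and state-level comparison described in this optional embodiment could be organized. The class and function names, the matching-degree metric and the numeric state levels are assumptions introduced here; the patent leaves their concrete form to the implementation.

```python
from dataclasses import dataclass

@dataclass
class SceneTemplate:
    name: str
    standard_device_types: set     # device types expected to be running in this scene
    preset_user_state: str         # the "first state" associated with the template
    state_level: int               # hypothetical numeric priority of the preset state

def matching_degree(time_varying_params: dict, template: SceneTemplate) -> float:
    """Assumed metric: fraction of running device types that the template expects."""
    if not time_varying_params:
        return 0.0
    hits = sum(1 for p in time_varying_params.values()
               if p["device_type"] in template.standard_device_types)
    return hits / len(time_varying_params)

def select_target_template(time_varying_params: dict, templates, threshold: float = 0.8):
    """Return a template whose matching degree exceeds the preset threshold, if any."""
    for tpl in templates:
        if matching_degree(time_varying_params, tpl) > threshold:
            return tpl
    return None

def resolve_user_state(temporary_state: str, temporary_level: int, tpl: SceneTemplate) -> str:
    """Keep the state with the higher state level; ties fall back to the first state."""
    return temporary_state if temporary_level > tpl.state_level else tpl.preset_user_state
```

A priority adjustment such as the elderly/child rule described above would simply raise or lower the numeric state levels before resolve_user_state is called.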
Example two
Referring to fig. 3, fig. 3 is a flowchart of another method for determining a user status based on scene device linkage according to an embodiment of the present invention. The method for determining the user state based on the scene equipment linkage described in fig. 3 may be applied to a device for determining the user state based on the scene equipment linkage, which is not limited by the embodiment of the present invention. As shown in fig. 3, the scene device linkage-based user state determining method may include the following operations:
201. Detecting whether at least one target intelligent device in an operating state exists in the target scene.
202. When at least one target intelligent device exists in the target scene, determining a time-varying parameter corresponding to each target intelligent device according to the monitoring list.
203. Judging whether a voice control instruction for setting the target scene is received.
In the embodiment of the present invention, before determining the working area corresponding to each target intelligent device according to the time-varying parameters, step 203 is executed, and when the determination result of step 203 is no, step 209 is executed, and when the determination result of step 203 is yes, step 204 is executed.
204. Analyzing the voice control instruction to obtain a target field for setting the target scene.
205. Judging whether a user state field for indicating the current state of the user is included in the target field.
In the embodiment of the present invention, when the determination result of step 205 is yes, step 206 is executed, and when the determination result of step 205 is no, step 207 is executed.
In the embodiment of the present invention, it should be noted that in a specific application scenario the user state field may be, for example, "motion mode", "leisure mode" or "sleep mode". The field may also be, for example, "elderly/child mode": in that case the voice instruction does not contain a field that directly states the user state, but it does contain a field that sets the scene of the target scene, so the current state of the user can be inferred from the scene-setting field; for example, the elderly and children are more sensitive to the ambient temperature, so the user state corresponds to a quiet state. The specific application scenario may be adjusted adaptively.
206. Determining the user self-described state corresponding to the user state field as the state information corresponding to the target user.
207. Judging whether a user state field is included in a post-hierarchy instruction associated with the scene setting operation during execution of the scene setting operation matching the target field.
In the embodiment of the invention, the post-hierarchy instruction is an instruction whose execution order follows the scene setting operation; when the determination result of step 207 is no, step 209 is performed, and when the determination result of step 207 is yes, step 208 is performed.
208. Determining the user state field included in the post-hierarchy instruction as the state information corresponding to the target user.
209. Determining a working area corresponding to each target intelligent device according to the time-varying parameters, and determining scene coordinates of the target user in the target scene according to all the working areas.
210. Controlling the target user state acquisition device corresponding to the scene coordinates to acquire the state parameters of the target user.
211. Based on a preset user state analysis model, analyzing state parameters and time-varying parameters corresponding to a plurality of target intelligent devices to obtain state information of a target user.
In the embodiment of the present invention, for further descriptions of step 201, step 202 and steps 209 to 211, refer to the specific descriptions of steps 101 to 105 in the first embodiment, which are not repeated here.
In the embodiment of the present invention, for ease of understanding, a specific application scenario of the scene device linkage-based user state determining method is illustrated below. Assume that the indoor environment is currently in summer, the user issues a voice control instruction such as "child mode" or "elder mode", and the sensed distance between the child/elder and the air conditioner is 2 meters. The indoor scene coordinates of the user can then be calculated from the 2-meter distance, the target user state acquisition device corresponding to the scene coordinates is controlled to acquire the state parameters of the target user, and the relevant indoor intelligent devices are then controlled to adjust their device parameters; for example, the cold air flow output by the intelligent air conditioner reaches the child/elder at an air flow rate of 0.2 m/s, so that the calculated somatosensory comfort temperature for the child/elder is 27.5 °C. These parameters can be adjusted adaptively in practical applications.
Therefore, by implementing the user state determining method based on scene equipment linkage described in fig. 3, accurate positioning of the target user can be achieved through information sharing among the intelligent devices in the running state in the target scene, so that the target user state acquisition device is linked to acquire the state parameters of the target user accurately and efficiently; the state information of the target user is finally determined intelligently based on the acquired state parameters in combination with the time-varying parameters corresponding to the target intelligent devices, which improves the efficiency of determining the state information of the user as well as the reliability and accuracy of the determined state information. In addition, after a voice control instruction is received, the user state field that expresses the current state of the user and is included in the voice control instruction, or in a post-hierarchy instruction following it, can be extracted automatically and determined as the state information of the target user, as illustrated by the sketch below.
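The following minimal Python sketch mirrors the branch of steps 203 to 208. It is illustrative only: the keyword sets are invented for the example, and the patent does not prescribe how the fields are represented.

```python
# Hypothetical keyword sets; the actual fields would come from the voice parsing engine.
DIRECT_STATE_FIELDS = {"motion mode", "leisure mode", "sleep mode"}
INFERRED_STATE_FIELDS = {"elderly/child mode": "quiet state"}  # state inferred from a scene setting

def extract_user_state(target_fields, post_hierarchy_fields):
    """Prefer a user state field in the instruction itself (step 206), then one in a
    post-hierarchy instruction (steps 207/208); otherwise return None so that the
    coordinate-based analysis of steps 209-211 takes over."""
    for field in target_fields:
        if field in DIRECT_STATE_FIELDS:
            return field
        if field in INFERRED_STATE_FIELDS:
            return INFERRED_STATE_FIELDS[field]
    for field in post_hierarchy_fields:
        if field in DIRECT_STATE_FIELDS:
            return field
    return None
```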
In an optional embodiment, before the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices are analyzed based on the preset user state analysis model to obtain the state of the target user, the method for determining a user state based on scene device linkage may further include the following steps:
analyzing the state parameters acquired by the target user state acquisition equipment to obtain the undetermined user type of the target user;
judging whether a target type matched with the undetermined user type exists in all preset user self-defined types stored in a database;
when judging that the target type matched with the undetermined user type does not exist in all the user self-defined types stored in the database, determining the standard type as the current user type of the target user from all the user self-defined types;
when judging that the target type matched with the undetermined user type exists in all the user self-defined types stored in the database, determining the target type as the current user type corresponding to the target user;
the above analysis of state parameters and time-varying parameters corresponding to a plurality of target intelligent devices based on a preset user state analysis model to obtain a temporary state corresponding to a target user includes:
Based on a preset user state analysis model, analyzing state parameters, time-varying parameters corresponding to a plurality of target intelligent devices and current user types to obtain temporary states corresponding to target users.
In this optional embodiment, the acquired state parameters can be analyzed intelligently, and the operation of determining either the standard type or the target type as the current user type is executed according to the matching result between the undetermined user type and the user self-defined types stored in the database, which improves the accuracy of the determined current user type and, to a certain extent, the reliability and accuracy of the temporary state subsequently determined for the target user.
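As a rough illustration (the fallback name "standard" and the exact matching rule are assumptions, not part of the patent), the selection between the standard type and a matched target type could look like this:

```python
def resolve_current_user_type(pending_user_type: str,
                              user_defined_types: set,
                              standard_type: str = "standard") -> str:
    """Return the stored user-defined type that matches the pending type,
    or fall back to the standard type when no match exists in the database."""
    return pending_user_type if pending_user_type in user_defined_types else standard_type

# Example: resolve_current_user_type("athlete", {"athlete", "elderly"}) -> "athlete"
```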
In another optional embodiment, the state type includes a steady state type or a fluctuating state type, where the steady state type is used to indicate that a body temperature variation amplitude of the target user is less than or equal to a preset temperature variation threshold, and the fluctuating state indicates that the body temperature variation amplitude corresponding to the target user is greater than the temperature variation threshold;
after the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices are analyzed based on the preset user state analysis model to obtain the state information corresponding to the target user, the scene device linkage-based user state determining method may further include the following steps:
When the state type of the target user is the fluctuation state type, acquiring body temperature data corresponding to the target user at intervals of a preset time length, and analyzing the body temperature data to obtain a body temperature change trend corresponding to the target user;
based on the body temperature change trend and the user state analysis model, predicting target environment parameters and predicted time required by a target user to change from a fluctuation state type to a stable state type, wherein the target environment parameters comprise at least one of temperature, humidity, air flow rate and light brightness corresponding to a target scene;
and generating control parameters of a second intelligent device for adjusting the environmental parameters of the target scene in the target scene according to the target environmental parameters and the estimated duration, and controlling the second intelligent device to adjust the environmental parameters of the target scene according to the control parameters of the second intelligent device, so as to adjust the somatosensory temperature corresponding to the target user to be within the somatosensory comfort temperature range determined by a predetermined user somatosensory model.
In this alternative embodiment, the somatosensory comfort temperature range is adjusted adaptively for different situations. For example, in summer, if the distance between an elderly person and the temperature control device (such as an air conditioner) is 2 meters and the cold air flow reaches the elderly person at an air flow rate of 0.2 m/s, the calculated somatosensory comfort temperature for the elderly person is about 28.2 °C with an allowable error of 0.5 °C, which is not limited by the embodiment of the present invention.
In this optional embodiment, when the user state is a fluctuating state (such as a strenuous exercise state), the target environmental parameters and the estimated duration required for the target user to return to a steady state (such as a quiet state) can be estimated intelligently, the control parameters of the second intelligent device for adjusting the environmental parameters of the target scene are generated accordingly, and the second intelligent device is set according to the control parameters, so that the target user always stays within a comfortable somatosensory temperature range, which is beneficial to improving the user experience and user stickiness.
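A minimal numerical sketch of the trend analysis and the estimation step is given below; the baseline temperature, the recovery rate and the returned environment parameters are placeholder assumptions standing in for the trained user state analysis model.

```python
def body_temperature_trend(samples, interval_s: float) -> float:
    """Slope of the periodically collected body-temperature samples, in deg C per second."""
    if len(samples) < 2:
        return 0.0
    return (samples[-1] - samples[0]) / (interval_s * (len(samples) - 1))

def predict_recovery(samples, interval_s: float,
                     baseline_c: float = 36.8,
                     recovery_rate_c_per_min: float = 0.05):
    """Estimate target environment parameters and the duration needed for the target user
    to change from the fluctuating state type back to the steady state type."""
    excess_c = max(samples[-1] - baseline_c, 0.0)        # how far above the assumed baseline
    est_minutes = excess_c / recovery_rate_c_per_min     # assumed linear recovery
    target_env = {"temperature_c": 26.0, "air_flow_m_per_s": 0.2, "humidity_pct": 50}
    return target_env, est_minutes, body_temperature_trend(samples, interval_s)
```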
In this optional embodiment, further optionally, generating, according to the target environmental parameter and the estimated duration, a control parameter of a second intelligent device in the target scene for adjusting the environmental parameter of the target scene includes:
analyzing scene coordinates according to a predetermined user somatosensory model to obtain the distance between a target user and target intelligent equipment;
according to a predetermined user somatosensory model, analyzing a target environment parameter, estimated time length, state information corresponding to a target user and a distance to obtain a somatosensory comfort parameter matched with the target user, wherein the somatosensory comfort parameter matched with the target user comprises a somatosensory comfort temperature range;
Generating control parameters of a second intelligent device for adjusting environmental parameters of the target scene in the target scene according to the somatosensory comfort parameters matched with the target user;
the scene equipment linkage-based user state determining method can further comprise the following steps:
in the process of controlling the second intelligent device to adjust the environmental parameters of the target scene, judging whether the data fluctuation condition of the body temperature data corresponding to the target user in the estimated time length is matched with the fluctuation condition included in the fluctuation curve estimated by the user state analysis model;
when judging that the fluctuation condition of the body temperature data corresponding to the target user is not matched with the fluctuation condition included in the fluctuation curve estimated by the user state analysis model in the estimated time period, updating the body temperature change trend corresponding to the target user according to the data fluctuation condition, and re-executing the operation based on the body temperature change trend and the user state analysis model to estimate the target environment parameters and the estimated time period required by the target user to change from the fluctuation state type to the stable state type.
In this alternative embodiment, it should be noted that the comfort parameter may further include comfort humidity, comfort gas flow rate, comfort light intensity, and comfort noise intensity, which are not limited by the embodiment of the present invention.
In this optional embodiment, the distance between the target user and the target intelligent device is determined through the scene coordinates, and the somatosensory comfort parameters matched with the target user are then obtained by analyzing the distance, the target environmental parameters, the estimated duration and the state information of the target user, which improves the accuracy of the determined somatosensory comfort parameters; as a result, after the target intelligent device adjusts the environmental parameters of the target scene, the target user is kept at a comfortable somatosensory temperature, improving the use experience and stickiness of the target user to a certain extent. In addition, an error self-checking mechanism is provided while the second intelligent device is controlled to adjust the environmental parameters of the target scene: by judging whether the data fluctuation of the target user's body temperature within the estimated duration matches the fluctuation included in the curve estimated by the user state analysis model, and executing the corresponding body temperature change trend update for different judgment results, the probability that the determined somatosensory comfort parameters do not match the current somatosensory comfort of the target user is reduced.
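To make the self-checking loop concrete, the toy functions below approximate a somatosensory comfort range from the distance, air flow rate and user state, and compare measured body temperature against the predicted fluctuation curve. The coefficients and the 0.5 °C margin echo the example above but are otherwise assumptions, not the patent's actual user somatosensory model.

```python
def somatosensory_comfort_range(distance_m: float, air_flow_m_per_s: float,
                                user_state: str, base_c: float = 26.0):
    """Return an assumed comfort band (low, high) in deg C for the target user."""
    offset = 0.5 * air_flow_m_per_s * distance_m      # placeholder attenuation term
    if user_state == "quiet state":                   # e.g. an elderly person or child at rest
        base_c += 1.5
    centre = base_c + offset
    return centre - 0.5, centre + 0.5                 # 0.5 deg C margin, as in the example

def needs_reestimation(measured_temps, predicted_curve, tolerance_c: float = 0.3) -> bool:
    """Error self-check: True when the measured body-temperature fluctuation no longer
    matches the fluctuation curve estimated by the user state analysis model."""
    return any(abs(m - p) > tolerance_c for m, p in zip(measured_temps, predicted_curve))
```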
Example three
Referring to fig. 4, fig. 4 is a schematic structural diagram of a user state determining apparatus based on scene device linkage according to an embodiment of the present invention. The apparatus may be a terminal, a device, a system or a server for determining a user state based on scene device linkage, where the server may be a local server, a remote server or a cloud server; when the server is a non-cloud server, the non-cloud server can be communicatively connected with a cloud server, which is not limited by the embodiment of the present invention. As shown in fig. 4, the scene device linkage-based user state determining apparatus may include a detection module 301, a determination module 302, a control module 303, and a first analysis module 304, where:
The detection module 301 is configured to detect whether at least one target intelligent device in an operating state exists in a target scene, where all target intelligent devices are intelligent devices in a preset monitoring list, and an association relationship is established between an intelligent device included in the monitoring list and a user state acquisition device.
The determining module 302 is configured to determine, when the detecting module 301 detects that at least one target smart device exists in the target scene, a time-varying parameter corresponding to each target smart device according to the monitoring list, where the time-varying parameter includes a device type of each target smart device.
The determining module 302 is further configured to determine a working area corresponding to each target intelligent device according to the time-varying parameter, and determine scene coordinates of the target user in the target scene according to all the working areas.
The control module 303 is configured to control the target user state acquisition device corresponding to the scene coordinate determined by the determining module 302 to acquire a state parameter of the target user, where the state parameter includes body movement amplitude and/or body temperature change data of the target user.
The first analysis module 304 is configured to analyze, based on a preset user state analysis model, the state parameters obtained by the control module 303 and the time-varying parameters corresponding to the plurality of target intelligent devices obtained by the determining module 302, so as to obtain the state information of the target user, where the state information of the target user includes the state type of the target user.
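Purely as a structural aid, the skeleton below arranges the four modules of fig. 4 as methods of one Python class. The field names ("running", "type", "start_time") and the stubbed bodies are assumptions, since the patent describes the modules functionally rather than as code.

```python
class SceneLinkageUserStateApparatus:
    """Skeleton of the apparatus in fig. 4; method names follow modules 301-304."""

    def detect_running_devices(self, target_scene, monitoring_list):
        """Detection module 301: smart devices from the monitoring list running in the scene."""
        return [d for d in monitoring_list if d.get("scene") == target_scene and d.get("running")]

    def time_varying_parameters(self, devices):
        """Determining module 302: device type (and start-up time) for every running device."""
        return {d["id"]: {"device_type": d["type"], "start_time": d.get("start_time")}
                for d in devices}

    def collect_state_parameters(self, scene_coordinates):
        """Control module 303: trigger the acquisition device tied to the coordinates (stub)."""
        raise NotImplementedError

    def analyse_state(self, state_parameters, time_varying_parameters):
        """First analysis module 304: run the preset user state analysis model (stub)."""
        raise NotImplementedError
```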
Therefore, the user state determining device based on scene equipment linkage described in fig. 4 can automatically detect the target intelligent devices running in the monitoring list, so that the time-varying parameter of each target intelligent device is determined intelligently and the determination efficiency of the time-varying parameters is improved. Further, after the time-varying parameters are determined, the working area corresponding to each target intelligent device can be determined intelligently according to the time-varying parameters, and the scene coordinates of the target user in the target scene are then determined according to all the working areas; by analyzing the working areas of the intelligent devices in the running state, exact real-time coordinates of the user are obtained, which improves both the efficiency and the accuracy of determining the real-time coordinates. Furthermore, after the scene coordinates are determined, the state parameters of the target user can be acquired by the target user state acquisition device corresponding to the scene coordinates, so that the acquisition is controlled precisely and both the acquisition efficiency and the accuracy of the acquired result are improved. In summary, through information sharing among the intelligent devices in the running state in the target scene, the target user is positioned accurately, the target user state acquisition device is linked to acquire the state parameters of the target user accurately and efficiently, and the state information of the target user is finally determined intelligently based on the acquired state parameters in combination with the time-varying parameters corresponding to the target intelligent devices, which improves the efficiency of determining the state information of the user as well as the reliability and accuracy of the determined state information.
In an alternative embodiment, the determining module 302 is further configured to determine, according to the time-varying parameter, a target scene template from a plurality of scene templates determined in advance, where the target scene template is a scene template with a matching degree between the time-varying parameter and a preset standard time-varying parameter being greater than a preset matching degree threshold;
the determining module 302 is further configured to determine, according to the target scene template, a first state corresponding to the target user, where the first state is a user state preset in the target scene template.
As shown in fig. 5, the first analysis module 304 may include a first analysis sub-module 3041, a second analysis sub-module 3042, and a determination sub-module 3043, wherein:
the first analysis submodule 3041 is configured to analyze the state parameters and time-varying parameters corresponding to the plurality of target intelligent devices based on a preset user state analysis model, and obtain a temporary state corresponding to the target user.
The second analysis sub-module 3042 is configured to analyze the state level of the temporary state and the state level of the first state obtained by the first analysis sub-module 3041 based on the user state analysis model, and obtain an analysis result.
The determining submodule 3043 is configured to determine a state with a higher state level in the analysis result obtained by the second analysis submodule 3042 as state information corresponding to the target user.
When the state level of the temporary state is smaller than or equal to the state level of the first state, the state information corresponding to the target user is the first state.
Therefore, the user state determining device based on the scene equipment linkage described in fig. 5 can intelligently determine the target scene template according to the time-varying parameters, so as to determine the first state of the target user, and after the state level of the first state and the state level of the temporary state are analyzed through the user state analysis model, determine the state information of the target user, thereby improving the accuracy and reliability of the determined state information of the target user.
In another alternative embodiment, as shown in fig. 6, the scene device linkage-based user status determining apparatus may further include a first judging module 305, a second analyzing module 306, and a second judging module 307, where:
the first determining module 305 is configured to determine whether a voice control instruction for setting a target scene is received before the determining module 302 determines a working area corresponding to each target intelligent device according to the time-varying parameter, and when it is determined that the voice control instruction is not received, trigger the determining module 302 to execute the above operation of determining the working area corresponding to each target intelligent device according to the time-varying parameter.
The second analysis module 306 is configured to, when the first determination module 305 determines that the voice control instruction is received, analyze the voice control instruction to obtain a target field for setting a target scene.
The first determining module 305 is further configured to determine whether a user status field for indicating a current status of the user is included in the target field.
The determining module 302 is further configured to determine, when the first determining module 305 determines that the target field includes the user state field, the user self-described state corresponding to the user state field as the state information corresponding to the target user.
The second judging module 307 is configured to, when the first judging module 305 judges that the target field does not include the user status field, judge whether, in executing the scene setting operation matching the target field, the user status field is included in a post-hierarchy instruction associated with the scene setting operation, the post-hierarchy instruction being an instruction whose execution order follows the scene setting operation.
The second determining module 307 is further configured to trigger the determining module 302 to execute the above operation according to the time-varying parameter to determine the working area corresponding to each target smart device when it is determined that the user status field is not included in the post-hierarchy instruction associated with the scene setting operation during the execution of the scene setting operation matching the target field.
The determining module 302 is further configured to determine, when the second determining module 307 determines that, during execution of the scene setting operation matching the target field, a user status field is included in a post-hierarchy instruction associated with the scene setting operation, the user status field included in the post-hierarchy instruction is determined to be status information corresponding to the target user.
It can be seen that the user state determining device based on scene device linkage described in fig. 6 can automatically extract, after receiving the voice control instruction, the user state field that expresses the current state of the user and is included in the voice control instruction or in a post-hierarchy instruction following it, and determine the user state field as the state information of the target user.
In yet another alternative embodiment, the time-varying parameter further includes a start-up time of each target smart device; the determining module 302 determines a working area corresponding to each target intelligent device according to the time-varying parameters, and determines scene coordinates of the target user in the target scene according to all the working areas, where the determining method specifically includes:
determining a working area and a running time corresponding to each target intelligent device according to the device type of each target intelligent device and the starting time of each target intelligent device, which are included in the time-varying parameters;
screening out target operation time lengths smaller than a preset time length threshold value from all operation time lengths to obtain first intelligent equipment corresponding to the target operation time lengths;
determining a working area corresponding to the first intelligent device as a scanning area, and controlling a user state acquisition device corresponding to the scanning area to scan the scanning area to obtain a scanning result, wherein the scanning area is an area with the probability of a target user in the area being greater than a preset probability threshold;
and when the scanning result shows that the target user is in the scanning area, determining the coordinates corresponding to the moving range of the target user in the scanning area, which are included in the scanning result, as scene coordinates of the target user in the target scene.
Therefore, the user state determining device based on scene device linkage described in fig. 6 can determine the working area and operation duration of each target intelligent device according to the device type and start-up time included in the time-varying parameters, so as to preliminarily determine the area range in which the user is located; after that range is scanned and the target user is detected, the scene coordinates of the user in the target scene are further narrowed down, improving the reliability and accuracy of the determined scene coordinates of the target user.
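A compact sketch of that four-step flow is shown below; the dictionary keys, the scanner interface and the use of elapsed run time as the screening criterion are illustrative assumptions.

```python
def locate_target_user(devices, duration_threshold_s: float, scanner, now_s: float):
    """Screen recently started devices, scan their working areas, and return the scene
    coordinates of the detected movement range (or None when no user is found)."""
    scan_areas = []
    for dev in devices:
        run_time_s = now_s - dev["start_time"]          # operation duration from start-up time
        if run_time_s < duration_threshold_s:           # short run time: user likely nearby
            scan_areas.append(dev["working_area"])      # working area looked up by device type
    for area in scan_areas:
        result = scanner.scan(area)                     # user state acquisition device scan
        if result.get("user_present"):
            return result["movement_range_coordinates"]
    return None
```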
In another alternative embodiment, as shown in fig. 6, the scene device linkage-based user status determining apparatus may further include a third analysis module 308 and a third judgment module 309, where:
the third analysis module 308 is configured to analyze the state parameters acquired by the target user state acquisition device to obtain the pending user type of the target user before the first analysis module 304 analyzes the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on the preset user state analysis model to obtain the state of the target user.
A third judging module 309, configured to judge whether there is a target type matching the pending user type obtained by the third analyzing module 308 in all the preset user self-determined types stored in the database.
The determining module 302 is further configured to determine that the standard type is the current user type of the target user from all the user-defined types when the third determining module 309 determines that the target type matching the pending user type does not exist in all the user-defined types stored in the database.
The determining module 302 is further configured to determine the target type as the current user type corresponding to the target user when the third determining module 309 determines that the target type matching the pending user type exists in all the user self-determined types stored in the database.
The first analysis submodule 3041 analyzes the state parameters and time-varying parameters corresponding to the plurality of target intelligent devices based on a preset user state analysis model, and the method for obtaining the temporary state corresponding to the target user specifically includes:
based on a preset user state analysis model, analyzing state parameters, time-varying parameters corresponding to a plurality of target intelligent devices and current user types to obtain temporary states corresponding to target users.
Therefore, the user state determining device based on scene equipment linkage described in fig. 6 can intelligently analyze the acquired state parameters, and execute the corresponding operation of determining the standard type or the target type as the current user type according to the matching result between the user self-determined type and the user type to be determined stored in the database, so that the accuracy of determining the current user type is improved, and meanwhile, the reliability and the accuracy of the temporary state result of the subsequent determined target user are improved to a certain extent.
In yet another optional embodiment, the state type includes a steady state type or a fluctuating state type, where the steady state type is used to indicate that a body temperature variation amplitude of the target user is less than or equal to a preset temperature variation threshold, and the fluctuating state indicates that the body temperature variation amplitude corresponding to the target user is greater than the temperature variation threshold; as shown in fig. 6, the scene device linkage-based user state determining apparatus may further include an acquisition module 310, a fourth analysis module 311, an estimation module 312, and a generation module 313, where:
the acquisition module 310 is configured to acquire body temperature data corresponding to the target user at intervals for a preset duration after the first analysis module 304 analyzes the state parameters and time-varying parameters corresponding to the plurality of target intelligent devices based on a preset user state analysis model to obtain state information corresponding to the target user, where the state type of the target user is a fluctuation state type.
The fourth analysis module 311 is configured to analyze the body temperature data collected by the collection module 310 to obtain a body temperature variation trend corresponding to the target user.
The estimating module 312 is configured to estimate, based on the body temperature variation trend and the user state analysis model obtained by the fourth analyzing module 311, a target environmental parameter and an estimated duration required for the target user to change from the fluctuation state type to the steady state type, where the target environmental parameter includes at least one of a temperature, a humidity, an air flow rate, and a light brightness corresponding to the target scene.
The generating module 313 is configured to generate control parameters of the second intelligent device for adjusting the environmental parameters of the target scene in the target scene according to the target environmental parameters and the estimated duration obtained by the estimating module 312.
The control module 303 is further configured to control the second intelligent device to adjust the environmental parameters of the target scene according to the control parameters of the second intelligent device generated by the generating module 313, so as to adjust the somatosensory temperature corresponding to the target user to be within the somatosensory comfort temperature range determined by the predetermined user somatosensory model.
Therefore, the user state determining device based on scene device linkage described in fig. 6 can intelligently estimate, when the user state is a fluctuating state (such as a strenuous exercise state), the target environmental parameters and the estimated duration required for the target user to return to a steady state (such as a quiet state), generate the control parameters of the second intelligent device for adjusting the environmental parameters of the target scene accordingly, and set the second intelligent device according to the control parameters, so that the target user always stays within a comfortable somatosensory temperature range, which is beneficial to improving the user experience and user stickiness.
In this optional embodiment, further optionally, the generating module 313 generates, according to the target environmental parameter and the estimated duration, the control parameter of the second smart device for adjusting the environmental parameter of the target scene in the target scene specifically includes:
Analyzing scene coordinates according to a predetermined user somatosensory model to obtain the distance between a target user and target intelligent equipment;
according to a predetermined user somatosensory model, analyzing a target environment parameter, estimated time length, state information corresponding to a target user and a distance to obtain a somatosensory comfort parameter matched with the target user, wherein the somatosensory comfort parameter matched with the target user comprises a somatosensory comfort temperature range;
generating control parameters of a second intelligent device for adjusting environmental parameters of the target scene in the target scene according to the somatosensory comfort parameters matched with the target user;
the scene device linkage-based user state determining apparatus may further include a fourth judging module 314 and an update processing module 315, where:
the fourth judging module 314 is configured to judge, in the process of controlling the second intelligent device by the control module 303 to adjust the environmental parameter of the target scene, whether the data fluctuation condition of the body temperature data corresponding to the target user in the estimated duration obtained by the estimating module 312 matches the fluctuation condition included in the fluctuation curve estimated by the user state analysis model.
The update processing module 315 is configured to update the body temperature variation trend corresponding to the target user according to the data fluctuation situation when the fourth judging module 314 judges that the data fluctuation situation of the body temperature data corresponding to the target user is not matched with the fluctuation situation included in the fluctuation curve estimated by the user state analysis model within the estimated time period, and trigger the estimating module 312 to re-execute the operations based on the body temperature variation trend and the user state analysis model, and estimate the target environmental parameters and the estimated time period required for the target user to change from the fluctuation state type to the steady state type.
As can be seen, by implementing the user state determining device based on scene device linkage described in fig. 6, the distance between the target user and the target intelligent device is determined through the scene coordinates, and the somatosensory comfort parameters matched with the target user are then obtained by analyzing the distance, the target environmental parameters, the estimated duration and the state information of the target user, which improves the accuracy of the determined somatosensory comfort parameters; as a result, after the target intelligent device adjusts the environmental parameters of the target scene, the target user is kept at a comfortable somatosensory temperature, improving the use experience and stickiness of the target user to a certain extent. In addition, an error self-checking mechanism is provided while the second intelligent device is controlled to adjust the environmental parameters of the target scene: by judging whether the data fluctuation of the target user's body temperature within the estimated duration matches the fluctuation included in the curve estimated by the user state analysis model, and executing the corresponding body temperature change trend update for different judgment results, the probability that the determined somatosensory comfort parameters do not match the current somatosensory comfort of the target user is reduced.
Example four
Referring to fig. 7, a schematic structural diagram of another user state determining apparatus based on scene device linkage according to an embodiment of the present invention is shown. As shown in fig. 7, the scene device linkage-based user state determining apparatus may include:
a memory 401 storing executable program codes;
a processor 402 coupled with the memory 401;
the processor 402 invokes executable program codes stored in the memory 401 to perform the steps in the scene device linkage-based user state determination method described in the first or second embodiment of the present invention.
Example five
The embodiment of the invention discloses a computer storage medium which stores computer instructions for executing the steps in the scene equipment linkage-based user state determining method described in the first or second embodiment of the invention when the computer instructions are called.
Example six
An embodiment of the present invention discloses a computer program product, which includes a non-transitory computer storage medium storing a computer program, and the computer program is operable to cause a computer to perform the steps in the scene device linkage-based user state determination method described in the first embodiment or the second embodiment.
The apparatus embodiments described above are merely illustrative: the modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules, that is, they may be located in one place or distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement the solution without undue burden.
From the above detailed description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by means of hardware. Based on such understanding, the foregoing technical solutions, or the part contributing to the prior art, may be embodied essentially in the form of a software product, which may be stored in a computer storage medium, including a read-only memory (ROM), a random access memory (RAM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), a one-time programmable read-only memory (OTPROM), an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc memory, magnetic disk memory, tape memory, or any other computer-readable medium that can be used to carry or store data.
Finally, it should be noted that the user state determining method and device based on scene equipment linkage disclosed in the embodiments of the present invention are disclosed only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions recorded in the various embodiments can still be modified, or some of their technical features can be replaced equivalently; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. A method for determining a user state based on scene equipment linkage, the method comprising:
detecting whether at least one target intelligent device in an operation state exists in a target scene, wherein all the target intelligent devices are intelligent devices in a preset monitoring list, and the intelligent devices included in the monitoring list have an association relationship with user state acquisition equipment;
when at least one target intelligent device exists in the target scene, determining a time-varying parameter corresponding to each target intelligent device according to the monitoring list, wherein the time-varying parameter comprises the device type of each target intelligent device;
Determining a working area corresponding to each target intelligent device according to the time-varying parameters, and determining scene coordinates of a target user in the target scene according to all the working areas;
and controlling a target user state acquisition device corresponding to the scene coordinates to acquire state parameters of the target user, analyzing the state parameters and time-varying parameters corresponding to a plurality of target intelligent devices based on a preset user state analysis model to obtain state information of the target user, wherein the state parameters comprise body movement amplitude and/or body temperature change data of the target user, and the state information of the target user comprises the state type of the target user.
2. The scene device linkage-based user state determination method according to claim 1, further comprising:
determining a target scene template from a plurality of scene templates which are determined in advance according to the time-varying parameters, wherein the target scene template is a scene template with the matching degree between the time-varying parameters and the preset standard time-varying parameters being larger than a preset matching degree threshold;
determining a first state corresponding to the target user according to the target scene template, wherein the first state is a preset user state in the target scene template;
The analyzing the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on a preset user state analysis model to obtain the state information corresponding to the target user comprises the following steps:
based on a preset user state analysis model, analyzing the state parameters and time-varying parameters corresponding to a plurality of target intelligent devices to obtain temporary states corresponding to the target users;
based on the user state analysis model, analyzing the state level of the temporary state and the state level of the first state to obtain an analysis result, and determining a state with a higher state level in the analysis result as state information corresponding to the target user;
and when the state level of the temporary state is smaller than or equal to the state level of the first state, the state information corresponding to the target user is the first state.
3. The method for determining a user state based on scene device linkage according to claim 1 or 2, wherein before determining a working area corresponding to each target intelligent device according to the time-varying parameter, the method further comprises:
Judging whether a voice control instruction for setting a target scene is received or not, and executing the operation of determining a working area corresponding to each target intelligent device according to the time-varying parameters when judging that the voice control instruction is not received;
when the voice control instruction is judged to be received, the voice control instruction is analyzed to obtain a target field for setting the target scene;
judging whether the target field comprises a user state field for representing the current state of a user, and determining the state of a user corresponding to the user state field as state information corresponding to the target user when the target field comprises the user state field;
when the target field is judged to not comprise the user state field, judging whether a post-level instruction associated with the scene setting operation comprises the user state field or not in the process of executing the scene setting operation matched with the target field, wherein the post-level instruction is an instruction of which the execution sequence is after the scene setting operation;
when judging that the user state field is not included in a post-level instruction associated with the scene setting operation in the process of executing the scene setting operation matched with the target field, executing the operation according to the time-varying parameters and determining a working area corresponding to each target intelligent device;
And when judging that the user state field is included in a post-hierarchy instruction associated with the scene setting operation in the process of executing the scene setting operation matched with the target field, determining the user state field included in the post-hierarchy instruction as state information corresponding to the target user.
4. The scene device linkage-based user state determination method according to claim 1 or 2, wherein the time-varying parameters further include a start-up time of each of the target smart devices; determining a working area corresponding to each target intelligent device according to the time-varying parameters, and determining scene coordinates of a target user in the target scene according to all the working areas, wherein the method comprises the following steps:
determining a working area and a running time corresponding to each target intelligent device according to the device type of each target intelligent device and the starting time of each target intelligent device, which are included by the time-varying parameters;
screening out target operation time lengths smaller than a preset time length threshold value from all the operation time lengths to obtain first intelligent equipment corresponding to the target operation time lengths;
Determining a working area corresponding to the first intelligent device as a scanning area, and controlling a user state acquisition device corresponding to the scanning area to scan the scanning area to obtain a scanning result, wherein the scanning area is an area with the probability of a target user in the area being greater than a preset probability threshold;
and when the scanning result indicates that the target user is in the scanning area, determining the coordinates corresponding to the moving range of the target user in the scanning area, which are included in the scanning result, as scene coordinates of the target user in the target scene.
5. The method for determining a user state based on scene device linkage according to claim 2, wherein the method further comprises, before the state parameters and time-varying parameters corresponding to the plurality of target intelligent devices are analyzed based on a preset user state analysis model to obtain the state of the target user:
analyzing the state parameters acquired by the target user state acquisition equipment to obtain the undetermined user type of the target user;
judging whether target types matched with the undetermined user types exist in all preset user self-defined types stored in a database;
When judging that the target type matched with the undetermined user type does not exist in all the user self-defined types stored in the database, determining that the standard type is the current user type of the target user from all the user self-defined types;
when judging that the target type matched with the undetermined user type exists in all the user self-defined types stored in the database, determining the target type as the current user type corresponding to the target user;
the step of analyzing the state parameters and the time-varying parameters corresponding to the plurality of target intelligent devices based on a preset user state analysis model to obtain a temporary state corresponding to the target user comprises the following steps:
and analyzing the state parameters, time-varying parameters corresponding to a plurality of target intelligent devices and the current user type based on a preset user state analysis model to obtain a temporary state corresponding to the target user.
6. The scene-device-linkage-based user state determination method according to any one of claims 1 to 5, wherein the state type includes a steady state type or a fluctuating state type, wherein the steady state type is used for indicating that a body temperature change amplitude of the target user is less than or equal to a preset temperature change threshold, and the fluctuating state indicates that the body temperature change amplitude corresponding to the target user is greater than the temperature change threshold;
The method further comprises the steps of after analyzing the state parameters and time-varying parameters corresponding to a plurality of target intelligent devices based on a preset user state analysis model and obtaining state information corresponding to the target user:
when the state type of the target user is the fluctuation state type, acquiring body temperature data corresponding to the target user at preset time intervals, and analyzing the body temperature data to obtain a body temperature change trend corresponding to the target user;
estimating a target environment parameter and an estimated time period required by the target user to change from the fluctuation state type to the stable state type based on the body temperature change trend and the user state analysis model, wherein the target environment parameter comprises at least one of temperature, humidity, air flow rate and light brightness corresponding to the target scene;
and generating control parameters of a second intelligent device for adjusting the environmental parameters of the target scene in the target scene according to the target environmental parameters and the estimated time length, and controlling the second intelligent device to adjust the environmental parameters of the target scene according to the control parameters of the second intelligent device, so as to adjust the somatosensory temperature corresponding to the target user to be within the somatosensory comfort temperature range determined by a predetermined user somatosensory model.
7. The method for determining a user state based on scene device linkage according to claim 6, wherein the generating control parameters of a second intelligent device for adjusting environmental parameters of the target scene in the target scene according to the target environmental parameters and the estimated duration comprises:
analyzing the scene coordinates according to a predetermined user somatosensory model to obtain the distance between the target user and the target intelligent equipment;
according to a predetermined user somatosensory model, analyzing the target environment parameter, the estimated time length, the state information corresponding to the target user and the distance to obtain a somatosensory comfort parameter matched with the target user, wherein the somatosensory comfort parameter matched with the target user comprises the somatosensory comfort temperature range;
generating control parameters of a second intelligent device in the target scene for adjusting environmental parameters of the target scene according to the somatosensory comfort parameters matched with the target user;
and, the method further comprises:
judging whether the data fluctuation condition of the body temperature data corresponding to the target user in the estimated time length is matched with the fluctuation condition included in the fluctuation curve estimated by the user state analysis model or not in the process of controlling the second intelligent device to adjust the environmental parameters of the target scene;
When judging that the fluctuation condition of the body temperature data corresponding to the target user is not matched with the fluctuation condition included in the fluctuation curve estimated by the user state analysis model in the estimated time period, updating the body temperature change trend corresponding to the target user according to the data fluctuation condition, and re-executing the operation based on the body temperature change trend and the user state analysis model to estimate the target environment parameters and the estimated time period required by the target user to change from the fluctuation state type to the stable state type.
8. A scene device linkage-based user state determining apparatus, the apparatus comprising:
a detection module, configured to detect whether at least one target intelligent device in an operating state exists in a target scene, wherein all the target intelligent devices are intelligent devices in a preset monitoring list, and the intelligent devices included in the monitoring list are associated with user state acquisition devices;
a determining module, configured to, when the detection module detects that at least one target intelligent device exists in the target scene, determine a time-varying parameter corresponding to each target intelligent device according to the monitoring list, wherein the time-varying parameter comprises the device type of each target intelligent device;
the determining module is further configured to determine a working area corresponding to each target intelligent device according to the time-varying parameters, and to determine scene coordinates of a target user in the target scene according to all the working areas;
a control module, configured to control a target user state acquisition device corresponding to the scene coordinates to acquire state parameters of the target user, wherein the state parameters comprise body movement amplitude and/or body temperature change data of the target user;
and a first analysis module, configured to analyze, based on a preset user state analysis model, the state parameters obtained by the control module and the time-varying parameters corresponding to the plurality of target intelligent devices obtained by the determining module, to obtain the state information of the target user, wherein the state information of the target user comprises the state type of the target user.
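One way to picture the module split of claim 8 is as four cooperating objects wired in the order detection, determination, control, analysis. The Python sketch below is a toy arrangement under assumed data shapes (a monitoring list keyed by device id, a work-area center per device, canned sensor readings); none of the class or field names come from the patent.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class DetectionModule:
    monitoring_list: Dict[str, dict]   # device_id -> metadata, incl. device type

    def detect_running_devices(self, scene_devices: Dict[str, bool]) -> List[str]:
        """Return monitored devices currently in an operating state."""
        return [d for d, running in scene_devices.items()
                if running and d in self.monitoring_list]


@dataclass
class DeterminingModule:
    monitoring_list: Dict[str, dict]

    def time_varying_parameters(self, device_ids: List[str]) -> Dict[str, dict]:
        return {d: self.monitoring_list[d] for d in device_ids}

    def scene_coordinates(self, params: Dict[str, dict]) -> Optional[tuple]:
        """Very rough placeholder: combine working areas by averaging the
        center recorded for each running device."""
        centers = [p["work_area_center"] for p in params.values()]
        if not centers:
            return None
        return (sum(c[0] for c in centers) / len(centers),
                sum(c[1] for c in centers) / len(centers))


@dataclass
class ControlModule:
    def acquire_state_parameters(self, coords: tuple) -> dict:
        # A real system would address the acquisition device nearest to
        # `coords`; here we simply return canned readings.
        return {"body_movement_amplitude": 0.1, "body_temperature": 36.6}


@dataclass
class FirstAnalysisModule:
    def analyse(self, state_params: dict, tv_params: Dict[str, dict]) -> dict:
        stable = state_params["body_movement_amplitude"] < 0.2
        return {"state_type": "stable" if stable else "fluctuation"}


if __name__ == "__main__":
    monitoring = {"tv": {"device_type": "tv", "work_area_center": (1.0, 2.0)},
                  "ac": {"device_type": "ac", "work_area_center": (3.0, 2.0)}}
    det = DetectionModule(monitoring)
    dtm = DeterminingModule(monitoring)
    running = det.detect_running_devices({"tv": True, "ac": True, "lamp": False})
    tv_params = dtm.time_varying_parameters(running)
    coords = dtm.scene_coordinates(tv_params)
    state = FirstAnalysisModule().analyse(
        ControlModule().acquire_state_parameters(coords), tv_params)
    print(coords, state)
```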
9. A scene device linkage-based user state determining apparatus, the apparatus comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the scene device linkage based user state determination method of any one of claims 1-7.
10. A computer storage medium storing computer instructions which, when invoked, are operable to perform the scene device linkage based user state determination method of any one of claims 1-7.
CN202210256922.3A 2022-03-16 2022-03-16 User state determining method and device based on scene equipment linkage Pending CN116794989A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210256922.3A CN116794989A (en) 2022-03-16 2022-03-16 User state determining method and device based on scene equipment linkage

Publications (1)

Publication Number Publication Date
CN116794989A true CN116794989A (en) 2023-09-22

Family

ID=88035016

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210256922.3A Pending CN116794989A (en) 2022-03-16 2022-03-16 User state determining method and device based on scene equipment linkage

Country Status (1)

Country Link
CN (1) CN116794989A (en)

Similar Documents

Publication Publication Date Title
US10535349B2 (en) Controlling connected devices using a relationship graph
US11050577B2 (en) Automatically learning and controlling connected devices
US20210160326A1 (en) Utilizing context information of environment component regions for event/activity prediction
US11243502B2 (en) Interactive environmental controller
CN109445848B (en) Equipment linkage method and device
US11315400B1 (en) Appearance based access verification
CN109951363B (en) Data processing method, device and system
WO2019199365A2 (en) Utilizing context information of environment component regions for event/activity prediction
CN114253190A (en) Intelligent shower control method and device, intelligent equipment and storage medium
CN110989430A (en) Smart home linkage method and system and readable storage medium
US20210133462A1 (en) State and event monitoring
CN116794989A (en) User state determining method and device based on scene equipment linkage
CN116400610A (en) Equipment control method, device, electronic equipment and storage medium
CN114063572A (en) Non-sensing intelligent device control method, electronic device and control system
CN115356943A (en) Wireless intelligent home system based on BP neural network
CN114187650A (en) Action recognition method and device, electronic equipment and storage medium
CN113963467A (en) Intelligent door lock control method, control device, intelligent equipment and storage medium
CN117515652A (en) Heating equipment intelligent control method and device based on user action induction
CN112051621B (en) Method and device for judging whether room is occupied or not
CN117368902B (en) Track tracking method, device, equipment and storage medium
CN117784625A (en) Intelligent prediction method and device for equipment to be controlled
CN117289613A (en) Pre-starting intelligent implementation method and device of intelligent equipment
CN116414041A (en) Intelligent device control method and device based on position information
CN117031975A (en) Panel configuration method and device of intelligent switch and computer storage medium
CN116339160A (en) Linkage control method and device of equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination