GB2579577A - Method and system for managing automated devices

Method and system for managing automated devices

Info

Publication number
GB2579577A
GB2579577A
Authority
GB
United Kingdom
Prior art keywords
user
processes
devices
interaction
impact
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1819762.4A
Other versions
GB201819762D0 (en)
Inventor
David Michael Duffy
Christopher John Wright
Matthew John Lawrenson
Timothy Giles Beard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Centrica PLC
Original Assignee
Centrica PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Centrica PLC filed Critical Centrica PLC
Priority to GB1819762.4A priority Critical patent/GB2579577A/en
Publication of GB201819762D0 publication Critical patent/GB201819762D0/en
Publication of GB2579577A publication Critical patent/GB2579577A/en
Legal status: Withdrawn (current)

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • G05B15/02Systems controlled by a computer electric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/26Pc applications
    • G05B2219/2642Domotique, domestic, home control, automation, smart house
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40411Robot assists human in non-industrial environment like home or office

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

A system and method for configuring control actions for semi-automated control of devices in a user environment. The method comprises identifying a set of automatable control processes for devices in the environment 202, wherein the control processes comprise potential user interactions or actions completable by a user, determining for each process a measure of impact on the user if the process is carried out with user interaction 204, selecting, based on the impact measures determined for the processes, one or more processes to be carried out with user interaction 206, and configuring one or more devices in the user environment to perform the selected one or more processes with user interaction 208.

Description

Intellectual Property Office Application No. GB1819762.4 RTM Date: 3 June 2019 The following terms are registered trade marks and should be read as such wherever they occur in this document: Bluetooth - pages 5, 9, 21, 23; Zigbee - pages 5, 9, 21, 23; WiFi - pages 5, 7, 9, 21, 23. Intellectual Property Office is an operating name of the Patent Office www.gov.uk/ipo
METHOD AND SYSTEM FOR MANAGING AUTOMATED DEVICES
The present invention relates to managing automated devices, such as home robots and artificial intelligence agents. In particular, the present invention is directed to controlling devices in a user environment to perform tasks alongside human interaction.
Home robots and AI (Artificial Intelligence) agents (e.g. integrated into smart speakers, personal computers or the like) can assist with, or fully automate, a range of functions and processes in the home. In addition to their practical utility, such devices are expected to become increasingly social as well as useful, especially for the ageing market. One factor in successfully expanding the scope of such devices is to enable collaboration with human users on tasks, augmenting the robot or AI's utility and helping to build effective collaborative relationships.
Home automation through home robots and AIs will provide increasing support for conducting tasks in the home, both physical and mental. While this may make household tasks easier for the occupant, it could potentially take away opportunities to engage in the tasks which keep them physically and mentally healthy and socially engaged. This can be particularly problematic in the case of elderly people living alone.
It is understood that simple tasks such as housework can provide significant physical and mental health benefits, especially for the elderly. These household tasks do not need to be strenuous to have significant health benefits, and small mental tasks, such as planning a meal and shopping list, or writing and sending emails, can help maintain cognitive abilities. Whether and how much a task will be beneficial to the health of an individual can be dependent on a range of factors, such as the occupant health, mood, and how much sleep they have had.
Use of assistive robots and AIs to increasingly take on physical and mental tasks within the home which ordinarily help maintain human physical and social health can therefore contribute to a general decline in elderly health. On the other hand, encouraging elderly people to help in tasks purely for a social interaction benefit may not provide a useful health benefit and may even be detrimental to health. Furthermore, some tasks within the home may require a degree of human assistance to be effective, but the assistance required may not have optimal health benefits, and may even be detrimental to health.
Embodiments of the invention seek to ameliorate at least some of the problems of home automation outlined above.
Overview
Embodiments of the invention provide a system for configuring semi-automated tasks and processes in a smart home environment. A "smart home" or "connected home" is a home or similar environment comprising a range of network/Internet connected devices, including sensor devices (e.g. cameras), controllable devices (e.g. heating systems, home robots such as vacuum cleaners, etc.), and robot and AI agent devices (e.g. smart speakers and other devices with natural language interfaces or conversational agents).
Embodiments described herein use smart home data to identify opportunities for human assistance in robot/AI tasks and assess them for their ability to help achieve the mental and physical health goals of occupants, optimise the completion of robot/AI tasks within the home, and improve the user perception (e.g. likability) of the robot/AIs involved. In broad outline, the system performs the following steps:
1. The system identifies opportunities for occupants to assist with robot/AI automated tasks within the home.
2. The system assesses the identified opportunities for:
   a. the predicted health benefits to the occupant;
   b. the predicted social benefits to the occupant;
   c. the predicted effects on the efficacy of the completed task;
   d. optionally, the predicted effects on the occupant's perception of the robot/AI device (e.g. its likeability).
3. The system selects which opportunities to instigate based on the above assessments.
4. The system may monitor the resultant task completion to improve predictions in future.
The system may additionally create test protocols for occupants and AI/robot devices which are enacted to gather data to improve predictions.
Hence, there are described herein the following features:
1. A system to identify opportunities for possible human-robot interactions for robot/AI devices within the home capable of interaction with an individual;
2. A system to predict the physical and mental impact of possible human-robot interactions;
3. A system to determine the effects on the likability and the utility of the robot/AI device due to possible human-AI collaborations;
4. A system to determine which human-robot collaboration opportunities to enact based on the predicted physical and mental impacts on the occupant and on the likability and utility impacts on the robot/AI device;
5. An algorithm to design interaction tests with less invasive requests to provide data on the impact of future human-robot collaboration opportunities.
Connected homes can contain a range of devices which can monitor the interactions between occupants and the various robots/AIs within the home, including video cameras, sensors, and the robots/AIs themselves.
There is described herein a computer-implemented method of configuring control actions for semi-automated control of devices in a user environment, the method comprising: identifying a set of automatable (control) processes for devices in the environment, wherein the control processes comprise potential user interactions or actions completable by a user; determining for each process a measure of impact on the user if the process is carried out with user interaction; selecting, based on the impact measures determined for the processes, one or more processes to be carried out with user interaction; and configuring one or more devices in the user environment to perform the selected one or more processes with user interaction.
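By way of illustration only, the above method might be sketched in code as follows; the data structures, the scoring stub and the top-k selection rule are assumptions for this sketch and are not prescribed by the method itself.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Process:
    name: str
    impact: float = 0.0  # measure of impact on the user if done with interaction

def configure_processes(processes: List[Process],
                        score_impact: Callable[[Process], float],
                        top_k: int = 2) -> Tuple[List[Process], List[Process]]:
    """Score each automatable process, select the top-k to be carried out
    with user interaction, and leave the remainder fully automated."""
    for p in processes:
        p.impact = score_impact(p)
    ranked = sorted(processes, key=lambda p: p.impact, reverse=True)
    return ranked[:top_k], ranked[top_k:]  # (with interaction, automated)

# Example with a fixed lookup standing in for the impact-prediction algorithms:
impacts = {"vacuum floor": 0.8, "water plants": 0.5, "brew coffee": 0.1}
with_user, automated = configure_processes(
    [Process(n) for n in impacts], lambda p: impacts[p.name])
print([p.name for p in with_user])  # ['vacuum floor', 'water plants']
```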
Advantageously, it is possible to provide a range of health and social benefits to the user by encouraging them to perform processes which would otherwise be performed automatically, e.g. by robots or other machines. In addition, processes which can be performed more efficiently with user interaction than without can be identified, which may lead to processes being completed more quickly and/or using fewer resources (e.g. less electrical energy or other fuel, or less of a consumable product).
The user interaction, or task, may comprise the user performing all or only a part of the process. For example, where the process is vacuum cleaning a floor, the user interaction may involve vacuum cleaning a first portion of the floor that is easily accessible to the user, whilst the remainder (a second portion) of the floor that is not easily accessible to the user (e.g. under furniture or in corners) may be performed automatically by a robot or automatic vacuum cleaner. Preferably, configuring one or more devices in the user environment to perform the selected one or more processes with user interaction comprises configuring the devices to complete the entire process with user interaction.
Preferably, one or more remaining, non-selected ones of the set of control processes are configured to be completed without user interaction, e.g. in fully automated fashion.
Preferably each of the automatable control processes is capable of being completed automatically, e.g. without user interaction. For example, each process may be capable of being completed by one or more devices, without input from a user. The inventors have found that, contrary to common perception, by causing such a process to be completed partly or wholly by a user, it is possible to provide improvements to the quality of life of the user and/or to the efficiency of completing the process. Completing a process with human interaction may comprise a device completing only one part or aspect of the process, whilst the user completes another part or aspect.
Optionally, the step of selecting one or more processes to be carried out with user interaction comprises: determining whether each process in the set of automatable processes is to be carried out with user interaction, based on: the impact measure of each process; and the impact measure of the remainder of the selected processes in the set of automatable processes. For example, the impact measure could be measured on a numerical scale and a process with the highest impact measure compared to all the other processes may be selected.
In some examples, the step of selecting one or more processes to be carried out with user interaction is based on a target user impact, such as a predetermined target user impact. The target user impact could be based on information provided by the user or by a health advisor. The target user impact could, for example, be a target health impact, such as a target number of steps to be performed by a user, or a target heart rate to be reached (optionally for a target duration).
Preferably, configuring one or more devices in the user environment to perform the selected one or more processes with user interaction comprises: sending, to the one or more devices, one or more configuration messages operable to cause the devices to perform the selected one or more processes with user interaction. For example, the configuration messages could comprise control commands. Such messages can be sent via a wireless connection, such as Bluetooth or Zigbee or WiFi. Optionally the method further comprises: sending, to the one or more devices, one or more configuration messages operable to cause the devices to perform the one or more remaining, non-selected ones of the set of control processes to be completed without user interaction, e.g. in fully automated fashion. The configuration messages relating to the one or more remaining, non-selected ones of the set of control processes to be completed in fully automated fashion can also comprise control commands and may also be sent via a wireless connection, such as Bluetooth or Zigbee or WiFi. However, in some embodiments it is not necessary to send configuration messages regarding the one or more remaining, non-selected ones of the set of control processes, since the default configuration of the devices is to complete processes without user interaction.
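By way of illustration, a configuration message might be structured as follows; the field names and values are assumptions for this sketch rather than a defined message format.

```python
import json

# Illustrative configuration message; the structure is an assumption for
# this sketch, not a protocol defined by the embodiments.
config_message = {
    "device_id": "vacuum-01",
    "process_id": "vacuum-living-room",
    "mode": "with_user_interaction",    # or "fully_automated"
    "user_portion": "open floor area",  # part of the process left to the user
    "robot_portion": "under furniture",
}
payload = json.dumps(config_message).encode("utf-8")
# payload would then be sent to the device over e.g. WiFi, Zigbee or Bluetooth
```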
Preferably, determining an impact measure comprises determining a measure of health impact, e.g. health impact on the user. A health impact measure may have a higher score (or be higher on the numerical scale) if it contributes more to the health of a user.
In some embodiments, health impacts are classified into a plurality of categories, optionally each with its own score. Determining a health impact may comprise determining a measure of expected physical activity required to complete the interaction. Examples of health impacts include a measure of energy expended (e.g. in kcal) during completion of the interaction; a measure of steps taken during completion of the interaction; and a measure of sustained heart rate increase during completion of the interaction. Energy expenditure can be calculated or estimated based on the weight of the person and a standard measure of energy expenditure for that type of user interaction or activity, and optionally an expected duration of that interaction. The standard measure of energy expenditure could, for example, be derived from a lookup table.
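By way of illustration, such a lookup-table estimate might be computed as follows, using the common convention that energy (kcal) is approximately the MET value multiplied by body weight (kg) and duration (h); the MET values shown are typical published figures used here only as examples.

```python
# Sketch of a lookup-table energy estimate; the MET values are typical
# published figures, included here only for illustration.
MET_TABLE = {"vacuum_cleaning": 3.3, "sweeping": 3.0, "washing_dishes": 1.8}

def estimated_kcal(activity: str, weight_kg: float, duration_min: float) -> float:
    """Energy (kcal) ~ MET value x body weight (kg) x duration (h)."""
    return MET_TABLE[activity] * weight_kg * (duration_min / 60.0)

# e.g. a 70 kg occupant vacuuming for 20 minutes:
print(round(estimated_kcal("vacuum_cleaning", 70, 20)))  # ~77 kcal
```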
In some embodiments determining a measure of impact comprises determining a social impact. The social impact can be based on the expected user interactions with humans or intelligent agents in performing the process, for example the amount of conversation (e.g. on the telephone) required to complete the user interaction. In some embodiments the social impact is determined based on a monitored reaction from the user during one or more previous user interactions, e.g. from audio or video monitoring.
The monitored reaction could, for example, be classified into one of a plurality of categories, for example "positive", "negative" and "neutral". In some examples audio or video monitoring may be used in conjunction with AI to detect the emotional response of a user in response to performing the process with user interaction, e.g. audio monitoring could detect the sound of a user laughing or the pitch of their voice, whilst video monitoring could detect emotion from facial expression, such as a smile, frown or laugh.
The method can include ranking the processes in the identified set of automatable control processes according to the measure(s) of impact on the user, and optionally the measure of efficacy of the process; and selecting the one or more processes to be carried out with user interaction based on the ranking, preferably by selecting one or more of the highest ranking processes.
In some embodiments, the method comprises determining a plurality of measures of impact, for example at least one social impact and at least one health impact.
Each measure of impact or measure of efficacy may be assigned a weighting and the step of selecting one or more processes to be carried out with user interaction can take this weighting into account. For example, for some users health impact may be considered more important than a social impact.
In certain embodiments, the method further comprises: determining for each process a measure of efficacy of the process if the process is carried out with user interaction; and wherein selecting one or more processes to be carried out with user interaction is further based on the measure of efficacy determined for the process. The measure of efficacy could, for example, include a comparison of the (predicted) time taken to complete the process with and without human interaction and/or a (predicted) quality with which the process is completed with and without human interaction. The measure of efficacy could additionally or alternatively include the amount of resources consumed, such as energy consumption, and/or the predicted depreciation of the devices/equipment (such as devices which have a predetermined, or even predicted (e.g. based on an average from the manufacturer), amount of use before they require replacing).
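By way of illustration, a combined efficacy measure over predicted time and energy use might be sketched as follows; the particular weights and the relative-gain formula are assumptions for this sketch.

```python
# Sketch of one possible efficacy measure: compare predicted completion time
# and resource use with and without user interaction (weights are assumptions).
def efficacy_score(time_auto: float, time_assisted: float,
                   energy_auto: float, energy_assisted: float,
                   w_time: float = 0.5, w_energy: float = 0.5) -> float:
    """Positive scores mean the assisted process is predicted to be
    quicker and/or cheaper in resources than the fully automated one."""
    time_gain = (time_auto - time_assisted) / time_auto
    energy_gain = (energy_auto - energy_assisted) / energy_auto
    return w_time * time_gain + w_energy * energy_gain

# e.g. assisted vacuuming: 20 min instead of 30, 0.05 kWh instead of 0.08
print(round(efficacy_score(30, 20, 0.08, 0.05), 2))  # 0.35
```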
Preferably, selecting one or more processes to be carried out with user interaction is based on: user schedule data, preferably wherein the user schedule data comprises a determination that a user is, or is expected to be, present in the user environment. For example, the determination that the user is present in the user environment could be derived from an electronic calendar. It could also be based on historical or trend data for the user, e.g. predicting the user will be within the user environment at a certain time each day based on the user having been present at the user environment for more than a predetermined threshold number or proportion of days in a preceding period. User schedule data may also indicate a user is present in the user environment, but is busy or otherwise occupied, in which case processes with user interaction may not be scheduled for busy time periods.
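By way of illustration, the threshold-proportion rule described above might be sketched as follows; the 80% threshold is an assumption for this sketch.

```python
# Sketch of the threshold-proportion presence prediction described above;
# the 80% threshold is an assumed value, not specified by the embodiments.
def predict_present(history: list, threshold: float = 0.8) -> bool:
    """history: one True/False entry per day in the preceding period,
    recording whether the user was home at the hour in question."""
    return sum(history) / len(history) >= threshold

# e.g. home at 18:00 on 26 of the last 30 days -> predicted present
print(predict_present([True] * 26 + [False] * 4))  # True
```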
Preferably, the user schedule data comprises a determination that a user is expected to be present in the user environment, and wherein the method further comprises: determining that the user is expected to be present in the user environment based on user monitoring data collected by one or more monitoring devices in the user environment. For example, it may be detected that a user is present based on detecting presence of a user's mobile device in or nearby the user environment (e.g. by GPS monitoring or by the mobile device connecting to a Local Area Network, such as WiFi).
Selecting one or more processes to be carried out with user interaction can be further based on: user schedule data for a plurality of users. User schedule data for a plurality of users may be particularly helpful when selecting a process is based on a social impact measure. For example, if user schedule data indicates at least two users are (likely to be) present in a user environment in a given time period, processes with a high social impact may not be selected (e.g. given a lower priority). Conversely, if schedule data indicates a user has been, or is likely to be, alone at the user environment, processes with a high social impact can be prioritised, or selected preferentially.
In some examples, selecting one or more processes to be carried out with user interaction is further based on a target completion deadline for each process. The target completion deadline could be set by the user, or could be pre-programmed. When selecting one or more processes to be carried out with user interaction is based on a target completion deadline for each process, an estimated length of time for completion of the process with and without user interaction and/or the user schedule data can be taken into account. Selecting one or more processes to be carried out with user interaction may be further based on a measure of how critical the target completion deadline is.
In some embodiments, the method further comprises determining for each process a measure of predicted effect on user perception of the process if the process is carried out with user interaction; selecting one or more processes to be carried out with user interaction is then further based on the measure of predicted effect on user perception determined for the process. In some embodiments the measure of predicted effect on user perception is determined based on a monitored reaction from the user during one or more previous user interactions, e.g. from audio or video monitoring. The monitored reaction could, for example, be classified into one of a plurality of categories, for example "positive", "negative" and "neutral". In some examples audio or video monitoring may be used in conjunction with AI to detect the emotional response of a user in response to performing the process with user interaction, e.g. audio monitoring could detect the sound of a user laughing or the pitch of their voice, whilst video monitoring could detect emotion from facial expression, such as a smile, frown or laugh.
Preferably, the method further comprises: identifying one or more user interactions required for completion of each of the selected one or more processes; and outputting a set of instructions to a user for performing the identified one or more user interactions, for example by displaying on a display device, such as a mobile phone or tablet, or outputting audibly, such as output via a speaker.
Preferably, selecting one or more processes to be carried out with user interaction is further based on historical performance information collected for previous occasions on which the process was carried out with user interaction, preferably wherein one or both of: the impact measures, and the measure of efficacy of the process if the process is carried out with user interaction are based on the historical performance information.
Preferably, the method also comprises: monitoring the performance of the selected one or more processes to determine performance information. Such performance information can be used in future selection of processes for user interaction, e.g. as historical performance information.
Preferably the method further comprises: developing a test protocol for testing the effect of user interaction on a test group of processes comprising one or more of the set of automatable control processes by identifying one or more test user interactions for each process; causing the test group of processes to be performed according to the test protocol, wherein performing a control process according to the test protocol comprises performing the process with the test user interaction; and monitoring the performance of the test group of processes to determine test performance information. Advantageously, the step of determining for each process a measure of impact on the user if the process is carried out with user interaction can be based on the test performance data.
Optionally, causing the test protocol to be performed comprises: outputting a set of instructions to a user for performing the identified one or more test user interactions, preferably by displaying on a display device, such as a mobile phone or tablet, or audibly, such as output via a speaker.
Preferably, the identified one or more test user interactions are user interactions that would result in a portion or fraction, e.g. half or a quarter, of a process being completed.
Thus the test can be performed more quickly and with less user effort than for the selected one or more processes that are carried out in their entirety with user interaction.
Preferably, causing the test protocol to be performed comprises: sending, to each of the one or more devices, one or more test control messages operable to cause the devices to perform the test group of processes with one or more test user interactions. For example, the control messages could comprise control commands. Such messages can be sent via a wireless connection, such as Bluetooth or Zigbee or WiFi.
Preferably, causing the test protocol to be performed comprises: sending, to one or more monitoring devices, one or more monitoring control messages operable to cause the monitoring devices to monitor the performance of the test group of processes with one or more test user interactions.
Preferably, monitoring the performance of the selected one or more processes or the test group of processes comprises one or more of: recording visible or infrared light images of the user environment, preferably video images; recording sound in the user environment, preferably using a microphone; recording the movement of a mobile user device, e.g. from the output of an accelerometer or by a location tracker, such as a GPS sensor; recording one or more health characteristics, such as steps taken by a user (e.g. from the output of an accelerometer), or a user's heart rate (e.g. from a heart rate monitor); and recording utility consumption for the one or more processes, e.g. electricity or gas consumption, e.g. from an electricity or gas meter. Where the recorded data is not numerical, e.g. visible images or sound, monitoring the performance may further comprise categorising the process according to the recorded data, e.g. by analysing the data via AI.
There is also described a non-transient computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out a method substantially as described above.
There is also described a controller for configuring control actions for semi-automated control of devices in a user environment, the controller comprising: an interface operable to communicate with one or more devices in the environment; a memory for storing information regarding one or more automatable control processes for devices in the environment; and a processor operable to: identify a set of automatable control processes for devices in the environment, wherein the control processes comprise potential user interactions or actions completable by a user; determine for each process a measure of impact on the user if the process is carried out with user interaction; select, based on the impact measures determined for the process, one or more processes to be carried out with user interaction; and send, to one or more devices in the user environment, one or more control commands to cause the one or more devices to perform the selected one or more processes with user interaction.
The processor may be further configured to perform any of the method steps as described above.
There is also described a system for configuring control actions for semi-automated control of devices in a user environment, the system comprising: a controller as described above; and one or more devices in the user environment operable to perform a set of automatable control processes, wherein the control processes comprise potential user interactions or actions completable by a user.
In some examples, the system further comprises: one or more monitoring devices operable to: monitor the performance of the selected one or more processes to determine performance information, and optionally to determine test performance information; and send the performance information, and optionally the test performance information, to the controller; and preferably operable to: receive one or more monitoring control messages operable to cause the monitoring devices to monitor the performance of one or more processes with one or more test user interactions.
Brief Description of the Figures
Embodiments will now be described, by way of example only and with reference to the accompanying drawings, in which:
Figure 1 illustrates an exemplary system for configuring semi-automated tasks and processes in a smart home environment;
Figure 2 illustrates a flow diagram of an exemplary method for configuring semi-automated tasks and processes in a smart home environment;
Figure 3 illustrates an exemplary method of identifying potential tasks to be performed with user interaction;
Figure 4 illustrates an exemplary method for assessing health impact;
Figure 5 illustrates an exemplary user environment and server for configuring semi-automated tasks and processes in the user environment; and
Figure 6 illustrates an exemplary test method for testing processes performed with user interaction.
Detailed description
As used herein, the term "Connected Home" preferably refers to a home with networked devices and sensors which can share data within and outside of the home environment. "Tasks" are the tasks that must be completed to maintain the normal operation of the Connected Home, such as vacuum cleaning, watering plants and completing shopping orders.
The term "occupant" refers to the occupant of the Connected Home. There may be many Occupants in a single Connected Home, with different social and health needs and limitations.
The term "Artificial Intelligence or Robotic Device (AIRD)" preferably refers to the collection of devices within a Connected Home capable of completing specific Tasks within the home. AIRDs may consist of, for example: * Al devices, such as Al assistants, chatbots, and automated schedule planners; * Robotic devices, such as robotic vacuum cleaners, automatic dishwashers and sprinkler systems.
The system is illustrated in overview in Figure 1, with various key components described below.
Monitoring Systems 118 are a collection of systems and devices within a Connected Home capable of monitoring events within the home and relaying this data to other systems for processing. Some AIRDs may also act as Monitoring Systems. Monitoring Systems may include, for example:
* Video cameras;
* Microphones;
* User devices, such as wearable monitoring devices for e.g. location tracking (e.g. GPS sensor) or fitness tracking (e.g. pedometer, heart rate sensor), accelerometers and the like, or mobile phones, which may themselves contain various component monitoring devices;
* AI assistant devices;
* Energy monitors.
The Task Database 110 is used to store data related to the Tasks which must be completed in the Connected Home. The Task Database may contain the following data:
* Task ID - A unique identifier, such as a number, assigned to each Task to be completed in the home.
* AIRD ID - The unique identifiers of any AIRDs assigned to each Task (if relevant).
* Task Schedule Data - data on when Tasks in the home are scheduled to be undertaken, and information on how critical the timing is, such as automatic brewing of coffee with breakfast rather than at midnight.
* Human Input Opportunities - a Task-specific list of the components of a Task which may be assisted by an Occupant.
Each Human Input Opportunity has associated data detailing the effects of the human input if implemented, which may be based on historical data gathered on the Occupant and/or on one or more tests (as described further below). These may include health impact data and social impact data.
Health impact data includes data relevant to health metrics. For example, this may include the number of steps the Occupant is required to take in assisting the Task, or the amount of time spent in the sun exposed to UV.
Social impact data includes data relevant to social metrics. For example, this may include the amount of phone conversation required by the Occupant to fulfil the Task. In more advanced embodiments, this may include data on the emotional response of the Occupant to the Human Input Opportunity, such as whether they smiled during conversation with a chatbot AI or not.
The impact data may further include task efficacy impact data, e.g. data relevant to how well the Task is performed. It may include benchmarks for how quickly or efficiently a Task can be performed with and without human interaction.
Optionally, impact data may also include likability impact data. This is data relevant to the impact of the Task on the Occupant's attitude towards the relevant AIRD(s), e.g. a change in how much the Occupant likes the relevant AIRD(s). In simple embodiments, this may include how much time an Occupant has spent in a room with the AIRD. In more advanced embodiments, this may include data on the emotional response of the Occupant to the considered AIRD, such as whether they smiled whilst interacting with it.
In some embodiments of the system, some or all Human Input Opportunities may include priority data for prioritisation in later selection. In simple embodiments this may be a ranking from "lowest priority" to "highest priority". More advanced embodiments may include conditions, such as "high priority" for interactions with certain Occupants or AIRDs. Priority Data may be valuable to the manufacturers or providers of AIRDs, and therefore in some embodiments Priority Data may be configurable (e.g. changed for a fee). In an example, AIRD manufacturers/providers may pay a fee to gain a "high priority" for Human Input Opportunities which involve interactions with their AIRDs, such that their product or service gains more exposure; or that involve interactions with particular demographics of Occupants, such that they can gather targeted user data.
The Occupant Database 108 is used to store data relating to the Occupants of the Connected Home. In an embodiment, the Occupant Database contains the following data:
* Occupant ID - An identifier, such as a number, assigned to each Occupant of the home.
* Occupant Health Status - Data on the current health status of the Occupant. In some embodiments, Occupant Health Status data may consist of data provided by fitness tracker applications, such as the steps taken by the Occupant that day, how much sleep they have had or historical heart-rate data. More advanced embodiments may include data gained through analysis of data from other Monitoring Systems, such as video analysis of the frequency and intensity of motion within the Connected Home.
* Occupant Health Goals - Data on the current health goals of the Occupant, such as a target number of steps taken or target lengths for periods of raised heart-rate. Occupant Health Goals may have been input directly by the Occupant or their carer, may be retrieved from fitness tracker applications, or may be algorithmically generated based on the Occupant Health Status over time.
* Occupant Social Status - Data on the current social health of the Occupant. In simple embodiments, Occupant Social Status may comprise data provided by the Occupant's personal mobile device or social media, such as the number of calls or text messages sent or received, or how often they have left the house. More advanced embodiments may include data gained through analysis of data from other Monitoring Systems, such as video analysis of their interactions with others in the home to gain information on the frequency, content or mood of social interactions, or through analysis of their vocal interactions with chatbot AIs.
* Occupant Social Goals - Data on the current social health goals of the Occupant, such as the total amount of time spent positively interacting with another human or AIRD. Occupant Social Goals may have been input directly by the Occupant or their carer, or may be algorithmically generated based on the Occupant Social Status over time.
* Occupant Schedule Data - Data on the upcoming movements and activities of the Occupant in the Connected Home. In simple embodiments, Occupant Schedule Data may be taken directly from the Occupant's calendar applications, giving a binary "at home" or "out of home" for different times in the day. In more advanced embodiments, Occupant Schedule Data may consist of the predicted location and activity of the Occupant within the home at a given time, which for example may be gained through ongoing analysis of data provided by Monitoring Systems.
The above provides examples of information stored in the Occupant Database in accordance with certain embodiments. In practice, the database may include any or all of the above types of data and/or may contain other data useful for interaction planning.
For example, the database may additionally include information on the Occupant AIRD Likability - e.g. data on how much the Occupant currently likes each AIRD. Data may be provided by algorithms processing Monitoring System outputs, such as chatbot responses or video processing for Occupant emotional responses to AIRDs.
The Potential Interaction Identification Algorithm 112 produces a candidate list of potential interactions for each Occupant, along with the relevant Occupant ID, Task ID and AIRD IDs (the "Potential Interaction List"). Its inputs can include Occupant Schedule Data from the Occupant Database; Task Schedule Data from the Task Database; and Human Input Opportunities from the Task Database. The process for producing the candidate list of potential interactions will be described in more detail later.
The Potential Interaction Prediction Unit 102 is a system unit used to predict the health impact, social impact, and task efficacy impact and, optionally, predict the likability impact should the considered interactions in the Potential Interactions List be completed.
The Potential Interaction Prediction Unit comprises the following sub-components:
* Health Impact Algorithm 104 - an algorithm which predicts a likely health impact were the potential interaction to be completed (the "Predicted Health Impact") for every potential interaction in the Potential Interactions List, which may be based on: the Occupant Health Status; the Occupant Health Goals; and the expected Health Effects due to the Human Input Opportunities.
* Social Impact Algorithm 105 - an algorithm which predicts a likely social impact were the potential interaction to be completed (the "Predicted Social Impact") for every potential interaction in the Potential Interactions List, which may be based on: the Occupant Social Status; the Occupant Social Goals; and the Social Effects due to the Human Input Opportunities.
* Task Impact Algorithm 106 - an algorithm which predicts the likely impact on task efficacy were the potential interaction to be completed (the "Predicted Task Efficacy Impact") for every potential interaction in the Potential Interactions List, which may be based on the Task Efficacy Effects due to the Human Input Opportunities.
* Likability Impact Algorithm 107 - an algorithm which predicts the likely Likability Impact were the potential interaction to be completed (the "Predicted Likability Impact") for every potential interaction in the Potential Interactions List. This may be based on: the Likability Effects due to the Human Input Opportunities; and the current Occupant AIRD Likability for the relevant AIRDs, from the Occupant Database.
Note that the above set of impact assessment algorithms are listed by way of example, and embodiments may use only one or some of these to evaluate impact.
For example, the assessment of likability impact may be omitted, with the impact prediction focussing on health and social impact. Other embodiments could focus purely on the health impact. Other types of impact that are not listed could additionally be incorporated.
The Interaction Assignment Algorithm 116 ranks the potential interactions in the Potential Interaction List and selects the highest ranking potential interactions for instigation, based on the impacts predicted by the Potential Interaction Prediction Unit 102, i.e. based on one or more of: the Predicted Health Impact (algorithm 104); the Predicted Social Impact (algorithm 105); the Predicted Task Efficacy Impact (algorithm 106); and optionally the Predicted Likability Impact (algorithm 107). Details of the operation of the assignment algorithm are described in more detail below.
The Interaction Assessment Algorithm 114 may additionally use data gathered by Monitoring Systems to assess, for every completed Task, one or more of: the actual Health Effects; the actual Social Effects; the actual Task Efficacy Effects; and the actual Likability Effects. The outputs of the algorithm may be used to update the Human Interaction Opportunities in the Task Database. The Interaction Assessment Algorithm may be one or many different types of algorithm dependent on the type of data available from the Monitoring Systems.
Some embodiments may additionally provide a Potential Interaction Testing Algorithm 120. This uses data from the Potential Interaction List to generate test protocols (the "Test Protocols") for AIRDs within the home. Test Protocols may mimic aspects of the Human Input Opportunities of a particular Task in the Potential Interaction List, but to a lesser degree, such that predictions of the likely effects of that potential interaction can be better made via the Potential Interaction Prediction Unit. Data gathered from the tests may be provided to the Interaction Assessment Algorithm for analysis.
Task assignment process
Embodiments use the data and algorithms described above to provide a process for the assignment of AI/robot assisted tasks to occupants of a home whereby the physical and social health benefits of the task, as well as the task efficacy, are optimised. Optionally, the occupant likability of the AI/robot assistant may be optimised as well.
The process is shown in overview in Figure 2.
Firstly, in step 202, the Potential Interaction Identification Algorithm 112 produces the Potential Interaction List using Occupant Schedule Data from the Occupant Database; and Task Schedule Data and Human Input Opportunities from the Task Database.
This may be done via the following process, as further illustrated in Figure 3:
* In step 302, the Potential Interaction Identification Algorithm is initiated. The algorithm may be run periodically, such as once every week or every day. The algorithm is run to identify possible task interactions for a particular time period (e.g. the coming week or day).
* In step 304, scheduled Tasks are selected from the Task Database. Task Schedule Data in the Task Database is used to select only those tasks with Human Input Opportunities over the considered time period.
* In step 306, the Task Schedule Data of the selected Tasks is then compared to the Occupant Schedule Data, and Tasks which are scheduled at a time when an Occupant will be available to interact with them are selected and listed as the Potential Interaction List (a code sketch of this matching follows the list below).
* In step 308, if there is no (or insufficient) commonality between the Task scheduling and the Occupant scheduling, the scheduling of Tasks with high schedule flexibility in the Task Schedule Data may be altered such that the Tasks can coincide with Occupant availability. The rescheduled tasks are then added to the Potential Interaction List.
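By way of illustration, the schedule matching of steps 306 and 308 might be sketched as follows; the representation of schedule slots as (start hour, end hour) pairs and the rescheduling rule are assumptions for this sketch.

```python
# Sketch of the schedule matching in steps 306-308, assuming each Task slot
# and Occupant availability slot is a (start_hour, end_hour) pair on one day.
def overlaps(a, b):
    return a[0] < b[1] and b[0] < a[1]

def build_potential_interaction_list(tasks, occupant_slots):
    """tasks: list of dicts with 'id', 'slot' and 'flexible' keys.
    Returns the Task ids an Occupant could plausibly assist with."""
    potential = []
    for task in tasks:
        if any(overlaps(task["slot"], s) for s in occupant_slots):
            potential.append(task["id"])          # step 306: direct match
        elif task["flexible"] and occupant_slots:
            task["slot"] = occupant_slots[0]      # step 308: reschedule
            potential.append(task["id"])
    return potential

tasks = [{"id": "vacuum", "slot": (9, 10), "flexible": True},
         {"id": "water_plants", "slot": (14, 15), "flexible": False}]
print(build_potential_interaction_list(tasks, [(13, 16)]))
# ['vacuum', 'water_plants'] - vacuum is rescheduled into the 13-16 slot
```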
Returning to Figure 2, in step 204, the algorithms in the Potential Interaction Prediction Unit 102 output the Predicted Health Impact, Predicted Social Impact, Predicted Task Efficacy Impact, and optionally the Predicted Likability Impact.
By way of example, operation of the Health Impact Algorithm is illustrated in Figure 4.
In step 402, the Health Impact Algorithm numerically compares the Health Goals of an Occupant with their current Health Status, such as their daily step goal and their current number of steps taken, to obtain a metric for the required health change (the "Required Health Change"). In step 404, the Required Health Change is numerically compared to the Health Change given in the Human Input Opportunities in the Task Database, such as the stated number of steps required in a vacuum cleaning Task. In step 406, the Predicted Health Impact can be calculated, for example by calculating the numerical difference between the Required Health Change for the Occupant and the stated Health Change for the Task.
If the Required Health Change is low, for example if the Occupant has already taken all of the required steps to reach their Health Goal for the day, then the Predicted Health Impact would be low regardless of how many steps are given by the Health Change.
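By way of illustration, steps 402 to 406 might be sketched as follows for a daily step count metric; capping the impact at the Required Health Change is one reading consistent with the behaviour described above, not the only possible formula.

```python
# Sketch of steps 402-406 using a daily step count as the health metric.
# Capping the impact at the Required Health Change reflects the point above:
# once the goal is met, extra task steps add no further predicted impact.
def predicted_health_impact(goal_steps: int, steps_taken: int,
                            task_steps: int) -> int:
    required_change = max(goal_steps - steps_taken, 0)  # step 402
    return min(task_steps, required_change)             # steps 404-406

print(predicted_health_impact(8000, 5000, 1200))  # 1200 - all task steps help
print(predicted_health_impact(8000, 8000, 1200))  # 0 - goal already met
```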
Other types of impact are evaluated using analogous processes. For example, social impact can be determined by comparing the occupant's social goals/requirements (e.g. increased human interaction) with social effects specified for a task (e.g. required telephone/email interactions for completing the task) to determine a social impact.
Returning to Figure 2, in step 206, the Interaction Assignment Algorithm 116 ranks the potential interactions in the Potential Interaction List and selects the highest ranking potential interactions to trigger. The Interaction Assignment Algorithm may create a ranking (the "Interaction Ranking") using one or more of:
* The Predicted Health Impact;
* The Predicted Social Impact;
* The Predicted Task Efficacy Impact;
* The Predicted Likability Impact;
* Priority Data from the Task Database.
In one approach, the predicted impacts are averaged to get an Interaction Ranking for each interaction in the Potential Interaction List. Alternatively, the algorithm may give some of the impact scores a greater weight than others, for example the Predicted Health Impact may have an overall greater effect on the Interaction Ranking than the Predicted Task Efficacy (thus, the combined ranking may be computed as a weighted average of some or all of the above impact types).
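By way of illustration, such a weighted Interaction Ranking might be computed as follows; the weight values are assumptions for this sketch.

```python
# Sketch of a weighted Interaction Ranking; the weights are illustrative
# (health weighted above task efficacy, as suggested above).
WEIGHTS = {"health": 0.4, "social": 0.3, "efficacy": 0.2, "likability": 0.1}

def interaction_ranking(predicted: dict) -> float:
    """predicted: impact scores keyed by type, each on a common 0-1 scale."""
    return sum(WEIGHTS[k] * v for k, v in predicted.items())

candidates = {
    "assist vacuum cleaning": {"health": 0.9, "social": 0.2,
                               "efficacy": 0.6, "likability": 0.5},
    "plan shopping with chatbot": {"health": 0.1, "social": 0.8,
                                   "efficacy": 0.5, "likability": 0.7},
}
ranked = sorted(candidates, key=lambda n: interaction_ranking(candidates[n]),
                reverse=True)
print(ranked[0])  # 'assist vacuum cleaning' (score 0.59 vs 0.45)
```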
In step 208, the Task is triggered, and the Occupant is given instructions from the Human Input Opportunities as to how to assist in the Task.
In step 210, once Tasks have been completed, the Interaction Assessment Algorithm 114 assesses the actual resulting effects of the interaction. In an embodiment, the Interaction Assessment Algorithm uses data from the Monitoring Systems to assess one or more of the actual Health Effect, Social Effect, Task Efficacy Effect and Likability Effect of the completed Task interactions. The assessment process may be dependent on the type of data received from the Monitoring Systems. The new data is then used to update the averages stored in the data with the Human Input Opportunities in the Task Database.
For example, fitness tracker data may be used to count the number of steps actually taken by an Occupant in a vacuum cleaning task, and used to update the average number of steps for the Task in the Human Input Opportunities in the Task Database. If dedicated fitness tracker data is not available, then step counts recorded by a smartphone could be used, or alternatively raw accelerometer data from a smartphone could be used to estimate a step count. As a further example, audio data from a microphone could be used to determine the duration of the vacuum cleaning task, with the Health Effect estimated based on the duration.
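By way of illustration, a crude step estimate from raw accelerometer magnitudes might be sketched as follows; the threshold value and the rising-edge counting rule are assumptions for this sketch.

```python
# Sketch of estimating steps from raw accelerometer magnitudes when no
# fitness tracker is available: count rising edges of the signal above a
# threshold (the threshold here is an assumed, untuned value).
def estimate_steps(magnitudes, threshold: float = 11.0) -> int:
    steps = 0
    above = False
    for m in magnitudes:
        if m > threshold and not above:
            steps += 1          # rising edge = one footfall
            above = True
        elif m <= threshold:
            above = False
    return steps

# ~9.8 m/s^2 at rest, spiking above the threshold on each footfall
print(estimate_steps([9.8, 12.1, 9.6, 12.4, 9.7, 12.2, 9.8]))  # 3
```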
Optional Subprocess: Potential Interaction Testing
In some embodiments, as an additional step a Test Protocol is developed and run to gather further information about interactions. Such testing is generally performed once the candidate list of potential interactions has been generated by the Potential Interaction Identification Algorithm 112 (step 202). One or more Test Protocols are generated by the Potential Interaction Testing Algorithm 120 to obtain further data, or test data, to inform the Potential Interaction Prediction Unit 102. An exemplary method 600 for implementing such Test Protocols is shown in Figure 6.
The Potential Interaction Testing Algorithm 120 is generally run before the algorithms in the Potential Interaction Prediction Unit 102, e.g. between step 202 and step 204.
At Step 602 running the Potential Interaction Testing Algorithm 120 is triggered. For example, the trigger may be based on determining that there is not enough Human Input Opportunities data in the Task Database 110, for example insufficient data to make a prediction or a decision or selection of tasks. Alternatively, the Potential Interaction Testing Algorithm 120 can be triggered each time the method 200 is performed (regardless of the Human Input Opportunities data in the Task Database 110), e.g. triggered by the candidate list of potential interactions being generated by the Potential Interaction Identification Algorithm 112 at step 202.
At step 604 the Potential Interaction Testing Algorithm 120 identifies one or more relevant Human Input Opportunities. For example, the candidate list of potential interactions generated in step 202 can be used to look up the relevant Human Input Opportunities in the Task Database 110.
For each relevant Human Input Opportunity, the Potential Interaction Testing Algorithm 120 generates a Test Protocol (step 606). The Test Protocol can include instructions or control commands for one or more AIRDs and/or instructions for an Occupant to perform a human input. The Test Protocol can be generated using features of the Human Input Opportunity. For example, a Test Protocol for a Human Input Opportunity may comprise recording a Health Change that occurs when the Occupant performs the human input for a test period, e.g. a predetermined test period. In one example, for the Human Input Opportunity of sweeping the floor, the Health Change may be the change in (average) heart rate of the Occupant caused by the Occupant sweeping the floor for the test period, e.g. the increase in heart rate caused by sweeping the floor for 3 minutes. In such an example the Test Protocol comprises instructions for the considered Occupant to sweep the floor for a test period, e.g. 3 minutes. The Test Protocol can involve monitoring the Health Change for a test monitoring period, e.g. a predetermined test monitoring period, that is different from (preferably longer than) the test period. In this example, the predetermined test monitoring period is around 10 minutes.
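By way of illustration, the heart-rate Health Change for such a Test Protocol might be computed as follows; one sample per minute and the baseline-versus-test comparison are assumptions for this sketch.

```python
# Sketch of the heart-rate Test Protocol above: a 3-minute test period
# inside a ~10-minute monitoring window, sampled at one reading per minute
# (the sampling rate is an assumption for illustration).
def heart_rate_change(readings, test_start: int, test_len: int = 3) -> float:
    """readings: one bpm sample per minute over the monitoring period.
    Returns the average bpm during the test period minus the average
    over the baseline readings before it."""
    baseline = readings[:test_start]
    during = readings[test_start:test_start + test_len]
    return sum(during) / len(during) - sum(baseline) / len(baseline)

# 4 min rest, 3 min sweeping, 3 min recovery over a 10-minute window
readings = [68, 70, 69, 69, 88, 92, 90, 80, 74, 70]
print(round(heart_rate_change(readings, test_start=4)))  # ~21 bpm increase
```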
The Test Protocol may comprise a human input being performed within a part of the premises or home that is smaller, cleaner, more accessible and/or easier to move around in than other parts of the home.
Preferably the test period is less than 10 minutes, more preferably less than 5 minutes, most preferably around 3 minutes. Preferably the test monitoring period is greater than the test period. Preferably the test monitoring period is less than 30 minutes, more preferably less than 20 minutes, most preferably around 10 minutes. Preferably the test monitoring period is greater than 2 minutes and/or greater than 5 minutes.
Where the Test Protocol comprises one or more instructions for the Occupant, these instructions are output at step 608. For example, the instructions are output visually on a user interface, such as a smartphone or tablet screen, or audibly, e.g. via a speaker.
In step 610, where the Test Protocol comprises instructions for one or more Monitoring Devices, such instructions or commands are output to the one or more Monitoring Devices. For example, these Monitoring Devices may be cameras, motion sensors, sound sensors or microphones. The Monitoring Devices could alternatively or additionally be health monitoring devices, such as movement trackers or heart rate monitors worn by the Occupant. The commands for the Monitoring Devices can be output in the form of a message over a short-range wireless communication, e.g. via Zigbee or WiFi or Bluetooth, to the Monitoring Devices. The commands may cause the Monitoring Devices to record data for the test monitoring period.
At step 612 the Test Protocols are run, e.g. the Occupant performs the task(s) and the resultant effects/impacts (e.g. health, social and likeability impacts) and, optionally, the task performance are monitored by the Monitoring Devices. The monitoring can be based on the control commands output in step 610. For example, the Monitoring Devices may measure health characteristics of the Occupant, such as heart rate, blood pressure and/or step count when performing the task. In some embodiments, the time for completion of the task may be recorded. Additionally or alternatively, the quality of task completion can be recorded. The Monitoring Devices could be arranged to record still or video images of the user completing the task, e.g. to see how well vacuumed an area is, or how well cleaned a window or mirror is.
At step 614 the results of the monitoring are received, e.g. in the form of test monitoring data received in a message from each Monitoring Device. The message can be received over a short-range wireless communication as described in relation to step 610.
Following method 600, the Interaction Assessment Algorithm 114 may be used to process the data, e.g. by the same method as step 210 of the method of Figure 2. Processing may, for example, include one or more of updating the Human Input Opportunities with improved data on the Health Change, Social Change, Task Efficacy Change and optional Likability Change associated with the Task.
Figure 5 shows an exemplary user environment 516, in which one or more semiautomatable processes can be performed. The user environment 516 is a user premises, in this example the user's home.
The user environment 516 includes a plurality of devices in the home that can perform automated or semi-automated tasks. The user environment 516 includes lights 520 that can be remotely/automatically controlled, for example "smart" or connected lights, and an HVAC system 522. The HVAC system 522 can include, for example, heaters, air conditioning units and/or devices for automatically opening and closing windows or blinds. The user environment 516 includes an automated appliance 524 and an automated vacuum cleaner 526, for example a vacuum robot, which can perform vacuum cleaning without user interaction. The automated appliance 524 could, for example, be a sprinkler system for watering plants, or an automated dishwasher. In some examples, the automated appliance could be a semi-automated appliance, such as a coffee machine which can operate entirely without user interaction (e.g. on a timer or following a remote control signal) or with some user interaction (e.g. user activation by a switch on the coffee machine, or the user combining frothed milk and coffee, etc.). The user environment 516 also includes a smart speaker 528 and a display 534, which can be used for outputting user instructions. The user environment additionally has a camera 530 and a plurality of sensors 532 for monitoring processes performed in the environment. The sensors 532 include movement sensors, as well as heart rate monitors and step counters (e.g. accelerometers) associated with the user. The sensors 532 may include motion sensors for detecting whether a user is present in the user environment.
The user environment 516 also includes a smart home monitoring/control system 518 for controlling devices in the environment 516. The home monitoring/control system 518 is operable to send control commands to devices 520, 522, 524, 526, 528 and 534, and optionally also to devices 530 and 532. The home monitoring/control system 518 is able to receive monitoring data from the camera 530 and the sensors 532.
Such communications between home monitoring/control system 518 and other devices in the user environment 516 are generally via a wireless local area network, e.g. WiFi or Zigbee, or a short-range wireless connection such as Bluetooth.
The smart home monitoring/control system 518 is in communication with a wide area network, the Internet 514, e.g. via a modem at the user environment 516. Via the Internet 514, the smart home monitoring/control system 518 is in communication with a remote central server 500.
The central server 500 is operable to perform the method of Figure 2. The central server 500 includes a memory 504, a processor 502 and a network interface 512, providing communication with the Internet 514. The memory 504 can store data about the user environment 516 and available devices, such as the databases shown in Figure 1. The central server 500 also includes persistent storage 506, which holds processes 508, such as the algorithms shown in Figure 1, and a database management service 510 for creating, retrieving, updating and managing the databases used in the system.
In other embodiments, the method of Figure 2 is performed entirely by smart home monitoring/control system 518.
Although communication between the devices shown in Figure 5 is wireless, in alternative embodiments wired communication is envisaged.
Advantages of the System

The system described herein enables the automated selection of automated home tasks to be assisted by home occupants. Tasks can be selected based on the physical and social health benefits to the occupant, as well as the effects on task efficacy and on the occupant-perceived likability of the AI/robot involved in the task.
Specifically, one or more of the following advantages can be provided.

In some embodiments the system allows coordination of a premises-wide cooperative system of occupant interactions with connected AI/robot devices that may be from different manufacturers or providers.
Automated identification and selection of opportunities for occupants to increase or optimise physical activity within the home whilst contributing to household tasks can improve their physical health.
Automated identification and selection of opportunities for occupants to contribute to tasks in their household can increase their sense of usefulness, and encourage occupants to socialise with other humans and/or AI/robots. Thus the occupants' social health can be improved.
The efficacy of the household tasks can be improved by informed inclusion of human assistance in the task.
The occupants' attitude or view towards AI/robots (e.g. their "likability") can be improved by enabling collaboration on household tasks.
While a specific architecture is shown, any appropriate hardware/software architecture may be employed. For example, external communication may be via a wired network connection.
The above embodiments and examples are to be understood as illustrative examples. Further embodiments, aspects or examples are envisaged. It is to be understood that any feature described in relation to any one embodiment, aspect or example may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, aspects or examples, or any combination of any other of the embodiments, aspects or examples. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (29)

  1. A computer-implemented method of configuring control actions for semi-automated control of devices in a user environment, the method comprising: identifying a set of automatable control processes for devices in the environment, wherein the control processes comprise potential user interactions or actions completable by a user; determining for each process a measure of impact on the user if the process is carried out with user interaction; selecting, based on the impact measures determined for the processes, one or more processes to be carried out with user interaction; and configuring one or more devices in the user environment to perform the selected one or more processes with user interaction.
  2. A method according to claim 1, wherein one or more remaining, non-selected ones of the set of control processes are configured to be completed in fully automated fashion without user interaction.
  3. A method according to claim 1 or 2, wherein each of the automatable control processes is capable of being completed automatically, i.e. without user interaction.
  4. A method according to any preceding claim, further comprising: ranking the processes in the identified set of automatable control processes according to the measure(s) of impact on the user, and optionally the measure of efficacy of the process; and wherein selecting the one or more processes to be carried out with user interaction is based on the ranking, preferably by selecting one or more of the highest ranking processes.
  5. A method according to any preceding claim, wherein the step of selecting one or more processes to be carried out with user interaction comprises: determining whether each process in the set of automatable processes is to be carried out with user interaction, based on: the impact measure of each process; and the impact measure of the remainder of the selected processes in the set of automatable processes.
  6. A method according to any preceding claim, wherein the step of selecting one or more processes to be carried out with user interaction is based on a target user impact.
  7. A method according to any preceding claim, wherein configuring one or more devices in the user environment to perform the selected one or more processes with user interaction comprises: sending, to the one or more devices, one or more configuration messages operable to cause the devices to perform the selected one or more processes with user interaction; and optionally wherein the method further comprises: sending, to the one or more devices, one or more configuration messages operable to cause the devices to perform the one or more remaining, non-selected ones of the set of control processes to be completed in fully automated fashion without user interaction.
  8. A method according to any preceding claim, wherein determining an impact measure comprises determining a measure of health impact.
  9. A method according to claim 8, wherein determining a health impact comprises determining a measure of expected physical activity required to complete the interaction, optionally based on one or more of: a measure of energy expended during completion of the interaction; a measure of steps taken during completion of the interaction; and a measure of sustained heart rate increase during completion of the interaction.
  10. A method according to any preceding claim, wherein determining a measure of impact comprises determining a social impact, optionally based on expected user interactions with humans or intelligent agents.
  11. A method according to any preceding claim, further comprising: determining for each process a measure of efficacy of the process if the process is carried out with user interaction; and wherein selecting one or more processes to be carried out with user interaction is further based on the measure of efficacy determined for the process.
  12. A method according to any preceding claim, wherein selecting one or more processes to be carried out with user interaction is based on: user schedule data, preferably wherein the user schedule data comprises a determination that a user is, or is expected to be, present in the user environment.
  13. A method according to claim 12, wherein the user schedule data comprises a determination that a user is expected to be present in the user environment, and wherein the method further comprises: determining that the user is expected to be present in the user environment based on user monitoring data collected by one or more monitoring devices in the user environment.
  14. A method according to any preceding claim, wherein selecting one or more processes to be carried out with user interaction is further based on a target completion deadline for each process.
  15. A method according to any preceding claim, further comprising: determining for each process a measure of predicted effect on user perception of the process if the process is carried out with user interaction; and wherein selecting one or more processes to be carried out with user interaction is further based on the measure of predicted effect on user perception determined for the process.
  16. A method according to any preceding claim, further comprising: identifying one or more user interactions required for completion of each of the selected one or more processes; and outputting a set of instructions to a user for performing the identified one or more user interactions, preferably by displaying on a display device, such as a mobile phone or tablet, or audibly, such as output via a speaker.
  17. A method according to any preceding claim, wherein selecting one or more processes to be carried out with user interaction is further based on historical performance information collected for previous occasions on which the process was carried out with user interaction, preferably wherein one or both of: the impact measures, and the measure of efficacy of the process if the process is carried out with user interaction, are based on the historical performance information.
  18. A method according to any preceding claim, further comprising: monitoring the performance of the selected one or more processes to determine performance information.
  19. A method according to any preceding claim, further comprising: developing a test protocol for testing the effect of user interaction on a test group of processes comprising one or more of the set of automatable control processes by identifying one or more test user interactions for each process; causing the test group of processes to be performed according to the test protocol, wherein performing a control process according to the test protocol comprises performing the process with the test user interaction; and monitoring the performance of the test group of processes to determine test performance information.
  20. A method according to claim 19, wherein the step of determining for each process a measure of impact on the user if the process is carried out with user interaction is based on the test performance information.
  21. A method according to claim 19 or 20, wherein causing the test protocol to be performed comprises: outputting a set of instructions to a user for performing the identified one or more test user interactions, preferably by displaying on a display device, such as a mobile phone or tablet, or audibly, such as output via a speaker.
  22. A method according to any of claims 19 to 21, wherein causing the test protocol to be performed comprises: sending, to each of the one or more devices, one or more test control messages operable to cause the devices to perform the test group of processes with one or more test user interactions.
  23. A method according to any of claims 19 to 22, wherein causing the test protocol to be performed comprises: sending, to one or more monitoring devices, one or more monitoring control messages operable to cause the monitoring devices to monitor the performance of the test group of processes with one or more test user interactions.
  24. A method according to any of claims 18 to 23, wherein monitoring the performance of the selected one or more processes or the test group of processes comprises one or more of: recording visible or infrared light images of the user environment, preferably video images; recording sound in the user environment, preferably using a microphone; recording the movement of a mobile user device, e.g. from the output of an accelerometer or by a location tracker, such as a GPS sensor; recording one or more health characteristics, such as steps taken by a user (e.g. from the output of an accelerometer), or a user's heart rate (e.g. from a heart rate monitor); and recording utility consumption for the one or more processes, e.g. electricity or gas consumption, e.g. from an electricity or gas meter.
  25. A non-transient computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of any preceding claim.
  26. A controller for configuring control actions for semi-automated control of devices in a user environment, the controller comprising: an interface operable to communicate with one or more devices in the environment; a memory for storing information regarding one or more automatable control processes for devices in the environment; and a processor operable to: identify a set of automatable control processes for devices in the environment, wherein the control processes comprise potential user interactions or actions completable by a user; determine for each process a measure of impact on the user if the process is carried out with user interaction; select, based on the impact measures determined for the process, one or more processes to be carried out with user interaction; and send, to one or more devices in the user environment, one or more control commands to cause the one or more devices to perform the selected one or more processes with user interaction.
  27. A controller according to claim 26, wherein the processor is further configured to perform the method of any of claims 2 to 23.
  28. A system for configuring control actions for semi-automated control of devices in a user environment, the system comprising: a controller according to claim 26 or 27; and one or more devices in the user environment operable to perform a set of automatable control processes, wherein the control processes comprise potential user interactions or actions completable by a user.
  29. A system according to claim 28, further comprising: one or more monitoring devices operable to: monitor the performance of the selected one or more processes to determine performance information, and optionally test performance information; and send the performance information, and optionally the test performance information, to the controller; and preferably operable to: receive one or more monitoring control messages operable to cause the monitoring devices to monitor the performance of one or more processes with one or more test user interactions.
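For illustration only (not part of the claims): claim 9 names energy expended, steps taken and sustained heart-rate increase as optional inputs to a health-impact measure, and claims 8 and 10 allow the overall impact measure to combine health and social components. The minimal Python sketch below shows one way these inputs could be combined; the weighted sums and all weight values are assumptions, since the claims do not prescribe any particular combination.

    def health_impact(energy_kj: float, steps: int, hr_increase_bpm: float,
                      weights=(0.5, 0.02, 0.1)) -> float:
        """Combine the three physical-activity indicators of claim 9
        into one health-impact number (weighted sum assumed)."""
        w_energy, w_steps, w_hr = weights
        return w_energy * energy_kj + w_steps * steps + w_hr * hr_increase_bpm


    def impact_measure(health: float, social: float) -> float:
        # Claims 8 and 10: the overall measure may combine health and
        # social components; equal weighting is assumed here.
        return health + social


    # Example scoring of one candidate process carried out with user help:
    score = impact_measure(health_impact(45.0, steps=120, hr_increase_bpm=8.0),
                           social=2.0)
    print(round(score, 2))  # -> 27.7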
GB1819762.4A 2018-12-04 2018-12-04 Method and system for managing automated devices Withdrawn GB2579577A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1819762.4A GB2579577A (en) 2018-12-04 2018-12-04 Method and system for managing automated devices


Publications (2)

Publication Number Publication Date
GB201819762D0 GB201819762D0 (en) 2019-01-23
GB2579577A (en) 2020-07-01

Family

ID=65029940

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1819762.4A Withdrawn GB2579577A (en) 2018-12-04 2018-12-04 Method and system for managing automated devices

Country Status (1)

Country Link
GB (1) GB2579577A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180280175A1 (en) * 2015-09-30 2018-10-04 Koninklijke Philips N.V. Assistance system for cognitively impaired persons


Also Published As

Publication number Publication date
GB201819762D0 (en) 2019-01-23


Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)