GB2579674A - Monitoring method and system

Monitoring method and system

Info

Publication number
GB2579674A
Authority
GB
United Kingdom
Prior art keywords
sensor
activity
time
sensors
notification
Prior art date
Legal status
Granted
Application number
GB1820274.7A
Other versions
GB2579674B (en)
GB201820274D0 (en)
Inventor
Parson Oliver
Clark Timothy
Dyer Zoe
Kantepudi Selina
Current Assignee
Centrica PLC
Original Assignee
Centrica PLC
Priority date
Filing date
Publication date
Application filed by Centrica PLC filed Critical Centrica PLC
Priority to GB1820274.7A
Publication of GB201820274D0
Publication of GB2579674A
Application granted
Publication of GB2579674B
Legal status: Active
Anticipated expiration

Classifications

    • G08B21/0407 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons, based on behaviour analysis
    • G08B21/0423 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons, based on behaviour analysis, detecting deviation from an expected pattern of behaviour or schedule
    • G08B25/14 Central alarm receiver or annunciator arrangements
    • G08B25/10 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems, characterised by the transmission medium, using wireless transmission systems
    • G08B31/00 Predictive alarm systems characterised by extrapolation or other computation using updated historic data

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Gerontology & Geriatric Medicine (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computing Systems (AREA)
  • Alarm Systems (AREA)

Abstract

A system for identifying unusual activity in a monitoring space using a learning model. The system monitors sensor activity timings for sensor activity measured by one or more sensors in the monitoring space during a first time period 510; trains a probability model and obtains an activity probability threshold using the sensor activity timings measured in the first time period 512; defines a threshold time or times for sensor activity based on the probability model and activity probability threshold; monitors sensor activity measured by one or more sensors in the monitoring space during a second time period; compares the sensor activity measured during the second time period with the threshold time or times to determine unusual activity 514; and raises an alert in response to determining unusual activity 516. Further aspects disclose processing and storing data, displaying activity information about a monitoring space, handling sensor data from a plurality of sensors in a monitoring space, and notifying a user of unusual activity.

Description

Intellectual Property Office Application No. GB1820274.7 RTM Date: 12 June 2019
The following terms are registered trade marks and should be read as such wherever they occur in this document: WiFi, Zigbee, Bluetooth.
Intellectual Property Office is an operating name of the Patent Office (www.gov.uk/ipo).

Monitoring Method and System
Technical Field
The present invention relates to a monitoring method and system, and more particularly to methods and systems for monitoring and reporting on activity in a monitoring space. The methods and systems described herein may assist a carer in remotely monitoring a caree, for example in the caree's home.
Background
There is a growing need to provide assistance for carers to help them look after carees (e.g. elderly people).
There have been many attempts to provide services and products to meet this need, but the services and products currently available do not achieve a good balance between providing meaningful and valuable information and assistance to the carer, while at the same time respecting the privacy and independence of the caree.
The inventors have developed systems and methods which provide improved monitoring and reporting compared to conventional systems and methods.
Summary
Aspects of the invention are set out in the independent claims and preferable features are set out in the dependent claims.
There is described herein a method for identifying unusual activity in a monitoring space, the method comprising: receiving a plurality of sensor activity timings for sensor activity measured by one or more sensors in the monitoring space during a first time period; determining a probability model comprising a measure of probability of sensor activity over time based on the received sensor activity timings in the first time period; obtaining a probability threshold; defining a threshold time or times for sensor activity based on the probability model and the probability threshold; monitoring sensor activity measured by one or more sensors in the monitoring space during a second time period; comparing sensor activity timings in the second time period with the threshold time or times to determine unusual activity; and raising an alert in response to determining unusual activity.
The method may be performed at a remote server, such as a cloud server. The plurality of sensor activity timings may thus be received over a wide area network (WAN), such as over an Internet connection.
The probability threshold can be a maximum or a minimum probability threshold. For a maximum probability threshold, alert time intervals (e.g. each defined by two threshold times) can be those of the set of time intervals for which the probability exceeds the probability threshold; conversely, for a minimum probability threshold, the alert time intervals can be those of the set of time intervals for which the probability is less than the probability threshold. The probability threshold may be predetermined or preset.
The threshold time or times for sensor activity can be a single time, in which case comparing sensor activity timings with the threshold time may comprise identifying sensor activity occurring either before or after the single threshold time (e.g. before or after that time in a day). This is analogous to there being a threshold time interval having either its start or end time at the start or end of the day (e.g. starts at 00:00:00 or ends at 23:59:59).
Alternatively the threshold time or times comprises at least two times defining a start and end of a time interval. Then comparing sensor activity timings with the threshold times may involve identifying sensor activity occurring within the time interval defined by the threshold times.
The measure of probability of sensor activity over time may comprise the probability of sensor activity occurring on or before each of a plurality of times (e.g. the cumulative probabilities of sensor activity occurring by each of a plurality of times in a day). In such a case the threshold time may be defined as the earliest time in the day for which the probability of sensor activity occurring on or before the time exceeds (for a maximum probability) or is less than (for a minimum probability) the probability threshold.
The measure of probability of sensor activity over time may be the probability of sensor activity occurring in each of a set of time intervals. In this case the threshold time or times can be times defining the set(s) of time intervals for which the probability exceeds (for a maximum probability) or is less than (for a minimum probability) the probability threshold, e.g. providing a threshold time interval. In some embodiments only the shortest of the set of time intervals which satisfies the probability criterion (e.g. either exceeds or is less than the probability threshold) is defined as a threshold time interval.
The measure of probability can be determined by considering the number (or proportion) of days that a certain event has occurred out of the total number of days in the learning period for example. Thus the measure of probability may be determined by estimating a probability distribution based on sample data gathered during the learning phase.
The set of time intervals may each commence at the same start time, but have different end times. In some examples each of the set of time intervals commences at 00:00, and thus the probability of sensor activity in each time interval is the probability of sensor activity being detected by the end time each day. Thus, in these examples, the set of time intervals can equally be defined by a set of single times (i.e. the end times).
In some embodiments the measure of probability of sensor activity over time is effectively the cumulative probability of the sensor activity occurring by each of a plurality of times.
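By way of illustration only (not forming part of the claimed subject matter), the cumulative formulation can be sketched in Python as follows; the helper names, the per-day bucketing, and the use of the earliest activation on each day of the learning period are assumptions made for the example:

    def earliest_activity_per_day(activation_times):
        """Group activation timestamps (datetime objects) by calendar day and
        keep only the earliest activation time-of-day on each day."""
        earliest = {}
        for ts in activation_times:
            day = ts.date()
            if day not in earliest or ts.time() < earliest[day]:
                earliest[day] = ts.time()
        return earliest  # {date: time-of-day of first sensor activity}

    def cumulative_activity_probability(earliest_by_day):
        """Estimate P(first sensor activity occurs on or before t) for each
        candidate time t, as the proportion of learning-period days on which
        the first activity was recorded on or before t."""
        days = len(earliest_by_day)
        candidate_times = sorted(set(earliest_by_day.values()))
        return [(t, sum(1 for e in earliest_by_day.values() if e <= t) / days)
                for t in candidate_times]

    def threshold_time(cumulative_probabilities, probability_threshold):
        """Earliest candidate time at which the cumulative probability of
        sensor activity reaches the (maximum-type) probability threshold."""
        for t, p in cumulative_probabilities:
            if p >= probability_threshold:
                return t
        return None  # no candidate time satisfies the threshold

During the second (monitoring) time period, the absence of any sensor activation by the returned threshold time on a given day would then be treated as unusual activity.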
A sensor activity may be defined as a sensor being "on" or "active" (e.g. "open" for a contact sensor, sensing movement for a motion sensor, or sensing power/current for an appliance sensor). A sensor activity can start with a sensor being activated/triggered, or transitioning to an "on" state, and end with the sensor being deactivated, or transitioning to an "off" state.
The received sensor activity timings comprise at least the start time or end time of each sensor activity, and may comprise both. In preferred embodiments the received sensor activity timings comprise at least the start time, or sensor activation time.
The probability of sensor activity occurring by a time is the probability of a sensor activity being detected/measured/recorded by a sensor before that time of day, e.g. the probability of a sensor activity start time, or sensor activation time, occurring before that time of day.
The probability of sensor activity in a time interval is the probability of a sensor activity being detected/measured/recorded by a sensor in the time interval, e.g. the probability of a sensor activity start time, or sensor activation time, occurring in the time interval.
The received plurality of sensor activity timings for sensor activity measured by one or more sensors and the measure of the probability of sensor activity may relate to sensor activity of any sensor (from a plurality of sensors in the measuring space), activity of one specific sensor, or to sensor activity of a group of sensors (e.g. sensors of the same type, or sensors located in close proximity/same room, or sensors arranged to monitor the same type of appliance/object).
Unusual activity may be determined in response to identifying sensor activity measured by one or more sensors in the monitoring space before or after the threshold time(s) in the second time period (e.g. receiving one or more sensor activity times in an alert time interval), or may be determined in response to identifying the absence of sensor activity in the monitoring space before or after the threshold time(s) in the second time period (e.g. not receiving any sensor activity times in an alert time interval).
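A minimal sketch of this comparison step, assuming alert time intervals have already been derived from the threshold time(s) as described above; the (start, end) time-of-day representation and the function names are assumptions of the example:

    def activity_in_interval(activation_times, start, end):
        """True if any recorded activation time-of-day falls within [start, end]."""
        return any(start <= t <= end for t in activation_times)

    def detect_unusual_activity(activation_times, expected_intervals, quiet_intervals):
        """expected_intervals: intervals in which activity is normally present,
        so its absence is unusual (probability above a maximum threshold).
        quiet_intervals: intervals in which activity is normally absent,
        so its presence is unusual (probability below a minimum threshold)."""
        alerts = []
        for start, end in expected_intervals:
            if not activity_in_interval(activation_times, start, end):
                alerts.append(("no_activity", start, end))
        for start, end in quiet_intervals:
            if activity_in_interval(activation_times, start, end):
                alerts.append(("unexpected_activity", start, end))
        return alerts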
The first time period is generally before the second time period. The first time period can be referred to as a learning time period and the second time period as a monitoring time period.
Raising an alert may comprise sending a message or notification indicative of the unusual activity to a user device. The alert or notification may comprise response or interaction options, such as an option for a user to close the notification or to telephone a caree in the monitoring space, or to forward the notification to another user.
Preferably, determining a probability model comprises: determining a probability of measuring sensor activity by any of a plurality of sensors in the monitoring space; and wherein comparing sensor activity timings in the second time period with the threshold time or times to determine unusual activity comprises: comparing sensor activity timings for sensor activity measured by any of the plurality of sensors in the monitoring space. Thus the firing of any one of the plurality of sensors will contribute to the probability measure.
Preferably, determining a probability model comprises: determining a probability of measuring sensor activity by a first sensor of a plurality of sensors in the monitoring space; and wherein comparing sensor activity timings in the second time period with the threshold time or times to determine unusual activity comprises: comparing sensor activity timings for sensor activity measured by the first sensor of the plurality of sensors in the monitoring space. Thus the alert can be triggered by one particular sensor not sensing activity by a particular time (e.g. by a threshold time), or not sensing activity in a particular time window or time interval (e.g. a time interval between two threshold times).
In some embodiments there may be multiple alerts which are monitored for concurrently, for example a first alert based on all the sensors in the space and a second alert based on only one specific sensor of the plurality of sensors. For example, the measure of probability of sensor activity over time may comprise: a first probability measure of measuring sensor activity by any of a plurality of sensors in the monitoring space; and a second probability measure of measuring sensor activity by a first sensor of the plurality of sensors in the monitoring space; wherein obtaining a probability threshold comprises: obtaining a first probability threshold; and obtaining a second probability threshold; wherein defining a threshold time or times for sensor activity based on the probability model and the probability threshold comprises: defining a first threshold time or times for sensor activity based on the first probability measure and the first probability threshold; and defining a second threshold time or times for sensor activity based on the second probability measure and the second probability threshold; and wherein comparing sensor activity timings in the second time period with the threshold time or times to determine unusual activity comprises: comparing sensor activity timings for sensor activity measured by any of the plurality of sensors in the monitoring space with the first threshold time or times; and comparing sensor activity timings for sensor activity measured by the first sensor of the plurality of sensors with the second threshold time or times. In some embodiments both the first and second probability thresholds are minimum thresholds, in others both are maximum thresholds, whilst in yet others one of the first and second probability thresholds is a maximum threshold and the other is a minimum threshold.
Optionally, determining a probability model comprising a measure of probability of sensor activity over time based on the received sensor activity timings in the first time period comprises: determining the probability of sensor activity being measured on or before each of a set of times. Thus the model is based on the proportion of days in the first time period on which sensor activity was recorded on or before each of the set of times. Effectively the probability model is a model of cumulative probability.
Preferably the set of times corresponds to the earliest measured sensor activity on each of the days in the first time period.
In some embodiments, defining a threshold time or times for sensor activity comprises defining one of the set of times as a threshold (or alert) time; and wherein unusual activity is determined if sensor activity is not measured in the second time period by the threshold (or alert) time.
Optionally, the threshold time or times define at least one alert time interval; and the measure of probability of sensor activity in each of the alert time intervals exceeds the probability threshold; and wherein unusual activity is detected if no sensor activity is measured in the monitoring space during any of the at least one alert time intervals in the second time period. Here, determining a measure of probability of sensor activity may comprise determining a measure of probability of sensor activity in each of a set of time intervals.
In alternative embodiments, the threshold time or times define at least one alert time interval; and the measure of probability of sensor activity in each of the alert time intervals is less than the probability threshold; and wherein unusual activity is detected if sensor activity is measured in the monitoring space during any of the at least one alert time intervals in the second time period.
In some embodiments, the plurality of time intervals each commence at the same time, but have different end times, and defining one or more alert time intervals comprises selecting the shortest interval for which the measure of the probability of sensor activity is either higher or lower than the probability threshold.
Determining a probability model may comprise: categorising the sensor activity in the first time period into at least a first activity type and a second activity type; determining a probability of an activity of the first activity type being recorded in the monitoring space during each time interval in a first set of time intervals in the second time period, based on the received sensor activity timings in the first time period; and defining a first set of alert time intervals in the second time period during which the probability of an activity being recorded in the monitoring space meets a first probability criterion; determining a probability of an activity of the second activity type being recorded in the monitoring space during each time interval in a second set of time intervals in the second time period, based on the received sensor activity timings in the first time period; and defining a second set of alert time intervals in the second time period during which the probability of an activity being recorded in the monitoring space meets a second probability criterion; and wherein determining unusual activity comprises: identifying whether sensor activity of the first activity type is detected in the monitoring space during one of the first set of alert time intervals; and identifying whether sensor activity of the second activity type is detected in the monitoring space during one of the second set of alert time intervals.
Similarly, there is described a method for identifying unusual activity in a monitoring space, the method comprising: receiving a plurality of sensor activity timings for sensor activity measured by one or more sensors in the monitoring space during a first time period; determining a probability model comprising a measure of the probability of sensor activity in each of a set of time intervals based on the received sensor activity timings in the first time period; determining a probability threshold; defining one or more of the set of time intervals as an alert time interval for sensor activity based on the probability model and the threshold probability; monitoring sensor activity measured by one or more sensors in the monitoring space during a second time period; comparing sensor activity timings in the second time period with the one or more alert time intervals to determine unusual activity; and raising an alert in response to determining unusual activity.
Preferably, determining a model of normal activity comprises: determining a probability of an event occurring in the monitoring space during each interval in a set of time intervals in the second time period, based on the received information indicating activity which occurred in the monitoring space during the first time period; and defining one or more alert time intervals in the second time period during which the probability of an event occurring in the monitoring space meets a probability criterion; and wherein raising the alert if the activity detected in the monitoring space during the second time period deviates from the model of normal activity comprises: raising the alert based on whether events are detected in the monitoring space during one of the one or more alert time intervals.
Optionally, determining a model of normal activity comprises: categorising the detected events into at least a first event type and second event type; determining a probability of an event of the first event type occurring in the monitoring space during each time interval in a first set of time intervals in the second time period, based on the activity which occurred in the monitoring space during the first time period; and defining a first set of alert time intervals in the second time period during which the probability of an event occurring in the monitoring space meets a first probability criterion; determining a probability of an event of the second event type occurring in the monitoring space during each time interval in a second set of time intervals in the second time period, based on the activity which occurred in the monitoring space during the first time period; and defining a second set of alert time intervals in the second time period during which the probability of an event occurring in the monitoring space meets a second probability criterion; and wherein raising an alert if the activity detected in the monitoring space during the second time period deviates from the model of normal activity comprises: raising the alert based on whether events of the first event type are detected in the monitoring space during one of the first set of alert time intervals; raising the alert based on whether events of the second type are detected in the monitoring space during one of the second set of alert time intervals.
In some embodiments the one or more alert time intervals in the second time period during which the probability of an event occurring in the monitoring space meets a probability criterion comprise the first set of time intervals and the second set of time intervals. However in other embodiments the one or more alert time intervals in the second time period during which the probability of an event occurring in the monitoring space meets a probability criterion are identified in addition to the first and second set of time intervals.
Preferably, determining a probability of an event occurring in the monitoring space during each interval in the set of time intervals in the second time period comprises: determining a probability of an event occurring in the monitoring space by each of a first plurality of times in the second time period; and wherein defining one or more alert time intervals in the second time period during which the probability of an event occurring in the monitoring space meets a probability criterion comprises: generating a first time value indicating the earliest of the first plurality of times in the second time period at which the probability of an event occurring in the monitoring space exceeds a first probability threshold; and wherein raising the alert based on whether events are detected in the monitoring space during one of the one or more alert time interval comprises: raising the alert if no events are detected in the monitoring space by the first time value.
Preferably the probability criterion is met if the probability of an event occurring in the monitoring space exceeds a probability threshold; and the alert is raised if a count of events detected in the monitoring space during one of the one or more alert time intervals does not exceed an event count threshold.
In some embodiments, the probability criterion is met if the probability of an event occurring in the monitoring space is less than a probability threshold; and the alert is raised if a count of events detected in the monitoring space during one of the one or more alert time periods exceeds an event count threshold.
Preferably, the plurality of time intervals each commence at the same time, but have different end times, and defining one or more alert time intervals comprises selecting the shortest interval for which the probability criterion is met.
Optionally, determining a model of normal activity comprises: determining a probability of an activity exceeding each of a plurality of durations; and defining a threshold duration based on the probability of an activity exceeding each of the plurality of durations; wherein the alert is raised if the activity exceeds the threshold duration. Activity duration may be determined based on the time between events, e.g. between activation and deactivation of sensors.
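An illustrative sketch of this duration criterion, assuming activity durations (in seconds) observed during the learning period and an implementer-chosen grid of candidate threshold durations:

    def duration_threshold(observed_durations, candidate_durations, probability_threshold):
        """Return the shortest candidate duration d for which the estimated
        probability of an activity lasting longer than d (the proportion of
        learning-period activities that exceeded d) falls below the probability
        threshold. Activities longer than the returned duration raise an alert."""
        n = len(observed_durations)
        for d in sorted(candidate_durations):
            exceedance = sum(1 for obs in observed_durations if obs > d) / n
            if exceedance < probability_threshold:
                return d
        return None

    # Example (values are illustrative): alert if a fridge-door contact sensor
    # stays active longer than a duration exceeded on fewer than 5% of
    # learning-period activities.
    # threshold = duration_threshold(durations, range(60, 3600, 60), 0.05)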
There is also described herein a computer-implemented method for processing and storing activity data about a monitoring space, the method comprising: receiving sensor data from a plurality of sensors in the monitoring space; analysing the sensor data to determine activation and deactivation times for each of the plurality of sensors; defining a sensor activity grouping having an activity grouping start time based on temporal proximity of the determined activation and deactivation times; maintaining a sensor log indicative of the current sensor state of each of the plurality of sensors; upon determining an activity grouping start time, adding a sensor activity entry to an activity log, wherein the sensor activity entry identifies the determined activity grouping start time; determining an activity grouping end time based on temporal proximity of the determined activation and deactivation times; upon determining an activity grouping end time, calculating a duration of the sensor activity grouping; and appending the sensor activity entry in the activity log with the calculated duration.
This method may be performed, for example, on a user device, such as a mobile telephone or smartphone or tablet. In other embodiments, the method may be performed at a remote monitoring server.
Advantageously, it is possible to filter sensor data to prevent very frequent activation/deactivation sensor signals showing as separate events, or sensor activities.
Thus the data stored in the activity log can be processed more easily and may be presented to a user in a more user-friendly manner, from which the user can understand and distil the information better. For example, a motion sensor may detect motion when a user walks into a room, but not if the user is standing still in the room, and thus the activity may be identified as ended. However, the user may fairly quickly move around the room again, or walk out. It is preferable that these events be grouped together as a single activity grouping. Equally, a microwave may be switched on several times over a short period, e.g. to allow the user to stir food in the microwave several times throughout heating. However, it is not necessary to inform the user/carer about the separate individual events, and indeed this information may be confusing and prevent best use of the system.
The sensor activity grouping may relate to activation and deactivation times from a single sensor in the plurality of sensors, or could relate to activation and deactivation times from a group of sensors in the plurality of sensors. The group of sensors could be selected based on proximity to each other, e.g. being in the same room or on the same floor, or being the same type of sensor (contact/motion/flow/electricity), or due to monitoring the same type of object or appliance (e.g. showers, radios).
Preferably, each sensor activity relates to the sensor being in a substantially constant state, e.g. a period of continuous sensor activation, such as commencing at a sensor activation time and ending at a sensor deactivation time. The method may comprise identifying sensor activity periods based on the activation and deactivation times.
Sensor activation, or the state in which the sensor is considered to be "active", may depend on what the sensor is measuring. For example a contact sensor has two states: contact or no contact. Where contact sensors are used to monitor doors or windows, the "contact" state is generally defined as "off" or "inactive" and the "no contact" state is generally defined as "active" or "on". For a current or electricity sensor, there may be a predetermined current or power threshold used to identify whether the sensor is active or not. For example, the current or power increasing above a first predetermined threshold can be used to identify that the current is on and the sensor is active. Preferably the first predetermined threshold is between 1W and 50W, more preferably between 5W and 30W. In preferred embodiments the first predetermined threshold is between 10W and 20W, more preferably around 15W. In some embodiments the current or power decreasing below the same predetermined current or power threshold may be used to identify a deactivation, or the sensor being "off", e.g. the first predetermined threshold may be used. In other embodiments a different predetermined threshold may be used, e.g. the current or power decreasing below a second predetermined threshold indicates the current is off and the sensor inactive. The second predetermined threshold may be lower than the first predetermined threshold. In some examples the second predetermined threshold is between around 1W and 30W, preferably between 5W and 20W, more preferably between 5W and 15W, such as around 10W. Motion sensors may be considered active if they are recording movement and inactive when not. Example motion sensors may be optical, microwave, infrared or acoustic sensors. Motion sensors are capable of detecting movement in their vicinity.
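For the current or power sensor case, the pair of thresholds described above acts as a simple hysteresis. A minimal sketch, using the 15 W activation and 10 W deactivation figures mentioned above purely as example values:

    def power_to_states(power_readings, on_threshold_watts=15.0, off_threshold_watts=10.0):
        """Convert a time series of (timestamp, watts) readings into
        (timestamp, active) pairs. The sensor becomes active when power rises
        above the first (higher) threshold and becomes inactive only when power
        falls below the second (lower) threshold."""
        states = []
        active = False
        for timestamp, watts in power_readings:
            if not active and watts > on_threshold_watts:
                active = True
            elif active and watts < off_threshold_watts:
                active = False
            states.append((timestamp, active))
        return states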
Preferably, determining an activity grouping end time based on the temporal proximity of the determined activation and deactivation times comprises: starting a timer immediately following a latest determined deactivation time; and selecting the latest determined deactivation time as the activity grouping end time in the absence of determining an activation time within a predetermined activity grouping time threshold.
In some embodiments, determining an activity grouping start time based on the temporal proximity of the determined activation and deactivation times comprises: determining a latest activation time; calculating a time difference between the latest activation time and the most recent preceding deactivation time; and selecting the latest determined activation time as the activity grouping start time if the time difference between the latest activation time and the most recent preceding deactivation time exceeds a/the predetermined activity grouping time threshold.
The latest time means the most recent time. The most recent preceding deactivation time refers to the most recent deactivation time preceding the latest activation time, i.e. before the activation time.
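An illustrative sketch of these grouping rules, assuming a time-ordered stream of activation and deactivation events for one sensor (or one group of sensors) and a pre-selected grouping time threshold; the event tuple format is an assumption of the example:

    def group_activities(events, grouping_threshold_seconds):
        """events: time-ordered list of (timestamp, kind) tuples, where kind is
        'activation' or 'deactivation' and timestamp is a datetime. Returns
        closed activity groupings as (start_time, end_time) pairs: a new
        grouping starts when an activation follows the preceding deactivation
        by more than the grouping threshold, and a grouping ends at the last
        deactivation not followed by an activation within the threshold.
        A grouping still open at the end of the stream is omitted here."""
        groupings = []
        start = None
        last_deactivation = None
        for timestamp, kind in events:
            if kind == "activation":
                if start is None:
                    start = timestamp  # first grouping begins
                elif last_deactivation is not None and \
                        (timestamp - last_deactivation).total_seconds() > grouping_threshold_seconds:
                    # gap since the last deactivation is too long: close the
                    # previous grouping and start a new one
                    groupings.append((start, last_deactivation))
                    start = timestamp
                    last_deactivation = None
            else:  # deactivation
                last_deactivation = timestamp
        if start is not None and last_deactivation is not None:
            groupings.append((start, last_deactivation))
        return groupings

The duration appended to the corresponding activity log entry is then simply the difference between the grouping end time and the grouping start time.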
In some embodiments, the method further comprises: determining the type of sensor (e.g. contact, motion, current), or a type of appliance or object associated with the sensor; and selecting the predetermined activity grouping time threshold based on the type of sensor (e.g. contact, motion, current), or the type of appliance or object associated with the sensor.
In some embodiments, the method further comprises storing a different grouping time threshold for each type of sensor and/or type of appliance/object associated with the sensor.
Preferably the method further comprises: receiving historical sensor data relating to sensor activity recorded in the monitoring space over a first time period (e.g. the learning time period); and selecting the predetermined activity grouping time threshold based on the historical sensor data.
The method may further comprise: analysing the historical sensor data to determine a probability of an activation time occurring within each of a plurality of time intervals of a deactivation time in the first time period; and selecting, as the predetermined activity grouping time threshold, the longest of the plurality of time intervals for which that probability satisfies a probability grouping criterion.
Preferably the predetermined activity grouping time threshold is less than 10 minutes, preferably less than 5 minutes, more preferably less than 3 minutes; and/or at least 1 second, preferably at least 2 seconds, more preferably at least 4 seconds.
In some embodiments, defining a sensor activity grouping is based on the determined activation and deactivation times of a group of at least two sensors. The group of at least two sensors is made up of sensors in the plurality of sensors. The grouping may be derived because the sensors in the group of at least two sensors are the same type of sensor (e.g. all contact, all motion or all current), or are sensors within a given area, e.g. in a single room in the space, or within a predetermined distance of each other. The grouping may comprise all sensors associated with the same type of object or appliance, e.g. all showers or all toilets in the monitoring space.
For example, determining an activity grouping start time based on the temporal proximity of the determined activation and deactivation times comprises: determining a latest activation time of any of the group of at least two sensors, being an activation time of a first sensor; calculating a time difference between the latest activation time and the most recent preceding deactivation time of any of the group of at least two sensors, being the deactivation time of a second sensor; and selecting the latest determined activation time as the activity grouping start time if the time difference between the latest activation time and the most recent preceding deactivation time exceeds a/the predetermined activity grouping time threshold.
Determining an activity grouping end time based on the temporal proximity of the determined activation and deactivation times comprises: starting a timer immediately following a latest determined deactivation time of any of the group of at least two sensors, being a deactivation time of a first sensor; and selecting the latest determined deactivation time as the activity grouping end time in the absence of determining an activation time of any sensor in the group of at least two sensors within a predetermined activity grouping time threshold.
Preferably, defining a sensor activity grouping is based on the determined activation and deactivation times of a single sensor of the plurality of sensors.
In some embodiments the sensor log is based on the activation times or the activity grouping start times.
Preferably the method further comprises: providing an interface in an application for a user device, wherein the interface comprises: a first view displaying a portion of the sensor log relating to a subset of the plurality of sensors; and a second view displaying the activity log relating to a preceding time period for the subset of the plurality of sensors; and means to receive a user input to alternate between the first view and the second view.
There is also described herein a computer-implemented method for displaying activity information about a monitoring space, the method comprising: storing a sensor log indicative of the current sensor state of each of the plurality of sensors; storing an activity log of a plurality of sensor activity entries, each sensor activity entry identifying an activity start time or an activity grouping start time; providing an interface in an application for a user device, wherein the interface comprises: a first view displaying a portion of the sensor log relating to a subset of the plurality of sensors; a second view displaying the activity log relating to a preceding time period for the subset of the plurality of sensors; and means to receive a user input to alternate between the first view and the second view. This method will generally be performed at a user device, such as a mobile user device.
The means to receive a user input could be a selectable icon or button displayed in the user interface, preferably visible in both views and displayed in a different form depending on which view is being displayed (e.g. highlighted, circled, coloured). Preferably there are two selectable icons, one to select the first view and one for the second view. The selectable icons may also have different display form dependent on which view is displayed.
The subset of sensors could be all sensors in the plurality of sensors, but may be only a portion of the plurality of sensors. Preferably only the first view or the second view is displayed/visible at any one time.
Preferably, in the second view the sensor activity entries are displayed in time order, for example according to activation time, activity grouping start time, deactivation time or activity grouping end time; preferably with the most recent displayed first (or at the top).
Preferably, the method further comprises: determining the subset of the plurality of sensors based on the location and/or type of each sensor.
In some embodiments, the method further comprises: providing a third view displaying user-selectable objects associated with each of the plurality of sensors or each group of at least two sensors; wherein user selection of one of the user-selectable objects causes the corresponding sensor or group of at least two sensors to be added to the subset of the plurality of sensors for which sensor state is displayed in the first view and sensor activity is displayed in the second view.
Preferably, each sensor activity entry further comprises either: a duration for the sensor activity or sensor activity grouping; or an indication that the sensor (or at least one sensor in a group of at least two sensors in the plurality of sensors) is currently active.
Optionally, the sensor log comprises, for each sensor or for each group of at least two sensors, either: an indication that the sensor (or one sensor in the group of at least two sensors) is active (for sensors that are active, or that have an activity grouping start time not followed by an activity grouping end time); or an indication of the last time the sensor (or one sensor in the group of at least two sensors) was active (for sensors/sensor groups not currently active). The last time the sensor was active could be the last/latest/most recent activation time or deactivation time of the sensor, or the most recent determined activity grouping start/end time.
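One possible in-memory representation of such a sensor log entry (the field names are illustrative and not taken from the description):

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class SensorLogEntry:
        """One row of the sensor log: either the sensor (or sensor group) is
        currently active, or the entry records when it was last active."""
        sensor_id: str
        description: str                 # e.g. a user-input label such as "kettle"
        location: Optional[str]          # e.g. room or floor, input on setup
        currently_active: bool
        last_active: Optional[datetime]  # last activation/deactivation time when inactive

    # Example entry for a sensor that is not currently active
    kettle = SensorLogEntry("plug-03", "kettle", "kitchen",
                            currently_active=False,
                            last_active=datetime(2019, 6, 12, 8, 14))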
Preferably, one or both of the sensor activity entry and the sensor log identifies an appliance or object associated with the sensor. An object could be a door, window or cupboard. An appliance could be a kettle, fridge, coffee maker, microwave, radio, or television.
Preferably, one or both of the sensor activity entry and the sensor log identifies the type of sensor and/or an individual sensor or the type of the group of at least two sensors (where all sensors in the group are of the same type and where the activity grouping is based on multiple sensors), such as by a sensor identifier. The sensor identifier can comprise a user-input description of the sensor, and the received sensor data can include a sensor code indicative of the sensor.

In some embodiments, one or both of the sensor activity entry and the sensor log identifies the location of the sensor, or group of at least two sensors, within the monitoring space. The location identified may be the room or floor in which the sensor(s) are located. The sensor activity entry may comprise a user-input indication of the sensor location (e.g. input on setup of the system).
Preferably the sensor data is a time series of sensor readings for each of the plurality of sensors, preferably wherein the sensor readings indicate whether or not the sensor is active at the respective time, e.g. a binary measure. The time series of sensor readings may comprise a sensor reading for each sensor at predetermined time intervals, such as a sampling period (e.g. every second, 5 seconds or 10 seconds), indicating whether the sensor has recorded anything (e.g. is "activated" or not). Alternatively the time series of sensor readings may comprise a sensor reading for each sensor only for times when the sensor is active, or positively recording. For example, in the case of a voltage or current sensor on a plug, the time series may comprise a simple binary indication of whether or not there is a voltage/current present at each time (e.g. whether the sensor has been activated), or may only include readings for the sampling times at which there is a voltage/current. In other cases, the time series could comprise values (e.g. size of voltage or current) at each of the sampling times.
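The two time-series formats just described might look as follows; the sampling period, identifiers and values are illustrative:

    # Format 1: a binary reading for every sensor at each sampling time
    # (here a 5-second sampling period), regardless of whether anything was recorded.
    regular_series = [
        ("2019-06-12T08:00:00", {"plug-03": 0, "motion-01": 1, "door-02": 0}),
        ("2019-06-12T08:00:05", {"plug-03": 1, "motion-01": 1, "door-02": 0}),
    ]

    # Format 2: readings only for the sampling times at which a sensor is
    # positively recording, optionally with the measured value (e.g. watts).
    event_series = [
        ("2019-06-12T08:00:05", "plug-03", 1850.0),
        ("2019-06-12T08:00:10", "plug-03", 1845.0),
    ]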
There is also described: a device for processing and storing activity data about a monitoring space, the device comprising: a communication interface for receiving sensor data from a plurality of sensors in the monitoring space; a memory storing: an activity log of sensor activity entries; and a sensor log indicative of the current sensor state of each of the plurality of sensors; and a processor configured to: analyse the received sensor data to determine activation and deactivation times for each of the plurality of sensors; define a sensor activity grouping having an activity grouping start time based on temporal proximity of the determined activation and deactivation times; upon determining an activity grouping start time, add a sensor activity entry to an activity log, wherein the sensor activity entry identifies the determined activity grouping start time; determine an activity grouping end time based on temporal proximity of the determined activation and deactivation times; upon determining an activity grouping end time, calculate a duration of the sensor activity grouping; and append the sensor activity entry in the activity log with the calculated duration.
This device may be a user device, e.g. a user mobile device, in which case the sensor data may be received from a remote server over a WAN. Alternatively, the device may be a remote server, in which case the sensor data would be received from the plurality of sensors in the monitoring space (normally via a hub or router at the monitoring space) over a WAN.
There is also described a system for handling sensor data from a plurality of sensors in a monitoring space, the system comprising: a remote server configured to: receive (e.g. over a long-range communication network, such as a WAN) raw sensor information (e.g. sensor activation and deactivation times, or a time series of sensor states) from a plurality of sensors in the monitoring space; determine an operating state of at least one of the plurality of sensors based on the raw sensor information; create at least one notification based on the determined operating state; receive a pull request from an application on a user device, the pull request relating to at least one of the plurality of sensors; and send the at least one notification to the user device (again, over a long-range communication network or WAN); and a user device configured to: send the pull request to the remote server; receive the notification; interpret the notification by: identifying whether the notification is indicative of a new activity or an earlier activity; updating an activity log based on the received notification by: adding a new activity entry to the activity log if the notification is indicative of a new activity; updating an existing activity entry if the notification is indicative of an earlier activity; and updating a sensor log to show currently operating sensors by interpreting sensor activation and deactivation notifications and associating the notifications with sensors based on a sensor identifier in each received notification.
Thus an application on the user device can receive sensor data in response to a pull request and from that sensor data construct or update a sensor log and an activity log. This may have advantages over receiving the full sensor log and activity log from the remote server each time a pull request is sent, e.g. less data needs to be sent.
In addition, it is possible for the user device to request and update the logs as required, e.g. in case the user device has been out of connectivity with the remote server for a period of time. For example, a pull request may be triggered by the user device determining that connectivity to a wide area network (WAN) has been enabled (e.g. that the user device has an active Internet connection).

In alternative embodiments, an activity log and sensor log are stored at the remote server and updated by the remote server based on raw sensor information received by the remote server. In such a case the notification sent to the user device in response to the pull request may comprise only updates to the sensor log and to the activity log, rather than the raw sensor data. This can reduce the processing performed at the user device.
For example, there may be provided a system for handling sensor data from a plurality of sensors in a monitoring space, the system comprising: a plurality of sensors for locating in the monitoring space; a user device arranged to communicate with the plurality of sensors (optionally via a remote server and/or a hub); and a processor means arranged to communicate with the plurality of sensors and the user device (optionally via a hub); wherein the processor means or each sensor is configured to: determine an operating state of at least one of the plurality of sensors based on raw sensor information from each of the plurality of sensors; and create at least one notification based on the determined operating state; wherein the processor means is configured to: receive a pull request from an application on a user device, the pull request relating to at least one of the plurality of sensors; and send the at least one notification to the user device; wherein one of the processor means and the user device is configured to: identify whether the raw sensor data is indicative of a new activity or an earlier activity; update an activity log based on the received notification by: adding a new activity entry to the activity log if the notification is indicative of a new activity; and updating an existing activity entry if the notification is indicative of an earlier activity; and update a sensor log to show currently operating sensors by interpreting raw sensor data indicative of sensor activation and deactivation and associating the raw sensor data with sensors based on a sensor identifier.
In some examples the processor means is provided in: a hub for locating in the monitoring space and arranged to communicate with the plurality of sensors; and/or a remote server arranged to communicate with the user device and the plurality of sensors, and optionally with the hub.
The remote server is remote from the monitoring space. Communication with the hub and/or user device may be over a WAN.
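A minimal sketch of the client-side interpretation step described above, assuming each pulled notification carries a sensor identifier, an activity identifier, a state and a timestamp (this payload shape is an assumption rather than something defined by the system):

    def apply_notification(notification, activity_log, sensor_log):
        """Update the activity log and the sensor log from one pulled notification.
        activity_log: {activity_id: {"sensor": ..., "start": ..., "end": ...}}
        sensor_log:   {sensor_id: {"active": bool, "last_active": timestamp}}"""
        sensor_id = notification["sensor_id"]
        activity_id = notification["activity_id"]
        timestamp = notification["timestamp"]

        if activity_id not in activity_log:
            # the notification is indicative of a new activity: add a new entry
            activity_log[activity_id] = {"sensor": sensor_id,
                                         "start": timestamp, "end": None}
        else:
            # the notification relates to an earlier activity: update the entry
            activity_log[activity_id]["end"] = timestamp

        # the sensor log tracks which sensors are currently operating
        sensor_log[sensor_id] = {"active": notification["state"] == "activated",
                                 "last_active": timestamp}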
The sensor log and activity log may be displayed to a user on a display of the user device, for example there may be separate views in the application on the user device, one for the sensor log and one for the activity log.
Preferably, the user device is configured to: maintain a timer for each sensor device, the timer adapted to record the activation time of a device monitored by the sensor; start a time event based on the timer when an activation or deactivation sensor notification is received; display the time of the time event on the sensor log; and freeze the time event displayed on the sensor log when a deactivation or activation sensor notification is received; preferably wherein the timer can be used for multiple time events based on the plurality of sensors. For example, the same timer may be used for each of the plurality of sensors, enabling timing of multiple sensor events/activities concurrently. Advantageously, a single clock may be used, but each event/activity tracked separately.
Optionally, each notification comprises an event detail and a time detail, the event detail identifying the sensor and the time detail defining the time of the event, wherein the user device is configured to: search the sensor log for a sensor entry for the identified sensor; and insert or append the notification to the sensor entry in the sensor log. An event may further comprise a state of the sensor, e.g. an indication of whether the sensor is active or inactive. Inserting the notification into the sensor entry may comprise: inserting the event and recalculating the sensor line.
The remote server may be configured to: identify whether the determined operating state is indicative of a priority event; upon determining the operating state is indicative of a priority event, send a push notification to the user device, the push notification configured to cause the user device to display a message; and monitor for a response message from the user device indicative of user action by a user of the user device. A priority event could be identified by the identification or triggering of an alert, e.g. as described above.
Preferably, the user device is operable to: receive the push notification indicative of a priority event from the remote server; upon receiving the push notification indicative of a priority event, display the push notification and an indication of the priority event on a user interface, for example on a lock screen of the user device.
Optionally, the remote server is configured to: identify whether the determined operating state is indicative of a priority event; upon determining the operating state is indicative of a priority event, send a command to the user device causing the application on the user device to create and send a pull request to the remote server; and upon receiving a pull request from the user device, send a notification indicative of a priority event to the user device.
The user device may be configured to: upon receiving a notification indicative of a priority event, highlight the priority event to the user, e.g. by display on a user interface of the user device. For example, the event may be highlighted in colour or in bold. The event may be highlighted by display on a lock screen of the user device (e.g. when the application is not open).
Preferably, the user device is configured to: store a list of received notifications; rank the priority of each received notification based on recorded user responses to previous notifications; and display the notification in a notification log in the application or on a lock screen of the user device based on the priority ranking of each notification.
Preferably each of the plurality of sensors is one of: a contact sensor; a motion sensor; and a current sensor, e.g. for monitoring an electrical plug, such as integrated within a smart plug.
There is also described herein a method for notifying a user of unusual activity in a monitoring space, the method comprising: detecting unusual activity in a monitoring space; providing a first notification on a first user device; determining whether a signal acknowledging the notification is received within a response time period; and providing a second notification on a second user device if the signal acknowledging the notification is not received within the response time period.
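An illustrative sketch of this escalation behaviour; the send_notification and is_acknowledged callables are hypothetical stand-ins for the real push and acknowledgement mechanisms:

    import time

    def notify_with_escalation(alert, first_device, second_device,
                               send_notification, is_acknowledged,
                               response_time_seconds=300, poll_seconds=10):
        """Send the alert to the first user device; if no acknowledgement signal
        is received within the response time period, also notify the second
        user device."""
        notification_id = send_notification(first_device, alert)
        deadline = time.monotonic() + response_time_seconds
        while time.monotonic() < deadline:
            if is_acknowledged(notification_id):
                return "acknowledged_by_first_device"
            time.sleep(poll_seconds)
        send_notification(second_device, alert)
        return "escalated_to_second_device"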
There is also described herein a method for notifying users of unusual activity in a monitoring space, the method comprising: detecting unusual activity in a monitoring space; providing a first notification on a first user device and a second notification on a second user device; receiving a user input indicative of an interaction with the first notification on the first user device; updating the first notification based on the user input; and updating the second notification based on the user input.
An interaction could be marking the notification as closed, viewing the notification, contacting a caree in the monitoring space or forwarding the notification to another user. Preferably updating the first notification comprises adding a user interaction entry indicative of the interaction; preferably the user interaction entry comprises the time of the user interaction.
Thus it is possible to update the users of multiple devices (e.g. multiple carers) about the actions of other users in response to a notification or alert. Updating the second notification based on the user input may comprise the first device sending a message to the second device. Alternatively, the first device may send a message to a remote server, which may in turn send an update message to the second device.
There is also described a method for notifying a user of unusual activity in a monitoring space, the method comprising: detecting unusual activity in a monitoring space; providing a notification on a first user device; determining whether a forwarding criterion for forwarding the notification is satisfied; and forwarding the notification to a second user device if the forwarding criterion is satisfied.
The step of detecting unusual activity in a monitoring space may be performed on a remote server, such as a cloud server. The remote server can: receive (optionally over a long-range communication network or WAN) raw sensor information from a plurality of sensors in the monitoring space; determine an operating state of at least one of the plurality of sensors based on the raw sensor information; compare sensor activity timings in the second time period with the threshold time or times to determine unusual activity; and provide the first notification on the first user device in response to determining unusual activity.
Preferably providing the notification to the first user device is performed by the remote server sending the notification to the first user device (e.g. over a long range communication network, or WAN); the first user device receiving the notification and displaying the notification on a screen of the first user device.
Preferably, a forwarding criterion is satisfied if a message from the first user device indicating the notification should be forwarded is received; preferably wherein providing a notification on a first user device comprises displaying a user-selectable option for indicating a notification should be forwarded. Optionally the user-selectable option contains selectable identifiers for one or more potential user devices, or associated users and the forwarding criterion is satisfied if the message from the first user device indicates the user selected an identifier for the second user device.
Optionally, providing the notification to the first user device is performed by: creating a notification indicative of the detected unusual activity in the monitoring space; selecting the first user device from a plurality of potential user devices based on a notification category of the notification; looking up an address of the first user device from a list of user device addresses; and sending the notification to the first user device using the address. These steps may be performed by the remote server.
Preferably the method further comprises: assigning the notification to one of a plurality of notification categories based on one or more of: the timing of the unusual activity, such as the time of day or day of week; e.g. the time the unusual activity is detected; a determined severity of the unusual activity; recorded user responses to previous notifications; and the type or severity of unusual activity determined.
Optionally the method further comprises: assigning each of the plurality of potential user devices to a notification category based on one or more of: the proximity of the user device to the monitoring space; the identity of a user associated with the user device, e.g. based on the user's relationship to the caree in the monitoring space; and recorded user responses to previous notifications received from the user device, such as based on the likelihood or probability of the user responding to or interacting with the notification, or the type of response; wherein selecting the first user device comprises selecting a user device having a notification category that matches the notification category of the notification.
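Purely as an illustrative sketch of the category-based selection of a user device described above, the following shows one possible matching of a notification category to a potential user device; the `UserDevice` structure, category labels and addresses are assumptions rather than features of the described system.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class UserDevice:
    address: str       # address looked up when sending the notification
    category: str      # notification category assigned to this device (assumed labels)


def select_device(notification_category: str,
                  devices: List[UserDevice]) -> Optional[UserDevice]:
    """Select the first potential user device whose assigned category matches
    the notification's category; return None if no device matches."""
    for device in devices:
        if device.category == notification_category:
            return device
    return None


# Example: a severe night-time alert is routed to the device assigned to "urgent".
devices = [UserDevice("carer.primary@example.net", "routine"),
           UserDevice("carer.secondary@example.net", "urgent")]
selected = select_device("urgent", devices)
print(selected.address if selected else "no matching device")
```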
There is also described herein a device for identifying unusual activity in a monitoring space, the device comprising: a memory; a communication interface for receiving information indicating activity which occurred in the monitoring space during a first time period, wherein activity is determined based on the timing of each of a plurality of events detected in the monitoring space using a plurality of sensors; and a processor operable to: determine a model of normal activity based on the received information; monitor activity in the monitoring space during a second time period; and raise an alert if the activity in the monitoring space during the second time period deviates from the model of normal activity.
The device may be operable to perform any of the methods as described above.
There is also described a system for identifying unusual activity in a monitoring space, the system comprising: a device for identifying unusual activity in a monitoring space; a plurality of sensors for detecting activity in the monitoring space; and a user device for displaying alerts to a user.
Any system feature as described herein may also be provided as a method feature, and vice versa. As used herein, means-plus-function features may be expressed alternatively in terms of their corresponding structure.
Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to system aspects, and vice versa. Furthermore, any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination.
It should also be appreciated that particular combinations of the various features described and defined in any aspects of the invention can be implemented and/or supplied and/or used independently.
Brief Description of Drawings
Methods and systems for processing and display of sensor data are described by way of example only, in relation to the Figures, wherein:
Figure 1 shows an exemplary monitoring system;
Figure 2A shows an exemplary method for learning wake-up alert criteria;
Figure 2B shows an example plot of the probability of any activity being detected by a plurality of analysis times throughout the day;
Figure 3 shows an example of the probability of activity being detected so far by each of a plurality of sensors throughout the day for an example monitoring space;
Figure 4A shows an example method for determining a night time alarm condition;
Figure 4B shows an example plot of the probability of motion being detected during each hour of the day for an example monitoring space;
Figure 5A shows a method for identifying alert time intervals for an ongoing activity alert;
Figure 5B shows a plot of the probability of recording at least one sensor activity in each of a plurality of analysis time periods;
Figure 6 shows a screen display of a notification on an application on a user device;
Figure 7A shows a current activity view of sensor activity on an application on a user device;
Figure 7B shows a current activity view of sensor activity on an application on a user device;
Figure 8A shows a timeline view displayed on a user device;
Figure 8B shows a timeline view displayed on a user device wherein events, or activities, are displayed with the most recent at the top of the screen;
Figure 9 shows a notification screen on an application on a user device;
Figure 10A shows a share view which can be displayed on a user device as part of a notification;
Figure 10B shows a notification history view which can be displayed on a user device as part of a notification;
Figure 11 shows a screen display of an application on a user device showing a notification of an alert;
Figure 12 shows a screen display of an application on a user device;
Figure 13 shows a notification on a lock screen of a user device;
Figures 14 to 19 each show a screen display of a notification on a user device;
Figure 20 shows a method for grouping individual recorded sensor activities into a sensor activity grouping;
Figure 21 shows a screen showing a notification and an indication that the notification has been closed;
Figure 22 shows a screen showing a notification interaction log;
Figure 23 shows a filter view for allowing the user to select which sensor data is shown on the current activity and timeline views; and
Figure 24 shows a method of processing and transmitting sensor and alert data in a monitoring system.
Detailed description
1. System Overview
The monitoring system aims to support carees to live independently in their own homes by sending alerts to a carer should something unusual happen in the caree's home. Some embodiments of the monitoring system also allow a carer to view present and previous activity in the caree's home. However, the monitoring system is not limited to a caree's home and could also be deployed in other environments, such as offices, hospitals or nursing homes.
The monitoring system comprises a plurality of sensors, installed at a monitoring location/space (generally a caree's home), and a learning and alert system, which is generally located remotely from the monitoring space, e.g. a remote monitoring/cloud server. However, the learning and alert system may also be located in the monitoring space - for example, in a monitoring device 10 connected to the plurality of sensors. The plurality of sensors provide information indicating activity in the monitoring space. Example activity includes movement around the space (e.g. detectable by motion sensors), the use of an (electrical) appliance (e.g. detectable by a current or power detector), etc. The learning and alert system determines a model of normal activity based on received sensor information, and raises an alert if anything unusual happens.
Figure 1 shows an exemplary system 1 for monitoring a monitoring space.
The system 1 comprises a plurality of devices and sensors within a monitoring space. For example, the monitoring space may be a caree's home or a business premises. The monitoring space could be a single room or encompass multiple rooms, potentially on different floors.
The system 1 comprises a monitoring device 10, for example a smart device or hub installed within or proximate to the monitoring space having a plurality of sensors. Here the communication between the monitoring device 10 and the sensors is via short-range wireless communication, e.g. through a wireless local area network (WLAN) such as Zigbee or WiFi. However other methods of communication, such as Bluetooth or Bluetooth Low Energy (BLE), could be used.
Short-range wireless communication as described herein generally refers to communication via communication protocols with a range of less than 200m, less than 150m, or around 100m. In some embodiments short-range wireless communication protocols have a range of less than 80m or less than 50m. Preferably short-range wireless communication protocols have a range of at least 2m, more preferably at least 5m, more preferably at least 10m. In preferred embodiments, short-range wireless communication protocols have a range of between 10m and 100m.
In alternative embodiments the communication between the monitoring device 10 and sensors could be through a wired connection.
Although specific sensors (contact, flow, current and motion) have been included in the embodiment of Figure 1, sensors for other characteristics (e.g. visible light, sound, heat) may also be provided. Connected appliances with communication capabilities may also be used to provide sensor data, for example connected/smart TVs, kettles or lights, especially by reporting their own activity/usage to the monitoring device 10.
In alternative embodiments, more/fewer devices or appliances of different types from those shown in Figure 1 may be present. Examples of other devices that may be monitored in the monitoring space to provide sensor data include kitchen blenders, ovens, toasters, air conditioners, computers and showers. Various other (household or workplace) appliances or devices whose operative state could be monitored using this system would be apparent to the skilled person.
2. Sensors
Activity is determined based on the timing of each of a plurality of events detected in the monitoring space using a plurality of sensors.
At least one of the plurality of sensors may be configured to detect events. An event triggers the sensor. Example events include the switching on or off of an appliance, such as a light, kettle, television, or microwave; the detection of motion; the opening or closing of a door, for example the fridge door, a cupboard door, or the front door; or any other events. An event may be discrete in time. For example, an event is inferred when an appliance transitions from an 'off' state to an 'on' state, or when it transitions from an 'on' state to an 'off' state, and not continuously while the appliance is on (for example, while the TV is left on).
A first type of event sensor which may be used by the monitoring system is a sensor which detects motion, sometimes referred to as a motion event detector. A motion event detector may be associated with a particular area of the house. The area may be a room, such as a kitchen, living room, or bedroom. The area may be smaller than a room, for example the kitchen area of a combined kitchen and living area. When motion occurs in the particular area of the house associated with the motion event detector, the motion sensor detects an event. In the exemplary monitoring system illustrated in Figure 1, the monitoring device 10 is in communication with a first motion sensor 12 located in the living room of the monitoring space, and a second motion sensor 14 located in the kitchen of the monitoring space.
A second type of event sensor which may be used by the monitoring system is a contact sensor, which may e.g. be used to detect the opening or closing of a door or window. An event sensor which detects the opening or closing of a door may be referred to as a door event detector, such as a contact sensor. A door event detector may be configured to detect the opening or closing of a front door, a back door, an appliance door, a cupboard door, or any other door. A door event detector detects an event when the door is opened or when the door is closed. Some door event detectors detect events both when the door is opened and when the door is closed. Generally contact sensors detect events both when the door is opened and when the door is closed and can distinguish between the event where the door is opened and the event where the door is closed. For example, a door event detector may detect a first type of event when a caree's front door is opened, and a second type of event when the front door is closed. The monitoring device 10 is also in communication with a first contact sensor 16, arranged to detect whether a bedroom door 30 is open or closed, and a second contact sensor 18, arranged to detect whether the front door 32 is open or closed.
A third type of event sensor which may be used by the monitoring system is a sensor which detects an appliance being switched on or switched off. Sensors which detect an appliance being switched on or off may be referred to as appliance event sensors. They may also be referred to using a particular appliance name. For example, a kettle event sensor is an appliance event sensor which detects a kettle being switched on or off. An appliance event sensor detects an event when the appliance transitions from an 'off' state to an 'on' state, or when the appliance transitions from an 'on' state to an 'off' state. Some appliance event sensors detect events both when the appliance transitions from an 'off' state to an 'on' state and when it transitions from an 'on' state to an 'off' state, and may be configured to distinguish between the event of the appliance being switched on and the event of the appliance being switched off. For example, an appliance event sensor (in this example, a kettle event sensor) may detect a first type of event when a kettle transitions from an 'off' state to an 'on' state, and a second type of event when the kettle transitions from an 'on' state to an 'off' state. In the exemplary monitoring system illustrated in Figure 1, the monitoring device 10 is also in communication with a first current sensor 24 arranged to detect (and optionally measure the value of) power or current in a kettle 38 and a second current sensor 26 arranged to detect current or power in a microwave 40. The current sensors 24, 26 could be integrated into smart plugs or sockets, through which the kettle 38 and microwave 40, respectively, are connected to an electricity supply. The event where the kettle transitions from the 'off' state to the 'on' state, and the event where it transitions from the 'on' state to the 'off' state, can be determined using a current sensor and the method described below.
Another type of sensor which may be used by the monitoring system is a flow detector. In the exemplary monitoring system illustrated in Figure 1, the monitoring device 10 is further in communication with a first flow detector 20, arranged to detect whether a toilet 34 has been flushed (e.g. by detecting flow in a water pipe supplying the toilet 34) and a second flow detector 22, arranged to detect whether a water tap 36 is in use (e.g. by detecting flow in a water pipe supplying the tap 36).
At least one of the sensors may be configured to detect a status, rather than an event. Example statuses include whether or not an appliance (for example, a light, kettle, television, or microwave) is currently switched on; whether or not a door (for example, an appliance door, front door, or back door) is currently open; or any other status of operation. Using this information, it can be determined whether or not an event has occurred, for instance by identifying when the status detected by the sensor changes from a first state, such as an 'off' state, to a second state, such as an 'on' state.
At least one of the sensors may be configured to measure power or current in order to detect events, determine a state of operation, or both. For example, a sensor in a plug may be configured to measure the power or current used by an appliance.
When an appliance is on, the measured power may be above a first threshold. The switching on of an appliance may raise the measured power above the first threshold. A sensor may be configured to detect that an appliance is switched on when the measured power is above the first threshold. For example, the status of the appliance may be determined to be on when the measured power is above the threshold of 15W. The sensor may be configured to be triggered when the measured power transitions from below the first threshold to above the first threshold. For example, the switching on of an appliance may be detected when the measured power transitions from below a threshold of 15W to above the threshold of 15W.
When an appliance is off, the measured power may be below a second threshold. The switching off of the appliance may lower the measured power below the second threshold. The sensor may be configured to be triggered when the measured power transitions from above the second threshold to below the second threshold. The sensor may be configured to detect that an appliance is switched off when the measured power is below the first threshold. For example, the switching off of an appliance may be detected when the measured power transitions from above a threshold of 10W to below the threshold of 10W. The status of the appliance may be determined to be off when the measured power is below the threshold of 15W.
The sensor may be configured such that the first and second thresholds may be tuned. Tuning may be required to cope with low power appliances such as LED lights and phone chargers. Alternatively, the sensor may be configured such that only one of the thresholds may be tuned. This may be useful with higher power appliances, for example, where only the first threshold may need to be tuned. The tuning could be automatic, for example based on the data from the training/learning period. For example the first and/or second threshold could be determined based on a learned measure of voltage or current or power level from sensor data from the learning period. The threshold(s) could be determined based on periods of automatic usage, for example smart devices/appliances may have periods of automatic usage based on preset or remote-controlled timings (e.g. an appliance may be turned on and off by a smart plug). These periods of automatic usage can be used to identify the power consumption or current drawn by the appliance when it is on, e.g. an average power/current, and the threshold(s) for determining activity can be set based on the identified power or current.
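As a minimal sketch of the two-threshold (hysteresis) event detection described above, assuming the illustrative 15 W switch-on and 10 W switch-off thresholds; the class and method names are not taken from the document.

```python
class ApplianceEventDetector:
    """Derive on/off events from a stream of power readings using two thresholds."""

    def __init__(self, on_threshold_w=15.0, off_threshold_w=10.0):
        self.on_threshold_w = on_threshold_w    # power above this => appliance on
        self.off_threshold_w = off_threshold_w  # power below this => appliance off
        self.is_on = False

    def update(self, power_w):
        """Return 'switched_on', 'switched_off' or None for one power sample."""
        if not self.is_on and power_w > self.on_threshold_w:
            self.is_on = True
            return "switched_on"
        if self.is_on and power_w < self.off_threshold_w:
            self.is_on = False
            return "switched_off"
        return None


# Example: a kettle drawing around 2 kW and then returning to standby.
detector = ApplianceEventDetector()
for sample in [0.5, 2000.0, 1995.0, 3.0]:
    event = detector.update(sample)
    if event:
        print(event)
```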
Other types of sensors are envisaged. For example, sensors may be used to detect events when any appliance in a particular area transitions from an 'off' state to an 'on' state, for example, by sensing the status of a circuit providing power to that area. Sensors may be connected to water pipes, locks, or any other item which may be sensed to detect events.
Pressure sensors may be placed in beds or seats to detect events when a person lies down or gets up.
At least one of the sensors may provide remote operation functionality. For example, a sensor may be configured to control an appliance. In one example, a sensor is configured to turn an appliance on or off. Alternatively, the sensors may provide sensing functionality only (i.e., the sensor will not provide remote operation functionality). This has the benefit of keeping the cost of the sensors, and therefore the cost of the system, low, and may also provide improved security (e.g. in the event the system is hacked). Further, this reduces the size of the sensors, and therefore increases the places in which a sensor may be placed, and decreases how noticeable the sensors will be in the caree's home.
Instructions may be provided with the monitoring system to place sensors at preferred positions.
Sensor measurements may be taken by the sensors periodically, e.g. at a predefined sampling rate (with adjacent measurements being separated by a predetermined sampling time). The sampling time is preferably not more than 5 seconds and at least 1 ms.
Preferably the sampling time is greater than about 0.1 seconds or greater than about 0.5 seconds. Generally the sampling time is less than 3 seconds or less than 2 seconds. Sensor signals indicative of the measured characteristic may be sent to the monitoring device 10 as they are measured, or they may be stored on the sensor device and sent in batches to the monitoring device 10. In other examples, sensor signals are only sent to the monitoring device 10 if they differ from the previous, or most recent sensor signal. For example, the contact sensor 16 may continuously measure "contact" (i.e. indicating the bedroom door 30 is closed) for multiple consecutive sampling times and only send a sensor signal to the monitoring device 10 when it next measures "no contact" (i.e. indicating the bedroom door 30 is open). The next sensor signal sent may be the next "contact" measurement, e.g. indicative of the bedroom door 30 being closed.
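A short sketch of the 'send only on change of state' behaviour described above, assuming readings are available as (timestamp, state) pairs; the function name and sample values are illustrative only.

```python
def changed_readings(samples):
    """Yield only those sensor readings that differ from the previous one,
    mimicking a sensor that reports on change of state rather than every sample."""
    previous = object()  # sentinel so the first reading is always reported
    for timestamp, state in samples:
        if state != previous:
            yield timestamp, state
            previous = state


# Example: a contact sensor sampled every second; only two messages are sent.
samples = [(0, "contact"), (1, "contact"), (2, "no contact"), (3, "no contact")]
print(list(changed_readings(samples)))  # [(0, 'contact'), (2, 'no contact')]
```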
Based on the sensor signals the monitoring device 10 can identify activity in the monitoring space. For example, outputs from the first current sensor 24 may be used to identify the operation of the kettle 38, e.g. the kettle boiling.
Although in the embodiment described in Figure 1 the monitoring device 10 is in wireless communication with the sensors, in alternative embodiments one or more of the sensors can be integrated into the monitoring device 10 itself. For example the monitoring device may comprise the motion sensor 14.
3. Learning and Alert System
The connected care/monitoring system also includes a learning and alert system in communication with the plurality of sensors. The learning and alert system is configured to receive information from the plurality of sensors in the monitoring system. In particular, the learning and alert system is configured to receive information indicating the timing of events which occur in the monitoring space. The information received by the learning and alert system may also include an indication of what type of event occurred.
In the exemplary monitoring system illustrated in Figure 1, there is also provided an access point/router 50 and DSL/fibre modem 60 at the monitoring space location for connecting the monitoring device 10 to the Internet 70 and thus to the learning and alert system.
The system 1 includes a remote monitoring server 80, which receives sensor data from the monitoring device 10 and provides the learning and alert system functionality. In some alternative embodiments, this functionality is provided using the monitoring device.
The monitoring server 80 analyses the sensor information received in order to learn about normal activity in the monitoring space, e.g. by determining trends, and thereby determine conditions under which an alert about unusual activity in the monitoring space should be triggered. The monitoring server 80 also uses the sensor information to detect unusual activity (by comparing with the identified normal activity) and trigger alerts and notify one or more carers in the case of unusual activity.
The learning and alert system may be configured to determine that an event has occurred based on a status of operation.
The learning and alert system may be configured to determine a status of operation based on events. For instance, the learning and alert system may determine that a door is open because it received information indicating that the door was opened, and the learning and alert system has not yet received information indicating that the door has been closed.
The learning and alert system may be configured to determine a duration of a status of operation by determining the time between two related events. For example, the learning and alert system may determine a door was open for 2 minutes, because it received information indicating that the door was opened at 8.00 AM, and the learning and alert system received information indicating that the door was closed at 8.02 AM.
Although in the system of Figure 1, the learning and alert system is provided by the remote monitoring server 80, in alternative embodiments the learning and alert system may be configured to communicate with the plurality of sensors directly. In some embodiments, the learning and alert system may be configured to communicate with some of the sensors directly, and some of the sensors via the monitoring device 10.
The learning and alert system determines a 'normal' activity in the monitoring space, which may be indicative of behaviour of the caree, based upon the information stored by the learning and alert system over a period of time, known as the learning period. The learning period may be the two week period ending at 11:59 PM the previous day, for example.
The normal activity in the space or behaviour of the caree is described using a number of learned parameters. For example, the normal behaviour may describe the caree's normal wake up time, event time, bed time, and active/day time.
The learning period could be extended to make the learning and alert system less sensitive to variations in behaviour, e.g. to prevent one or two unusual days skewing the data, although this would also make the learning and alert system slower to adapt to changes in routines. The learning period could be adaptive to variations in behaviour, e.g. the length of the learning period could depend on the standard deviation of historical data collected for a previous learning period. The learning period may have a lower limit, for example at least one week. The lower limit can be used when the system is first installed, for example, to prevent the customer having to wait too long before notifications are enabled.
The learning and alert system may be configured to use all information stored by the learning and alert system over the learning period. Alternatively, the learning and alert system may be configured to filter the information relating to the learning period, e.g. to exclude information regarding days on which no activity or events, or less than a threshold level/count of activity or events were recorded (for example, on days where the caree is in hospital or staying with relatives).
There may also be an upper limit, or preferred length, of the learning period.
Advantageously the length of the learning period is two weeks, though preferred lengths of the learning period may range from one week to four weeks, more preferably 10 days to 3 weeks. Once monitoring/sensor data has been collected that exceeds the length of the preferred learning period, e.g. two weeks' worth of data have been collected, the learning period will shift by one day every day, and routines will be updated daily. Thus the learning period would be the most recent (or latest) period of time for which data is available.
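The sliding learning period and the optional filtering of low-activity days might, as a sketch, be expressed as follows; the per-day event counts, the minimum event count and the function name are assumptions made for illustration.

```python
from datetime import date, timedelta


def learning_period_days(today, events_per_day, length_days=14, min_events_per_day=1):
    """Return the most recent `length_days` calendar days (ending yesterday),
    excluding days whose recorded event count falls below a minimum.

    `events_per_day` maps a date to the number of events recorded on that day."""
    window = [today - timedelta(days=offset) for offset in range(1, length_days + 1)]
    return [day for day in window
            if events_per_day.get(day, 0) >= min_events_per_day]


# Example: a 14-day window with one no-activity day (e.g. a hospital stay) filtered out.
today = date(2018, 12, 13)
counts = {today - timedelta(days=d): (0 if d == 3 else 25) for d in range(1, 15)}
print(len(learning_period_days(today, counts)))  # 13
```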
The learning and alert system is configured to raise alerts when the sensor signals are not indicative of normal activity in the monitoring space. The time at which an alert is raised is referred to as the alert time. The alert time may be the time at which a deviation from normal activity is identified. Alternatively, the alert time may be some time (e.g. a predetermined time) after a deviation from normal activity is identified. Setting the alert time to be some time after a deviation from normal activity is identified allows for slight deviations from normal activity, in order to reduce the number of false alarms. The relationship between the time at which normal activity is expected and the alert time for the respective alert may be different for different alerts and different times.
A first type of alert that can be raised by the learning and alert system is a 'wake-up' alert.
Wake-up alerts may be raised when the learning and alert system determines that the sensor data indicates that the caree has not woken up, or not got up. The learning and alert system may determine that the caree has not woken up if, for instance, no activity at all has been detected by any of the plurality of sensors by the time that the caree would normally have woken up, e.g. no activity has been detected by 9am. This alert treats activity from any sensor as evidence that the caree has got up. The alert does not require that the caree always wakes up at the same time each day, since the learning and alert system would be configured to set the wake-up alert time (or the first alert time) such that wake-up alerts would only be sent after the latest time that the caree typically wakes up, in order to prevent sending false alarms if the caree has a short lie-in. Figure 2A shows an exemplary method for learning the wake-up alert criteria. Generally the method will be performed at the remote server 80, but in some embodiments it could be performed by the monitoring device 10.
At step 210 sensor data is received from the plurality of sensors in the monitoring space over a first time period, the learning time period. The sensor data is data collected by the sensors during the first time period, although it is not necessarily received by the remote server during the first time period (e.g. it may be received by the remote server 80 after a short delay).
At step 212 the sensor data for the first time period (i.e. the learning time period) is analysed. In this example, the way that the learning and alert system is configured to determine the first alert time, or wake up alert time, is to determine the probability of any activity being detected (e.g. any of the plurality of sensors being activated or detecting activity) by each of a plurality of times (also referred to as "analysis times") throughout the day based on the information stored by the learning and alert system over the learning period. The probability of an event being detected "by each time" means the probability of an event being detected in the time interval (also referred to as an "analysis time interval") between midnight (00:00h) and that time each day. Each of the plurality of analysis times (or the corresponding analysis time intervals) may be selected based on the sensor data being analysed, e.g. the sensor data from the learning period. For example, the first time a sensor activation / sensor activity is recorded on each day in the learning period can be selected as an analysis time. Alternatively the analysis times may be predetermined, e.g. at regular times throughout the day, such as every minute, every 5 minutes or every 10 minutes. Where the analysis times are predetermined, consecutive analysis times are preferably within 1 hour of each other, more preferably within 30 minutes, most preferably within 20 minutes. Generally predetermined analysis times would be at least 30s apart.
An example plot 250 of a measure of probability of any activity being detected by a plurality of analysis times throughout the day, determined based on the information stored over the learning period for an example caree, is shown in Figure 2B. The x-axis shows the time of day and the y-axis shows the probability on a scale from 0% to 100%. As can be seen, the plot 250 of probability shows a number of discrete probability points, with consecutive points in time being joined by a straight line. Here each probability point is indicative of the first time any sort of sensor activity was recorded in the monitoring space on each day in the learning period.
The graph of Figure 2B is produced as follows. The time of the first sensor activation (e.g. the earliest sensor signal of the day) is measured on each day of a two-week learning period. The graph is then produced by plotting the number of days on which at least one sensor had been activated by each point in the day. Figure 2B shows that on one day the first sensor event/activity was detected between 05:50 and 05:55, and on 12 days the first sensor activity was detected by 08:00; the probabilities on the y-axis are calculated as 1/14 and 12/14 respectively. Thus the plot of Figure 2B approximates a cumulative probability. Note from the graph of Figure 2B that on three days the first sensor activity was detected between 06:55 and 07:00. The measure of probability may be adjusted, for example by fitting a curve to the data points.
At step 214 the determined probability is compared with a wake-up probability threshold, shown by line 260 in Figure 2B. Here the probability threshold is a high probability threshold of 90%. Preferably such a high probability threshold is at least 70%, more preferably at least 80%, most preferably at least 85%. Generally the probability threshold is less than 95%.
At step 216 a wake-up alert time is selected. Here, the first data point 270 (i.e. the earliest analysis time) for which the probability exceeds the wake-up probability threshold is learned as the wake-up alert time for this caree. Thus, where the analysis times each correspond to the earliest time sensor activity is recorded on one of the days in the learning period, the wake-up alert time is the next of the earliest sensor activity times following the first 90% of earliest sensor activity times in the learning period. The same wake-up threshold will therefore correspond to a different time of day for each caree. In the example caree's home, some activity is detected on 90% of days over the learning period by 08:10, and therefore an alert will be raised at this time if no activity has been detected so far on future days. In other embodiments, the time at which the line 250 connecting the probability points crosses the threshold 260 may be selected as the wake-up alert time.
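The selection of the wake-up alert time in steps 212 to 216 could be sketched as below, assuming the analysis times are the earliest activity times recorded on each learning-period day (as in Figure 2B) and representing times as minutes after midnight; the function name and example data are illustrative only.

```python
def wake_up_alert_time(first_activity_times, threshold=0.9):
    """Given the earliest sensor-activity time on each learning-period day
    (as minutes after midnight), return the earliest analysis time by which
    the cumulative probability of some activity exceeds the threshold."""
    times = sorted(first_activity_times)
    n = len(times)
    for index, minute in enumerate(times, start=1):
        if index / n > threshold:        # cumulative probability of activity so far
            return minute                # first analysis time above the threshold
    return None                          # threshold never reached in this data


# Example: 14 days of first-activity times clustered around 07:00-08:10.
days = [355, 415, 418, 419, 420, 425, 430, 440, 445, 450, 460, 470, 480, 490]
minute = wake_up_alert_time(days)
print(f"{minute // 60:02d}:{minute % 60:02d}")  # 08:00 for this example data
```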
A second type of alert that may be raised by the learning and alert system is referred to as a 'Sensor not fired so far' alert. A sensor not fired so far alert may be raised when a specific sensor (or a specific group of sensors in the plurality of sensors) has not detected activity by a time that it normally would have done, e.g. the microwave has not been used by 7pm. The learning and alert system may be configured to learn an alert time for each sensor for each day that the sensor normally detects activity. Thus the method for determining the sensor not fired alert time will be very similar to the method 200 shown in Figure 2A for the wake up alert time, except instead of using sensor data from all of the plurality of sensors and looking at the earliest time that any of the plurality of sensors is activated, the analysis will be performed based on the sensor data from a single sensor (or group of sensors). Where the sensor not fired so far alert refers to a group of sensors the group may be linked based on sensor characteristics (e.g. sensors of the same type, such as movement or electrical, or sensors relating to the same type of appliances, e.g. sensors arranged to monitor the same type of appliances in the house, such as in a house where there are multiple showers or toilets the user may use), or positioning (e.g. sensors located in the same room in the house, such as any sensor in the kitchen). The alert time is the time at which an alert will be raised. For example, a user may turn on the TV to watch a popular TV show at 7.30pm every Tuesday, Thursday, and Friday. The turning on of the TV may be detected by a sensor.
The learning and alert system may be configured to learn an alert time for the sensor which detects the turning on of the TV. Should the TV not be turned on by 7.30pm on a Tuesday, Thursday, or Friday, the learning and alert system will raise an alert at the alert time. The alert time may be 7.30pm, or any time thereafter. However, in some cases, sensors which do not detect activity on a daily basis will not cause alerts to be raised.
Figure 3 shows an example of the probability of activity being detected so far by each of a plurality of sensors throughout the day for an example monitoring space. Plot 312 relates to contact sensor readings for a contact sensor arranged to monitor when the front door is closed or open (where open is "active", e.g. opening of the door is indicative of activity or an event). Plot 314 corresponds to the probability of a plug sensor measuring current/power to a hair straightener being activated by each of a plurality of analysis times. Plot 316 shows the probability of a motion sensor in the hallway being activated (i.e. detecting movement) by each of a plurality of analysis times. Plot 320 shows the probability of a motion sensor in the kitchen being activated (i.e. detecting motion) by each of a plurality of analysis times. Plot 322 shows the probability of a motion sensor in the living room detecting motion by each of a plurality of analysis times. Plot 324 shows the probability of a contact sensor arranged to measure whether the kitchen door is open or closed being activated by each of a plurality of analysis times (where either change of state, i.e. from open to closed, or from closed to open, indicates activity). Plot 326 shows the probability of a power or current sensor arranged to measure power or current through a plug connected to the microwave being activated by each of a plurality of analysis times.
The probability of each sensor being activated, or recording sensor activity, by each of a plurality of analysis times (or within analysis periods, e.g. periods beginning at midnight and ending at the analysis time) is compared against a threshold 330. Here the threshold 330 is 90%. The first data point which exceeds the threshold 330 is learned as the time of day by which this sensor normally detects activity, and this is selected as the sensor alarm time for that sensor. In the example shown in Figure 3, on over 90% of days motion is detected in the kitchen by 08:10 (shown by plot 320), while motion is detected in the hallway (plot 316) and living room (plot 322), and the kitchen door is opened/closed (plot 324), on over 90% of days by 15:10. It is worth noting that the above example is not from an elderly person's home and each appliance is not used every day, since the home is sometimes empty overnight and the occupants only return in the afternoon/evening. The graph shows that if there is not enough data some devices will not reach the threshold (e.g. if the front door is not opened on more than 90% of the days; as shown in plot 312 of Figure 3, the front door is only opened on 7 out of the 14 days).
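The same computation applied per sensor, as used for the 'sensor not fired so far' alert, might look like the following sketch; the 14-day learning period, sensor names and example times are assumptions, and a sensor that never reaches the threshold (like the front door in Figure 3) is given no alert time.

```python
def sensor_alert_times(per_sensor_first_times, threshold=0.9, total_days=14):
    """For each sensor, return the earliest time of day (minutes after midnight)
    by which it has recorded activity on more than `threshold` of learning days,
    or None where the sensor never reaches the threshold."""
    alert_times = {}
    for sensor, daily_times in per_sensor_first_times.items():
        times = sorted(daily_times)      # one entry per day on which the sensor fired
        alert_times[sensor] = next(
            (t for i, t in enumerate(times, start=1) if i / total_days > threshold),
            None)
    return alert_times


# Example: the kitchen motion sensor fires every day, the front door on only 7 days.
data = {"kitchen_motion": list(range(470, 484)),   # 14 daily first-firing times
        "front_door": list(range(600, 607))}       # only 7 days with any firing
print(sensor_alert_times(data))  # {'kitchen_motion': 482, 'front_door': None}
```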
A third type of alert that may be raised by the learning and alert system is referred to as a 'night-time activity alert'. A night-time activity alert may be raised when unusual motion has been detected during the night, e.g. movement has been detected in the kitchen at 2am.
Motion sensors are not expected to be positioned such that they detect night-time bathroom visits, so such activity should not raise alerts. However, if they are, the learning and alert system may be configured to account for this.
In order for the night time alert to be raised, a sensor must detect activity during a time slot in which it does not normally detect activity, and preferably that time slot must be during a period of time which the learning and alert system considers to be the night. The period of time which the learning and alert system considers to be the night may be fixed, e.g. as 00:00 - 05:00. Alternatively, the period of time could be learned individually for each customer.
Figure 4A shows an example method 400 for determining a night time alarm condition. In step 410 sensor data from one sensor (or from a group of sensors, e.g. sensors in a similar location, or of the same type) is received. The received sensor data relates to a first time period (the learning time period).
At step 412 the sensor data is analysed to determine the probability of the sensor (or at least one sensor in the group) recording activity (or being activated) in the monitoring space in each of a plurality of analysis time intervals. Here the analysis time intervals are designated night time intervals, such that they occur only during a designated night time period each day, e.g. 00:00-05:00. The analysis time intervals are consecutive during the night time period such that the end time of one is the start time of the next. In this example the analysis time intervals are regular intervals, here each being one hour long. However other lengths of analysis time intervals are possible. Preferably the analysis time intervals are each at least 20 minutes, at least 40 minutes or at least 50 minutes. Preferably the analysis time intervals are each not more than 3 hours, more preferably not more than 2 hours.
Figure 4B shows an example plot 460 of the probability of motion being detected during each hour of the day for an example monitoring space. This could be the probability of motion being detected by a single sensor (e.g. only the hallway motion sensor), or the probability of motion being detected by any of a group of sensors (e.g. the hallway motion sensor, as well as the kitchen and living room motion sensors).
In step 414 the learning and alert system compares, for each of the night time intervals (e.g. each of the hours 00:00-01:00, 01:00-02:00, 02:00-03:00, 03:00-04:00 and 04:00-05:00), the probability of the sensor (or, where a sensor group is used, any sensor in the sensor group) recording activity against a low probability threshold (shown as 470 in Figure 4B). In this case the threshold 470 is 10%. The threshold 470 could be at least 3% or at least 5% and/or not more than 30% or not more than 20%.
In step 416 any of the analysis time intervals for which the probability of sensor activation is below the threshold are selected as an alarm time interval. A night-time activity alert is activated should motion be detected during any time interval which has a probability lower than the threshold and is during the period of time which is designated as the night. In this example, the probability of detecting motion is less than 10% between 03:00 and 06:00, although only 03:00-04:00 and 04:00-05:00 are designated as alert time intervals, so alerts would only be raised between 03:00 and 05:00 due to the fixed night-time boundary.
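A sketch of steps 414 and 416, assuming hourly probabilities of motion have already been computed from the learning period and the designated night-time period is fixed at 00:00-05:00; the function name and probability values are illustrative.

```python
def night_alert_hours(activity_probability_by_hour, night_hours=range(0, 5),
                      low_threshold=0.10):
    """Return the hours within the designated night-time period whose probability
    of recorded motion falls below the low probability threshold."""
    return [hour for hour in night_hours
            if activity_probability_by_hour.get(hour, 0.0) < low_threshold]


# Example: motion is rare between 03:00 and 06:00, but only 03:00-05:00 lies
# inside the fixed night-time boundary, as in the worked example above.
probabilities = {0: 0.20, 1: 0.15, 2: 0.12, 3: 0.05, 4: 0.04, 5: 0.07}
print(night_alert_hours(probabilities))  # [3, 4]
```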
A fourth type of alert that may be raised by the learning and alert system is referred to as an 'on-going activity alert'. On-going activity alerts may be raised when no activity has been detected during a time interval in which activity is normally detected e.g. there is normally some activity between 7-8pm but today no activity was detected during this time. This alert treats activity data from all sensors equally, and would require the caree to always do something at the same time each day, although it does not require this to be the same activity each day. Figure 5A shows a method 500 for identifying alert time intervals for the ongoing activity alert.
At step 510 sensor data is received from the plurality of sensors in the monitoring space. At step 512 the sensor data is analysed to determine the probability of any sensor activity being recorded (by any of the plurality of sensors) in each of a plurality of analysis time intervals.
The analysis time intervals are consecutive over the day such that the end time of one time interval is the start time of the next. In this example, the analysis time intervals are regular intervals, here each being one hour long, starting and ending on the hour. However other lengths of analysis time intervals are possible. Preferably the analysis time intervals are each at least 20 minutes, at least 40 minutes or at least 50 minutes. Preferably the analysis time intervals are each not more than 3 hours, more preferably not more than 2 hours. The analysis here is similar to that in step 212 of method 200, in that activity from a single one of any of the sensors in the plurality is enough to be classified as activity. However, whilst in step 212 the first activity in the day (also the first activity in the time period) was identified, here any activity in the time period (whether the first in the day or not) will contribute to the probability. Figure 5B shows a plot 560 of the probability of recording at least one sensor activity in each of the analysis time periods. As can be seen, there is an entry for each hour-long time interval, and consecutive entries are joined by a straight line.
In step 514 the probability for each analysis time interval is compared to a high probability threshold (shown in Figure 5B as 570). Once again the high probability threshold 570 is a threshold of 90%. Preferably such a high probability threshold is at least 70%, more preferably at least 80%, most preferably at least 85%. Generally the probability threshold is less than 95%. In this example, 15:00 -16:00, 16:00 -17:00 and 17:00 -18:00 are all identified as high activity time intervals, and as such, on-going activity alerts would be raised if no activity was detected during any one of those hours.
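Correspondingly, the high-activity intervals for the on-going activity alert (steps 512 and 514) might be identified as in the following sketch, again assuming hourly probabilities computed from the learning period; names and values are illustrative.

```python
def ongoing_activity_hours(any_activity_probability_by_hour, high_threshold=0.90):
    """Return the hour-long intervals in which some sensor activity is recorded
    on more than `high_threshold` of learning-period days; an on-going activity
    alert is raised if one of these hours later passes with no activity at all."""
    return [hour for hour, probability in sorted(any_activity_probability_by_hour.items())
            if probability > high_threshold]


# Example matching the worked example above: 15:00-18:00 are high-activity hours.
probabilities = {14: 0.80, 15: 0.95, 16: 0.93, 17: 0.92, 18: 0.70}
print(ongoing_activity_hours(probabilities))  # [15, 16, 17]
```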
A fifth type of alert that may be raised by the learning and alert system is referred to as a 'door left open' alert. This alert could be raised if a door, such as the front door, is left open for an extended period of time above a door open time threshold. In one example the door open time threshold is 12 minutes, however other door open time thresholds are envisaged.
Preferably the door open time threshold is at least 2 minutes, more preferably at least 5 minutes, most preferably at least 10 minutes and/or not more than 30 minutes, preferably not more than 20 minutes, more preferably not more than 15 minutes. For example the door open time threshold is at least 10 minutes and not more than 15 minutes. The door open time threshold could be learned, e.g. determined based on the sensor data from the learning time period. For example, the probability of a door being open for more than each of a plurality of times may be recorded and compared to a low probability threshold (e.g. around 10% as discussed above), and then the shortest time with a probability below the low probability threshold is selected as the door open time threshold.
In addition, a door open alert could be triggered if the door is opened at all during the night, e.g. the designated night time period (such as 00:00-05:00). The day-time duration may be learned for each customer, e.g. based on data obtained during the learning period. Alternatively, the duration may be fixed or predetermined. Furthermore, as discussed above, the period of time which the learning and alert system considers to be the night may be learned or fixed.
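The learned door open time threshold described above might be derived as in this sketch, assuming a record of historical door-open durations and a low probability threshold of around 10%; the candidate range and example durations are assumptions.

```python
def learn_door_open_threshold(open_durations_minutes, candidate_minutes=range(2, 31),
                              low_probability=0.10):
    """Pick the shortest candidate duration that historical door openings exceed
    on fewer than `low_probability` of occasions; openings longer than this
    would trigger a 'door left open' alert."""
    total = len(open_durations_minutes)
    for minutes in candidate_minutes:
        exceeded = sum(1 for d in open_durations_minutes if d > minutes) / total
        if exceeded < low_probability:
            return minutes
    return max(candidate_minutes)  # fall back to the longest candidate


# Example: most historical openings last well under a quarter of an hour.
durations = [1, 2, 2, 3, 3, 4, 5, 6, 7, 8, 9, 10, 11, 18]
print(learn_door_open_threshold(durations))  # 11 (minutes) for this example data
```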
Preferably the system is capable of learning and triggering all, or any combination of, the alerts set out herein.
4. Example Sensor Location
In a preferred embodiment, the monitoring system will comprise 3 appliance sensors, 2 motion sensors, and 2 door sensors, located as follows: an appliance sensor, measuring current or power, for a kettle or coffee maker (as these appliances are commonly used during a caree's wake-up routine); an appliance sensor, measuring current or power, for a microwave or toaster (as these appliances are commonly used during a caree's wake-up routine and during food preparation times); an appliance sensor, measuring current or power, for a TV or radio (as these appliances are commonly used throughout the day); a motion sensor associated with the living room (a commonly occupied room during daytime activity); a motion sensor associated with the kitchen (a commonly occupied room during morning routine and daytime activity); a door sensor for the main access door (to satisfy a key use case of raising an alert if the main access door is left open); and a door sensor for a cupboard door or fridge door (to provide an indication of ongoing activity).
It has been found that this system strikes a good balance between providing meaningful and valuable information and assistance to the carer, while at the same time using a minimal number of sensors in the caree's home, thereby respecting the privacy and independence of the caree. However, this particular setup is only a preferred embodiment. Other setups are also envisaged. Setups may vary depending on the particular caree and carer.
5. Monitoring System
The monitoring system provides an information feed to the carer (and possibly a circle of other carers/secondary carers).
Figure 20 shows a method 2000 for grouping individual recorded sensor activities into a sensor activity grouping, for example when sensors are activated, deactivated and then reactivated in fairly quick succession. This provides filtering for the sensor activities, or events, which can reduce processing power and mean that more meaningful, relevant and/or easily digestible data can be displayed to a user. A sensor activity grouping is a grouping or set of distinct sensor events/activities. A sensor event or activity is a single activation of a sensor, e.g. defined between a sensor activation time and a sensor deactivation time. By grouping temporally proximal sensor activities into a sensor activity grouping it is possible to provide more useful information.
In step 2010 sensor data is received from one sensor in the monitoring space for a second time period, also referred to as the monitoring time period. In some cases the sensor data received in step 2010 is from a group of linked sensors; for example, sensors may be linked by their location (for example sensors of the same type, e.g. motion sensors, located in a similar area, such as the same floor, may be linked in a single group), by their type (contact or appliance or flow sensors), or may be grouped based on the appliance or object the sensor is monitoring. For example, sensors monitoring cooking appliances (microwaves, hobs, ovens) may all be grouped into one linked sensor group. Sensor groups may be predetermined, or preprogrammed. In other embodiments the groups may be selected by a user, such as on installation. In other embodiments a sensor group may be defined automatically by the system, e.g. based on sensor location data, or based on sensor data from the learning period (e.g. to identify sensors which are normally activated at similar times, or which have a probability of being activated at similar times that is higher than a sensor group probability threshold).
In some embodiments the method 2000 further comprises receiving a sensor identifier indicative of the individual sensor and/or the type of sensor and/or a grouping the sensor may be a part of.
In step 2012 a sensor activation time is determined based on the received sensor data. A sensor activation time can be determined based on a sensor signal switching from an inactive to an active state. Alternatively the sensor signal itself may be an indication that the sensor has switched from an inactive to an active state. An active state may be defined in a different way for different sensors; for example a contact sensor may be in an active state if it is detecting no contact, for example the door it is monitoring is open, whilst an appliance sensor may be deemed to be in an active state if the measured current or power exceeds a predetermined threshold, for example 15W.
In step 2014 the time difference between the activation time that was determined in step 2012 and a previous deactivation time of that sensor or of a sensor in the group of sensors is calculated. The previous deactivation time should be the most recent or latest deactivation time of that particular sensor. Alternatively where the sensor data is received from a sensor that is part of a sensor group, the latest or previous deactivation time could be the most recent deactivation time for any of the sensors in the sensor group. In some embodiments where there is a sensor group, it may be determined that one of the other sensors in the sensor group is currently active and thus there is no most recent deactivation time. Therefore the time difference calculated in step 2014 would be zero.
In step 2016 it is determined whether the time difference calculated in step 2014 is greater than an activity grouping time threshold. The activity grouping time threshold is a threshold time value for determining whether consecutive sensor activities should be grouped under the same sensor activity grouping. The activity grouping time threshold is generally less than 20 minutes, preferably less than 10 minutes, more preferably less than 5 minutes. Generally the activity grouping time threshold will be greater than 1 second, preferably greater than 2 seconds, more preferably greater than 5 seconds or 10 seconds.
An activity grouping time threshold of at least 5 minutes and not greater than 10 minutes has been found to be beneficial.
In some embodiments the method further comprises, either before step 2016 such as after step 2010, after 2012, or after step 2014, determining a sensor type or a sensor group type. Each different type of sensor or type of sensor group may be assigned an activity grouping time threshold based on its type. In some cases an activity grouping time threshold may be determined based on the type of appliance or object that the sensor(s) are monitoring, for example sensors arranged to monitor doors may have different activity grouping time thresholds from sensors arranged to monitor microwaves. In some embodiments the activity grouping time threshold is determined based on sensor data collected from the first time period or the learning time period, for example by analysing the temporal frequency of sensor activities for the sensor, or the sensor group, over the first time period.
If the time difference is greater than the activity grouping time threshold the method proceeds to step 2020. At step 2020 a new activity grouping is defined and the activation time determined in step 2012 is selected as the activity grouping start time. This is because the determination in step 2016 shows that the activity has started more than a predetermined period of time (the activity grouping time threshold) after the previous sensor activity for that type of sensor or for that particular sensor, and therefore should not be classed in the same activity grouping as the previous activity and thus a new entry for sensor activity should be defined.
In alternative embodiments the question decided at step 2016 is whether there is an open activity grouping, such as an activity grouping with an activity grouping start time but not an activity grouping end time. If it is determined that there is no open activity grouping for that sensor or for the particular group of sensors then the method proceeds to step 2020. Upon determining at step 2016 that there is an open or current activity grouping for the sensor or for the particular group of sensors, i.e. an activity grouping with a start time but not yet with an end time, the method proceeds to step 2018.
After step 2020 the method proceeds to step 2022. In this step the new activity grouping defined in step 2020 is represented as an activity entry in an activity log. The activity entry also includes the activity grouping start time selected in step 2020, i.e. the sensor activation time determined in step 2012. The method then proceeds to step 2024.
If at step 2016 it is determined that the time difference between the determined activation time of step 2012 and the previous deactivation of that sensor (or a sensor in the sensor group) is not greater than the activity grouping time threshold, then at step 2018 the sensor activity which starts with the determined activation time is assigned to an existing activity grouping. The existing sensor activity grouping should be an open or current activity grouping, i.e. one that has already been assigned a grouping start time (e.g. at a previous iteration of the method in step 2020) but has not yet been assigned an activity grouping end time.
After step 2018 the method proceeds to step 2024.
At step 2024 a sensor deactivation time is determined based on the received sensor data. A sensor deactivation time may be determined simply by receiving a signal from a sensor indicating that it has changed from an active state to an inactive state. Alternatively, for example where sensor signals are a sampling set of sensor states at each of a number of sampling times, a sensor deactivation time may be determined based on the fact that a preceding sensor reading shows the sensor is active and the next sensor reading shows the sensor is inactive.
The method then proceeds to step 2028. At step 2028 a timer is started. The timer should be started at the sensor deactivation time.
The method then proceeds to step 2030. At step 2030 it is determined whether a subsequent sensor activation time has been determined before the activity grouping time threshold is reached by the timer. Thus throughout this method sensor data continues to be received and is continually analysed to identify sensor activation times and sensor deactivation times, e.g. it is analysed as it is received.
If it is determined at step 2030 that a subsequent sensor activation time for the particular sensor (or any sensor in the sensor group) is identified prior to the timer reaching the grouping time threshold, then the method returns to step 2018 where the sensor activity starting with the subsequent activation time is added to the existing activity grouping. However, if the timer reaches the activity grouping time threshold without a subsequent sensor activation time being identified for the specific sensor (or for any sensor in the group of sensors) then the method proceeds to step 2032.
At step 2032 the deactivation time determined in step 2024 is selected as the activity grouping end time for the new activity grouping defined in step 2020 or for the existing activity grouping identified in step 2018.
The method then progresses to step 2034. The duration of the activity group is determined by subtracting the activity grouping start time from the activity grouping end time.
The method then progresses to step 2036. In step 2036 the duration determined in step 2034 is appended to the activity entry for the activity grouping in the activity log. Thus the activity log includes an activation time for each activity grouping and a duration for each activity grouping. Providing two different measures (one a time and one a duration) makes the data more easily digestible for a user.
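The timer-based grouping of steps 2012 to 2036 can be illustrated with the simplified, offline Python sketch below, which consumes ('activated'/'deactivated', time) events such as those produced by the state_transitions helper above and emits activity groupings with a start time, end time and duration. The class and field names are illustrative only, and the sketch omits the live open-grouping bookkeeping an online implementation would need.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ActivityGrouping:
    start: float                  # activity grouping start time (steps 2020/2022)
    end: Optional[float] = None   # activity grouping end time (step 2032)

    @property
    def duration(self):
        # duration appended to the activity entry (steps 2034/2036)
        return None if self.end is None else self.end - self.start

def group_activities(events, threshold):
    """Group events for one sensor (or one sensor group) into activity
    groupings separated by more than `threshold` seconds of inactivity."""
    groupings, current, last_off = [], None, None
    for kind, event_time in events:
        if kind == "activated":
            if current is None or (last_off is not None
                                   and event_time - last_off > threshold):
                if current is not None and last_off is not None:
                    current.end = last_off          # close the previous grouping
                current = ActivityGrouping(start=event_time)
                groupings.append(current)           # new grouping (step 2020)
            # otherwise the activity joins the open grouping (step 2018)
        else:
            last_off = event_time
    if current is not None and last_off is not None:
        current.end = last_off                      # final grouping end time
    return groupings

For example, with a 60 second threshold, two microwave activations separated by a few seconds would fall into one grouping, whereas activations an hour apart would produce two groupings.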
The activity log can be displayed to the user as part of a time line view of activity in the monitoring space. Advantageously the data is filtered so that sensor data for activities which are spaced close together in time can be grouped and displayed as a single entry. This reduces data processing and also provides a clearer and more useful display to the user.
The monitoring system may comprise a user device configured to notify a carer when unusual activity is detected. Figure 1 shows user devices 90, 92, 94, 96, 98, each of which could be a smartphone, laptop or tablet with an application that allows the user to interface with the monitoring server 80 (and/or monitoring device 10). In addition, through the user devices 90, 92, 94, 96, 98 it may be possible to display to one or more carers information about recent and/or current activity in the monitoring space, as will be described further below.
One or more of the user devices 90, 92, 94, 96, 98 can also be used by a user in setup of the monitoring system 1. For example a user can enter information about one or more of: the specific devices/appliances, types of devices/appliances (e.g. doors 30, 32, toilet 34, water tap 36, kettle 38 and microwave 40) and/or locations of the sensors (e.g. which room the motion sensors 12, 14 are in) within the monitoring space.
To this end, returning to Figure 1, there is a primary user device 90 in communication with the monitoring server 80. The monitoring server 80 sends notifications to the primary user device 90 if an alert is triggered, as will be described in more detail below. There are also two secondary user devices 92, 94, to which notifications may be sent, e.g. if no response is received after sending a notification to primary user device 90, or in response to a more severe alert being triggered. In addition there are two tertiary user devices 96, 98, to which notifications may be sent, e.g. if no response is received after sending notifications to secondary user devices 92, 94 or in response to a very severe alert being triggered. The user devices have a screen on which information can be displayed to the carer. The user devices may be portable user devices, such as a laptop or mobile device. Each user device may be a phone with suitable software installed. The monitoring system is described below with regard to a generic mobile user device, such as may already be owned by a carer.
However, it is to be understood that the monitoring system may use other device types, including devices manufactured specifically for the monitoring system.
The user devices are configured to use at least one screen layout to provide information from the sensors. Each screen layout distils information from the sensors in an understandable and meaningful format. Screenshots of some example screen layouts are shown in Figures 6, 7A, 7B, 8A, 8B, 9, 10A, 10B and 11 to 19 and 21 to 23.
Figures 6 and 12 show a screen display of an application on a user device, such as one of the user devices 90, 92, 94, 96, 98. The notification 602 is an alert that no activity from any of the sensors in the monitoring space has been identified by 8:30am. 8:30am is a threshold time which has been defined, as described above, to trigger alerts. This is because usually activity of some sort is detected (i.e. at least one sensor is fired in the monitoring space) before 8:30am each day. This could mean that the probability of at least one sensor activation happening before 8:30am exceeds a probability threshold, such as 90%. For example on more than a threshold proportion of the days in the learning period activity was detected before 8:30am. The notification 602 displays this information. Figures 14 to 19 show further examples of notifications on a user device.
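A minimal sketch of how such a threshold time could be checked against the learning-period data is given below: the candidate time qualifies if the proportion of days on which the first sensor activity occurred before it exceeds the probability threshold. The 8:30am candidate and 90% threshold follow the example above; the function and parameter names are assumptions for the sketch.

from datetime import time

def qualifies_as_threshold_time(first_activity_times, candidate=time(8, 30),
                                probability_threshold=0.9):
    """first_activity_times holds one datetime.time per learning-period day,
    giving the time of the first sensor activity on that day.  Returns True
    if `candidate` can be used as a 'no activity yet' alert time."""
    if not first_activity_times:
        return False
    before = sum(1 for t in first_activity_times if t <= candidate)
    return before / len(first_activity_times) > probability_threshold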
The notification 602 also contains two user selectable objects. The first user selectable object 604 gives the user the option to contact the caree. For example pressing the object 604 could initiate a telephone call to the caree's telephone. The second user selectable object 606 gives the user the option of closing the notification of the alert. This provides an indication that the user (for example a first carer) has at least viewed the alert and considered it but decided to dismiss it. For example the carer may be aware that the caree was not intending to be at home that morning anyway (for example if the caree is on holiday or in hospital or otherwise absent from the home overnight). The user interface screen of the notification 602 also includes an indication 608 that no actions have yet been taken in response to the notification. This may provide an indication that no action has been taken by the user of this particular user device (for example the user of a primary user device 90). In addition the indication that no action has been taken may indicate that no action has been taken by any of the users or carers who have been contacted and notified of the alert.
A notification 1301 of an alert may also be provided on a lock screen of a user device, as shown in Figure 13. Providing the notification 1301 on a lock screen may be triggered by assessing the severity of the related alert or unusual behaviour. Notifications with severity above a severity threshold may be displayed on a lock screen.
In one example, the user device (or an application on the user device) is configured to use two screen layouts to provide information about sensor events or activity in the monitoring space, namely a current activity layout and a timeline layout. Figures 7A and 7B show the current activity layout and Figures 8A and 8B show the timeline layout.
The current activity layout shown in Figures 7A and 7B includes areas for each sensor which provide information regarding that sensor. An item or device associated with each sensor is identified, together with the sensed activity state, i.e. whether it is "active" (e.g. switched on for an appliance, open for a contact sensor, or motion detected for a motion sensor) or "inactive". Figure 7A shows a current activity view of sensor activity in the monitoring space. The current activity view includes a first user selectable object 702 for navigating to or refreshing current activity views. The current activity view may refresh automatically. The current activity view also includes a second user selectable object 704 for navigating to a time line view of sensor activity. The current activity view includes a plurality of objects indicative of current sensor activity of a plurality of sensors in the monitoring space. For example a first sensor object 706 corresponds to an appliance sensor that is arranged to monitor kettle activity. For example the appliance sensor could be a current or power sensor. In the view shown in Figure 7A none of the plurality of sensors is currently active. Thus the sensor object 706 is not highlighted or circled; however, the sensor object 706 does include an indication of the most recent activity time of the kettle. Here the kettle was last active at 8:42am. This could indicate that the kettle was last activated at 8:42am and subsequently deactivated, or that the kettle was last deactivated at 8:42am.
The current activity view shown in Figure 7A also includes a sensor object 708 corresponding to a motion sensor in the kitchen. The sensor object 708 provides an indication that it relates to the kitchen, i.e. the location of the sensor. The sensor object 708 also indicates that there was last motion in the kitchen at 8:44am. The current activity view shown in Figure 7A also includes a sensor object 714 corresponding to the television (which was last on at 8:59am) and a sensor object 718 corresponding to the fridge door contact sensor. This shows the fridge door was last opened at 8:41am. This could indicate that the last time the fridge door transitioned from a closed state to an open state was at 8:41am and the fridge door had subsequently been closed, or it could indicate that the fridge door was most recently closed at 8:41am.
Whether an item associated with a sensor is active is indicated by highlighting the particular area for that sensor (e.g. by making the area bold, or circling the area). As shown in Figure 7B, the kettle icon 710 and the kitchen motion icon 712 are circled to highlight that activity is currently sensed, i.e. the kettle is on and there is motion in the kitchen. Whether an item associated with a sensor is active may also be indicated by including text to that effect in the particular area for that sensor, as shown in Figure 7B. If a sensor is not presently active the current activity layout may be configured to show when the relevant sensor was last active. For example the television icon 716 in Figure 7B is not highlighted or circled and it is indicated the television was last on yesterday. The fridge door icon 720 in Figure 7B is not highlighted or circled and indicates the fridge door was last active at 08:41.
The timeline layout provides a chronological display of events in the monitoring space.
Figure 8A shows a timeline view displayed on a user device. Once again the user device could be any of user devices 90, 92, 94, 96 or 98. Here the user selectable object 704 is underlined since it is the timeline view that is active. If the user selects the other user selectable object 702 then the time line view will be hidden and the current activity log will be shown on the user device screen. The timeline log shown in Figure 8A includes a chronological list of sensor activity in the monitoring space. The sensor activities are arranged in order from most recent to least recent.
A first line 820 in the timeline view of Figure 8A indicates it is relevant to the kitchen motion sensor. The timeline view shows that at 22:18 the kitchen motion sensor was active for a duration. Here the duration is 3 minutes. The time shown in the line 820 (22:18) is the activation time of the kitchen motion sensor, or could be the start time of an activity grouping (for example where multiple activities occur within a short time of each other). Alternatively, the time shown in the line 820 (22:18) is the deactivation time of the kitchen motion sensor, or could be the end time of an activity grouping (for example where multiple activities occur within a short time of each other). The next row 830 shown on the user view of Figure 8A indicates the previous sensor activity in the monitoring space. The sensor activity is associated with the living room, in particular a motion sensor located in the living room. The line 830 shows that the motion sensor activity time is 22:15. This is the activation time of the sensor, or the activity group start time. Alternatively, this is the deactivation time of the sensor, or the activity group end time. The second row 830, indicative of the sensor activity or sensor activity grouping, indicates the duration of this activity was 5 minutes. A fourth line 840 on the timeline screen shown in Figure 8A provides an indication that it relates to a television sensor. For example the television sensor may be a current or power sensor arranged to detect current at an electricity plug connected to the television. Once again the fourth line 840 provides an indication of the time the activity or the activity grouping occurred (17:25) and also the duration of the activity (3 hours 28 minutes).
As shown in Figure 8B, events, or activities, are displayed with the most recent at the top of the screen. In this case, two sensors are currently active. The first line 850 of the timeline identifies the kettle and indicates it is currently on, and also the time it was switched on. The second line 852 identifies the kitchen and indicates there is currently motion in the kitchen and has been motion since 08:51. The fourth line 854 identifies the living room and indicates there was motion in the living room for two minutes at 08:32. The times displayed in the right hand column relate to the activation time of the activity or event.
The interface includes two user-selectable objects 702, 704 for switching the view between the current activity layout and timeline layout. In Figure 7B the current activity object 702 is highlighted (by a bar beneath it) to indicate the current activity layout is being displayed, whilst the timeline object 704 is not highlighted. Conversely, in Figures 8A and 8B the timeline object 704 is highlighted (by a bar beneath it) to indicate the timeline layout is being displayed, whilst the current activity object 702 is not highlighted. Upon detecting a user has selected the non-highlighted object the other layout is displayed.
In one embodiment, the timeline and/or the current activity can be filtered using a filter screen layout, to select a single, or a subset of sensors. Figure 23 shows a filter view 2302 for allowing the user to select which sensor data is shown on the current activity and timeline views. The filter view 2302 comprises eight user-selectable filter objects each indicative of different sensor(s). A first user-selectable filter object 2304 allows the user to select to view information about activity recorded by all of the plurality of sensors. By selecting one or more of the other user-selectable filter objects it is possible for the user to select a subset of sensors for which information is displayed on the current activity or timeline views. For example, user-selectable object 2306 relates to the sensor arranged to monitor the microwave and user-selectable object 2308 relates to the sensor arranged to monitor motion in the kitchen.
Notifications are generally push notifications and are viewed in a separate notification screen, shown in Figure 9. A notification may be generated when one of the five specific alerts (wake-up, sensor not fired so far, night time activity, on-going activity, front door left open) detailed above occurs. However other notifications are also contemplated.
A screen showing a notification 1040 of an alert is shown in Figure 11. The notification screen provides two main user-selectable objects 1114, 1116. The first user-selectable object 1114 allows the user/carer the possibility of contacting another user. The second user-selectable object 1116 allows the user to mark the notification as closed. If the first user-selectable object 1114 is selected, two user-selectable contact objects 1120, 1122 are displayed. The first user-selectable contact object 1120 allows the user/carer to contact the caree, for example via a telephone number stored in an address book of the user device 90, which may be associated with the caree. The second user-selectable contact object 1122 allows the user to share the notification with other users/carers, referred to as the "circle", as is described further below in relation to Figures 10A and 10B.
An active notification is defined as a notification that has not yet been closed. The system can automatically close a notification if the relevant activity has since been detected (e.g. caree has woken up as detected by motion or plug activity etc., or a contact sensor signal indicates the front door has now been closed). For example this may be done by the server sending a push message to the application on the user device indicating the notification should be closed, or could be closed by the application on the user device receiving sensor activity data and analysing the sensor activity data to determine the relevant activity or event has been detected and the notification can therefore be closed. Figure 21 shows a screen 2102 showing a notification 2104 and an indication 2106 that the notification has been closed as activity has been detected.
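A client-side version of this automatic closing could look like the sketch below, which inspects sensor activity received after the notification was raised. The notification types and field names are invented for the example and do not correspond to a specific implementation.

def maybe_auto_close(notification, later_events):
    """Close an open notification once later sensor activity resolves it, e.g.
    a 'no activity yet' alert once any sensor has fired, or a 'front door left
    open' alert once the door contact reports closed."""
    if notification.get("closed"):
        return notification
    if notification["type"] == "no_activity" and later_events:
        notification["closed"], notification["reason"] = True, "activity detected"
    elif notification["type"] == "door_open" and any(
            e["sensor"] == "front_door" and e["state"] == "closed"
            for e in later_events):
        notification["closed"], notification["reason"] = True, "front door closed"
    return notification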
Figure 22 shows a screen 2202 showing a notification 2204, an indication 2205 that the notification was shared to a third party, an indication 2206 that the notification has been closed by the third party, and the reason why the notification was marked as closed.
A notification log of all previous notifications can be displayed, divided into open and closed notifications, as shown in Figure 9. The notifications may be displayed according to a prioritised order. The prioritised order may be based on historical data, e.g. recorded user reactions or responses to previous notifications. The prioritised order may also or alternatively be based on a severity of the notification or alert associated with that notification.
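One possible ordering rule, shown purely as a sketch, sorts open notifications ahead of closed ones and then by severity and by how often similar notifications were acted on previously; the dictionary keys are assumptions for the example.

def prioritise_notifications(notifications):
    """Return notifications in display order: open before closed, higher
    severity first, then those historically most acted upon, then newest."""
    return sorted(
        notifications,
        key=lambda n: (n["closed"],                       # False (open) sorts first
                       -n["severity"],
                       -n.get("historical_response_rate", 0.0),
                       -n["created_at"]),                 # e.g. epoch seconds
    )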
The main screen has a single red/green indicator, visible in either the current activity or timeline view, showing whether or not there are any active/open notifications. This provides an "at a glance" indication of whether a carer has anything to consider/review. See for example Figures 7A and 8A, which have open notifications, and Figures 7B and 8B, which have no open notifications.
The monitoring system, in particular the remote server 80, may be configured to communicate with a plurality of user devices 90, 92, 94, 96, 98, each associated with a different person, or carer. The monitoring system may store information defining the different groups to which the user devices (or the people associated with the user devices) belong. The groups of user devices (or of the people associated with the user devices) may be referred to as a 'carer circle'. A carer circle may distinguish between primary carers and other carers in the circle, for example by ranking or categorising the user devices according to a notification category. Each notification category can be indicative of a different level or tier within the circle (e.g. secondary carer, tertiary carer). The users in the circle may include friends and family for example, or may include a third party organisation e.g. a private nurse or equivalent. In the example shown in Figure 1, there is one primary user device 90, two secondary user devices 92, 94 and two tertiary user devices 96, 98.
A notification is usually sent to the primary carer only initially, e.g. to the primary user device 90. If after a first response time period the notification has not been closed (e.g. by receiving a positive user input at the first user device 90), it can be escalated and sent to the rest of the circle, or to the next tier or category of the circle.
The first response time period is preferably at least 30 minutes and not more than 6 hours.
More preferably the first response time period is at least 1 hour and not more than 3 hours. Most preferably the first response time period is around 2 hours.
In some embodiments the alert or notification may be categorised, e.g. according to the type of alert or notification (e.g. based on whether it is a wake up alert or a night time alert). For example the alert may be classified into one of a plurality of alert/notification categories.
If a further tier exists, the notification can be further escalated, e.g. to tertiary user devices 96, 98, if the notification is not closed within a second response time period. The second response time period could be e.g. 24 hours. Preferably it is greater than 10 hours but not more than 48 hours.
In some embodiments the first (and optionally the second) response time periods are dependent on the alert/notification category. For example, the alert/notification category could be based on the severity of the alert and the more severe the alert the shorter the response time period.
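The escalation behaviour described above could be sketched as follows, with cumulative response periods per alert category and one tier of the carer circle notified per expired period. The category names, example periods and field names are assumptions; the roughly 2 hour and 24 hour values simply follow the preferred ranges given above.

import time

# Assumed cumulative response periods (seconds) per alert/notification category.
RESPONSE_PERIODS = {
    "standard": (2 * 3600, 24 * 3600),   # primary -> secondary -> tertiary
    "severe":   (30 * 60, 2 * 3600),     # shorter periods for more severe alerts
}

def escalate_if_unanswered(notification, tiers, send, now=None):
    """tiers is e.g. [[primary_device], [secondary_1, secondary_2],
    [tertiary_1, tertiary_2]]; `send` delivers the notification to one user
    device.  Tier 0 is assumed to have been notified when the alert was raised."""
    if notification.get("closed"):
        return notification
    now = now if now is not None else time.time()
    elapsed = now - notification["sent_at"]
    periods = RESPONSE_PERIODS[notification["category"]]
    # One additional tier becomes due for each response period that has expired.
    due_tier = min(sum(1 for p in periods if elapsed > p), len(tiers) - 1)
    for tier in range(notification.get("last_tier", 0) + 1, due_tier + 1):
        for device in tiers[tier]:
            send(device, notification)
        notification["last_tier"] = tier
    return notification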
A notification can also be forwarded by a carer to another carer or carer(s) in the circle. As shown in Figure 10A, the notification comprises a share view 1002. The share view 1002 includes a user-selectable object 1018 for indicating the notification should be shared, or forwarded. The share view 1002 also includes a plurality of user-selectable objects 1010, 1012, 1014, 1016 each associated with a different user or carer. The user can select one or more of the user-selectable objects 1010, 1012, 1014, 1016 to indicate which of the other users/user devices the notification should be sent to. When the user selects the user-selectable object 1018 for indicating the notification should be shared, or forwarded, the application may construct a message to the remote server 80 indicating the notification should be forwarded and including identifiers of users or associated user devices to whom/which the notification should be forwarded. The user device 90 will send this message to the remote server 80. Upon receiving the message, the remote server 80 will send a second notification indicative of the first notification to the selected user devices.
In other embodiments, when the user selects the user-selectable object 1018 for indicating the notification should be shared, or forwarded, the user device 90 may simply forward the notification to the other user devices associated with the selected users, without requiring a message to be sent to the remote server 80. Preferably however, the user device 90 will send a message to the remote server 80 indicating which user devices 92, 94, 96, 98 the notification has been forwarded to.
Figure 10B shows a notification history view 1052 which can be displayed on the user device as part of the notification, for example in the space underneath the notification 1040. The notification history view 1052 indicates which users have interacted with the notification and the way in which they interacted. In this way, each notification can operate as a small group chat or a history of interactions regarding the notification. The notification history view 1052 includes a plurality of notification interaction identifiers 1060, 1062, 1064, 1066, 1068. Each notification interaction identifier displays the user associated with the interaction. Here the notification interaction identifiers each also include the time of the notification interaction.
Here the notification interaction identifiers 1060, 1062, 1064, 1066, 1068 are arranged in chronological order, with the most recent last/at the bottom. The first notification interaction identifier 1060 shows the user of the user device 90 has viewed the notification at 08:33. The second notification interaction identifier 1062 indicates the user of the user device shared the notification with a user named Robert at 08:34. The third notification interaction identifier 1064 indicates Robert viewed the notification at 08:37. The fourth notification interaction identifier indicates Robert telephoned the caree at 08:37 and the fifth notification interaction identifier 1068 indicates Robert closed the notification at 08:40.
Thus as well as the primary carer/user (e.g. the user the notification was initially sent to), a notification can be marked as closed by another user/member of the circle if that notification has been escalated/forwarded to them.
Figure 24 is a flow chart showing a method of processing and transmitting sensor and alert data in a monitoring system such as the system 1.
At step 2401 the sensors in the monitoring space take measurements in the form of sensor signals. The sensor signals can be regular, periodic measurements, e.g. a stream of equally spaced sensor signals, where each sensor signal is indicative of a state (on/off or active/inactive) for each sensor. In alternative embodiments sensor signals are only generated and sent to the hub when the sensor state changes (e.g. from active to inactive, or from inactive to active).
At step 2402 the sensors send these sensor signals to a hub located at the monitoring space (e.g. the monitoring device 10). The communication with the Hub is via Zigbee, but could be via other short-range wireless protocols, such as WiFi.
At step 2403 a Data Platform on the hub either creates a new event/activity or populates the attributes of an existing event/activity with the new information from the sensor signal. For example, a new event or sensor activity may be detected in response to the sensor signal changing from inactive to active (e.g. being activated), whereas a deactivation indicates the end of the sensor activity and so is added as an attribute to the activity, or an indication that a sensor is active when the previous sensor reading was also active can show the sensor activity is not yet over (i.e. currently active). While here step 2403 is performed on the hub, the data platform and associated functionality may alternatively be provided on a server, such as remote cloud monitoring server 80. In some embodiments processing is divided between the hub, one or more servers and/or the sensors themselves.
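A minimal sketch of this step, under the assumption that each sensor signal carries a sensor identifier, a state and a timestamp, is given below; it creates a new event on activation and completes the open event on deactivation, mirroring step 2403. The field names are illustrative only.

def apply_sensor_signal(open_events, signal):
    """open_events maps sensor id -> the currently open event for that sensor.
    Returns a completed event when a deactivation closes one, else None."""
    sensor, state, signal_time = signal["sensor"], signal["state"], signal["time"]
    if state == "active" and sensor not in open_events:
        # A new event/activity is created on activation.
        open_events[sensor] = {"sensor": sensor, "start": signal_time, "end": None}
    elif state == "inactive" and sensor in open_events:
        # A deactivation populates the end attribute of the existing event.
        event = open_events.pop(sensor)
        event["end"] = signal_time
        return event
    # Repeated 'active' readings simply indicate the activity is still ongoing.
    return None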
The hub can make a determination of an alarm condition, e.g. as described above. If any sensor signals have triggered an alarm condition they are identified and at step 2411 are sent to the remote cloud monitoring server 80.
At step 2412 the alarm services within the remote cloud server 80 determine which categories of carers (here known as circles) should be notified of the alarm condition. Each category/circle can be a group of carers with differing responsibilities. Usually only a primary carer is initially informed, although more categories/circles of carer may be notified initially in some cases, e.g. dependent on the severity of the alarm or the time of day. At step 2413 the circles the alarm has been sent to may be reconsidered and the alarm sent to other circles.
At step 2414 the alarm is pushed as a notification to user devices of those selected at 2412, e.g. in the case of the primary carer by sending a notification to the user device 90. The notifications may be sent via the Internet and long-range wireless communication (e.g. cellular), or a mixture of wired and short-range wireless communication (e.g. via WiFi).
At 2415 the alarm notification is reviewed and addressed by the members of the circle it was sent to. Addressing the alarm may comprise receiving a user input indicating the notification can be completed or alarm dismissed. If the alarm is not completed or dismissed at step 2415 the method may return to step 2413 and then 2412 to reconsider and determine to which further circles to send the alarm notification.
At step 2416 the alarm is closed/ended, e.g. following a user addressing the notification.
This may cause a message to be sent back to the hub indicating the alarm has been closed.
At 2417 the user is able to add notes to the alarm notification, such that these notes may be added to notifications or notification logs on the apps of other users/carers.
Whether or not an alarm has been identified, details of all sensor activities/events are sent to the remote cloud server 80 at step 2404.
At step 2405 the app pulls event/activity data from cloud server 80, either as part of a regular update occurring after a specified time period or in response to a user request, e.g. a user may input a request for the app data to be updated.
At step 2421 the app processes the events pulled from the cloud server 80 for addition to an event timeline on the app. This involves a determination at 2422 of whether the event/activity is a new event or a continuation of a previous event. To this end, sensor activities which are within a certain threshold time of each other, and optionally measured by the same sensor, may be grouped together in a sensor activity grouping. This can help prevent excessive information being added to the timeline in cases where a sensor is triggered several times in quick succession (e.g. a microwave is used for a short time period (e.g. 30 seconds), then stopped (e.g. to stir food) and then restarted after only a few seconds). Both these sensor activities would be grouped into the same sensor activity group.
At 2423 the duration of the activity (or activity grouping) is determined, e.g. the time during which a sensor is continuously active, or the total length of time of multiple sensor active periods that are close to each other. This information about the duration of an event is then again used at 2421 to add to the activity entry for the timeline. The timeline can then be displayed. Before a duration has been identified the timeline may indicate against the sensor activity entry that the activity is ongoing.
Concurrently, at step 2431 the app processes the received event/activity information to determine the current state of the sensors and at 2432 the app updates a current sensor log to match the current state determined in 2431.
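Steps 2431 and 2432 could be sketched as the small helper below, which keeps, for each sensor, whether it is currently active and when it was last active, as needed for the current activity view; the dictionary keys are illustrative assumptions.

def update_sensor_log(sensor_log, event):
    """sensor_log maps sensor id -> {'active': bool, 'last_active': time}."""
    entry = sensor_log.setdefault(event["sensor"],
                                  {"active": False, "last_active": None})
    entry["active"] = event["state"] == "active"
    entry["last_active"] = event["time"]
    return sensor_log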
In the system 1 shown in Figure 1, the learning and monitoring/alert functionality are both provided by the monitoring server 80. However in alternative embodiments the learning functionality (learning normal activity and setting rules for triggering alerts) may be provided in a separate device (e.g. another remote server) from the monitoring functionality (detecting unusual activity and triggering alerts). In yet further embodiments one or both of the learning functionality and monitoring functionality may be performed by the monitoring device 10 in the monitoring space.
The above embodiments and examples are to be understood as illustrative examples.
Further embodiments, aspects or examples are envisaged. It is to be understood that any feature described in relation to any one embodiment, aspect or example may be used alone, or in combination with other features described, and may also be used in combination with one or more features of any other of the embodiments, aspects or examples, or any combination of any other of the embodiments, aspects or examples. Furthermore, equivalents and modifications not described above may also be employed without departing from the scope of the invention, which is defined in the accompanying claims.

Claims (55)

  1. CLAIMS: 1. A method for identifying unusual activity in a monitoring space, the method comprising: receiving a plurality of sensor activity timings for sensor activity measured by one or more sensors in the monitoring space during a first time period; determining a probability model comprising a measure of probability of sensor activity over time based on the received sensor activity timings in the first time period; obtaining a probability threshold; defining a threshold time or times for sensor activity based on the probability model and the probability threshold; monitoring sensor activity of one or more sensors in the monitoring space during a second time period; comparing sensor activity timings in the second time period with the threshold time or times to determine unusual activity; and raising an alert in response to determining unusual activity.
  2. 2. The method according to claim 1, wherein determining a probability model comprises: determining a probability of sensor activity of any of a plurality of sensors in the monitoring space; and wherein comparing sensor activity timings in the second time period with the threshold time or times to determine unusual activity comprises: comparing sensor activity timings for sensor activity of any of the plurality of sensors in the monitoring space.
  3. 3. The method according to any preceding claim, wherein determining a probability model comprises: determining a probability of sensor activity of a first sensor of a plurality of sensors in the monitoring space; and wherein comparing sensor activity timings in the second time period with the threshold time or times to determine unusual activity comprises: comparing sensor activity timings for sensor activity of the first sensor of the plurality of sensors in the monitoring space.
  4. 4. A method according to claim 1, wherein a measure of probability of sensor activity over time comprises: a first probability measure of sensor activity of any of a plurality of sensors in the monitoring space; and a second probability measure of sensor activity of a first sensor of the plurality of sensors in the monitoring space; wherein obtaining a probability threshold comprises: obtaining a first probability threshold; and obtaining a second probability threshold; wherein defining a threshold time or times for sensor activity based on the probability model and the probability threshold comprises: defining a first threshold time or times for sensor activity based on the first probability measure and the first probability threshold; and defining a second threshold time or times for sensor activity based on the second probability measure and the second probability threshold; and wherein comparing sensor activity timings in the second time period with the threshold time or times to determine unusual activity comprises: comparing sensor activity timings for sensor activity of any of the plurality of sensors in the monitoring space with the first threshold time or times; and comparing sensor activity timings for sensor activity of the first sensor with the second threshold time or times.
  5. 5. A method according to any preceding claim, wherein determining a probability model comprising a measure of probability of sensor activity over time based on the received sensor activity timings in the first time period comprises: determining the probability of sensor activity being measured on or before each of a set of times.
  6. 6. A method according to claim 5, wherein defining a threshold time or times for sensor activity comprises defining one of the set of times as a threshold time; and wherein unusual activity is determined if sensor activity is not measured in the second time period by the alert time.
  7. 7. The method of any preceding claim, wherein the threshold time or times define at least one alert time interval; and the measure of probability of sensor activity exceeds the probability threshold for each of the alert time intervals; and wherein unusual activity is detected if no sensor activity is measured in the monitoring space during one of the at least one alert time interval in the second time period.
  8. 8. The method of any of claims 1 to 5, wherein the threshold time or times define at least one alert time interval; and the measure of probability of sensor activity is less than the probability threshold for each of the alert time intervals; and wherein unusual activity is detected if sensor activity is measured in the monitoring space during one of the at least one alert time intervals in the second time period.
  9. 9. The method of claim 7 or 8, wherein determining a measure of probability of sensor activity comprises determining a measure of probability of sensor activity in each of a set of time intervals, preferably wherein the set of time intervals each commence at the same time, but have different end times, and defining one or more alert time intervals comprises selecting the shortest interval for which the measure of probability of sensor activity is either higher or lower than the probability threshold.
  10. 10. The method according to any preceding claim, wherein determining a probability model comprises: categorising the sensor activity in the first time period into at least a first activity type and second activity type; determining a probability of an activity of the first activity type being recorded in the monitoring space during each time interval in a first set of time intervals in the second time period, based on the received sensor activity timings in the first time period; and defining a first set of alert time intervals in the second time period during which the probability of an event occurring in the monitoring space meets a first probability criterion; determining a probability of an activity of the second activity type being recorded in the monitoring space during each time interval in a second set of time intervals in the second time period, based on the received sensor activity timings in the first time period; and defining a second set of alert time intervals in the second time period during which the probability of an activity being recorded in the monitoring space meets a second probability criterion; and wherein determining unusual behaviour comprises: identifying whether sensor activity of the first event type is detected in the monitoring space during one of the first set of alert time intervals; identifying whether activity of the second type is detected in the monitoring space during one of the second set of alert time intervals.
  11. 11. A computer-implemented method for processing and storing activity data about a monitoring space, the method comprising: receiving sensor data from a plurality of sensors in the monitoring space; analysing the sensor data to determine activation and deactivation times for each of the plurality of sensors; defining a sensor activity grouping having an activity grouping start time based on temporal proximity of the determined activation and deactivation times; maintaining a sensor log indicative of the current sensor state of each of the plurality of sensors; upon determining an activity grouping start time, adding a sensor activity entry to an activity log, wherein the sensor activity entry identifies the determined activity grouping start time; determining an activity grouping end time based on temporal proximity of the determined activation and deactivation times; upon determining an activity grouping end time, calculating a duration of the sensor activity grouping; and appending the sensor activity entry in the activity log with the calculated duration.
  12. 12. A method according to claim 11, wherein determining an activity grouping end time based on the temporal proximity of the determined activation and deactivation times comprises: starting a timer immediately following a latest determined deactivation time; and selecting the latest determined deactivation time as the activity grouping end time in the absence of determining an activation time within a predetermined activity grouping time threshold.
  13. 13. A method according to claim 11 or 12, wherein determining an activity grouping start time based on the temporal proximity of the determined activation and deactivation times comprises: determining a latest activation time; calculating a time difference between the latest activation time and the most recent preceding deactivation time; and selecting the latest determined activation time as the activity grouping start time if the time difference between the latest activation time and the most recent preceding deactivation time exceeds a/the predetermined activity grouping time threshold.
  14. 14. A method according to any of claims 11 to 13, further comprising: determining the type of sensor, or a type of appliance or object associated with the sensor; and selecting the predetermined activity grouping time threshold based on the type of sensor, or the type of appliance or object associated with the sensor.
  15. 15. A method according to any of claims 11 to 14, further comprising: receiving historical sensor data relating to sensor activity recorded in the monitoring space over a first time period; and selecting the predetermined activity grouping time threshold based on the historical sensor data.
  16. 16. A method according to claim 15, further comprising: analysing the historical sensor data to determine a probability of an activation time occurring within each of a plurality of time intervals of a deactivation time in the first time period; and selecting as the predetermined activity grouping time threshold the longest of the plurality of time intervals for which the probability of an activation time occurring within that time interval of a deactivation time satisfies a probability grouping criterion.
  17. 17. A method according to any of claims 11 to 16, wherein the predetermined activity grouping time threshold is less than 10 minutes, preferably less than 5 minutes, more preferably less than 3 minutes; and/or at least 1 second, preferably at least 2 seconds, more preferably at least 4 seconds.
  18. 18. A method according to any of claims 11 to 17, wherein defining a sensor activity grouping is based on the determined activation and deactivation times of a group of at least two sensors.
  19. 19. A method according to any of claims 11 to 17, wherein defining a sensor activity grouping is based on the determined activation and deactivation times of a single of the plurality of sensors.
  20. 20. A method according to any of claims 11 to 18, wherein the sensor log is based on the activation times or the activity grouping start times.
  21. 21. A method according to any of claims 11 to 20, further comprising: providing an interface in an application for a user device, wherein the interface comprises: a first view displaying a portion of the sensor log relating to a subset of the plurality of sensors; and a second view displaying the activity log relating to a preceding time period for the subset of the plurality of sensors; and means to receive a user input to alternate between the first view and the second view.
  22. 22. A computer-implemented method for displaying activity information about a monitoring space, the method comprising: storing a sensor log indicative of the current sensor state of each of the plurality of sensors; storing an activity log of a plurality of sensor activity entries, each sensor activity entry identifying an activity start time or an activity grouping start time; providing an interface in an application for a user device, wherein the interface comprises: a first view displaying a portion of the sensor log relating to a subset of the plurality of sensors; a second view displaying the activity log relating to a preceding time period for the subset of the plurality of sensors; and means to receive a user input to alternate between the first view and the second view.
  23. 23. A method according to claim 21 or 22, wherein: in the second view the sensor activity entries are displayed in time order, for example according to activation time, activity grouping start time, deactivation time or activity grouping end time; preferably with the most recent displayed first (or at the top).
  24. 24. A method according to any of claims 21 to 23, further comprising: determining the subset of the plurality of sensors based on the location and/or type of each sensor.
  25. 25. A method according to any of claims 21 to 24, further comprising: providing a third view displaying user-selectable objects associated with each of the plurality of sensors or each group of at least two sensors; wherein user selection of one of the user-selectable objects causes the corresponding sensor or group of at least two sensors to be added to the subset of the plurality of sensors for which sensor state is displayed in the first view and sensor activity is displayed in the second view.
  26. 26. A method according to any of claims 11 to 25, wherein each sensor activity entry further comprises either: a duration for the sensor activity or sensor activity grouping; or an indication that the sensor (or at least one sensor in a group of at least two sensors in the plurality of sensors) is currently active.
  27. 27. A method according to any of claims 11 to 26, wherein the sensor log comprises, for each sensor or for each group of at least two sensors, either: an indication that the sensor (or one sensor in the group of at least two sensors) is active; or an indication of the last time the sensor (or one sensor in the group of at least two sensors) was active.
  28. 28. A method according to any of claims 11 to 27, wherein: one or both of the sensor activity entry and the sensor log identifies an appliance or object associated with the sensor.
  29. 29. A method according to any of claims 11 to 28, wherein: one or both of the sensor activity entry and the sensor log identifies the type of sensor and/or an individual sensor or type of the group of at least two sensors, such as by a sensor identifier.
  30. 30. A method according to any of claims 11 to 29, wherein: one or both of the sensor activity entry and the sensor log identifies the location of the sensor, or group of at least two sensors, within the monitoring space.
  31. 31. A method according to any preceding claim, wherein the sensor data is a time series of sensor readings for each of the plurality of sensors, preferably wherein the sensor readings indicate whether or not the sensor is active at the respective time.
  32. 32. A device for processing and storing activity data about a monitoring space, the device comprising: a communication interface for receiving sensor data from a plurality of sensors in the monitoring space; and a memory storing: an activity log of sensor activity entries; and a sensor log indicative of the current sensor state of each of the plurality of sensors; a processor configured to: analyse the received sensor data to determine activation and deactivation times for each of the plurality of sensors; define a sensor activity grouping having an activity grouping start time based on temporal proximity of the determined activation and deactivation times; upon determining an activity grouping start time, add a sensor activity entry to an activity log, wherein the sensor activity entry identifies the determined activity grouping start time; determine an activity grouping end time based on temporal proximity of the determined activation and deactivation times; upon determining an activity grouping end time, calculate a duration of the sensor activity grouping; and append the sensor activity entry in the activity log with the calculated duration.
  33. 33. A device according to claim 32, further configured to perform the method of any of claims 12 to 31.
  34. 34. A system for handling sensor data from a plurality of sensors in a monitoring space, the system comprising: a remote server configured to: receive raw sensor information from a plurality of sensors in the monitoring space; determine an operating state of at least one of the plurality of sensors based on the raw sensor information; create at least one notification based on the determined operating state; receive a pull request from an application on a user device, the pull request relating to at least one of the plurality of sensors; send the at least one notification to a user device; a user device configured to: send the pull request to the remote server; receive the notification; interpret the notification by: identifying whether the notification is indicative of a new activity or an earlier activity; updating an activity log based on the received notification by: adding a new activity entry to the activity log if the notification is indicative of a new activity; updating an existing activity entry if the notification is indicative of an earlier activity; and updating a sensor log to show currently operating sensors by interpreting sensor activation and deactivation notifications and associating the notifications with sensors based on a sensor identifier in each received notification.
  35. 35. A system according to claim 34, wherein the user device is configured to: maintain a timer for each sensor device, the timer adapted to record the activation time of a device monitored by the sensor; start a time event based on the timer when an activation or deactivation sensor notification is received; display the time of the time event on the sensor log; and freeze the time event displayed on the sensor log when a deactivation or activation sensor notification is received.
  36. 36. A system according to claim 35, wherein the timer can be used for multiple time events based on the plurality of sensors.
  37. 37. A system according to any of claims 34 to 35, wherein each notification comprises an event detail and a time detail, the event detail identifying the sensor and the time detail defining the time of the event, wherein the user device is configured to: search the sensor log for a sensor entry for the identified sensor; and insert or append the notification to the sensor entry in the sensor log.
  38. 38. A system according to claim 37, wherein inserting the notification to the sensor entry comprises: inserting the event and recalculating the sensor line.
  39. 39. A system according to any of claims 34 to 38, wherein the remote server is configured to: identify whether the determined operating state is indicative of a priority event; upon determining the operating state is indicative of a priority event, send a push notification to the user device, the push notification configured to cause the user device to display a message; and monitor for a response message from the user device indicative of user action by a user of the user device.
  40. 40. A system according to claim 39, wherein the user device is operable to: receive the push notification indicative of a priority event from the remote server; upon receiving the push notification indicative of a priority event, display the push notification and an indication of the priority event on a user interface, for example on a lock screen of the user device.
  41. 41. A system according to any of claims 34 to 38, wherein the remote server is configured to: identify whether the determined operating state is indicative of a priority event; upon determining the operating state is indicative of a priority event, send a command to the user device causing the application on the user device to create and send a pull request to the remote server; and upon receiving a pull request from the user device, send a notification indicative of a priority event to the user device.
  42. 42. A system according to any of claims 34 to 41, wherein the user device is configured to: upon receiving a notification indicative of a priority event, highlight the priority event to the user.
  43. 43. A system according to any of claims 34 to 42, wherein the user device is configured to: store a list of received notifications; rank the priority of each received notification based on recorded user responses to previous notifications; and display the notification in a notification log in the application or on a lock screen of the user device based on the priority ranking of each notification.
  44. 44. A method according to any preceding claim, wherein each of the plurality of sensors is one of: a contact sensor; a motion sensor; and a current sensor, e.g. for monitoring an electrical plug, such as integrated within a smart plug.
  45. 45. A method for notifying a user of unusual activity in a monitoring space, the method comprising: detecting unusual activity in a monitoring space; providing a first notification on a first user device; determining whether a signal acknowledging the notification is received within a response time period; and providing a second notification on a second user device if the signal acknowledging the notification is not received within the response time period.
  46. 46. A method for notifying users of unusual activity in a monitoring space, the method comprising: detecting unusual activity in a monitoring space; providing a first notification on a first user device and a second notification on a second user device; receiving a user input indicative of an interaction with the first notification on the first user device; updating the first notification based on the user input; and updating the second notification based on the user input.
  47. 47. A method for notifying a user of unusual activity in a monitoring space, the method comprising: detecting unusual activity in a monitoring space; providing a notification on a first user device; determining whether a forwarding criterion for forwarding the notification is satisfied; and forwarding the notification to a second user device if the forwarding criterion is satisfied.
  48. 48. A method according to claim 47, wherein a forwarding criterion is satisfied if: a message from the first user device indicating the notification should be forwarded is received; preferably wherein providing a notification on a first user device comprises displaying a user-selectable option for indicating a notification should be forwarded.
  49. 49. A method according to any of claims 45 to 48, wherein providing the notification to the first user device is performed by: creating a notification indicative of the detected unusual activity in the monitoring space; selecting the first user device from a plurality of potential user devices based on a notification category of the notification; looking up an address of the first user device from a list of user device addresses; and sending the notification to the first user device using the address.
  50. 50. A method according to any of claims 45 to 49, further comprising: assigning the notification to one of a plurality of notification categories based on one or more of: the timing of the unusual activity; a determined severity of the unusual activity; recorded user responses to previous notifications; and the type or severity of unusual activity determined.
  51. 51. A method according to claim 49 or 50, further comprising: assigning each of the plurality of potential user devices to a notification category based on one or more of: the proximity of the user device to the monitoring space; the identity of a user associated with the user device; and recorded user responses to previous notifications received from the user device; wherein selecting the first user device comprises selecting a user device having a notification category that matches the notification category of the notification.
  52. 52. A non-transient computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of any of claims 1 to 31 or 44 to 51.
  53. 53. A device for identifying unusual activity in a monitoring space, the device comprising: a memory; a communication interface for receiving a plurality of sensor activity timings for sensor activity measured by one or more sensors in the monitoring space during a first time period; and a processor operable to: determine a probability model comprising a measure of probability of sensor activity over time based on the received sensor activity timings in the first time period; obtain a probability threshold; define a threshold time or times for sensor activity based on the probability model and the probability threshold; monitor sensor activity measured by one or more sensors in the monitoring space during a second time period; compare sensor activity timings in the second time period with the threshold time or times to determine unusual activity; and raise an alert in response to determining unusual activity.
  54. 54. A device according to claim 53, further operable to perform the method of any of claims 2 to 31 or 44 to 51.
  55. 55. A system for identifying unusual activity in a monitoring space, the system comprising: a device according to claim 53 or 54; a plurality of sensors for detecting activity in the monitoring space; and a user device for displaying alerts to a user.
GB1820274.7A 2018-12-12 2018-12-12 Monitoring method and system Active GB2579674B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1820274.7A GB2579674B (en) 2018-12-12 2018-12-12 Monitoring method and system

Publications (3)

Publication Number Publication Date
GB201820274D0 GB201820274D0 (en) 2019-01-30
GB2579674A true GB2579674A (en) 2020-07-01
GB2579674B GB2579674B (en) 2022-09-07

Family

ID=65147004

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1820274.7A Active GB2579674B (en) 2018-12-12 2018-12-12 Monitoring method and system

Country Status (1)

Country Link
GB (1) GB2579674B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515428B (en) * 2021-07-13 2023-04-11 抖音视界有限公司 Memory monitoring method, terminal, server, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030059081A1 (en) * 2001-09-27 2003-03-27 Koninklijke Philips Electronics N.V. Method and apparatus for modeling behavior using a probability distrubution function
US20150170497A1 (en) * 2013-12-16 2015-06-18 Robert Bosch Gmbh Monitoring Device for Monitoring Inactive Behavior of a Monitored Person, Method and Computer Program
WO2015127491A1 (en) * 2014-02-25 2015-09-03 Monash University Monitoring system
US10037668B1 (en) * 2017-05-09 2018-07-31 Microsoft Technology Licensing, Llc Emergency alerting system and method

Also Published As

Publication number Publication date
GB2579674B (en) 2022-09-07
GB201820274D0 (en) 2019-01-30

Similar Documents

Publication Publication Date Title
JP5058504B2 (en) Remote person tracking method and device for person in residence
US10909832B2 (en) Thoughtful elderly monitoring in a smart home environment
US10475141B2 (en) System and method for adaptive indirect monitoring of subject for well-being in unattended setting
US10311694B2 (en) System and method for adaptive indirect monitoring of subject for well-being in unattended setting
US10620595B2 (en) System, method and apparatus for resupplying consumables associated with appliances
CN105981082B (en) Intelligent household's hazard detector of useful tracking communication for detecting event is provided
EP1700281B1 (en) Activity monitoring
US20190182329A1 (en) Systems and methods for evaluating sensor data for occupancy detection and responsively controlling control devices
WO2015020975A9 (en) System and method for automating electrical devices at a building structure
US10282962B2 (en) Method, computer program, and system for monitoring a being
US10948965B2 (en) User-configurable person detection system, method and apparatus
JP6445815B2 (en) Information processing apparatus, program, and information processing method
JP6197258B2 (en) Behavior prediction device, program
WO2016057564A1 (en) System and method for adaptive indirect monitoring of subject for well-being in unattended setting
GB2579674A (en) Monitoring method and system
KR20200091235A (en) Devices for managing smart home
JP7420207B2 (en) Sleep state determination system and sleep state determination method
KR102404885B1 (en) SYSTEM AND METHOD FOR PROTECTING LONELY DEATH BASED ON IoT
EP3832618A1 (en) Monitoring system
JP7316094B2 (en) Monitoring device, monitoring method and monitoring program
JP2011135516A (en) Alarm management system and alarm management method
US20190325725A1 (en) System for monitoring a person within a residence
Brownsell et al. Developing a systems and informatics based approach to lifestyle monitoring within eHealth: part II-analysis & interpretation
JP2005115411A (en) Life watching system
WO2022044388A1 (en) Monitoring device, monitoring method, and monitoring program