GB2593931A - Person monitoring system and method - Google Patents


Info

Publication number
GB2593931A
Authority
GB
United Kingdom
Prior art keywords
sitting
standing
event
person
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2005336.9A
Other versions
GB202005336D0 (en)
Inventor
Moorhead Paul
Richard Gallagher Stephen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kraydel Ltd
Original Assignee
Kraydel Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kraydel Ltd filed Critical Kraydel Ltd
Priority to GB2005336.9A
Publication of GB202005336D0
Publication of GB2593931A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1113 Local tracking of patients, e.g. in a hospital or private home
    • A61B5/1115 Monitoring leaving of a patient support, e.g. a bed or a wheelchair
    • A61B5/1116 Determining posture transitions
    • A61B5/1117 Fall detection
    • A61B5/1126 Measuring movement of the entire body or parts thereof using a particular sensing technique
    • A61B5/1127 Measuring movement of the entire body or parts thereof using a particular sensing technique using markers
    • A61B5/1128 Measuring movement of the entire body or parts thereof using a particular sensing technique using image analysis
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6887 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient mounted on external non-worn devices, e.g. non-medical devices

Abstract

The mobility of a person 20 is determined using a sensor to monitor the vertical location of a target object that is fixed with respect to the person. The sensor may be a camera 14 or other type of sensor (e.g. thermal camera, radar, lidar, EM, sonar). The target may be a physical feature of the individual 20 (e.g. face 21, head, shoulder) or an object attached to their body (e.g. spectacles, necklace). Transitions of the person between sitting and standing are determined. The time taken for the person to move between sitting and standing (time spent in a transition event) may be used as an indication of the person’s mobility. Changes in transition times may be indicative of changes in mobility. Mobility determination may be used to assess the person’s risk of falling. The camera 14 may be adjacent to/incorporated in a television 15, with the person’s chair 18 in its field of view. Monitoring may be performed using a pose estimator and analyser (24, 26, fig. 3) using a series of images of the scene recorded by the camera. Facial feature tracking may be used.

Description

Person Monitoring System and Method
Field of the Invention
This invention relates to person monitoring and in particular to monitoring a person's movement to detect mobility problems.
Background to the Invention
Falls are a major risk to frail or elderly people and are a frequent cause of injuries requiring hospitalisation and bedrest which in turn results in muscle loss, reduced mobility and an increased risk of a further fall. Thus falls often trigger terminal decline in the elderly.
Health care systems are typically reactive and provide expensive treatments for fall injuries and subsequent care at home including modifications to the home to reduce risk. Suffering and expense could be reduced if a proactive model were adopted to anticipate falls and allow steps to be taken to make the home safe, provide physiotherapy and walking aids to prevent or delay falls.
Falls may be the result of dizziness or fainting, often due to low blood pressure from poor hydration, but a large proportion are the result of reduced mobility, loss of core strength and stability. Such problems increase the risk that a person will not be able to recover their balance after a trip or stumble and hence such events tend to lead to a fall.
Reactive fall detection systems, for example comprising a motion sensor worn by the user, are known. However, it would be desirable to provide a proactive person monitoring system that allows falls to be predicted.
Summary of the Invention
A first aspect of the invention provides a method of monitoring a person's mobility, the method comprising: monitoring a vertical location of at least one target object that is fixed with respect to the person;
detecting, by said monitoring, transitions of said person between sitting and standing; determining a duration of each transition between sitting and standing; determining from said duration an indication of the person's mobility, wherein said monitoring involves using at least one sensor to detect the vertical location of said at least one target object, or wherein said at least one target object comprises at least one vertical location sensor, and wherein said monitoring involves monitoring the vertical location of said at least one vertical location sensor.
Preferably, said monitoring involves using at least one sensor to detect the vertical location of said at least one target object, and wherein said at least one sensor has a sensing field, the method further including arranging said at least one sensor such that a seat is located in said sensing field.
In preferred embodiments the method includes locating said at least one sensor adjacent a television set.
Preferably the method includes: using at least one sensor to record sensor data of a scene in said at least one sensor's sensing field; detecting said at least one object in said sensor data; determining the vertical location of said at least one target object in said scene; recording multiple instances of scene data over time, each instance of scene data comprising data indicating the vertical location of said at least one target object and a corresponding time of occurrence; detecting, from said data indicating the vertical location of said at least one target object, transitions of said person from sitting to standing and/or from standing to sitting; and in response to detecting a transition, determining the duration of said transition from said scene data.
In preferred embodiments, said at least one sensor comprises at least one camera, and wherein said method includes arranging said at least one camera such that said seat is in said at least one camera's field of view. The method may include: using a camera to record image data of a scene in said camera's field of view; detecting said at least one object in said image data; determining the vertical location of said at least one target object in said scene; recording multiple instances of scene data over time, each instance of scene data comprising data indicating the vertical location of said at least one target object and a corresponding time of occurrence; detecting, from said data indicating the vertical location of said at least one target object, transitions of said person from sitting to standing and/or from standing to sitting; and in response to detecting a transition, determining the duration of said transition from said scene data.
Preferably, the method includes classifying each instance of scene data according to type, wherein supported types comprise a standing event type and a sitting event type; monitoring changes in the type of recorded scene data to detect transitions of said person from sitting to standing, and/or from standing to sitting; and in response to detecting a transition, determining from said scene data the duration of said transition.
The preferred method includes detecting a transition from sitting to standing by detecting recorded scene data corresponding to a first standing event occurring immediately after recorded scene data that does not correspond to a standing event, for example that corresponds to a sitting event, or that corresponds to a transition event.
Typically, the method includes determining, in response to detecting a transition from sitting to standing, a time at which standing is deemed to have occurred. The time at which standing is deemed to have occurred may be the time that is associated with the scene data corresponding to the first standing event.
The method may include calculating the time at which standing is deemed to have occurred using the respective time associated with one or more instances of standing event scene data recorded within a buffer period after the scene data corresponding to said first standing event.
Preferably the method includes determining a time at which the person is deemed to have most recently left a sitting position. The time at which the person is deemed to have most recently left the sitting position may be the time that is associated with the scene data corresponding to the most recent sitting event that precedes the first standing event.
The method may include calculating the time at which the person is deemed to have most recently left the sitting position using the respective time associated with one or more instances of sitting event scene data recorded within a buffer period before the scene data corresponding to the most recent sitting event that precedes the first standing event.
Determining the duration of a transition from sitting to standing may involve calculating the difference between the time at which standing is deemed to have occurred and the time at which the person is deemed to have most recently left a sitting position.
Preferably, the method includes detecting a transition from standing to sitting by detecting recorded scene data corresponding to a first sitting event occurring immediately after recorded scene data that does not correspond to a sitting event, for example that corresponds to a standing event, or that corresponds to a transition event.
The method may include determining, in response to detecting a transition from standing to sitting, a time at which sitting is deemed to have occurred. The time at which sitting is deemed to have occurred may be the time that is associated with the scene data corresponding to the first sitting event.
Preferably the method includes calculating the time at which sitting is deemed to have occurred using the respective time associated with one or more instances of sitting event scene data recorded within a buffer period after the scene data corresponding to said first sitting event.
The method may include determining a time at which the person is deemed to have most recently left a standing position. The time at which the person is deemed to have most recently left the standing position is the time that is associated with the scene data corresponding to the most recent standing event that precedes the first sitting event.
The method may include calculating the time at which the person is deemed to have most recently left the standing position using the respective time associated with one or more instances of standing event scene data recorded within a buffer period before the scene data corresponding to the most recent standing event that precedes the first sitting event.
Determining the duration of a transition from standing to sitting may involve calculating the difference between the time at which sitting is deemed to have occurred and the time at which the person is deemed to have most recently left a standing position.
Optionally the method includes classifying each instance of scene data according to type, wherein supported types comprise a transition event type, and optionally a standing event type and a sitting event type; detecting transitions of said person from sitting to standing, and/or from standing to sitting by detecting multiple successive instances of transition event type scenes; and in response to detecting a transition, determining from said scene data the duration of said transition.
Preferably the method includes defining at least an upper sitting boundary in the vertical direction that is deemed to correspond to a sitting event. The method may include designating any recorded scene data as corresponding to a sitting event if its target object location data indicates that the target object is located at or below the upper sitting boundary. Optionally the method includes defining a lower sitting boundary in the vertical direction that is deemed to correspond to a sitting event.
Optionally, the method includes designating any recorded scene data as corresponding to a sitting event if its target object location data indicates that the target object is located at or below the upper sitting boundary and at or above the lower sitting boundary.
The method may include defining at least a lower standing boundary in the vertical direction that is deemed to correspond to a standing event. The method may include designating any recorded scene data as corresponding to a standing event if its target object location data indicates that the target object is located at or above the lower standing boundary. The method may include defining an upper standing boundary in the vertical direction that is deemed to correspond to a standing event.
Optionally, the method includes designating any recorded scene data as corresponding to a standing event if its target object location data indicates that the target object is located at or above the lower standing boundary and at or below the upper standing boundary.
In preferred embodiments the method includes recording multiple instances of data indicating the vertical location of the target object; and calculating at least one standing boundary from the vertical locations indicated by a first set of said data instances, wherein said first set contains instances of said data indicating relatively high vertical locations in comparison with the other instances of data not in said first set. Said first set may contain scene data in which the target object locations are relatively high.
In preferred embodiments the method includes recording multiple instances of data indicating the vertical location of the target object; and calculating at least one sitting boundary from the vertical locations indicated by a second set of said data instances, wherein said second set contains instances of said data indicating relatively low vertical locations in comparison with the other instances of data not in said second set. Said second set may contain scene data in which the target object locations are relatively low.
Preferably, calculating said at least one standing boundary and/or said at least one sitting boundary from the respective first or second data set involves performing a statistical analysis of the respective first or second data set, wherein said statistical analysis optionally involves fitting a mathematical distribution function to the respective data set and selecting the or each boundary from the resulting mathematical distribution.
The method may include detecting said at least one object in said image or sensor data, wherein determining the vertical location of said at least one target object in said scene involves performing pose estimation on said image data or sensor data.
The method may include detecting said at least one object in said image data, wherein determining the vertical location of said at least one target object in said scene involves performing face recognition, or other object recognition, on said image data or sensor data.
In preferred embodiments, said at least one target object comprises at least one physical feature of said person and/or at least one object fixed to said person.
Said determining said indication of the person's mobility typically involves assessing the person's risk of falling. Preferably, said determining said indication of the person's mobility involves monitoring changes in said duration over time.
From another aspect the invention provides a system for monitoring a person's mobility, the system comprising: means for monitoring a vertical location of at least one target object that is fixed with respect to the person; means for detecting, by said monitoring, transitions of said person between sitting and standing; means for determining a duration of each transition between sitting and standing; means for determining from said duration an indication of the person's mobility, wherein said means for monitoring a vertical location of at least one target object comprises: at least one sensor for detecting the vertical location of said at least one target object; or at least one vertical location sensor provided on said at least one target object.
In preferred embodiments said means for monitoring a vertical location of at least one target object comprises at least one sensor for detecting the vertical location of said at least one target object, said at least one sensor having a sensing field, and wherein a seat is located in said sensing field.
Advantageously, said at least one sensor is located adjacent a television set, or is incorporated into a television set.
Said system may be configured to perform, and comprise any suitable means for performing, any one or more of the features of the method of the first aspect of the invention.
In arriving at the present invention it is observed that people with mobility problems, reduced strength, pain from joints and musculo-skeletal problems have greater difficulty getting up from a seat and typically take longer to do so than a fit and mobile person. Over time, an increasing length of time taken to stand up is a strong indicator of declining core strength and stability. Conversely, an inability to control the descent to a seated position, resulting in a more rapid drop, is also a measure, albeit a weaker one, of reducing strength and stability.
Preferred embodiments of the invention provide a means to monitor a person's core strength and stability through the proxy of measuring and tracking the time it takes the person to rise from a seated position and/or return to a seated position. In preferred embodiments, the system is configured to track the location of a person's face, or other feature, in a vertical plane or direction and, using this information, to measure and track the length of time it takes the person to rise from a seated position and/or to return to a seated position.
Further advantageous aspects of the invention will be apparent to those ordinarily skilled in the art upon review of the following description of a preferred embodiment and with reference to the accompanying drawings.
Brief Description of the Drawings
An embodiment of the invention is now described by way of example and with reference to the accompanying drawings in which: Figure 1 is a schematic diagram of a person monitoring system embodying one aspect of the invention shown in situ; Figure 2 is a block diagram of a typical embodiment of the system of Figure 1; and Figure 3 is a graph showing vertical position information from data recorded by the system.
Detailed Description of the Drawings
Referring now to Figure 1 of the drawings there is shown, generally indicated as 10, a preferred embodiment of a person monitoring system embodying one aspect of the invention. The preferred system 10 comprises a computing device 12 and at least one camera 14 connected to the computing device 12. The camera 14 may be integrally formed with the computing device 12, or provided separately and connected by any suitable wired or wireless link, as is convenient.
The system 10 is typically installed in a room that includes a seat 18 in which a person 20 being monitored may be seated. In Figure 1 the system 10 is supported by a table 16 but it may be installed in any other convenient manner, e.g. wall-mounted or supported by any other available surface. The arrangement is such that the person 20 is in the camera's field of view at least when seated in the seat 18 and preferably also when standing in front of the seat 18 (after rising from a sitting position or before sitting). In a preferred arrangement, the camera 14 is located adjacent, e.g. on, above, beside or below, a television set 15 and positioned such that the person 20 is in the camera's field of view when seated in front of the television 15. Optionally, the, or each, camera 14 and/or computing device 12 may be integrally formed with the television set 15. Positioning the camera(s) 14 at the TV set 15 affords opportunities to make several measurements per day because TV viewing is a regular daily activity of nearly all elderly people.
In preferred embodiments, the system 10 is configured to support pose estimation. Pose estimation is a computer vision technique comprising computer-implemented methods for detecting a person in digital image(s) or video(s), and in particular to determining the location of one or more points relating to the person's body in image(s) or video(s). The detected points may correspond to any detectable feature of the person's body including joints (e.g. neck, shoulders, elbows, wrists, knees and ankles) and/or facial features (e.g. eyes, ears, nose and mouth). Connections between the detected points are determined, and together the points and connections may be used to create a pose model, commonly referred to as a pose skeleton, of all or part of the person's body as captured by the relevant image/video. OpenCV (trade mark) provided by OpenCV.org, and Posenet (trade mark) provided by TensorFlow are examples of available computer vision pose estimation products, although any suitable conventional computer vision, in particular machine learning computer vision, products or methods may be used to perform pose estimation. Depending on the configuration of the system 10, it may perform 2-dimensional (2D) or 3-dimensional (3D) pose estimation. 3D pose estimation typically requires the use of more than one camera, although more generally pose estimation may be performed using a single camera.
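By way of illustration only, the following Python sketch shows how a vertical head location might be extracted from a pose estimator's output. The estimate_keypoints function and the keypoint names are assumptions standing in for whichever pose estimation library is actually used (e.g. an OpenCV- or Posenet-based model); this is a sketch, not the implementation described herein.

```python
# Illustrative only: estimate_keypoints() is a hypothetical stand-in for whichever
# pose estimation library is used. It is assumed to return a dict mapping keypoint
# names to (x, y) pixel coordinates, or None for keypoints not detected in the frame.
from typing import Dict, Optional, Tuple

Keypoints = Dict[str, Optional[Tuple[float, float]]]

def estimate_keypoints(frame) -> Keypoints:
    raise NotImplementedError("wrap the chosen pose estimator here")

def head_height(frame, frame_height_px: int) -> Optional[float]:
    """Vertical location of the head target feature, in pixels measured upwards
    from the bottom of the image, or None if the head cannot be located."""
    kp = estimate_keypoints(frame)
    # Prefer the nose; fall back to the mid-point of the ears if the face is
    # turned away from the camera (one reason pose estimation is preferred
    # over pure face tracking in the description).
    if kp.get("nose") is not None:
        y = kp["nose"][1]
    elif kp.get("left_ear") is not None and kp.get("right_ear") is not None:
        y = (kp["left_ear"][1] + kp["right_ear"][1]) / 2.0
    else:
        return None
    return frame_height_px - y  # image y grows downwards, so invert it
```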
Alternatively, or in addition, to supporting pose estimation, the system 10 may be configured to support face tracking. Face tracking is another computer vision technique that involves detecting a face (e.g. by detecting one or more facial features or by detecting a face as a whole) in digital image(s) or video(s) and tracking movement of the face in the digital image(s) or video(s). Any conventional face tracking computer software may be used for this purpose.
As indicated above, pose estimation may comprise face tracking. In some embodiments, the person's movement may be tracked only using face tracking. It is preferred however that the pose estimation involves detecting and tracking non-facial features (as well as or instead of a face or facial features) of the person's body since the person's face may not be clearly visible to the camera 14 as the person 20 stands or sits (for example if they are looking downwards). For example, pose estimation typically allows a head to be recognised from a number of angles, and/or can find a head feature, e.g. the ears, alone (or estimate their position), which makes this a more robust approach than just facial detection and tracking.
In alternative embodiments, the system 10 may be configured to support any other means for tracking the person's movement. For example, the camera 14 may comprise a thermal camera and the system 10 may be configured to detect and track the movement of the person's head (which tends to appear as the hottest part of a person's body to a thermal camera). Alternatively or in addition, the system 10 may be configured to detect and track one or more object (not shown) on the person's body (e.g. on their clothing or attached to the body itself). The object(s) may be visually detectable in which case the system 10 may include one or more cameras as described above. Alternatively the object(s) may be thermally detectable in which case the system 10 may include one or more thermal cameras. Alternatively or in addition the object(s) may be detectable by one or more conventional sensors such as a light sensor, an electric field sensor, ultrasound sensor, IR sensor or acoustic sensor, in which case the system 10 includes one or more sensors of the appropriate type for detecting and tracking the object(s) on the person's body. Alternatively still, the object(s) may comprise one or more location sensor and/or motion sensor that is capable of determining its location, and/or its movement, and of transmitting this information to the computing device 12 for analysis. Alternatively, low power radar devices may be used that are able to detect a body with sufficient granularity to distinguish the head or other part of the anatomy sufficient to track standing up or sitting down. For example, embodiments of the invention may employ any one or more of the following detection techniques: 1) Use of a sensor(s) which can remotely locate the body and recognise the location of one or more features. Such sensor(s) may comprise a visual camera, thermal camera, radar, lidar, or other electromagnetic field based sensor(s), or sound based sensor(s), e.g. sonar.
2) Same as 1) except that one or more items worn by the user is detected and tracked, e.g. glasses, a necklace, a fob or any other item that can preferably be tracked with less computational effort than a body or facial feature(s).
3) An object worn by the user is able to track its own position in space, e.g. comprising one or more accelerometer, and communicates with the controller using any conventional wireless communication link, e.g. Bluetooth (trade mark), or WiFi (trade mark).
It will be understood that in some alternative embodiments it is not necessary for the system to include a camera, and tracking the user's movement may not involve pose estimation.
Figure 2 is a block diagram of a typical embodiment of the person monitoring system 10. In the embodiment of Figure 2 only one camera 14 is shown although in alternative embodiments two or more cameras 14 may be provided. The, or each, camera 14 may be of any conventional type that is capable of creating digital images and/or videos (typically of the type that use visible light to create images, although in some embodiments other types of camera, e.g. thermographic cameras, can be used).
The computing device 12 may comprise any conventional general purpose computing device, such as a PC, laptop, smartphone or tablet, or may comprise a dedicated computing device as is convenient. The camera 14 is connected to the computing device 12 so that the computing device 12 can receive images and/or videos from the camera 14. The connection may be wired or wireless as is convenient. In the illustrated embodiment the camera 14 and computing device 12 are co-located but in alternative embodiments they may be remote from one another, for example communicating across a telecommunications network (not shown). The computing device 12 includes a camera driver 22 for supporting operation of the camera 14.
The computing device 12 supports computer-implemented pose estimation. To this end the computing device 12 typically comprises a pose estimator 24 configured to receive images and/or videos captured by the camera 14 and to perform pose estimation using one or more received image and/or one or more received video to create data representing one or more pose model (or pose skeleton). The pose estimator 24 may comprise any conventional pose estimation software, for example OpenCV or Posenet software.
The preferred system 10 is configured to analyse the person's pose, including changes in the person's pose, in order to assess the person's mobility. This analysis may be performed using one or more pose models, generated in this example by the pose estimator 24, relating to the person. To this end the system 10 includes a pose analyser 26, although it will be understood that in alternative embodiments the pose estimation and pose analysis may be performed by the same system component(s) as is convenient. The pose estimation and pose analysis may be performed using facial features and/or non-facial features of the person, and may comprise facial tracking. In some embodiments, the pose estimator 24 and pose analyser 26 may be configured to perform only face tracking.
It is noted that face tracking may be performed using pose estimation, i.e. pose estimation using facial features. Alternatively, or in addition, face tracking may be performed using any conventional face detection and tracking means. Similarly any other object/feature on or of the user may be detected and tracked using any conventional object detection and tracking means. Object detection and tracking (including facial detection and tracking) may involve creation of a notional bounding box around the detected object, and the vertices of the bounding box may be used to determine and track the location of the object. Any conventional facial detection or object detection algorithms (typically implemented in computer software) may be used for this purpose. Accordingly, in some embodiments, the pose estimator 24 and pose analyser 26 may be omitted and replaced with any suitable conventional object detection and tracking components, typically comprising computer software. Any such object detection and tracking components may be configured to perform the analysis described hereinafter, including with regard to identifying standing, sitting and transition events, as applicable, and any related determinations and calculations as would be apparent to a skilled person.
While embodiments of the invention may use either face tracking or pose estimation, face tracking relies on the system being able to detect most of the face in order to decide that there is a head or face in the image, whereas pose estimation can infer that an object on top of what looks like a body is probably a head. When people stand up or sit down they typically do not look straight ahead, all the more so with age and stooped back, or when standing up using a walking frame, so pose estimation is preferred in embodiments of the invention.
The system 10 typically includes a controller 28 for controlling the overall operation of the system 10. The controller 28 may comprise any suitably configured or programmed processor(s), for example a microprocessor, microcontroller or multi-core processor. Conveniently, the controller 28 is implemented by the central processing unit (CPU) of the computing device 12. The CPU may also implement and/or support the implementation of, as required, any one or more of the camera driver 22, pose estimator 24 and pose analyser 26. Typically the computing device 12 comprises a multi-core processor running a plurality of processes, one of which may be designated as the controller and the others performing the other tasks described herein as required. Each process may be performed in software, hardware or a combination of software and hardware as is convenient. One or more hardware digital signal processors may be provided to perform one or more of the processes as is convenient and as applicable.
In the illustrated embodiment, the system 10 comprises a single computing device 12 that supports the camera driver 22, pose estimator 24, pose analyser 26 and controller 28, and to which the camera 14 is connected. In alternative embodiments (not illustrated) the system 10 may be implemented in any convenient distributed manner across more than one computing device which may be in communication with each other and/or with the camera 14 via a telecommunications network and/or via wired and/or wireless connection(s) as is convenient.
The preferred system 10 is configured to detect at least one target feature associated with the person's body and to determine a location of the target feature(s). The target feature is preferably a feature of the person's body or face, or an object carried by the person's body. In preferred embodiments the target feature is the person's face 21 and/or one or more features of the face 21.
For example the system 10 may detect and locate the face 21 as a whole, or one or more facial features such as the eyes, ears, nose or mouth. Detecting the face 21 as a whole may involve detecting one or more facial features and generating data representing a facial boundary using the detected facial features. The facial boundary may be rectangular or polygonal and may be defined by 3 or more vertices. Locating the target feature preferably involves generating data defining a location of the target feature in a vertical plane, for example using two dimensional coordinates in a vertical plane, but at least involves determining a location of the target feature in a vertical direction, i.e. its height. Other parts of the person's body may be used as the target feature, for example one or more shoulder or other joint(s). In the case where there is more than one target feature, a respective location for each target feature may be calculated and these locations may be used individually or may be combined to define a feature boundary, or be aggregated or otherwise processed to produce data defining at least one target feature location. For example, in cases where a feature boundary is generated, any one or more of the vertices of the feature boundary may serve as a target feature location, or a target feature location may be determined from the vertices, e.g. the centre of the bounding box can be determined as the mean of the x and y co-ordinates of the 4 vertices.
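A minimal sketch of the bounding-box centre calculation mentioned above (purely illustrative; the function name and coordinate convention are assumptions):

```python
# Illustrative only: centre of a feature boundary taken as the mean of the x and y
# coordinates of its vertices, as described above.
from statistics import mean
from typing import Sequence, Tuple

def boundary_centre(vertices: Sequence[Tuple[float, float]]) -> Tuple[float, float]:
    xs = [x for x, _ in vertices]
    ys = [y for _, y in vertices]
    return mean(xs), mean(ys)

# e.g. a rectangular facial boundary defined by 4 vertices:
# boundary_centre([(100, 50), (160, 50), (160, 120), (100, 120)])  ->  (130, 85)
```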
The preferred system 10 is configured to record data representing the scene in the camera's field of view. For each recorded scene the data indicates the location of the, or each, target feature. In preferred embodiments, the camera 14 captures the scene and the pose estimator 24 detects the target feature(s) in the corresponding data provided by the camera 14. The pose estimator 24 may also determine the location of the target feature(s), although this may alternatively be performed by any other convenient part of the system 10, e.g. by the pose analyser 26 or controller 28. During monitoring of the person 20, the system 10 also records, or otherwise obtains, the time of each recorded scene as part of the scene data. To this end the system 10 may use any convenient clock or timing device, e.g. the system clock (not shown) of the computing device 12. The system 10 may be operable in a training mode in which recording the time of scenes is optional.
The pose estimator 24 and/or pose analyser 26 may obtain the scene data from a series of images (e.g. a series of individual images captured by a stills camera or, more typically, a series of frames of a video captured by a video camera) captured by the camera 14. Typically, the respective image data or frame data includes or is associated with a time stamp and/or a frame rate and so the system 10 can determine the time of and/or time interval between the captured scenes. The tracking and analysis performed by the system 10 may be performed in real time or off-line as is convenient.
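For illustration, an instance of scene data and a frame timestamp derived from the frame rate might be represented as in the following sketch; the SceneData structure and timestamp_for_frame helper are assumptions made for the example, not part of the described system.

```python
# Illustrative structures only (not part of the described system).
from dataclasses import dataclass

@dataclass
class SceneData:
    vertical_location: float  # e.g. pixels above the bottom of the image
    timestamp: float          # time of occurrence, in seconds

def timestamp_for_frame(frame_index: int, start_time: float, frame_rate_hz: float) -> float:
    """Derive a timestamp for a video frame from the recording start time and the
    camera frame rate, for cameras that do not time-stamp individual frames."""
    return start_time + frame_index / frame_rate_hz
```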
In use, the system 10 records multiple instances of scene data. Preferably, the system 10 records scene data periodically with any suitable frequency, preferably more than once per second.
Optionally, the system 10 is configured to record scene data in response to detecting movement in the camera's field of view. Motion detection may be performed by any conventional means, for example by providing the system 10 with at least one motion sensor (not shown) arranged to detect motion in the camera's field of view, or by using any convenient computer-implemented motion detection method to analyse the output of the camera 14. For example, the system 10 may be configured to record scene data for a period of time, typically in the order of seconds or minutes, after motion is detected. The scene data may be created by the pose estimator 24 or pose analyser 26 as is convenient, and may be stored in any convenient computer memory 30. It is assumed that any instance of scene data that is recorded or analysed relates to a scene in which the target feature(s) are detected. If the target feature(s) are not detected, the respective scene data may be discarded or ignored.
In use, the system 10 is arranged such that a seating zone, i.e. the seat 18 in the illustrated example, is in the camera's field of view. As such, and with reference in particular to Figure 3, it may be assumed that each instance of scene data that is recorded or analysed corresponds to a real world event in which the person 20 is sitting, or standing, or is in a transition position between sitting and standing. Accordingly, each instance of scene data may be designated as corresponding to a sitting event, a standing event or a transition event.
When the person 20 is sitting, the target feature(s) may be within a range of locations, and in particular within a range of locations in the vertical direction. Therefore, the system 10 may define at least an upper sitting boundary 40 in the vertical direction that is deemed to correspond to a sitting event, and to designate any recorded scene data as corresponding to a sitting event if its target feature location data indicates that the target feature(s) are located at or below the upper sitting boundary. Optionally, the system 10 may define a lower sitting boundary 42 in the vertical direction that is deemed to correspond to a sitting event, and to designate any recorded scene data as corresponding to a sitting event if its target feature location data indicates that the target feature(s) are located at or below the upper sitting boundary 40 and at or above the lower sitting boundary 42.
When the person 20 is standing, the target feature(s) may be within a range of locations, and in particular within a range of locations in the vertical direction. Therefore, the system 10 may define at least a lower standing boundary 44 in the vertical direction that is deemed to correspond to a standing event, and to designate any recorded scene data as corresponding to a standing event if its target feature location data indicates that the target feature(s) are located at or above the lower standing boundary 44. Optionally, the system 10 may define an upper standing boundary 46 in the vertical direction that is deemed to correspond to a standing event, and to designate any recorded scene data as corresponding to a standing event if its target feature location data indicates that the target feature(s) are located at or above the lower standing boundary 44 and at or below the upper standing boundary 46.
Scene data having a target feature located between the upper sitting boundary 40 and the lower standing boundary 44 may be designated as corresponding to a transition event, or may be ignored.
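A simple illustrative classification of one instance of scene data against the boundaries 40, 42, 44 and 46 might look like the following sketch; the function name and the use of plain string labels are assumptions made for the example.

```python
# Illustrative only: classify one instance of scene data against the boundaries
# 40 (upper sitting), 42 (lower sitting), 44 (lower standing), 46 (upper standing).
from typing import Optional

def classify_scene(vertical_location: float,
                   upper_sitting: float,
                   lower_standing: float,
                   lower_sitting: float = float("-inf"),
                   upper_standing: float = float("inf")) -> Optional[str]:
    """Return 'sitting', 'standing' or 'transition', or None for locations
    outside every defined range (which may simply be ignored)."""
    if lower_sitting <= vertical_location <= upper_sitting:
        return "sitting"
    if lower_standing <= vertical_location <= upper_standing:
        return "standing"
    if upper_sitting < vertical_location < lower_standing:
        return "transition"
    return None
```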
Classifying the scene data as corresponding to a standing event or a sitting event (or a transition event if applicable) may be performed in any convenient manner. For example, a user (not shown) may set a respective value for the boundaries 40, 42, 44, 46, as applicable, and the system 10 may classify the scene data accordingly. In preferred embodiments the system 10 is configured to classify the scene data automatically, including setting the relevant boundaries 40, 42, 44, 46 as applicable.
The system 10 may classify the scene data using any conventional statistical analysis. For example, the system 10 may be configured to group the scene data into a first cluster containing scene data in which the target feature locations are relatively high, and a second cluster containing scene data in which the target feature locations are relatively low; to mathematically fit a distribution (normal or other as preferred) to the clusters; and to select a respective offset with respect to a modal value for each boundary that is to be set. For example, if a normal distribution is fitted to the second cluster of data (i.e. corresponding to a sitting position), the upper sitting boundary 40 may be chosen as a 2 sigma distance from the modal value of the normal (or other as may be preferred) distribution. Other statistical analysis techniques can alternatively be used to determine boundary values. For example, a boundary value may be calculated as the mean, median or modal value of the relevant data set, and/or as an offset from the mean, median or modal value. In the illustrated embodiment, classification of the scene data may conveniently be performed by the pose analyser 26. Boundary values may be updated or adjusted manually or automatically as desired. For example, the system 10 may adjust the boundary values periodically as additional scene data is recorded over time.
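The following sketch illustrates one possible automatic boundary calculation of this kind. It assumes the recorded vertical locations are split into a low (sitting) and a high (standing) cluster by a simple midpoint rule, standing in for a proper clustering step, and that each boundary is placed at a 2-sigma offset from the cluster mean (used here in place of the modal value; the two coincide for a normal distribution). All names are assumptions.

```python
# Illustrative only: assumes both sitting and standing data are present in the input.
import numpy as np

def estimate_boundaries(vertical_locations):
    locs = np.asarray(vertical_locations, dtype=float)
    midpoint = locs.mean()
    low = locs[locs < midpoint]    # sitting-like locations
    high = locs[locs >= midpoint]  # standing-like locations

    upper_sitting = low.mean() + 2.0 * low.std()      # boundary 40
    lower_sitting = low.mean() - 2.0 * low.std()      # boundary 42
    lower_standing = high.mean() - 2.0 * high.std()   # boundary 44
    upper_standing = high.mean() + 2.0 * high.std()   # boundary 46
    return upper_sitting, lower_sitting, lower_standing, upper_standing
```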
Optionally, initial boundary values may be set during a training mode.
In order to monitor the person's mobility, the system 10 is configured to monitor the time taken for the person 20 to transition between sitting and standing. It is preferred to monitor at least the time taken to reach a standing position from a sitting position. Alternatively, or in addition, the time taken to reach a sitting position from a standing position may be monitored.
In order to detect transitions between sitting and standing, the system 10 is configured to temporally monitor, or track, the recorded scene data. This tracking may be performed in real time or off-line as is convenient. The preferred system 10 is configured to detect changes in the type of scene data that is recorded, i.e. to detect occurrences of the recorded scene data type changing to denote a standing event, and/or to detect occurrences of the recorded scene data type changing to denote a sitting event.
A transition from sitting to standing may be detected by detecting recorded scene data corresponding to a first standing event occurring immediately after recorded scene data that does not correspond to a standing event, e.g. that corresponds to a sitting event, or that corresponds to a transition event (if transition events are positively detected). When a transition from sitting to standing is detected, the system 10 determines a time at which standing is deemed to have occurred. Preferably, the time (which may be referred to as the standing time) at which standing is deemed to have occurred is the time that is associated with the scene data corresponding to the first standing event. Alternatively or in addition, the time at which standing is deemed to have occurred is calculated using the respective time associated with one or more instances of standing event scene data recorded within a buffer period (typically in the order of seconds, e.g. less than 15 seconds) after the scene data corresponding to said first standing event. The system 10 also determines a time (which may be referred to as the sitting time) at which the person 20 is deemed to have most recently left the sitting position. Preferably, the time at which the person 20 is deemed to have most recently left the sitting position is the time that is associated with the scene data corresponding to the most recent sitting event that precedes the first standing event. Alternatively or in addition, the time at which the person 20 is deemed to have most recently left the sitting position is calculated using the respective time associated with one or more instances of sitting event scene data recorded within a buffer period (typically in the order of seconds, e.g. less than 15 seconds) before the scene data corresponding to the most recent sitting event that precedes the first standing event. The system 10 determines the time taken for the person to stand from sitting by calculating the difference between the sitting time and the standing time. This analysis may conveniently be performed by the pose analyser 26, and may be performed in real time, or off-line, locally or remotely as is convenient.
A transition from standing to sitting may be detected by detecting recorded scene data corresponding to a first sitting event occurring immediately after recorded scene data that does not correspond to a sitting event, e.g. that corresponds to a standing event, or that corresponds to a transition event (if transition events are positively detected). When a transition from standing to sitting is detected, the system 10 determines a time at which sitting is deemed to have occurred. Preferably, the time (which may be referred to as the sitting time) at which sitting is deemed to have occurred is the time that is associated with the scene data corresponding to the first sitting event. Alternatively or in addition, the time at which sitting is deemed to have occurred is calculated using the respective time associated with one or more instances of sitting event scene data recorded within a buffer period (typically in the order of seconds, e.g. less than 15 seconds) after the scene data corresponding to said first sitting event. The system 10 also determines a time (which may be referred to as the standing time) at which the person 20 is deemed to have most recently left the standing position. Preferably, the time at which the person 20 is deemed to have most recently left the standing position is the time that is associated with the scene data corresponding to the most recent standing event that precedes the first sitting event. Alternatively or in addition, the time at which the person 20 is deemed to have most recently left the standing position is calculated using the respective time associated with one or more instances of standing event scene data recorded within a buffer period (typically in the order of seconds, e.g. less than 15 seconds) before the scene data corresponding to the most recent standing event that precedes the first sitting event. The system 10 determines the time taken for the person to sit from standing by calculating the difference between the standing time and the sitting time. This analysis may conveniently be performed by the pose analyser 26.
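An illustrative sketch of the transition detection just described, operating on classified scene data as (timestamp, label) pairs of the kind produced by the classification sketch above; buffer-period smoothing is omitted for brevity and all names are assumptions.

```python
# Illustrative only: samples are (timestamp_seconds, label) pairs where label is
# 'sitting', 'standing' or 'transition'.
from typing import List, Optional, Tuple

def transition_durations(samples: List[Tuple[float, str]]) -> List[Tuple[str, float]]:
    """For each first standing (or sitting) sample that follows non-standing
    (non-sitting) data, report the time since the most recent sample of the
    opposite state, i.e. the stand-up or sit-down duration."""
    results: List[Tuple[str, float]] = []
    last_sitting: Optional[float] = None
    last_standing: Optional[float] = None
    previous: Optional[str] = None
    for t, label in samples:
        if label == "standing" and previous != "standing" and last_sitting is not None:
            results.append(("sit_to_stand", t - last_sitting))
        if label == "sitting" and previous != "sitting" and last_standing is not None:
            results.append(("stand_to_sit", t - last_standing))
        if label == "sitting":
            last_sitting = t
        elif label == "standing":
            last_standing = t
        previous = label
    return results
```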
Accordingly, when the preferred system 10 is monitoring the person 20, it records multiple instances of scene data (including the location of the target feature(s) and the time of occurrence); classifies the scene data according to type (in particular determining whether the scene data corresponds to a sitting event or standing event); detects changes in the type of scene data recorded (in particular to detect transitions to a standing event and/or transitions to a sitting event) in order to detect when the person stands or sits; and, upon detecting that the person has stood up or sat down, calculates the time taken to stand or sit as applicable.
In the above examples, the time taken to stand or to sit, i.e. the time spent in a transition event, is determined by differences between standing and sitting events. Alternatively, the time spent in a transition event may be determined directly by detecting and tracking transition events, i.e. determining the amount of time that the person spends in any given transition event. A transition event can be categorised as a sitting to standing event or as a standing to sitting event by determining which type of event (standing or sitting) precedes and/or follows it, or by determining the direction of movement of the target feature during the transition event.
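A corresponding sketch for this direct approach, measuring the time spent in each contiguous run of transition-type samples and categorising the run by the states immediately before and after it (again purely illustrative, with assumed names):

```python
# Illustrative only: samples are (timestamp_seconds, label) pairs as above.
from typing import List, Optional, Tuple

def transition_events(samples: List[Tuple[float, str]]) -> List[Tuple[str, float]]:
    events: List[Tuple[str, float]] = []
    run_start: Optional[float] = None
    state_before: Optional[str] = None
    prev_label: Optional[str] = None
    prev_time: Optional[float] = None
    for t, label in samples:
        if label == "transition":
            if run_start is None:          # a transition run begins
                run_start = t
                state_before = prev_label
        elif run_start is not None:        # the run ended at the previous sample
            duration = prev_time - run_start
            if state_before == "sitting" and label == "standing":
                events.append(("sit_to_stand", duration))
            elif state_before == "standing" and label == "sitting":
                events.append(("stand_to_sit", duration))
            run_start = None
        prev_label, prev_time = label, t
    return events
```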
By way of example, when the vertical position of the person's head (or other tracked target feature) is in the range of values associated with sitting, the system 10 determines that the person is "seated", and when the head/target feature is in the range of values associated with standing, the system 10 determines that they are "standing". When their status changes from "seated" to "standing" or vice versa, the system 10 examines historical event data to determine the most recent time that they were in the alternate state. So if for example the system 10 determines that at time X they ceased sitting and became standing, the system 10 examines historical event data and determines the most recent time Y at which they were seated. The stand-up transition time is therefore X-Y. The purpose of the buffer period described above is to filter out misleading data caused in cases where the person does not sit down or stand up in a normal manner, e.g. when the person stoops to rearrange cushions before sitting down. Preferably, the system 10 ignores events where there is more than one face or body in the scene, optionally unless the system can use facial recognition to be clear about the event being tracked.
The system 10 may analyse the person's stand-up and/or sit-down times in order to assess the person's mobility. In particular, changes in stand-up and/or sit-down times may be indicative of changes in the person's mobility. The preferred system 10 generates one or more mobility indicator based on its assessment of the person's stand-up and/or sit-down times, and may compare the mobility indicator(s) against reference data to determine if the mobility indicator(s) indicate that the person's risk of falling is at a level that requires intervention. The reference data may take any suitable form, for example comprising historical data generated for the person 20 and/or data generated by third parties.
Over time, sufficient records of the person's stand-up and/or sit-down times allow a distribution of times approximating a normal distribution to be stored. Advantageously, subsequently recorded data which does not conform to the distribution is discarded since it is likely to arise from irrelevant events such as a different person being in the field of view or a detected movement other than sitting down or standing up. Given a sufficiently large sample of sit-down and stand-up times, conventional statistical analysis methods may be applied to determine trends over time. Such techniques include moving averages, which help to smooth the data but may obscure sudden changes indicative of injury or illness. Thus it is advantageous to examine both the raw data, or data averaged over narrow time windows (e.g. 1 day), as well as data averaged over longer periods. Given sufficient data captured from multiple individuals, advantageously together with physical examination of said individuals to determine their level of mobility and strength, it is possible to define useful reference data beyond which an intervention to reduce the risk of a fall is deemed to be beneficial. The reference data may be used by the system 10, or by multiple instances of the system 10, to assess data captured in respect of a person being monitored in order to make assessments about the person's risk of fall or other potential problems.
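A minimal sketch of this kind of analysis, assuming a simple 3-sigma rule against a normal distribution fitted to the history for outlier rejection and a plain moving average for smoothing; the function name and window size are assumptions.

```python
# Illustrative only: discard recorded stand-up/sit-down times that do not conform
# to a normal distribution fitted to the history, then smooth with a simple moving
# average; raw values should still be inspected for sudden changes.
import numpy as np

def clean_and_smooth(durations_s, window: int = 7, sigma_limit: float = 3.0):
    d = np.asarray(durations_s, dtype=float)
    mu, sigma = d.mean(), d.std()
    conforming = d[np.abs(d - mu) <= sigma_limit * sigma]  # reject outliers
    if conforming.size < window:
        return conforming, conforming                      # too little data to smooth
    kernel = np.ones(window) / window
    smoothed = np.convolve(conforming, kernel, mode="valid")
    return conforming, smoothed
```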
The system 10 may perform any or all of its calculations in real time as the motion occurs, and/or the calculations may be performed using data recorded after a period of collection, e.g. a day, or a week, or a month.
Advantageously, the system 10 may be configured to support facial recognition. To this end, the computing device 12, or any other convenient component of the system 10, may include any conventional facial recognition tool(s), for example any commercially available facial recognition software. The system 10 may use facial recognition to identify the person 20 being monitored, which allows multiple people to be monitored using the same system 10, and/or can prevent data being gathered for non-monitored people and affecting the monitoring of a monitored person 20. For example, the controller may only record the data for a known face (the system may be trained to recognise any given user based on one or more images of the user) or for each known face (where the system has been trained with one or more images for multiple users).
If the system 10 determines that the person 20 is at risk, it may take one or more actions, typically including generating an alert. The actions may include notifying the user or a carer of the risk. This may involve issuing a visual or audio alert to the user, and/or sending a message to one or more computing device (not shown) across a telecommunications network. The message may comprise an email, SMS message, MMS message, pager notification, phone call and/or any other messaging means supported by the system 10.
The invention is not limited to the embodiment(s) described herein but can be amended or modified without departing from the scope of the present invention.

Claims (25)

CLAIMS:
  1. A method of monitoring a person's mobility, the method comprising: monitoring a vertical location of at least one target object that is fixed with respect to the person; detecting, by said monitoring, transitions of said person between sitting and standing; determining a duration of each transition between sitting and standing; determining from said duration an indication of the person's mobility, wherein said monitoring involves using at least one sensor to detect the vertical location of said at least one target object, or wherein said at least one target object comprises at least one vertical location sensor, and wherein said monitoring involves monitoring the vertical location of said at least one vertical location sensor.
  2. The method of claim 1, wherein said monitoring involves using at least one sensor to detect the vertical location of said at least one target object, and wherein said at least one sensor has a sensing field, the method further including arranging said at least one sensor such that a seat is located in said sensing field.
  3. 3. The method of claim 1 or 2, further including locating said at least one sensor adjacent a television set.
  4. 4. The method of claim 2 or 3, further including: using at least one sensor to record sensor data of a scene in said at least one sensor'ssensing field;detecting said at least one object in said sensor data; determining the vertical location of said at least one target object in said scene; recording multiple instances of scene data over time, each instance of scene data comprising data indicating the vertical location of said at least one target object and a corresponding time of occurrence; detecting, from said data indicating the vertical location of said at least one target object, transitions of said person from sitting to standing and/or from standing to sitting; and in response to detecting a transition, determining the duration of said transition from said scene data.
  5. 5. The method of any preceding claim, wherein said at least one sensor comprises at least one camera, and wherein said method includes arranging said at least one camera such that said seat is in said at least one camera's field of view, and wherein the method preferably includes: using a camera to record image data of a scene in said camera's field of view; detecting said at least one object in said image data; determining the vertical location of said at least one target object in said scene; recording multiple instances of scene data over time, each instance of scene data comprising data indicating the vertical location of said at least one target object and a corresponding time of occurrence; detecting, from said data indicating the vertical location of said at least one target object, 5 transibons of said person from sitting to standing and/or from standing to sitting; and in response to detecting a transition, determining the duration of said transition from said scene data.
  6. 6. The method of claim 4 or 5, further including classifying each instance of scene data according to 10 type, wherein supported types comprise a standing event type and a sitting event type; monitoring changes in the type of recorded scene data to detect transitions of said person from sitting to standing, and/or from standing to sitting; and in response to detecting a transition, determining from said scene data the duration of said transition.
  7. 7. The method of any one of claims 4 to 6, further including detecting a transition from sitting to standing by detecting recorded scene data corresponding to a first standing event occurring immediately after recorded scene data that does not correspond to a standing event, for example that corresponds to a sitting event, or that corresponds to a transition event.
  8. 8. The method of any preceding claim, further including determining, in response to detecting a transition from sitting to standing, a time at which standing is deemed to have occurred, and wherein, preferably, the time at which standing is deemed to have occurred is the time that is associated with the scene data corresponding to the first standing event, and wherein the method may further include calculating the time at which standing is deemed to have occurred using the respective time associated with one or more instances of standing event scene data recorded within a buffer period after the scene data corresponding to said first standing event.
  9. 9. The method of any preceding claim, further including determining a time at which the person is deemed to have most recently left a sitting position, and wherein, preferably, the time at which the person is deemed to have most recently left the sitting position is the time that is associated with the scene data corresponding to a most recent sitting event that precedes a first standing event, and wherein the method may further include calculating the time at which the person is deemed to have most recently left the sitting position using the respective time associated with one or more instances of sitting event scene data recorded within a buffer period before the scene data corresponding to the most recent sitting event that precedes the first standing event, and wherein determining the duration of a transition from sitting to standing may involve calculating the difference between the time at which standing is deemed to have occurred and the time at which the person is deemed to have most recently left a sitting position.
  10. 10. The method of any preceding claim, further including detecting a transition from standing to sitting by detecting recorded scene data corresponding to a first sitting event occurring immediately after recorded scene data that does not correspond to a sitting event, for example that corresponds to a standing event, or that corresponds to a transition event.
  11. 11. The method of any preceding claim further including determining, in response to detecting a transition from standing to sitting, a time at which sitting is deemed to have occurred, and wherein, preferably, the time at which sitting is deemed to have occurred is the time that is associated with scene data corresponding to the first sitting event, and wherein the method may further include calculating the time at which sitting is deemed to have occurred using the respective time associated with one or more instances of sitting event scene data recorded within a buffer period after the scene data corresponding to said first sitting event.
  12. 12. The method of any preceding claim further including determining a time at which the person is deemed to have most recently left a standing position, and wherein, preferably, the time at which the person is deemed to have most recently left the standing position is the time that is associated with scene data corresponding to the most recent standing event that precedes the first sitting event, and wherein, the method may further include calculating the time at which the person is deemed to have most recently left the standing position using the respective time associated with one or more instances of standing event scene data recorded within a buffer period before the scene data corresponding to the most recent standing event that precedes the first sitting event, and wherein determining the duration of a transition from standing to sitting may involve calculating the difference between the time at which sitting is deemed to have occurred and the time at which the person is deemed to have most recently left a standing position.
  13. 13. The method of any one of claims 4 to 12, further including classifying each instance of scene data according to type, wherein supported types comprise a transition event type, and optionally a standing event type and a sitting event type; detecting transitions of said person from sitting to standing, and/or from standing to sitting by detecting multiple successive instances of transition event type scenes; and in response to detecting a transition, determining from said scene data the duration of said transition.
  14. 14. The method of any preceding claim, further including defining at least an upper sitting boundary in the vertical direction that is deemed to correspond to a sitting event, and wherein the method may further include designating any recorded scene data as corresponding to a sitting event if its target object location data indicates that the target object is located at or below the upper sitting boundary, and wherein the method may further include defining a lower sitting boundary in the vertical direction that is deemed to correspond to a sitting event, and wherein the method may further include designating any recorded scene data as corresponding to a sitting event if its target object location data indicates that the target object is located at or below the upper sitting boundary and at or above the lower sitting boundary.
  15. 15. The method of any preceding claim, further including defining at least a lower standing boundary in the vertical direction that is deemed to correspond to a standing event, and wherein the method may further include designating any recorded scene data as corresponding to a standing event if its 5 target object location data indicates that the target object is located at or above the lower standing boundary, and wherein the method may further include defining an upper standing boundary in the vertical direction that is deemed to correspond to a standing event, and wherein the method may further include designating any recorded scene data as corresponding to a standing event if its target object location data indicates that the target object is located at or above the lower standing 10 boundary and at or below the upper standing boundary.
  16. 16. The method of any preceding claim, further including recording multiple instances of data indicating the vertical location of the target object; and calculating at least one standing boundary from the vertical locations indicated by a first set of said data instances, wherein said first set contains instances of said data indicating relatively high vertical locations in comparison with the other instances of data not in said first set, and wherein, said first set may contain scene data in which the target object locations are relatively high.
  17. 17. The method of any preceding claim, further including recording multiple instances of data indicating the vertical location of the target object; and calculating at least one sitting boundary from the vertical locations indicated by a second set of said data instances, wherein said second set contains instances of said data indicating relatively low vertical locations in comparison with the other instances of data not in said second set, and wherein said second set may contain scene data in which the target object locations are relatively low.
  18. 18. The method of claim 16 or 17, wherein calculating said at least one standing boundary and/or said at least one sitting boundary from the respective first or second data set involves performing a statistical analysis of the respective first or second data set, wherein said statistical analysis optionally involves fitting a mathematical distribution function to the respective data set and selecting the or each boundary from the resulting mathematical distribution.
  19. 19. The method of claim 4 or 5, or any preceding claim dependent on claim 4 or 5, wherein detecting said at least one object in said image or sensor data, and determining the vertical location of said at least one target object in said scene involves performing pose estimation on said image 35 data or sensor data.
  20. 20. The method of claim 4 or 5. or any preceding claim dependent on claim 4 or 5, wherein detecting said at least one object in said image data, and determining the vertical location of said at least one target object in said scene involves performing face detection, or other object detection. on said 40 image data or sensor data.
  21. 21. The method of any preceding claim wherein said at least one target object comprises at least one physical feature of said person and/or at least one object fixed to said person.
  22. 22. The method of any preceding claim said determining said indication of the person's mobility 5 involves monitoring changes in said duration over time.
  23. 23. A system for monitoring a person's mobility, the system comprising: means for monitoring a vertical location of at least one target object that is fixed with respect to the person; means for detecting, by said monitoring, transitions of said person between sitting and standing; means for determining a duration of each transition between sitting and standing; means for determining from said duration an indication of the person's mobility, wherein said means for monitoring a vertical location of at least one target object comprises: at least one sensor for detecting the vertical location of said at least one target object; or at least one vertical location sensor provided on said at least one target object.
  24. 24. The system of claim 23, wherein said means for monitoring a vertical location of at least one target object comprises at least one sensor for detecting the vertical location of said at least one 20 target object, said at least one sensor having a sensing field, and wherein a seat is located in said sensing field.
  25. 25. The system of claim 23 or 24, wherein said at least one sensor is located adjacent a television set, or is incorporated into a television set.
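Relating to the boundary-calibration steps recited in claims 16 to 18, the following illustrative sketch shows one possible way of deriving an upper sitting boundary and a lower standing boundary by applying simple statistics to the relatively low and relatively high recorded vertical locations. The one-third split and the three-sigma margins are assumptions made for the example only.

from statistics import mean, stdev

def calibrate_boundaries(heights):
    # Sort the recorded vertical locations and take the lowest third as the
    # sitting set and the highest third as the standing set.
    ordered = sorted(heights)
    n = len(ordered)
    low_set = ordered[: n // 3]
    high_set = ordered[-(n // 3):]
    sit_mu, sit_sigma = mean(low_set), stdev(low_set)
    stand_mu, stand_sigma = mean(high_set), stdev(high_set)
    # Place each boundary three standard deviations beyond the cluster mean.
    upper_sitting_boundary = sit_mu + 3 * sit_sigma
    lower_standing_boundary = stand_mu - 3 * stand_sigma
    return upper_sitting_boundary, lower_standing_boundary

# Example usage with hypothetical head heights in metres
heights = [0.82, 0.85, 0.88, 0.86, 1.10, 1.20, 1.52, 1.55, 1.58, 1.54, 1.56, 1.53]
print(calibrate_boundaries(heights))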
GB2005336.9A 2020-04-09 2020-04-09 Person monitoring system and method Pending GB2593931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2005336.9A GB2593931A (en) 2020-04-09 2020-04-09 Person monitoring system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2005336.9A GB2593931A (en) 2020-04-09 2020-04-09 Person monitoring system and method

Publications (2)

Publication Number Publication Date
GB202005336D0 GB202005336D0 (en) 2020-05-27
GB2593931A true GB2593931A (en) 2021-10-13

Family

ID=70848144

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2005336.9A Pending GB2593931A (en) 2020-04-09 2020-04-09 Person monitoring system and method

Country Status (1)

Country Link
GB (1) GB2593931A (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112333419A (en) * 2020-08-21 2021-02-05 深圳Tcl新技术有限公司 Monitoring and tracking method, device, system and computer readable storage medium
CN112472481A (en) * 2020-12-15 2021-03-12 沈阳工业大学 Dynamic human body pose recognition embedded platform under trunk shielding state

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1195139A1 (en) * 2000-10-05 2002-04-10 Ecole Polytechnique Féderale de Lausanne (EPFL) Body movement monitoring system and method
US8206325B1 (en) * 2007-10-12 2012-06-26 Biosensics, L.L.C. Ambulatory system for measuring and monitoring physical activity and risk of falling and for automatic fall detection
US20140330172A1 (en) * 2013-02-27 2014-11-06 Emil Jovanov Systems and Methods for Automatically Quantifying Mobility
WO2017153120A1 (en) * 2016-03-07 2017-09-14 Koninklijke Philips N.V. System and method for implementing a chair rise test

Also Published As

Publication number Publication date
GB202005336D0 (en) 2020-05-27

Similar Documents

Publication Publication Date Title
US10095930B2 (en) System and method for home health care monitoring
JP7138931B2 (en) Posture analysis device, posture analysis method, and program
CN105283129B (en) Information processor, information processing method
EP3525673B1 (en) Method and apparatus for determining a fall risk
CN109887238B (en) Tumble detection system and detection alarm method based on vision and artificial intelligence
KR101999934B1 (en) Display control device, display control system, display control method, display control program, recording medium
JP6720909B2 (en) Action detection device, method and program, and monitored person monitoring device
US10846538B2 (en) Image recognition system and image recognition method to estimate occurrence of an event
GB2593931A (en) Person monitoring system and method
JP2007006427A (en) Video monitor
WO2019013257A1 (en) Monitoring assistance system and method for controlling same, and program
JP6822328B2 (en) Watching support system and its control method
Bai et al. Design and implementation of an embedded monitor system for detection of a patient's breath by double webcams
TWI541769B (en) Falling down detecting systems and method
JP6119938B2 (en) Image processing system, image processing apparatus, image processing method, and image processing program
CN113657150A (en) Fall detection method and device and computer readable storage medium
CN112949417A (en) Tumble behavior identification method, equipment and system
Lee et al. A new posture monitoring system for preventing physical illness of smartphone users
CN113114977A (en) Intelligent nursing system and intelligent nursing method
CN109044375A (en) A kind of control system and its method of real-time tracking detection eyeball fatigue strength
CN108846996B (en) Tumble detection system and method
US20220409120A1 (en) Information Processing Method, Computer Program, Information Processing Device, and Information Processing System
JP6870514B2 (en) Watching support system and its control method
WO2023000599A1 (en) Bone conduction-based eating monitoring method and apparatus, terminal device, and medium
Chua et al. Intelligent visual based fall detection technique for home surveillance