GB2621822A - Monitoring system - Google Patents

Monitoring system Download PDF

Info

Publication number
GB2621822A
GB2621822A GB2212022.4A GB202212022A
Authority
GB
United Kingdom
Prior art keywords
projection
image
projections
relating
predetermined criterion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2212022.4A
Other versions
GB202212022D0 (en)
Inventor
Vermaak Justus
Sarosh Tariq Muhammad
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Skystrm Ltd
Original Assignee
Skystrm Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Skystrm Ltd filed Critical Skystrm Ltd
Priority to GB2212022.4A priority Critical patent/GB2621822A/en
Publication of GB202212022D0 publication Critical patent/GB202212022D0/en
Publication of GB2621822A publication Critical patent/GB2621822A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1113 Local tracking of patients, e.g. in a hospital or private home
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1116 Determining posture transitions
    • A61B5/1117 Fall detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1126 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique
    • A61B5/1128 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb using a particular sensing technique using image analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/74 Details of notification to user or communication with user or patient; user input means
    • A61B5/746 Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0407 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis
    • G08B21/043 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons based on behaviour analysis detecting an emergency event, e.g. a fall
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G08B21/04 Alarms for ensuring the safety of persons responsive to non-activity, e.g. of elderly persons
    • G08B21/0438 Sensor means for detecting
    • G08B21/0476 Cameras to detect unsafe condition, e.g. video cameras

Abstract

Method of monitoring a concealed object which is not in the line of sight of an imager, comprising: identifying light projections (shadows) 6 from an object 12 within an image (38, Fig. 3); isolating the light projection of the object (40, Fig. 3); and determining if the projection meets a criterion (48, Fig. 3). The light projection may be a shadow from or reflection of the object. The image capture device 4 may capture visible and/or infrared light. Identifying the shadows may comprise clustering to identify shapes or forms with similar colour, brightness and/or contrast characteristics. Isolating the shadow may comprise segmenting the clusters. The criterion may indicate an object or person's position, orientation, pose, gesture, posture or movement, and may further indicate a distress condition such as a person falling, collapsing, tripping, lying down and/or waving. A neural network and/or machine learning may be used to segment the shadow and/or determine a criterion. A notification or alarm may be provided to a remote device in response to the criterion being satisfied, thereby providing a fall alarm or the like (40, Fig. 3; Figs 4 & 5). The system may visually monitor the object when the object is in a camera's line of sight (24-30, Fig. 2).

Description

Monitoring system
The present disclosure relates to a monitoring system using a light projection, particularly, but not limited to, using projected shadows.
Background of the Invention
Vulnerable people (e.g. the elderly) may reside in environments which have little or no supervision. For example, a person may live in a house on their own. Should the person become injured or otherwise incapacitated, the person may struggle or be unable to obtain assistance.
In some prior art systems, the person may have a "panic alarm". Such alarms comprise a device that may be activated by the user in the event of an emergency situation. Assistance may then be dispatched accordingly. However, such systems require the user to be conscious. Additionally, if the user is incapacitated, and the alarm is elsewhere, then they may be unable to activate the alarm.
In WO2021/125550, a system is provided to detect a falling action of the user. The system uses a camera, an image processing system and a neural network to identify movement indicative of falling.
The inventor has identified numerous problems with the prior art systems. Whilst WO'550 does not require user input and can work automatically, analysis is performed by optical processing. In order to capture events, the system must have a direct line of sight of the user. In a typical residential environment, this would typically require multiple cameras in each room to ensure complete coverage. This increases the expense of the system and the processing power required, and may be overly intrusive for the user.
The present invention aims to overcome one or more of the above problems.
Statement of Invention
According to a first aspect of the invention, there is provided: a method of monitoring an object comprising the following steps: a) capturing an image of a surface comprising one or more projection of the object; b) identifying one or more projection within the image, the projection including at least one projection of the object; c) isolating the projection of the object from the one or more identified projection in the image; and d) determining if the projection of the object meets one or more predetermined criterion.
The projection may comprise a shadow of the object. The projection may comprise a reflection of the object.
The predetermined criterion may be indicative of a characteristic pose, gesture, posture or movement of the object. The predetermined criterion may be indicative of a distress condition of the object (i.e. the object is distressed or otherwise requires intervention). The distress condition may comprise one or more of: falling; collapsing; tripping; lying down; and/or waving. The predetermined criterion may be indicative of a position and/or orientation of the object.
The object may comprise a human or animal. The object may be movable. The object may comprise an inanimate object.
Step b) may comprise clustering/grouping the captured image to identify one or more shape or form within the image having the same or similar characteristics.
The characteristics may comprise one or more of: colour; brightness; contrast. The characteristics may comprise a spatial closeness. The spatial closeness may be defined relative to a same/similar pixel. The spatial closeness may be defined relative to an arbitrary pixel.
Step c) may comprise segmenting the projections to discriminate projections relating to the object. Step c) may comprise segmenting the projections to discriminate projections relating to other objects. Segmentation may be performed on the clustered data. The system may select data relating to only the projection of the object. Data for projections relating to other objects may be deleted or disregarded.
Discriminating projections relating to the object may comprise inputting data relating to the object and/or a projection thereof. The system may compare the projection data with said data relating to the object and/or a projection thereof. If the captured projection data is the same as, or within a similarity threshold of, the input data, then the projection may be deemed as relating to the object. The input data may provide a training model. The input data relating to the object and/or a projection may comprise static and/or moving images of the object or projection thereof.
One or more light sources may be determined or recorded by the system. The position, direction and/or luminosity of the light source may be determined. The light source may comprise the Sun and/or the Moon. The light source may comprise an artificial light.
The determined light sources may be used to discriminate projections relating to the object. The position and/or form of the object projection may be estimated using a known/estimated configuration of the object and the light source. A virtual model of the projection may be constructed using the known/estimated configuration of the object and the light source. The model may be compared with the captured projection data to determine or estimate whether the projection is provided by the object.
Step d) may comprise discriminating projections relating to the predetermined criteria. Step d) may comprise discriminating projections not relating to the predetermined criteria. Data for projections not relating to the predetermined criteria may be deleted or disregarded.
Discriminating projections relating to the predetermined criteria may comprise inputting data relating to the object and/or a projection thereof in a characteristic pose, posture, gesture or movement of the object. The system may compare the projection with said data relating to the object and/or a projection thereof. If the captured projection data is the same as, or within a similarity threshold of, the input data, then the projection may be deemed as relating to the predetermined criteria (e.g. the distress case).
The input data relating to the object and/or a projection may comprise static and/or moving images. The input data may provide a training model.
The input data relating to the object and/or a projection may comprise virtual or synthetic images of the object or projection thereof. Where the input data comprises data relating to the object, the system may create a virtual model of the projection of said object. Discrimination may be performed using a neural network and/or machine learning.
The orientation of the isolated projection may be determined (i.e. relative to the plane of the camera). The orientation of the isolated projection may be adjusted. The isolated projection may be transformed (e.g. rotation, scaling, translation, affine and/or projective transformations). The orientation may be adjusted such that the projection lies on a predetermined plane. The plane may correspond to a plane perpendicular to the camera and/or virtual camera.
The system is configured to provide a notification or alarm in response to affirmative determination of the predetermined criterion. The system may provide a notification to a remote device (e.g. a mobile phone). The notification may provide an indication of the distress case. The notification may provide an identifier for the object.
The system may be configured to capture visible and/or infrared light. The light source may comprise an infrared light.
The system may be configured to monitor the object via a projection thereof when the object is not in visual range. The system may be configured to monitor the object via a projection thereof when the object is in visual range. The system may be configured to visually monitor the object when in visual range (e.g. using optical tracking). The position and/or trajectory of the object may be tracked via the visual system when the object is in visual range. The position and/or trajectory data may be used to determine the position or identity of the object in the projection monitoring system.
The system may be configured to classify one or more object in visual range. The system may be configured to monitor one or more object class.
According to a further aspect of the invention, there is provided: a data carrier or computer storage medium comprising machine instructions for monitoring an object, comprising the following steps: a) capturing an image of a surface comprising one or more projection of the object; b) identifying one or more projection within the image, the projection including at least one projection of the object; c) isolating the projection of the object from the one or more identified projection in the image; and d) determining if the projection of the object meets one or more predetermined criterion.
According to a further aspect of the invention, there is provided: a system for monitoring an object, the system comprising: an imaging device configured to capture an image of a surface comprising one or more projection of the object; and a processing system configured to: a) identify one or more projection within the image, the projection including at least one projection of the object; b) isolate the projection of the object from the one or more identified projection in the image; and c) determine if the projection of the object meets one or more predetermined criterion.
Any optional or preferable features described in relation to any one aspect of the invention may be applied to any further aspect, wherever practicable.
Detailed Description
Practicable embodiments of the disclosure are described below in further detail, by way of example only, with reference to the accompanying drawings, of which:
Figure 1 shows a schematic view of a monitoring system;
Figure 2 shows a schematic view of a first monitoring system;
Figure 3 shows a schematic view of a second monitoring system;
Figures 4 and 5 show a schematic view of a notification of the monitoring system; and
Figure 6 shows a schematic view of an example of the monitoring system.
A monitoring system 2 is shown in figure 1. The monitoring system 2 comprises an optical device 4 configured to observe a light projection 6. The projection may be provided by light 8 from a light source 10 intercepting or reflecting off an object 12. The light 8 intercepting the object creates a projection 6 of the object on a surface 14. The projection 6 may therefore comprise a shadow or a reflection of the object 12. The optical device 4 is configured to capture an image of the projection 6 and process the image to identify the projection 6. The monitoring system 2 may therefore track and/or monitor the object 12 by tracking the projection 6 accordingly. The monitoring system can then use the projection 6 to determine whether the object 12 is provided in one or more configuration. For example, the system 2 may determine if a person is falling or has fallen down. The process will be described in detail later.
The light source 10 may comprise any suitable light source, or illumination device, for example, one or more of: the Sun; the Moon; ambient light (i.e. diffuse sunlight); indoor lighting; outdoor lighting; IR lighting; UV lighting; portable lighting; fixed lighting; a display (e.g. TV screen or monitor); or any other natural or artificial light source. The light source may comprise any suitable wavelength or spectrum suitable to allow suitable spatial resolution of the object 12 (e.g. wavelengths including infrared and shorter). In some embodiments, the light source 10 comprises only visible light, thus the system requires no specialist light source. Additionally or alternatively, the light source comprises infrared. This allows operation at night, without casting visible light.
The surface 14 may comprise any suitable surface, for example, one or more of: a wall; a floor/ground; ceiling; window; door; furniture etc. It can be appreciated that the exact form of the surface is not pertinent to the invention at hand provided that the projection 6 can be adequately discerned thereon.
The optical device 4 comprises any suitable device configured to capture the appropriate wavelength of light. The optical device may comprise one or more of: a camera; a webcam; CCTV; a CCD; or an IR camera. The optical device 4 is capable of capturing a plurality of images at one or more time interval. The optical device 4 may therefore record video. The optical device 4 may comprise any suitable lens, for example any of a wide angle, zoom, or fisheye lens. The optical device 4 may be incorporated into any suitable computing device, for example, one or more of: a mobile/cellular phone; a tablet computer; a laptop etc. A plurality of optical devices 4 may be used. The position and/or number of optical devices 4 is determined according to user needs. For example, an optical device 4 could be provided in each room in a residence. Typically, the optical devices 4 are positioned to obtain maximal coverage. The optical devices 4 may be provided in an outdoor and/or indoor environment.
The optical device(s) 4 is operatively connected to an image processing system 16. The image processing system 16 comprises any suitable processing hardware and/or software. The image processing system 16 may comprise one or more of: a mobile/cellular phone; desktop computer; laptop computer; tablet computer; server; microcomputer; SoC etc. The processing system 16 may comprise a processor, RAM, and/or a data store. The optical device 4 may be directly connected to the image processing system 16 (e.g. via wifi, ethernet etc.). Additionally or alternatively, the optical device 4 is connected to the image processing system 16 via a WAN 18 (e.g. the internet or the cloud etc.). The system may therefore provide processing remote from the optical device 4.
In some embodiments, the optical device 4 and the image processing system 16 are attached or integrated. For example, image capture and processing may be provided by a single device.
A remote device 20 is operatively connected to the image processing system 16.
The remote device 20 may comprise any suitable system, for example, one or more of: a mobile/cellular phone; desktop computer; laptop computer; tablet computer. The remote device 20 typically comprises a portable/mobile device. The remote device 20 may be connected directly to the optical device 4 and/or the image processing system 16 as previously described. Typically, the remote device 20 is remote to the optical device 4 to allow remote monitoring via the optical device 4.
The monitoring and/or tracking method is described with reference to figures 2 and 3. In an initiation stage 22, the system 2 is initialised to ensure effective and efficient operation. In a first stage 24, objects within a defined space or visual field of the optical device(s) 4 are classified. The objects are classified to determine which objects should be tracked. For example, when the system is configured to track a human, then only humans will be tracked. Such classification systems are known and a suitable method will be known to the skilled person. Such methods may include neural networks, machine learning or artificial intelligence etc. A user may input one or more desired classification. The classification system may classify all the objects in the visual field and the system may provide a list or database of classified objects. The user may select which objects they wish to track. In some embodiments, the desired classification may be predetermined by the system. For example, a human tracking system may be preconfigured to track humans accordingly.
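By way of illustration only, the classification stage could be sketched with an off-the-shelf pretrained detector (here torchvision's Faster R-CNN trained on COCO, in which class 1 denotes a person); the patent does not prescribe any particular classifier, framework or score threshold.

```python
# Sketch of stage 24: classify objects in the visual field and keep only the
# desired class (here, humans). Assumes torchvision >= 0.13 for the weights API.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

PERSON_CLASS_ID = 1  # COCO label index for "person"

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def classify_tracked_objects(frame_rgb, score_threshold=0.7):
    """Return bounding boxes of detected objects belonging to the desired class."""
    with torch.no_grad():
        detections = model([to_tensor(frame_rgb)])[0]
    boxes = []
    for box, label, score in zip(detections["boxes"],
                                 detections["labels"],
                                 detections["scores"]):
        if label.item() == PERSON_CLASS_ID and score.item() >= score_threshold:
            boxes.append(box.tolist())  # [x1, y1, x2, y2] in image pixels
    return boxes
```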
The classification may include multiple objects of the same type. For example, the system may track multiple humans. The user may select a desired specific object within a classification (i.e. to provide a sub-classification). For example, the user may wish to track a specific person, rather than all humans. The user may input the profile of a specific person to be tracked. The system may track a plurality of desired people. For example, the user could track the residents of a care home or the like, whilst the system may be configured to ignore care workers or visitors etc. The user may input a plurality of desired classifications. The system may therefore track a plurality of different object classes. For example, the user may input that the system tracks humans and domestic pets.
In some embodiments, the user may input a desired characteristic of an object or classification. For example, the user may wish to track objects above/below a certain size, or movement over a predetermined time period. Thus, the desired criteria may range across a plurality of classifications. The desired characteristic may comprise an age of a person and/or a selected group of persons.
Once the desired classifications are selected, the system then performs tracking on those desired classifications. The other classified objects or non-classified objects are simply ignored and/or the data therefor may be disregarded. The classification step 24 may be performed periodically (e.g. at a particular time interval). The user may manually initiate the classification, for example, when the visual field significantly changes. Similarly, the user may periodically modify the desired classification(s), for example, when a new person is required to be tracked.
In the next step 26, a profile is added for the desired tracked object. As previously discussed, the profile provides one or more criteria for the classification system to determine if the object is to be tracked. The profile may comprise physical characteristics of the object, for example, one or more of: size; shape; facial composition or features; body composition or features; hair colour; height; weight; age; posture profile; gait; identifying features (e.g. tattoos, scars) and/or clothing etc. Images or videos of the desired object may be input into the classification system to help the system identify the desired object.
The profile may comprise specific data for the tracked person. For example, the data may comprise one or more of: date of birth; name; address; or emergency contact details. The profile may comprise health information for the tracked person. The health information may comprise one or more medical condition and/or any symptoms associated with said condition/problem.
In the next step 28, the system 2 determines a position of a light source 10. This allows the system to later predict the shape/form of projections from the light source based on object position accordingly. The system 2 may determine the position of a fixed light source by optically observing the position thereof. The system may determine a volumetric (i.e. 3D) space in which the optical device 4 is positioned (e.g. a room or the like). The volumetric space may comprise the position, shape, size or orientation of any surfaces 14 (e.g. walls, floors, ceilings) in the space. Methods of performing such analysis are known and will not be described in detail. For example, two or more cameras may stereo-image the space, or machine learning may be used to determine the spatial geometry of an image. The system 2 may then determine the position of the light source within the volumetric space. The system 2 may then estimate the shape/form of the projection given a known position of the light source, a known position of the object, and the known geometry of the space. Conversely, the system 2 may estimate the shape/form/position of the object 12 given a known position of the light source, an observed form of the projection, and the known geometry of the space. In some embodiments, a virtual model of the environment may be created, and virtual light sources may be input accordingly.
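As a minimal geometric sketch of this estimation, the expected position of a projected (shadow) point can be found by casting a ray from a point light source through the object point onto a planar surface; the coordinate values, plane representation and function name below are illustrative assumptions rather than the system's actual model.

```python
import numpy as np

def project_point_onto_plane(light_pos, object_point, plane_point, plane_normal):
    """Estimate where the shadow of `object_point` falls on a planar surface.

    The shadow point is the intersection of the ray from the light source
    through the object point with the plane defined by a point and a normal.
    """
    light_pos = np.asarray(light_pos, dtype=float)
    object_point = np.asarray(object_point, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)

    direction = object_point - light_pos            # ray direction
    denom = direction.dot(n)
    if abs(denom) < 1e-9:
        return None                                 # ray parallel to the surface
    t = (plane_point - light_pos).dot(n) / denom
    if t <= 0:
        return None                                 # surface is behind the light
    return light_pos + t * direction

# Example: ceiling lamp at (0, 0, 2.5) m, a head at (1, 0, 1.7) m,
# floor plane z = 0 with upward normal.
shadow = project_point_onto_plane([0, 0, 2.5], [1, 0, 1.7], [0, 0, 0], [0, 0, 1])
print(shadow)   # approximately [3.125, 0, 0]
```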
The system may determine the position of a movable light source (e.g. the Sun) at one or more times or over a time period. The system may therefore track the position of the movable light source over said time period. The system can then estimate the shape/form/position of the object 12 as previously discussed. Where the movable light source comprises the Sun, location data (e.g. latitude, longitude) and/or the orientation of the space may be input into the system. The position of the movable light source may therefore be predetermined. The system may then take into consideration any portal (e.g. window or door) through which the sunlight is passing. The system 2 may therefore determine the position and form of the light 8 from the Sun within the space at a given time period. For example, this allows determination of the position and angle of a beam of sunlight passing into a room at a given time (e.g. using a predetermined model or the like). In some embodiments, the system 2 may use machine learning to predict the position/form of the movable light source. The system 2 may use training data from within the space observed by the optical device(s) 4. For example, the camera may be positioned within a room having a window or the like. The system then observes the position of a projection of an arbitrary object in the room to determine the position of the light source. This is then observed over a number of days to create a model of the movable light source movement. When the movable light source comprises the Sun, the training data may be at least partially combined with the predictive/location data described above to extrapolate and/or interpolate data within the model. The system may therefore use a combination of training data and predictive data.
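For illustration, one simple observation-driven model of a movable light source records the shadow direction of a reference object at known times of day over several days and interpolates between the observations; the times and angles below are purely illustrative values, not measured data.

```python
import numpy as np

# Observed azimuth (degrees) of a reference object's shadow at given times
# of day (hours), collected over several days. Illustrative values only.
observed_hours   = np.array([9.0, 11.0, 13.0, 15.0, 17.0])
observed_azimuth = np.array([255.0, 230.0, 200.0, 170.0, 145.0])

def predict_shadow_azimuth(hour_of_day):
    """Interpolate the expected shadow direction for a time within the
    observed range; outside it, the nearest observation is returned."""
    return float(np.interp(hour_of_day, observed_hours, observed_azimuth))

print(predict_shadow_azimuth(12.0))   # ~215 degrees
```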
In the next step 30, the desired object 12 is tracked. Typically, the object 12 is movable, and so the object 12 may be tracked across a plurality of optical devices 4. Tracking is performed in real-time or near real-time. For example, the image capture and/or the associated tracking processing step may be performed at least once every 5 seconds; preferably, at least once every second; preferably, multiple times per second. This ensures the system 2 can adequately track the object 12 and ensure sufficient data is observed therefrom. The sampling rate for tracking will be determined according to required accuracy and processing capability of the system.
During the tracking, visual presence of the tracked object is confirmed 32. Thus, when the object 12 is in the visual range of the optical device 4, conventional image tracking may be used to classify and track the position of the object. Visual tracking may be performed using conventional techniques. The position, shape and/or form of the object is tracked and monitored. If one or more parameter of the visual object is detected, then an event may be determined. For example, the system may be configured to detect that a person is falling or has fallen down, thus determining a fall event. This processing will be described in detail later.
If, during tracking, the visual presence of the object 12 is not confirmed, the system 2 moves to a projection tracking step 34. As such, the system 2 determines the position, orientation, shape or form of the object using the projection 6 thereof onto one or more surface 14 within the space. This allows tracking of the object 12 without the need for a line of sight between the object 12 and the optical device 4.
The projection tracking process is shown in detail in figure 3.
In a first step 36, the image of the projection 6 is captured. Typically, the image is formed from an individual frame of a video captured by the optical device 4.
In the next step 38, a colour change for each pixel is measured between adjacent neighbours. In the present context, colour may refer to any change in the wavelength (e.g. RGB values) and/or intensity of the pixels. The colour change may be determined across nearest neighbours or across a plurality of adjacent pixels (e.g. all pixels within an n pixel range). The change may only be determined if the magnitude of colour change meets a predetermined criterion (i.e. is above a predetermined magnitude threshold of change). This may eliminate noise etc. The colour change algorithm indicates the boundaries between different objects in the image. For example, a high change in colour would be present at the boundary between a dark shadow and a white wall.
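By way of illustration only, the colour-change measurement of step 38 could be sketched as follows, assuming an RGB frame held as a NumPy array; the magnitude threshold is an illustrative value.

```python
import numpy as np

def colour_change_map(frame_rgb, threshold=30.0):
    """Measure the colour change between each pixel and its right/below
    neighbours and keep only changes above a magnitude threshold (step 38).

    Returns a boolean map marking likely boundaries between regions
    (e.g. the edge between a dark shadow and a white wall).
    """
    img = frame_rgb.astype(np.float32)
    # Euclidean colour difference to the horizontal and vertical neighbour.
    dx = np.linalg.norm(np.diff(img, axis=1), axis=2)   # shape (H, W-1)
    dy = np.linalg.norm(np.diff(img, axis=0), axis=2)   # shape (H-1, W)
    change = np.zeros(img.shape[:2], dtype=np.float32)
    change[:, :-1] = np.maximum(change[:, :-1], dx)
    change[:-1, :] = np.maximum(change[:-1, :], dy)
    return change > threshold   # True at strong colour-change boundaries
```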
The pixels are then grouped or clustered using the change data. This creates a virtual shape indicative of the shape of one or more objects in the image. Suitable clustering algorithms are known. Typically, clustering may involve defining a virtual shape defined by the colour change boundary. Thus, pixels of sufficient similarity are grouped into a cluster. Clustering may be used to select or disregard pixel changes within/outside one or more predetermined criterion. For example, if the magnitude of change is above and/or below a predetermined threshold for a given pixel, then such a pixel may be disregarded or added to a given cluster. Typically, this may be used to exclude noisy or erroneous pixels from the cluster.
Clustering may be used to interpolate and/or extrapolate within the defined shape. For example, clustering may be used to include pixels or groups within the shape which initially exceeded the predetermined magnitude of colour change (i.e. pixels which were initially deemed too dissimilar to neighbouring pixels may be included in the cluster). The system may therefore provide a more accurate representation of the shape of the object.
Further clustering may then be performed to discriminate between projection 6 clusters and clusters of other objects. The clustering may define a meta-cluster of projections (i.e. a group of projection clusters). The meta-cluster may be grouped/clustered based on one or more predetermined criteria. Typically, the meta-clustering may be determined based on a colour profile. For example, shadows typically comprise a dark/grey/black colour. Thus, the meta-cluster may include individual clusters having dark pixels accordingly.
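A minimal sketch of this clustering and meta-clustering, assuming connected components over a greyscale frame as the grouping mechanism; the darkness and size thresholds are illustrative assumptions, not prescribed values.

```python
import cv2
import numpy as np

def shadow_clusters(frame_bgr, dark_threshold=80, min_area=500):
    """Group dark pixels into clusters (connected components) and keep the
    meta-cluster of clusters whose colour profile is shadow-like, i.e.
    predominantly dark regions of sufficient size."""
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    dark_mask = (grey < dark_threshold).astype(np.uint8)

    n_labels, labels, stats, _ = cv2.connectedComponentsWithStats(dark_mask)
    shadow_like = []
    for label in range(1, n_labels):                 # label 0 is background
        area = stats[label, cv2.CC_STAT_AREA]
        if area < min_area:
            continue                                 # discard noisy clusters
        cluster_mask = (labels == label)
        if grey[cluster_mask].mean() < dark_threshold:
            shadow_like.append(cluster_mask)         # add to the meta-cluster
    return shadow_like                               # list of boolean masks
```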
Each projection pixel may be assigned to a cluster based on a spatial position. To identify multiple objects in an image, the system takes a pixel and assigns the pixel to an object (group of pixels/clusters). This is performed by first defining an arbitrary point within the image space. A distance is then measured between the arbitrary point and a given pixel. This is repeated for each pixel. The pixels are then assigned to a cluster based on their distance to the arbitrary point. It can be appreciated that such a regime may be repeated for any number of arbitrary points to define a number of clusters. This allows definition of spatially separated clusters.
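For illustration, this spatial assignment can be sketched as a single nearest-point labelling pass over the projection pixels; the pixel and reference-point coordinates below are illustrative.

```python
import numpy as np

def assign_pixels_to_seeds(pixel_coords, seed_points):
    """Assign each projection pixel to the nearest arbitrary reference point,
    defining spatially separated clusters (one per reference point).

    pixel_coords: (N, 2) array of (row, col) positions of projection pixels.
    seed_points:  (K, 2) array of arbitrary points within the image space.
    Returns an (N,) array of cluster indices in [0, K).
    """
    pixel_coords = np.asarray(pixel_coords, dtype=float)
    seed_points = np.asarray(seed_points, dtype=float)
    # Distance from every pixel to every reference point: shape (N, K).
    distances = np.linalg.norm(
        pixel_coords[:, None, :] - seed_points[None, :, :], axis=2)
    return np.argmin(distances, axis=1)

# Example: two reference points roughly at the top-left and bottom-right.
pixels = np.array([[10, 12], [11, 15], [200, 310], [198, 305]])
seeds = np.array([[0, 0], [240, 320]])
print(assign_pixels_to_seeds(pixels, seeds))   # [0 0 1 1]
```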
Once the projection clusters or the meta-cluster is identified, then the remaining clusters (i.e. those relating to other objects) may be ignored. Data for the remaining clusters may be discarded accordingly.
In the next step 40, the identified projection cluster(s) are segmented. The segmentation process discriminates between projections of different objects, thereby allowing selection of the desired object projection. The segmentation process uses an AI, machine learning, and/or neural network system. The segmentation system is trained to detect the projection of the desired object(s).
Images of the desired projection are input into the system to provide a training model. For example, images of shadows of the tracked object are input into the model. This trains the system to identify a shadow or reflection for a particular object. The system can therefore identify or discriminate projections belonging to the objects.
When the object comprises a human or animal, projections of different conditions of the human or animal may be input into the system. The conditions may comprise a pose, position, gesture or posture of the object. For example, the condition may define a position relating to one or more of: standing; sitting; lying down; lying on the side; lying on the back (supine); lying on the front (prone); sprawled; slumping; slouching; crouching; kneeling; squatting; curled-up or foetal position etc. In some embodiments, the conditions may comprise characteristic movements or sequences of poses, for example, one or more of: walking; running; punching; kicking; waving; shaking; clutching or grabbing. Images of the shadow of a person or animal performing such poses/movements are then input into the training model. The system may therefore be able to identify such conditions accordingly.
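By way of illustration only, such a condition classifier could be sketched as a small convolutional network over binarised shadow masks; PyTorch, the input size and the class list are illustrative assumptions, and the patent does not prescribe a particular architecture or framework.

```python
import torch
import torch.nn as nn

CONDITIONS = ["standing", "sitting", "lying_down", "waving"]  # illustrative classes

class ShadowConditionNet(nn.Module):
    """Classify a 128x128 binarised shadow mask into a pose/condition class."""
    def __init__(self, n_classes=len(CONDITIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, mask):                    # mask: (B, 1, 128, 128)
        x = self.features(mask).flatten(1)
        return self.classifier(x)               # class logits

model = ShadowConditionNet()
dummy_mask = torch.zeros(1, 1, 128, 128)        # placeholder shadow mask
print(CONDITIONS[model(dummy_mask).argmax(dim=1).item()])
```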
In some embodiments, the training data may be synthetic. For example, a virtual shadow may be created using a computer model of a lighting scenario and a model of the object. The object model may be moved to different poses and/or animated to perform characteristic movements. The system can therefore create an essentially infinite number of permutations of projection data. The data can then be input into the training model with or without "real world" image data.
In some embodiments, the training data may be derived from the training data for the visual system. If the shape of the object and the light sources are known, then a virtual projection can be determined accordingly. For example, an image of a person in a particular pose can be input into the system; the system can then generate a virtual shadow of the pose, and then record the virtual shadow. The training data may therefore comprise a hybrid system using real world and virtual data.
Data from the light source(s) 10 determined in step 28 may be used in the segmentation process. For example, if the position of a light source is known, and the position of the object is known, then the position of the corresponding projection may be known. If the object is not selected for tracking, then the corresponding projection can be disregarded during the segmentation process. Similarly, if the position of the desired object is known or can be estimated (e.g. known to be within a certain area), then the projection in that area may be selected.
Once the projections have been segmented 40, the projection for the desired object is identified in the next step 42. This is performed by simply selecting the projections which correspond to the desired object as determined in the segmentation step. Data for the remaining objects, unclassified objects or any object not part of the desired object may be disregarded and/or deleted. The system may therefore process and/or retain as little data as possible. It can be appreciated the identification step 42 may be repeated for each individual desired object. In the present embodiment, shadows relating to a human are identified.
In the next step 44, the orientation of the projection 6 is determined. Typically, this is performed by determining the orientation and/or position of the surface 14 on which the projection 6 is formed. For example, if the surface is a wall, then the system 2 may determine the orientation of the wall and form a virtual plane corresponding to said orientation. The observed projection is then mapped onto the virtual plane.
In the next step 46, the orientation of the projection 6 is virtually adjusted to be mapped to a predefined or normalised plane. For example, the projection is re-mapped to a plane substantially perpendicular to the camera 4. This allows easier comparison of the projections 6 between surfaces 14 having different positions or orientations. Where multiple projections 6 are provided, the projections 6 are normalised. Techniques to adjust the mapping of the projection 6 are known (e.g. using matrix transformations).
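For illustration, this re-mapping could be sketched as a perspective (homography) warp, assuming the four image-space corners of the surface region carrying the projection are known; OpenCV is used as one possible implementation of the matrix transformation mentioned above.

```python
import cv2
import numpy as np

def normalise_projection(frame, surface_corners_px, out_size=(256, 256)):
    """Warp the surface region carrying the projection onto a normalised,
    fronto-parallel plane so projections on differently oriented surfaces
    can be compared directly.

    surface_corners_px: four image-space corners of the surface region,
    ordered top-left, top-right, bottom-right, bottom-left.
    """
    w, h = out_size
    src = np.asarray(surface_corners_px, dtype=np.float32)
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]],
                   dtype=np.float32)
    homography = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, homography, out_size)
```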
At this stage, the projection(s) 6 have been clustered, segmented, and the desired projections have been isolated. For example, at this stage, only shadows relating to the tracked person are provided. The isolated projections are monitored to determine if the shape/form of the projections meets one or more predetermined criteria. In the present embodiment, the isolated projections are monitored to determine whether the projection indicates a distress case for a person. The distress case may be indicative of one or more conditions of the tracked person, for example: falling; collapsing; tripping; lying down; waving; shouting; screaming; crying; motions indicative of a heart attack; motions indicative of a stroke; heavy or laboured breathing; clutching at the chest; or any of the other predetermined conditions discussed above. The distress conditions thus generally relate to situations where intervention may be required. In a specific embodiment, the distress condition relates to falling down onto the floor. The system may therefore monitor if an object moves from an upright position toward a lying position.
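As a minimal non-learned sketch of the upright-to-lying criterion, the orientation of the isolated, normalised shadow mask could be watched over a short window of frames via its bounding-box aspect ratio; the thresholds are illustrative, and a trained classifier as described above may be used instead.

```python
import numpy as np

def is_upright(shadow_mask, aspect_threshold=1.2):
    """Return True if the shadow silhouette is taller than it is wide,
    None if no projection pixels are present in the mask."""
    rows, cols = np.nonzero(shadow_mask)
    if rows.size == 0:
        return None
    height = rows.max() - rows.min() + 1
    width = cols.max() - cols.min() + 1
    return (height / width) > aspect_threshold

def fall_detected(mask_sequence, max_transition_frames=15):
    """Flag a possible fall: the silhouette is upright at some frame and
    becomes non-upright within the next `max_transition_frames` frames."""
    states = [s for s in (is_upright(m) for m in mask_sequence) if s is not None]
    for i, state in enumerate(states):
        window = states[i + 1:i + 1 + max_transition_frames]
        if state and (False in window):
            return True
    return False
```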
In some embodiments, the predetermined criteria may comprise a position or location of the object 12. A boundary or "geofence" may be provided by the operator. If the object then moves outside the boundary, the predetermined criteria may be met. For example, this may be used to monitor the location of children or pets. Alternatively, this could be used to monitor the position of a valuable object, or heavy machinery (e.g. in a warehouse or factory environment). In some embodiments, the predetermined criterion may comprise the orientation of the object. For example, this could be used to determine if a door, window or other barrier is in an open or closed position.
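For illustration, the geofence criterion could be sketched as a point-in-polygon test on the object's estimated floor position; the boundary coordinates below are illustrative placeholders.

```python
from matplotlib.path import Path  # point-in-polygon test via ray casting

# Operator-defined boundary (e.g. the garden), in floor-plan coordinates.
geofence = Path([(0, 0), (10, 0), (10, 8), (0, 8)])

def outside_geofence(object_position):
    """Return True if the object's estimated position breaches the boundary,
    i.e. the positional predetermined criterion is met."""
    return not geofence.contains_point(object_position)

print(outside_geofence((4.0, 3.0)))    # False: inside the boundary
print(outside_geofence((12.0, 3.0)))   # True: outside, generate an event
```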
If the predetermined condition is met, then an event is generated indicative of the predetermined condition. The event may comprise a notification. The notification may be transmitted to the remote device 20 and/or the image processing system 16. The notification may comprise an indication of any of: the tracked object 12 (e.g. an ID thereof); a distress condition; a location/orientation; time/date; a copy of the image generating the event; an image/video captured before, during or after event detection; a live video feed from one or more camera in the system. The notification may provide a prompt to the user to suggest an appropriate response.
For example, the notification may prompt the user to call the emergency services, an emergency contact, or indicate they should visit the patient etc. The notification may provide a prompt for the user to view an image/video of the event or to watch a live video feed.
The notification may be provided in any suitable form, format or protocol. The notification may be provided by one or more of: automated phone call; SMS; email; internet messaging service; push notification; Bluetooth (RTM) message etc. The remote device 20 may comprise suitable software (e.g. an app) to receive such messages. The software may also allow input, modification and/or programming of the system 2.
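By way of illustration only, dispatch of such a notification could be sketched as a call to a generic push/webhook endpoint; the URL, payload fields and suggested action below are placeholder assumptions and do not correspond to any particular service, and SMS or email gateways could equally be used.

```python
import json
import urllib.request

NOTIFY_URL = "https://example.com/monitoring/notify"   # placeholder endpoint

def send_event_notification(object_id, condition, location, image_url=None):
    """Send a notification describing a detected event to the remote device."""
    payload = {
        "object_id": object_id,          # identifier of the tracked object
        "condition": condition,          # e.g. "fall detected"
        "location": location,            # e.g. "living room"
        "image_url": image_url,          # optional captured frame or live feed
        "suggested_action": "Call an ambulance or check on the person",
    }
    request = urllib.request.Request(
        NOTIFY_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status == 200

# Example: send_event_notification("resident-01", "fall detected", "kitchen")
```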
Examples of the notifications are shown in figures 4 and 5. In figure 4, the notification indicates that a human has tripped or fallen. The notification prompts the user to call an ambulance. The notification includes a prompt to watch a live feed from the camera. In figure 5, the notification informs the user their pet is outside a predetermined boundary. The user is prompted to call a neighbour.
In some embodiments, the system 2 may automatically notify the emergency services in response to the detection of an event. For example, in life threatening situations, the system may automatically contact the emergency services. The system 2 may provide location data (e.g. using Emergency Location Service/Advanced Mobile Location) and/or an indication of the event.
The monitoring system 2 may be operatively connected to an alarm system. Upon detection of the event, the alarm system may be activated. The alarm system may comprise an audible and/or visual signal.
It can be appreciated the above process can be repeated as required. The monitoring system may operate in real-time. The projection tracking system may stop once the object moves back into the line of sight of the camera 4. The system may then use conventional optical tracking. The system 2 may be trained to detect characteristic distress conditions as previously described. The system 2 may be trained using direct images of the poses/movements accordingly. If the object moves out of the line of sight again, then the projection monitoring is activated. Visual (i.e. line of sight) and projection monitoring may be used in a seamless fashion (i.e. the system automatically determines which mode to operate in and/or data is shared therebetween).
The system may determine that the projection is undetectable. For example, the projection 6 may be overly distorted, obscured or otherwise undiscernible. A notification may be provided to the remote device, for example, to prompt reconfiguration or to indicate tracking cannot be performed.
The visual system may be used to identify the object in the projection monitoring system. Thus, even without visual confirmation, a target object can be tracked. This may be useful where multiple tracked objects are provided in an environment. For example, if the location and/or trajectory of the object is known in the visual system, then such location/trajectory data can be used to determine or estimate whether a projection 6 belongs to a particular object. Typically, this is achieved by extrapolating trajectory data to provide an estimated location. This can then be correlated with the position of the light sources and projections accordingly.
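As a minimal sketch of this extrapolation, the last observed motion from the visual tracker can be continued linearly to estimate where the object is once it has left the line of sight; the timestamps and positions below are illustrative values.

```python
import numpy as np

def extrapolate_position(timestamps, positions, query_time):
    """Linearly extrapolate the object's last observed motion to estimate its
    position after it leaves the camera's line of sight. The estimate can
    then be correlated with the light sources and observed projections."""
    timestamps = np.asarray(timestamps, dtype=float)
    positions = np.asarray(positions, dtype=float)        # shape (N, 2)
    dt = timestamps[-1] - timestamps[-2]
    velocity = (positions[-1] - positions[-2]) / dt        # last observed velocity
    return positions[-1] + velocity * (query_time - timestamps[-1])

# Last two visual observations: the person walking towards a doorway.
t = [10.0, 11.0]
xy = [[2.0, 3.0], [2.5, 3.0]]
print(extrapolate_position(t, xy, 13.0))   # estimated position ~[3.5, 3.0]
```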
Additionally or alternatively, identification may be performed deductively. For example, if the system determines a plurality of objects are in an environment, and one or more objects are in visual range, then the system can determine that one or more of the remaining tracked objects may be providing the tracked projections accordingly.
The benefit of the present system 2 is exemplified in figure 6. The line of sight 52 between the camera 4 and the object 12 is broken by wall 14a. However, the camera 4 is able to observe the shadows 6a, 6b created by the light sources 10a, 10b.
The camera 4 is therefore able to effectively observe the object despite there being no line of sight. The camera may also observe partial shadows 6b even if the projection itself is partially obscured. The system 2 also observes a plurality of shadows 6; thus, even if the shadow 6b were completely obscured, the other shadow 6a is still observable. Given that most environments comprise multiple light sources, the resilience of the system is increased.
The present system 2 may be used in environments where one or more objects requires monitoring and/or tracking for example: private residences or gardens; elderly care homes; hospitals, hospices or other medical facilities; nurseries (kindergartens), schools, universities or other educational establishments; prisons or other secure areas; and/or warehouse or logistical areas. The system 2 may be used to detect accidents, injuries, medical emergencies of persons and/or vehicles within the observed environments. The system 2 may be used to detect fighting or rioting. The system 2 may be used to detect theft, trespassing or other criminal activity.

Claims (25)

  1. A method of monitoring an object comprising the following steps: a) capturing an image of a surface comprising one or more projection of the object; b) identifying one or more projection within the image, the projection including at least one projection of the object; c) isolating the projection of the object from the one or more identified projection in the image; and d) determining if the projection of the object meets one or more predetermined criterion.
  2. A method according to claim 1, where the projection comprises a shadow of the object.
  3. A method according to any preceding claim, where the predetermined criterion is indicative of a characteristic pose, gesture, posture or movement of the object.
  4. A method according to any preceding claim, where the predetermined criterion is indicative of a distress condition of the object.
  5. A method according to any preceding claim, where the distress condition comprises one or more of: falling; collapsing; tripping; lying down; and/or waving.
  6. A method according to any preceding claim, where the predetermined criterion is indicative of a position and/or orientation of the object.
  7. A method according to any preceding claim, where the object comprises a human or animal.
  8. A method according to any preceding claim, where step b) comprises clustering the captured image to identify one or more shape or form within the image having the same or similar characteristics.
  9. A method according to claim 8, where the characteristics comprise one or more of: colour; brightness; contrast.
  10. A method according to any preceding claim, where step c) comprises segmenting the projections to discriminate between projections relating to the object and projections relating to other objects.
  11. A method according to claim 10, where discriminating projections relating to the object comprises inputting data relating to the object and/or a projection thereof, and comparing the projection with said data relating to the object and/or a projection thereof.
  12. A method according to any preceding claim, where step d) comprises discriminating between projections relating to the predetermined criteria and projections not relating to the predetermined criteria.
  13. A method according to claim 12, where discriminating projections relating to the predetermined criteria comprises inputting data relating to the object and/or a projection thereof in a characteristic pose, posture, gesture or movement of the object, and comparing the projection with said data relating to the object and/or a projection thereof.
  14. A method according to any of claims 11-13, where the input data relating to the object and/or a projection thereof comprises static or moving images.
  15. A method according to any of claims 10-14, where the discrimination is performed using a neural network and/or machine learning.
  16. A method according to any preceding claim, where after step c) the orientation of the isolated projection is adjusted.
  17. A method according to claim 16, where the orientation is adjusted such that the projection lies on a predetermined plane.
  18. A method according to any preceding claim, where the system is configured to provide a notification or alarm in response to affirmative determination of the predetermined criterion.
  19. A method according to claim 18, where the system provides a notification to a remote device.
  20. A method according to any preceding claim, where the system is configured to capture visible and/or infrared light.
  21. A method according to any preceding claim, where the system is configured to visually monitor the object when in visual range.
  22. A method according to claim 21, where the position and/or trajectory of the object is tracked via the visual system when the object is in visual range, and the position and/or trajectory data is used to determine the position or identity of the object in the projection monitoring system.
  23. A method according to any preceding claim, where the system is configured to classify one or more object in visual range, and the system is configured to monitor one or more object class.
  24. A data carrier or computer storage medium comprising machine instructions for monitoring an object, comprising the following steps: a) capturing an image of a surface comprising one or more projection of the object; b) identifying one or more projection within the image, the projection including at least one projection of the object; c) isolating the projection of the object from the one or more identified projection in the image; and d) determining if the projection of the object meets one or more predetermined criterion.
  25. A system for monitoring an object, the system comprising: an imaging device configured to capture an image of a surface comprising one or more projection of the object; and a processing system configured to: a) identify one or more projection within the image, the projection including at least one projection of the object; b) isolate the projection of the object from the one or more identified projection in the image; and c) determine if the projection of the object meets one or more predetermined criterion.
GB2212022.4A 2022-08-17 2022-08-17 Monitoring system Pending GB2621822A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2212022.4A GB2621822A (en) 2022-08-17 2022-08-17 Monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2212022.4A GB2621822A (en) 2022-08-17 2022-08-17 Monitoring system

Publications (2)

Publication Number Publication Date
GB202212022D0 GB202212022D0 (en) 2022-09-28
GB2621822A true GB2621822A (en) 2024-02-28

Family

ID=84546429

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2212022.4A Pending GB2621822A (en) 2022-08-17 2022-08-17 Monitoring system

Country Status (1)

Country Link
GB (1) GB2621822A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190012537A1 (en) * 2015-12-16 2019-01-10 Valeo Schalter Und Sensoren Gmbh Method for identifying an object in a surrounding region of a motor vehicle, driver assistance system and motor vehicle
DE102017206974A1 (en) * 2017-04-26 2018-10-31 Conti Temic Microelectronic Gmbh Method for the indirect detection of a covered road user
DE102017010731A1 (en) * 2017-11-20 2018-05-30 Daimler Ag Method for detecting an object

Also Published As

Publication number Publication date
GB202212022D0 (en) 2022-09-28

Similar Documents

Publication Publication Date Title
US10936655B2 (en) Security video searching systems and associated methods
US7106885B2 (en) Method and apparatus for subject physical position and security determination
US20170039455A1 (en) Computer-vision based security system using a depth camera
US9824570B1 (en) Visible-light-, thermal-, and modulated-light-based passive tracking system
CN100450179C (en) Household safe and security equipment for solitary old person based on omnibearing computer vision
Fleck et al. Smart camera based monitoring system and its application to assisted living
US20180315200A1 (en) Monitoring system
US9740921B2 (en) Image processing sensor systems
US8427324B2 (en) Method and system for detecting a fallen person using a range imaging device
US9720086B1 (en) Thermal- and modulated-light-based passive tracking system
US20110043630A1 (en) Image Processing Sensor Systems
US20030058111A1 (en) Computer vision based elderly care monitoring system
EP2390820A2 (en) Monitoring Changes in Behaviour of a Human Subject
US20120098927A1 (en) Omni-directional intelligent autotour and situational aware dome surveillance camera system and method
US10140832B2 (en) Systems and methods for behavioral based alarms
US20210241597A1 (en) Smart surveillance system for swimming pools
Ahmad et al. Energy efficient camera solution for video surveillance
US9921309B1 (en) Visible-light and sound-based passive tracking system
JP4610005B2 (en) Intruding object detection apparatus, method and program by image processing
US20080211908A1 (en) Monitoring Method and Device
US11509831B2 (en) Synchronous head movement (SHMOV) detection systems and methods
JP2020145595A (en) Viewing or monitoring system, or program
US20230172489A1 (en) Method And A System For Monitoring A Subject
GB2621822A (en) Monitoring system
CN110807345A (en) Building evacuation method and building evacuation system