GB2608639A - Threat assessment system

Threat assessment system

Info

Publication number
GB2608639A
Authority
GB
United Kingdom
Prior art keywords
threat
data
scenario
behaviour
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2109885.0A
Other versions
GB202109885D0 (en)
Inventor
Walton Paul
Knox Robin
Tylecek Radim
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Boundary Tech Ltd
Original Assignee
Boundary Tech Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Boundary Tech Ltd filed Critical Boundary Tech Ltd
Priority to GB2109885.0A priority Critical patent/GB2608639A/en
Publication of GB202109885D0 publication Critical patent/GB202109885D0/en
Priority to EP22740468.8A priority patent/EP4367653A1/en
Priority to PCT/GB2022/051775 priority patent/WO2023281278A1/en
Priority to PE2024000046A priority patent/PE20240511A1/en
Publication of GB2608639A publication Critical patent/GB2608639A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19613Recognition of a predetermined image pattern or behaviour pattern indicating theft or intrusion
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B29/00Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
    • G08B29/18Prevention or correction of operating errors
    • G08B29/185Signal analysis techniques for reducing or preventing false alarms or for enhancing the reliability of the system
    • G08B29/186Fuzzy logic; neural networks
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B31/00Predictive alarm systems characterised by extrapolation or other computation using updated historic data

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Automation & Control Theory (AREA)
  • Artificial Intelligence (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Method of determining suspicious behaviour (degree of threat), comprising: processing an image 600 to identify 608 and track 610 an object, thereby generating first order data; processing first order data over time to generate second order data 612; estimating a rapid first threat level from the first order data 614, 616; estimating a slower second threat level from the second order data 622; and generating threat assessment based on the first and second threat levels 618. Generating second order data may comprise processing the first order data using a probabilistic behaviour inference algorithm. Second order data may comprise a scenario probability score which is compared to predefined probabilistic behaviour scenarios (Figs. 11-20) to determine a most probable behaviour and hence a threat level. A master threat score may be based on: motion intensity; a rapid threat score; a threat level of a probable behaviour scenario; and/or a threat level associated with a state within a behaviour scenario. An action 626-634, such as an alarm, may be performed in dependence on the master threat score and confidence level. Low confidence threats may be subject to user feedback in order to refine the model parameters 630, 636, 638.

Description

THREAT ASSESSMENT SYSTEM
The present invention relates to a method of determining a degree of threat associated with activity in a scene observed by a camera, and an alarm system. The present invention relates in particular to a method and system for predicting the occurrence of a burglary or similar unauthorised breach of a property.
Background to the invention
It is nowadays common to have cameras installed at homes and offices to guard against unwanted intrusion, for example to commit burglary. However, it is typically too expensive to have cameras monitored continuously in real-time, and they can typically only be used after the event to attempt to identify intruders who have successfully carried out a burglary.
"Smart" camera systems exist for home security which provide video streaming of footage to users' phones. Image analysis may be provided, but is typically limited to facial recognition of visitors, for example via doorbell cameras. Such devices do not predict the likelihood of burglary activity, and they do not provide links to any security monitoring and response service which could deal with incidents as they occurred.
This is in part because current object detection algorithms and object tracking algorithms do not provide a sufficient level of accuracy to be relied on for automated alarms. False positives were found to be generally more than 20%, leading to a significant overhead in reviewing and actioning potential threats detected.
The present invention seeks to address problems such as these in the prior art.
Summary of the invention
In a first aspect of the invention there is provided a method of determining a degree of threat associated with activity in a scene observed by a camera (or a plurality of cameras or other sensing devices of any appropriate type), the method comprising: receiving image data from the camera; processing the image data to generate first order image analysis data, (optionally) including at least one of (and preferably both of): object data identifying at least one object in the scene, and tracking data encoding the movement of said at least one object within the scene; (optionally) processing the first order image analysis data to generate second order image analysis data dependent on the evolution of the first order image analysis data over time; estimating a first threat level associated with the first order image analysis data; estimating a second threat level associated with the second order image analysis data; and generating threat assessment data indicative of the degree of threat associated with activity in the scene in dependence on both the first and second threat levels; and outputting the threat assessment data. Preferably said objects include actors (such as people, animals or vehicles) and items (such as packages, tools, and so on). Preferably the method further comprises identifying zones in the scene, and generating tracking data comprises determining the movement of said at least one object between different zones and/or identifying a zone that an object is currently occupying.
It was discovered that determining a first and second threat level associated with first order and second order image analysis and generating threat assessment data in dependence on both the determined threat levels allowed the use of relatively sophisticated second order analysis methods (which typically take a long time to provide useful results), while providing a useable threat assessment in the meantime from the first order analysis methods (which typically take a shorter time to provide useful results, albeit not necessarily as accurate as the results from the second order analysis). As time progresses, the combination of threat levels can provide more accurate results than using a single threat analysis alone (whether first or second order). It should be noted that although the term 'threat' is used herein, the present invention may extend to estimating levels of suspicion, an abnormality or deviation from expected results, or any desired type of behaviour (positive, negative or neutral in any appropriate context) and so on.
Preferably processing the image data to generate first order image analysis data takes on average less time than processing the first order image analysis data to generate second order image analysis data, and preferably the system is configured such that processing the image data to generate first order image analysis data is substantially instantaneous (for example the first order image analysis data is generated and available for further processing within 1, 10, 100 or 1,000 milliseconds of receipt of the corresponding image data). Preferably the second order image analysis data has a higher dimensionality than the first order image analysis data and preferably processing the image data takes into account the event activity substantially in its entirety, for example starting with the detection of an actor in the scene.
Preferably generating the second order image analysis data comprises processing the first order image analysis data using a probabilistic inference algorithm operating on at least one model of behaviour. Preferably a model of behaviour is any appropriate model which a probabilistic inference algorithm can be operated with and which in some aspect models events which may take place in the scene. Preferably at least two models of behaviour are provided. In this case, preferably at least one said model corresponds to a benign scenario, and at least one said model corresponds to a non-benign scenario, thereby allowing relative estimated probabilities of benign and non-benign outcomes to be compared. This can provide more effective insight than (for example) determining only the absolute estimated probability of a single scenario (or single type of scenario) occurring. Preferably the method further comprises comparing the likelihood of at least one benign scenario with the likelihood of at least one non-benign scenario to determine whether a threat is present. Preferably 'benign' connotes having a relatively low threat.
In a related aspect of the invention there is provided a method of determining a degree of threat associated with activity in a scene observed by a camera, the method comprising: receiving image data from the camera (or a plurality of cameras or other sensing devices of any appropriate type); processing the image data to: (optionally) generate object data identifying at least one object in the scene; (optionally) generate tracking data encoding the movement of said at least one object within the scene; and processing the object data and tracking data with a probabilistic inference algorithm (such as a particle filter) operating on at least one model of behaviour associated with a threat to generate threat assessment data indicative of the degree of threat; and outputting the threat assessment data.
It was discovered that processing object data and tracking data with a probabilistic inference algorithm (such as a particle filter) operating on a model of behaviour provided a significant improvement in detection rates and a reduction in false positives, in part arising from the ability to track the event history and to do so in a relatively nonlinear fashion.
Preferably the method is carried out at the camera or proximate to the camera (electronically or electromagnetically in contact with the camera, for example at the same premises or within Bluetooth(RTM), Wi-Fi(RTM) or physically networked proximity of the camera). This can allow a more responsive (and effectively substantially instantaneous) determination of threat level, and can allow the system to be protected against communications or server outages. This can be especially relevant in the case of home security in the event that communications links are deliberately severed or jammed, for example. However, the processing can be partially or fully de-localised as desired (for example to a cloud or other remote server). In one variant, the relatively more straightforward first order analysis is carried out locally and the relatively more difficult second order analysis is carried out remotely.
The method preferably further comprises receiving scenario data encoding a plurality of probabilistic behaviour scenarios, whereby each scenario is associated with a respective threat level; and generating the second order image analysis data comprises generating scenario probability data encoding the probability of occurrence of each scenario, and processing the scenario probability data in conjunction with the threat levels associated with the scenarios, whereby the second threat level depends at least in part on the scenario probability data. The scenario probability data typically varies over time, and the estimated probability of occurrence of each scenario may likewise vary over time; accordingly the threat assessment data typically may vary over time. The term 'probabilistic behaviour scenario' preferably connotes any type of model of behaviour relevant to the scene which can be used with a probabilistic inference algorithm (and in particular a particle filter).
By providing a plurality of behaviour scenarios, the system can more effectively discriminate between suspicious or threatening behaviour and other forms of behaviour that may occur in the scene. The association of a threat level with each respective scenario can provide a more nuanced assessment of the threat level.
Furthermore, preferably each scenario may make a contribution to the estimated degree of threat in approximate proportion to its estimated probability of occurrence, or similar. This can again provide a more nuanced estimation of threat.
Preferably at least one scenario is associated with burglary and has an appropriate level of threat associated with it. Other scenarios may relate to benign activity, such as residents or guests arriving, deliveries being made, animals wandering into the scene, and so on, and can make an appropriate negative contribution to the degree of threat if estimated to be present. Other scenarios may also include other threatening activity of any appropriate kind.
The method preferably further comprises processing the scenario probability data to determine the most probable behaviour scenario, and outputting an indication of the determined most probable behaviour scenario. For example, if a burglary is estimated to be the most probable behaviour scenario, a simple indication of this can be indicated, in addition to or instead of a more generic threat indication.
The method may further comprise generating an estimate of uncertainty of the estimate of the most probable behaviour scenario, which is preferably a mean log deviation or any other appropriate measurement or calculation of uncertainty. The estimation of the uncertainty of the estimate in addition to the estimate itself can assist in decision making (whether automated or manual).
Preferably each behaviour scenario is encoded as a directed (or undirected) graph having nodes representing states and edges representing transitions between the states, wherein each state may be associated with a respective threat level, and wherein processing the object data and tracking data comprises: generating state probability data encoding the probability of occurrence of each state within a scenario; and processing the state probability data in conjunction with the threat levels associated with the states, whereby the generated threat assessment data depends at least in part on the state probability data.
Thus, the probabilistic inference algorithm can more effectively determine a threat level by making use of the more granular threat predictions associated with specific individual nodes within each scenario model. Preferably the probabilistic inference algorithm uses a Hidden Markov Model, and more preferably a Hidden Semi-Markov Model, to make predictions based on a history of observations. Preferably the filter instantiates a model for each behaviour and each actor (such as a person) that appears in the scene.
The method may further comprise receiving object type data encoding a plurality of object types. In this case, it may yet further comprise receiving scene element weighting data encoding the importance to be placed on interactions between first and second said object types. These interactions can be assigned a positive or negative value to indicate the relative associated levels of threat. Thus actors can be weighted with respect to items, and actors can be weighted with respect to each other. Typically the object type data is a sparse array or set of arrays.
The method may also further comprise receiving location weighting data encoding the importance to be placed on the location of said at least one object within the scene. The weighting may be as aforesaid. The method may yet further comprise receiving attribute weighting data encoding the importance to be placed on detected attributes of said at least one object, and likewise.
The method may further comprise generating a rapid (preferably substantially instantaneous) threat score ('instant threat scoring') in dependence on at least one of: current interactions between identified objects in the scene, locations of identified objects, and attributes of identified objects. Preferably all three features are used, and are combined using the weighting as aforesaid. The rapid (or 'instantaneous') threat score is preferably provided without using the probabilistic inference algorithm and/or only using at least one of the object identification and motion tracking steps, allowing a near instantaneous measurement of threat to be provided on demand, and to provide a secondary reference point besides the probabilistic inference algorithm, for robustness.
Preferably the method further comprises generating a master threat score based on at least one and preferably two, three or four of: a motion intensity (determined from the tracking data, for example), a rapid (for example substantially instantaneous) threat score (for example as aforesaid), a threat level associated with the probability of a behaviour scenario occurring (for example as aforesaid), and a threat level associated with the probability of a state occurring within a behaviour scenario (also as aforesaid, for example).
The method may further comprise generating a confidence level associated with the master threat score, for example in dependence on the confidence levels associated with each of the behaviour scenarios/models.
The method preferably further comprises selecting an action in dependence on both the master threat score and confidence level, and causing the action to be carried out. The action may be selected from at least one of: waiting for significant motion, analysing existing motion, obtaining further information from the scene, alerting a user of possible suspicious behaviour, challenging a suspicious actor, and generating an alarm signal based on an imminent threat. In addition, the method may further comprise providing images or other data (from sensors or otherwise) relating to an estimated detected threat. The method may further comprise providing an indication of threat level, which may be visual (for example, in the form of a traffic light style indicator, a numerical indicator, a flashing light, or otherwise), audio, or otherwise. In one aspect, the camera or a device in the proximity of the scene and/or camera can be configured to issue visual, audio or other warnings in order to act as a deterrent.
The method may further comprise routing the output data (comprising any appropriate data as aforesaid) into a machine learning system and updating any data accessed when processing the object data and tracking data in dependence on the output of the machine learning system. In this regard, the method may further comprise inputting user feedback into the machine learning system in response to alerting the user or generating an alarm signal. In particular, the method may comprise using local machine learning, for example to optimise or tune state machines (or other system components) for a specific scene, and global machine learning, for example to learn new behaviour scenarios and scenes or types of scenes.
This can allow model parameters to be updated and to allow a system to be more closely tailored to a specific set of objects, items, locations, events, and so on.
In another aspect of the invention there is provided an alarm system, comprising at least one processor and associated memory, said memory storing computer program code which, when executed by said at least one processor, causes the alarm system to carry out any method as aforesaid.
Any aspect as aforesaid may be provided specifically for a consumer or residential home, and/or specifically for the purpose of detecting the threat of burglary. This feature is provided independently.
Accordingly, in another aspect, there is provided a method of predicting the likelihood of burglary of a consumer home, comprising: receiving image data from one or more cameras and identifying zones related to specific observation areas within the field of view; utilising image analysis software to identify the objects within the image data; tracking the movement of the objects identified across the identified zones; applying a particle filter to predict and compare the tracked movement and identified objects with modelled behaviour scenarios in order to determine the probability of a burglary threat; (optionally) raising an alarm/alert to a connected 24/7 remote monitoring centre providing the images where a threat was detected with the threat conditions that caused the alert highlighted; and activating the alarm system based on the threat indication.
Although the embodiments of the invention described above with reference to the drawings may comprise computer-related methods or apparatus, the invention may also extend to program instructions, particularly program instructions on or in a carrier, adapted for carrying out the processes of the invention or for causing a computer to perform as the computer apparatus of the invention. Programs may be in the form of source code, object code, a code intermediate source, such as in partially compiled form, or any other form suitable for use in the implementation of the processes according to the invention. The carrier may be any entity or device capable of carrying the program instructions.
Thus, there is specifically provided in a further aspect of the invention a non-transitory computer readable medium encoding computer program code which, when executed on at least one processor of a computer, causes the computer (or any appropriate combination of computers, with appropriate distribution of the computer program code) to carry out any appropriate method as aforesaid.
For example, the carrier may comprise a storage medium, such as a ROM, for example a CD ROM or a semiconductor ROM, or a magnetic recording medium, for example a floppy disc, hard disc, or flash memory, optical memory, and so on. Further, the carrier may be a transmissible carrier such as an electrical or optical signal which may be conveyed via electrical or optical cable or by radio or other means. When a program is embodied in a signal which may be conveyed directly by cable, the carrier may be constituted by such cable or other device or means.
Although various aspects and embodiments of the present invention have been described separately above, any of the aspects and features of the present invention can be used in conjunction with any other aspect, embodiment or feature where appropriate. For example apparatus features may where appropriate be interchanged with method features. References to single entities should, where appropriate, be considered generally applicable to multiple entities and vice versa. Unless otherwise stated herein, no feature described herein should be considered to be incompatible with any other, unless such a combination is clearly and inherently incompatible. Accordingly, it should generally be envisaged that each and every separate feature disclosed in the introduction, description and drawings is combinable in any appropriate way with any other unless (as noted above) explicitly or clearly incompatible.
Description of the Drawings
An example embodiment of the present invention will now be illustrated with reference to the following Figures in which: Figure 1 is a schematic of a threat assessment system for detecting threats in a scene; Figure 2 is a flowchart illustrating the basic operation of the system of Figure 1; Figure 3 is a flowchart illustrating an alternative mode of operation of the system of Figure 1; Figure 4 is a flowchart illustrating a further alternative mode of operation of the system of Figure 1; Figure 5 is a further schematic showing the operation of the system of Figure 1 at a software architecture level; Figure 6 is a schematic showing the operation of the system of Figure 1 in more detail; Figures 7(a) to 7(d) show plots of various states of the system of Figure 1 for four different scenarios when a "break-in" script is run; Figures 8(a) to 8(d) show plots of various states of the system of Figure 1 for four different scenarios when a "courier" script is run; Figures 9(a) to 9(d) show plots of various states of the system of Figure 1 for four different scenarios when an "arrive" script is run; Figures 10(a) to 10(c) show plots of the combined outputs of the scenarios and scripts shown in Figures 7, 8 and 9; Figure 11 is a schematic of a postman/delivery behaviour scenario; Figure 12 is a schematic of a courier behaviour scenario; Figure 13 is a schematic of a visitor behaviour scenario; Figure 14 is a schematic of a resident arriving behaviour scenario; Figure 15 is a schematic of a window cleaner behaviour scenario; Figure 16 is a schematic of a burglar behaviour scenario; Figure 17 is a schematic of a gardener behaviour scenario; Figure 18 is a schematic of a rubbish collector behaviour scenario; Figure 19 is a schematic of a builder behaviour scenario; Figure 20 is a schematic of an animal behaviour scenario; Figure 21 is a flowchart of a variant of the threat assessment system; Figure 22 is a screenshot from a threat assessment system in operation; and Figure 23 is a further screenshot from a threat assessment system in operation.
Detailed Description of an Example Embodiment
An embodiment of a process and system for assessing a threat level in a scene will now be described in overview, followed by a more detailed description.
Figure 1 is a schematic of a threat assessment system for detecting threats in a scene. The threat assessment system 100 includes at least one processor 102, data storage 104, program storage 106, and appropriate input/output device or devices 108. The input/output device 108 receives a video feed from one or more cameras 122 in a scene of interest 120. The system 100 is able to trigger an alarm system 130, which is typically a 24/7 security response company. In addition (or alternatively) the system 100 may transmit to one or more entities that may respond and/or to the cloud or any other appropriate storage or processing equipment: complete or partial footage, screen grabs or any other appropriate collected or computed data. A preferred application of the system 100 is to determine when a burglary event is being attempted or is in progress, or to determine when suspicious behaviour is occurring prior to a burglary attempt.
Figure 2 is a flowchart illustrating the basic operation of the system of Figure 1. In step S200, video is captured of a scene. In step S202, the video (or rather, one or more frames of the video) is processed to generate first order image analysis data. Specifically, the video is processed to identify objects in the scene and movement of those (and any other appropriate) objects is tracked (though in a variant only one of these steps is carried out). In step S204, the first order image analysis data is processed (for example using a probabilistic inference algorithm, such as a particle filter associated with a behaviour model) to generate second order image analysis data. In step S206, a first and second threat level associated with the first and second order image analysis data respectively are estimated. In step S208, a threat assessment of an appropriate type is generated in dependence on the first and second threat levels, and is then output. This output may for example take the form of an alarm signal, or an alert signal, or any other appropriate output type.
Figure 3 is a flowchart illustrating an alternative mode of operation of the system of Figure 1. As before, video of the scene is captured (S300). In step S302, the video is processed (S302) to identify objects in the scene. Movement of objects is tracked (S304), and a probabilistic inference algorithm is applied to the tracked movement and identified objects to estimate the probability of occurrence of each of a plurality of probabilistic behaviour scenarios (S306). An indication of the scenario most likely to have occurred is then output (S308), though in variants of this embodiment the individual estimated probabilities of all or some of the scenarios are output instead (or as well).
Figure 4 is a flowchart illustrating a further alternative mode of operation of the system of Figure 1. Steps S400 to S404 proceed as in Figure 3. In step S406, a particle filter is applied to the tracked movement and identified objects to estimate the probability of occurrence of each of a plurality of probabilistic behaviour scenarios and the overall probability of threat outcome. An indication of the overall likelihood of threat is output (S408) as well as (optionally) an indication of the scenario deemed most likely to have occurred. As the state of the scene changes over time, the output may be output at a predetermined time or times, or relatively continuously, and may vary as time progresses. This is true of all of the other embodiments.
Figure 5 is a further schematic showing the operation of the system of Figure 1 at a software architecture level. In this case, two cameras 502, 504 feed into a motion detection module 506, which determines whether any activity is taking place at all. The execution/signal path then proceeds to the object detection and classification module 508, which then feeds into the threat prediction module 510, comprising an instant threat scoring module 512 and a scenario modelling module 514. A threat alarm and interaction module 516 can be activated by the threat prediction module if required. The operation of the components of this system is described in more detail below.
Figure 6 is a schematic showing the operation of the system of Figure 1 in more detail.
In block 600, video is captured. If motion is detected (602), motion event characteristics are extracted (604). If these characteristics are above a threshold (606), object and person detection is carried out (608). Detected objects are then tracked through the scene (610). Object and scene attributes are then extracted (612). Threat detection is carried out (614). In parallel: an instantaneous threat probability is evaluated (616), and state scenario probabilities are computed (622). The outputs of these blocks 616, 622 are used to evaluate a combined threat probability (618) and the threat probability is compared to an alert threshold (620). A threat decision algorithm is run (624) and an appropriate action from the set of actions 626-634 is selected. The actions comprise a standby mode (626), a monitor mode (628), a user verification/alert stage (630), an alert notification (632) and triggering an alarm (634). Feedback on false alarms can be taken (636), which is used to learn new model parameters (638). Feedback can be provided by confirming the alerted event activity does not pose a real threat by means of a user mobile application, an alarm receiving centre interface, or further offline analysis of the data on the cloud server.
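The flow of Figure 6 can be summarised as a per-frame processing loop. The sketch below is a minimal illustration of that loop, assuming hypothetical detector, tracker and scoring callables; the thresholds and the equal weighting of the instantaneous and scenario scores are assumptions for illustration, not the implementation described here.

```python
import numpy as np

# Minimal per-frame sketch of the Figure 6 loop. The callables `detect`, `track`,
# `score_instant` and `score_scenarios` are hypothetical stand-ins for the modules
# described above; the thresholds and the 50/50 combination are assumptions.
def process_frame(prev_frame, frame, detect, track, score_instant, score_scenarios,
                  motion_threshold=0.05, alert_threshold=0.6):
    # Blocks 602-606: simple frame-difference motion measure against a threshold.
    motion = float(np.mean(np.abs(frame.astype(float) - prev_frame.astype(float)))) / 255.0
    if motion < motion_threshold:
        return "standby", 0.0
    objects = detect(frame)                          # block 608: object/person detection
    tracks = track(objects)                          # block 610: tracking through the scene
    p_instant = score_instant(objects, tracks)       # block 616: instantaneous threat probability
    p_scenario = score_scenarios(objects, tracks)    # block 622: scenario state probabilities
    combined = 0.5 * p_instant + 0.5 * p_scenario    # block 618: combined threat probability
    # Blocks 620-634: compare to the alert threshold and pick an action.
    return ("alert" if combined > alert_threshold else "monitor"), combined
```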
The underlying processes and systems will now be described.
In the present embodiment, a model is created of activity in a scene observed by a camera, where actors interact with objects, and the goal is to predict the associated threat level and raise an alarm when a sustained threat is identified.
A discrete model is considered, where time steps correspond to (discrete) video frames. Variables describing objects also take values from a discrete set, including location, which is defined in terms of zones rather than coordinates. The set will typically have attached observations using a categorical distribution; for example, when an object transitions from one zone to another, both zones receive 50% probability.
To formalize these assumptions, the scene description at time step t is denoted as Y_t = {E, X, L, P, T}, with observable parameters:
* A = {a_i ∈ D_a} ... actors (person, animal, vehicle), i = 1, ..., n_a
* O = {o_i ∈ D_o} ... items (packages, tools, other objects), i = 1, ..., n_o
* D_a, D_o ... actor and item class labels (examples above)
* E = A ∪ O = {e_i ∈ D_e} ... all scene elements (actors and items), i = 1, ..., n_e
* C = {c_i} ... actor and item attributes (face visible, velocity, direction), i = 1, ..., n_c
* Z = {z_i} ... home location zones (gate, path, door), i = 1, ..., n_z
* X: (E × E) → (0, 1) ... matrix of interactions among actors and items (e.g. person carries package, two persons meet), value is the strength of the interaction
* L: (E × Z) → (0, 1) ... vector of locations of actors and objects, value is proximity (derived from position, distance and orientation)
* P: (E × C) → (0, 1) ... vector assigning attributes to actors and objects, value is the probability (sums to 1 for competing values, i.e. male+female=1)
* T = {t_d, t_h} ... time context (hour/day/week/month; day/night)
Note that D, C and Z are static definitions (the latter scene-specific) and can be seen as hyper-parameters.
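As a concrete illustration, this scene description maps naturally onto a few matrices (very sparse in practice). The Python sketch below is an assumed encoding: the zone and attribute names are the examples from the text, while the container layout and the example values are illustrative only.

```python
import numpy as np
from dataclasses import dataclass

# Illustrative containers for Y_t = {E, X, L, P, T}; the labels are the examples
# given in the text, the data layout itself is an assumption.
ZONES = ["gate", "path", "door"]                          # Z
ATTRIBUTES = ["face_visible", "velocity", "direction"]    # C

@dataclass
class Scene:
    elements: list          # class labels of all scene elements E = A ∪ O
    X: np.ndarray           # (n_e, n_e) interaction strengths in (0, 1)
    L: np.ndarray           # (n_e, n_z) proximity of each element to each zone
    P: np.ndarray           # (n_e, n_c) attribute probabilities per element
    T: tuple = ("weekday", "day")   # time context (t_d, t_h)

def empty_scene(labels):
    n_e = len(labels)
    return Scene(labels, np.zeros((n_e, n_e)),
                 np.zeros((n_e, len(ZONES))), np.zeros((n_e, len(ATTRIBUTES))))

# Example observation: a person near the door carrying a package.
scene = empty_scene(["person", "package"])
scene.X[0, 1] = scene.X[1, 0] = 0.9                 # strong "carries" interaction
scene.L[0, ZONES.index("door")] = 0.8               # person close to the door zone
scene.P[0, ATTRIBUTES.index("face_visible")] = 1.0  # the person's face is visible
```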
A process of instant threat scoring will now be described.
We assign probability p(Y) to the current scene observations Y based on our prior knowledge. To avoid notation overload, we will follow the convention that p_t(Y) with index t means that all variables in the distribution are time-varied (where applicable), that is, they have the same meaning as in p(Y).

p(Y) = w_X p_X(E, X) + w_L p_L(E, Z) + w_P p_P(E, C)    (1)

where the weights satisfy w_X + w_L + w_P = 1 and the individual terms are probability mixtures for interactions, locations and attributes respectively. The first term

p_X(E, X) = Σ_{(e, e') ∈ E²} w_{D_e, D_e'} X(e, e')

evaluates how interactions X between scene elements (actors with objects, also actors with each other) are weighted according to their labels with w_{D_e, D_e'}. The second term

p_L(E, Z) = Σ_{z ∈ Z} Σ_{e ∈ E} w_{z, D_e} L(e, z)

similarly evaluates how locations L of all elements are weighted according to their label with w_{z, D_e}. Finally the third term

p_P(E, C) = Σ_{c ∈ C} Σ_{e ∈ E} w_{e, c} P(e, c)

evaluates how specific values of element attributes P are weighted with w_{e, c}. The definition can then be represented with matrices W_X = [w_{e, e'}], W_L = [w_{e, z}] and W_P = [w_{e, c}], which will typically be very sparse, that is, only a few values will be non-zero. They can be jointly expressed with a single matrix W = [W_X; W_L; W_P], usually defined in a configuration file.

The instant threat score is then calculated as the smoothed time-varied probability

p̄_t(Y) = α_Y p_t(Y) + (1 − α_Y) p̄_{t−1}(Y),

where α_Y is the update rate parameter, used to avoid sharp score changes.
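A minimal numerical sketch of equation (1) and the smoothing step is given below, assuming the matrix encoding from the earlier sketch; the weight values and the example numbers are illustrative assumptions only.

```python
import numpy as np

# Sketch of instant threat scoring (1) plus exponential smoothing; all weights
# and example values are assumptions for illustration.
def instant_score(X, L, P, W_X, W_L, W_P, w=(1/3, 1/3, 1/3)):
    """Weighted mixture of the interaction, location and attribute terms."""
    p_x = float(np.sum(W_X * X))   # sum over element pairs, weighted per label pair
    p_l = float(np.sum(W_L * L))   # sum over element/zone pairs
    p_p = float(np.sum(W_P * P))   # sum over element/attribute pairs
    return w[0] * p_x + w[1] * p_l + w[2] * p_p

def smooth(p_now, p_prev, alpha=0.3):
    """Time smoothing with update rate alpha to avoid sharp score changes."""
    return alpha * p_now + (1.0 - alpha) * p_prev

# Example with two elements (person, package), two zones and one attribute.
X = np.array([[0.0, 0.9], [0.9, 0.0]]);  W_X = np.array([[0.0, 0.2], [0.2, 0.0]])
L = np.array([[0.1, 0.8], [0.0, 0.8]]);  W_L = np.array([[0.0, 0.5], [0.0, 0.0]])
P = np.array([[1.0], [0.0]]);            W_P = np.array([[-0.3], [0.0]])
score = smooth(instant_score(X, L, P, W_X, W_L, W_P), p_prev=0.0)
```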
Behaviour modelling with scenarios will now be described.
The above instant threat scoring is conceptually a static assignment or data term corresponding to the current observation. Although some of the scene attributes are dynamic (related to the actor trajectory e.g. velocity and direction), they do not fully model the history of actions that correspond to a certain behaviour. For example, a typical burglar will first loiter around the property, check entrances, and pick up a tool to attempt a break-in. In contrast a routine delivery will see a postman carry parcels to the door, drop them there and leave.
Analysis of the course of actions allows for a better understanding of the observed behaviour.
Scenarios
In order to evaluate the observed sequence, we propose to compare it with a set of hypotheses. Each behaviour hypothesis is described with a flexible scenario that outlines postulated developments of events. Rather than describing a single linear sequence, our scenario can generate multiple sequences with a common pattern but multiple paths, for example a person going first to the window and to the door second, or vice versa.
This can be achieved when each scenario is represented with a directed graphical model G(b) = (S_b, E_b), where nodes S_b = {s_i} represent distinct situations (states of an automaton) and edges E_b = {e_ij} represent possible transitions between the states.
Given a history of the observations Y_t from the time period t = 0, 1, 2, ..., we predict the current behaviour (scenario) b_t* ∈ B, where B = {b_i} is the pool of n_b possible behaviours, like burglary or delivery. Several possible behaviours are defined by the designer (see below in relation to Figures 11 to 20).
Special states:
* Idle (mandatory) - this corresponds to no activity (motion) in the scene; transitions from this state are conditioned on the presence of one or more actors in the scene (n_a > 0);
* Actionable (optional) - when action is needed to resolve the next transition (request for face exposure).
Observation Probability
Each state s_i ∈ S_b has a defined probability p_t(Y | s_i) of observing certain members in the scene Y. This has the form of a probability mixture of interactions, locations and attribute values as in (1), but the weights specific to the scenario b and state s are now defined with a sparse weight matrix W_{b,s}.
Transitions
The probability of transition from one state to another is denoted q(s_j | s_i), which is nonzero if and only if e_ij ∈ E_b. This can be jointly expressed for all states in a stochastic transition matrix Q_b ∈ (0, 1)^(n_s × n_s).
Duration of presence in states: we want to model the typical (τ_i) or maximum (τ̂_i) time spent in a state. When the actual time δ_t spent in the state exceeds this value, the probability of staying in the same state q_t(s | s) decreases and the probability of transitions to other states q_t(s' | s) increases. This can be expressed as

q_t(s | s) = q_stay                             when δ_t ≤ τ_i
           = q_stay · q_decay^(δ_t − τ_i)       when τ_i < δ_t < τ̂_i          (2)
           = q_leave                            when δ_t ≥ τ̂_i,

where the value for staying q_stay = 0.9 decreases exponentially with rate q_decay = 0.99 until the minimum value of q_leave = 0.1 is reached after τ̂_i. We first randomly decide whether the particle stays or leaves (that is, sample from the Bernoulli distribution q_t(s | s, τ_i)). When the particle leaves, the new state is sampled from q(s' | s) in a second step.
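To make the scenario encoding and the duration-dependent transitions concrete, the sketch below builds a toy scenario graph and applies the two-step stay/leave sampling. The state names, transition matrix, durations and decay behaviour are illustrative assumptions (the decay is simply floored at q_leave rather than using an explicit maximum duration).

```python
import numpy as np

# Toy scenario graph with duration-dependent self-transitions as in (2).
# States, Q, durations and the decay parameters are illustrative assumptions.
STATES = ["idle", "approach", "at_door", "leave"]
Q = np.array([                  # stochastic transition matrix Q_b
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 0.6, 0.3, 0.1],
    [0.0, 0.1, 0.7, 0.2],
    [0.1, 0.0, 0.0, 0.9],
])
TAU = np.array([1e9, 50.0, 100.0, 30.0])   # typical time spent in each state (frames)

def stay_probability(state, time_in_state, q_stay=0.9, q_decay=0.99, q_leave=0.1):
    """q_t(s|s): constant up to tau, then decaying towards q_leave."""
    if time_in_state <= TAU[state]:
        return q_stay
    return max(q_stay * q_decay ** (time_in_state - TAU[state]), q_leave)

def sample_transition(state, time_in_state, rng):
    """Two-step sampling: first stay/leave (Bernoulli), then pick the next state."""
    if rng.random() < stay_probability(state, time_in_state):
        return state
    out = Q[state].copy()
    out[state] = 0.0                 # leaving: renormalise over the other states
    return int(rng.choice(len(STATES), p=out / out.sum()))

rng = np.random.default_rng(0)
next_state = sample_transition(STATES.index("at_door"), time_in_state=120, rng=rng)
```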
Prediction Models
We simulate behaviours to determine their probability given the history of observations. A probabilistic approach to such a problem is known as the Hidden Markov Model (HMM). For more details, see for example "Statistical Inference for Probabilistic Functions of Finite State Markov Chains", Leonard E. Baum and Ted Petrie, Ann. Math. Statist. 37.6 (Dec. 1966), pp. 1554-1563. This model can predict a 'hidden' distribution of being in all states, and allows inference of the current state (discussed in the next section).
In particular we will consider a variant called the Hidden Semi-Markov Model (HSMM), where a state has a variable duration δ_t and a number of observations produced while in the state, which fits our application. For more details, see for example "Hidden semi-Markov models", Shun-Zheng Yu, Artificial Intelligence 174.2 (2010), Special Review Issue, pp. 215-243. The cumulative probability of a sequence S of H states in this model of scenario b is given by

p_b(S | Y) = p_b(s_0 | Y_0) ∏_{t=1}^{H} p_b(s_t | s_{t−1}, δ_{t−1}) p_b(s_t | Y_t)    (3)

The model is instantiated for each behaviour and each actor (person) that appears in the scene. That is, there will be n_b × n_a instances in the pool at any given time.
Particle Filter Model
This statistical model moves a population of particles through the states based on random sampling, which makes it stochastic (predictions vary even with the same inputs). The more particles are used, the better their approximation of the underlying probability distribution (and the less variability with the same inputs), more precisely the posterior p(S | Y).
Representation: the population M_t = {m_t^k} is an evolving set of N particles m_t^k = (s_k, p_k, c_k). A particle k is present in state s_k ∈ S with probability p_{k,t}. The c_k is the cumulative log-probability (3) of the states the particle has visited.
Initialisation: all particles are initialised in the idle state, s_{0,k} = 0. The initial particle probability p_{0,k} is uniform and the cumulative history is empty, c_{0,k} = 0.
Prediction: at each time step t the following procedure repeats:
1. Resample. Sample a new population from the previous probability distribution p_{t−1}, effectively discarding the less probable particles.
2. Transition. Independently sample the new state of all particles s_{k,t} ~ q_t(s_{k,t} | s_{k,t−1}, τ_k) from the transition distribution.
3. Mutate. Uniformly randomly choose N_m = 0.1N particles and move them to a random neighbouring state with non-zero transition probability to prevent mode collapse. Alternatively, they can be moved to a uniformly random state, or both options can be applied to their respective subsets.
4. Observe. Calculate the particle likelihood p_{k,t} = (1/Z) p(Y_t | s_{k,t}) of observing all particles in their new states, where Z = Σ_k p(Y_t | s_{k,t}) is the normalisation factor.
5. Track. Accumulate the particle probability to get its sequence probability (3) by summing the corresponding log-probability (for numerical stability): c_{k,t} = c_{k,t−1} + log q_{k,t} + log p_{k,t}, where the added terms are the current transition and observation probabilities.
6. Infer. Calculate state probability distributions. First, we get the filtered posterior probability by counting the number of particles in each state:

p_t^F(s_i | Y) = (1/N) Σ_{k=1}^{N} [s_{k,t} = s_i],

where [.] denotes the indicator function (Dirac measure).
Second, we calculate the aggregated cumulative probability as the mean over all particles currently in each possible state:

p_t^C(s | Y) = (1/Z) exp( (1/N_s) Σ_{k: s_{k,t} = s} c_{k,t} ),

where N_s is the number of particles in state s and Z is the normalising factor. Additionally, we calculate the data observation distribution, assuming a uniform prior on the state probability, with:

p_t^Y(s_i | Y) = (1/n_s) p(Y_t | s_i),

where n_s is the number of states.
7. Predict. We can now use each of the three observed and inferred distributions to predict the current state:

s_t^{Y*} = arg max_i p_t^Y(s_i | Y),   s_t^{F*} = arg max_i p_t^F(s_i | Y),   s_t^{C*} = arg max_i p_t^C(s_i | Y),

where Y stands for the data observations, F for the particles of the filter, and C for the tracked cumulative probability (see above for details).
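A compact, self-contained sketch of this prediction loop is shown below. The observation model is passed in as a callable, the 10% mutation fraction follows the text, and the remaining details (resampling scheme, numerical guards, prediction from the filtered posterior only) are simplifying assumptions rather than the full implementation.

```python
import numpy as np

class ScenarioParticleFilter:
    """Toy particle filter over one behaviour scenario (steps 1-7 above)."""

    def __init__(self, n_states, Q, obs_likelihood, n_particles=200, mutate_frac=0.1, seed=0):
        self.n_states = n_states
        self.Q = Q                               # stochastic transition matrix Q_b
        self.obs_likelihood = obs_likelihood     # callable: p(Y_t | s) for a state s
        self.N = n_particles
        self.Nm = max(1, int(mutate_frac * n_particles))
        self.rng = np.random.default_rng(seed)
        self.states = np.zeros(self.N, dtype=int)      # all particles start in the idle state
        self.weights = np.full(self.N, 1.0 / self.N)   # initial probability is uniform
        self.cum_logp = np.zeros(self.N)               # cumulative log-probability c_k

    def step(self, observation):
        # 1. Resample from the previous weights, discarding unlikely particles.
        idx = self.rng.choice(self.N, size=self.N, p=self.weights)
        self.states, self.cum_logp = self.states[idx], self.cum_logp[idx]
        # 2. Transition each particle independently using Q.
        new_states = np.array([self.rng.choice(self.n_states, p=self.Q[s]) for s in self.states])
        trans_p = self.Q[self.states, new_states]
        self.states = new_states
        # 3. Mutate ~10% of the particles to random states to prevent mode collapse.
        mutated = self.rng.choice(self.N, size=self.Nm, replace=False)
        self.states[mutated] = self.rng.integers(0, self.n_states, size=self.Nm)
        # 4. Observe: normalised likelihood of each particle's new state.
        like = np.maximum([self.obs_likelihood(observation, s) for s in self.states], 1e-12)
        self.weights = like / like.sum()
        # 5. Track: accumulate log transition and observation probabilities.
        self.cum_logp += np.log(trans_p + 1e-12) + np.log(like)
        # 6. Infer: filtered posterior by counting particles per state.
        posterior = np.bincount(self.states, minlength=self.n_states) / self.N
        # 7. Predict the current state from the filtered posterior.
        return int(np.argmax(posterior)), posterior
```

In the full system one such filter would be instantiated per behaviour scenario and per actor, with the observation callable built from the scenario-specific weight matrices W_{b,s} described above.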
Behaviour Selection
Now let us consider again the pool of scenarios. The probability distribution of behaviours can be obtained from the mean accumulated score of the predicted state for each behaviour b with

c_t^b = (1/|S_t^b|) Σ_{k ∈ S_t^b} c_{k,t},

where S_t^b = {k : s_{k,t} = s_t^*} represents the set of particles in the predicted state s_t^*. After normalisation we obtain the distribution

p_t(b | Y) = exp(c_t^b) / Σ_{b'} exp(c_t^{b'}),

from which we predict the most probable behaviour

b_t^* = arg max_b p_t(b | Y).

We also want to estimate the uncertainty associated with this decision, for which a measure of entropy is suitable. We opt for a variant of the generalised entropy index known as the mean log deviation:

α_B = MLD(B) = (1/n_b) Σ_{b=1}^{n_b} log(p_u / p_b),

where p_u = 1/n_b is the mean (uniform) probability. This intuitively corresponds to confidence, that is, it is equal to 0 when the probability distribution is exactly uniform (maximum uncertainty) and increases as some behaviours become more probable than others by a margin (e.g. α_B ≈ 1 when p_b ≈ 0.99 for one out of two behaviours).
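The behaviour selection and the mean-log-deviation confidence can be sketched as follows; the scores in the example are invented for illustration.

```python
import numpy as np

def behaviour_distribution(mean_cum_scores):
    """Softmax over the mean accumulated scores c_t^b of each scenario."""
    c = np.asarray(mean_cum_scores, dtype=float)
    e = np.exp(c - c.max())                 # subtract the max for numerical stability
    return e / e.sum()

def mld_confidence(p):
    """Mean log deviation: 0 for a uniform distribution, grows as one behaviour dominates."""
    p = np.clip(np.asarray(p, dtype=float), 1e-12, None)
    p_u = 1.0 / len(p)                      # uniform (mean) probability
    return float(np.mean(np.log(p_u / p)))

# Example: a burglary scenario accumulating a higher score than a delivery scenario.
p_b = behaviour_distribution([-5.0, -9.0])   # assumed mean cumulative scores
most_probable = int(np.argmax(p_b))          # index of the predicted behaviour b_t*
alpha_B = mld_confidence(p_b)                # confidence measure
```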
The master threat level will now be described.
Finally, the previous predictions will be combined into a master threat score m_t, which will in turn be transformed into discrete levels M_t and will drive decisions regarding the action taken, such as raising an alarm.
Threat score. The master threat score is calculated as a weighted sum of motion intensity, instant threat score, behaviour selection indicators and state selection indicators as in

m_t = w_A A(t) + w_Y p̄_t(Y) + w_B Σ_{b ∈ B} θ(b) p̃_t(b | Y) + w_S Σ_{b ∈ B} θ(s*_{b,t}) p̃_t(s*_{b,t} | Y)    (4)

where A(t) ∈ (0, 1) is the motion intensity level and the indicator functions θ(b) and θ(s) are defined for each behaviour and its states respectively to determine their contribution towards the threat score. The contribution is weighted by the positive margins of the predicted behaviour probabilities

p̃_t(b | Y) = f(p_t(b | Y) − p_u),

where p_u = 1/n_b is the probability of the uniform behaviour distribution and the positive function f(x) = x when x > 0, otherwise f(x) = 0. In this way only the 'winners' with probability above the random chance p_u contribute. Both indicators θ(b), θ(s) are defined similarly, with a value of −1 for friendly activity, 0 for neutral or unknown activity, and 1 for threat activity. In this way behaviour scenarios and states indicated as threatening will lead to an increase of m_t, while those indicated as friendly lead to a decrease, and the rest will be neutral (the default for states). Scaling of the indicator for some cases could also be considered.
With a suitable setting of the weights w, the resulting range of the score becomes m_t ∈ (−1, 1) where, similar to above, values around 0 indicate a neutral setting, positive values indicate threat and negative values friendly activity.
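A small sketch of the combination (4) follows; the weights, indicator assignments and probabilities are illustrative assumptions, and the same positive-margin rule is applied to the per-behaviour state probabilities.

```python
def positive_margin(p, n_behaviours):
    """f(p - p_u): only probabilities above random chance p_u = 1/n_b contribute."""
    return max(p - 1.0 / n_behaviours, 0.0)

def master_score(motion, instant, behav_probs, behav_indicators,
                 state_probs, state_indicators, w=(0.2, 0.3, 0.3, 0.2)):
    """Weighted sum of motion, instant score, behaviour term and state term."""
    n_b = len(behav_probs)
    behav_term = sum(ind * positive_margin(p, n_b)
                     for p, ind in zip(behav_probs, behav_indicators))
    state_term = sum(ind * positive_margin(p, n_b)
                     for p, ind in zip(state_probs, state_indicators))
    return w[0] * motion + w[1] * instant + w[2] * behav_term + w[3] * state_term

# Example: burglary (indicator +1) dominating over delivery (indicator -1).
m_t = master_score(motion=0.9, instant=0.4,
                   behav_probs=[0.8, 0.2], behav_indicators=[+1, -1],
                   state_probs=[0.7, 0.3], state_indicators=[+1, 0])
```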
Threat level and actions: for further interpretation, the threat score is transformed into a discrete threat level M_t ∈ M with 4 possible values M = {idle, low, medium, high}, with associated thresholds ρ_0, ρ_1, ρ_2 between them. To avoid frequent changes near a threshold, M_t will change value only after the score m_t has stayed within the thresholds corresponding to the new value for at least a set number of time steps.
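This discretisation with a dwell-time requirement can be sketched as below; the threshold and dwell-time values are assumptions.

```python
LEVELS = ["idle", "low", "medium", "high"]
THRESHOLDS = [-0.5, 0.0, 0.5]     # assumed rho_0, rho_1, rho_2
DWELL_STEPS = 5                    # assumed number of steps the score must persist

class ThreatLevel:
    """Maps the continuous score m_t to a discrete level with a dwell-time requirement."""
    def __init__(self):
        self.level, self._candidate, self._count = "idle", "idle", 0

    def update(self, m_t):
        candidate = LEVELS[sum(m_t > t for t in THRESHOLDS)]    # band of the current score
        if candidate == self.level:
            self._candidate, self._count = self.level, 0        # no change pending
        elif candidate == self._candidate:
            self._count += 1
            if self._count >= DWELL_STEPS:                       # persisted long enough
                self.level, self._count = candidate, 0
        else:
            self._candidate, self._count = candidate, 1          # new candidate level
        return self.level
```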
Threat level M_t     | Confidence I_t: Low | Confidence I_t: High
Idle                 | Monitor             | Standby
Normal (low)         | Monitor             | Monitor
Suspicious (medium)  | Verify              | Notify
High                 | Verify              | Alarm

Table 1: Master Threat Decision Matrix.
The master state of the system is determined from the threat level and its confidence.
The threat level M_t in conjunction with its associated confidence level I_t (low or high, based on the value of α_B compared to a threshold α_0) puts the system in one of the master states shown in Table 1. The master states are defined as:
1. Standby - the motion detection component is running, waiting for some significant motion (initial state).
2. Monitor - motion is present and the image is analysed with detectors and analytics components.
3. Verify - further insights are obtained by means of interaction or human intervention.
4. Notify - alert the user of a possible suspicious behaviour.
5. Alarm - raise an alarm based on an imminent threat.
Generally only transitions between states neighbouring in the decision matrix are allowed. Further time constraints may apply for some states; for example, if verification takes too long, a transition to notify or alarm is triggered.
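The master state can then be read off the decision matrix; the lookup below mirrors Table 1 as reconstructed here (with 'low' and 'medium' corresponding to the Normal and Suspicious rows), and the confidence threshold is an assumed value.

```python
# Decision matrix lookup (threat level, confidence band) -> master state,
# mirroring Table 1 as reconstructed above; alpha_0 is an assumed threshold.
DECISION_MATRIX = {
    ("idle", "low"): "monitor",   ("idle", "high"): "standby",
    ("low", "low"): "monitor",    ("low", "high"): "monitor",
    ("medium", "low"): "verify",  ("medium", "high"): "notify",
    ("high", "low"): "verify",    ("high", "high"): "alarm",
}

def master_state(threat_level, alpha_B, alpha_0=0.5):
    """Confidence is 'high' when the MLD confidence alpha_B exceeds alpha_0."""
    confidence = "high" if alpha_B > alpha_0 else "low"
    return DECISION_MATRIX[(threat_level, confidence)]
```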
An evaluation of the above-described process will now be described.
Figures 7(a) to 7(d) show plots of various states of the system of Figure 1 for four different scenarios when a "break-in" script is run; Figures 8(a) to 8(d) show plots of various states of the system of Figure 1 for four different scenarios when a "courier" script is run; Figures 9(a) to 9(d) show plots of various states of the system of Figure 1 for four different scenarios when an "arrive" script is run; and Figures 10(a) to 10(c) show plots of the combined outputs of the scenarios and scripts shown in Figures 7(a)-(d), 8(a)-(d) and 9(a)-(d). Figures 7(a)-7(d), 8(a)-8(d) and 9(a)-(d) show how the probabilities of different behaviours and their states evolve over time based on simulated input data scripts. There are several scenarios in the pool (BreakDoor and Delivery), which are both evaluated against the three scripts that directly feed observations into the scene: break, deliver and arrive.
Figures 7(a)-7(d), 8(a)-8(d) and 9(a)-(d) compare scenarios on the same data and show that the matching scenario probability increases once distinctive observations appear in the scene around t = 30. The actor leaves the scene at t = 100; the system then converges towards an empty, uninformed equilibrium.
Figures 10(a)-(c) show comparisons of the master threat score calculation on different data. The master score m_t is shown with a thick line as the sum of the different components from (4), each drawn in a different line style; values are in points (100 × m_t). Scores from the state selection (the last term of (4)) are detailed below, and the motion term is not used in this simulation.
Results show that the master score for the friendly Courier and Arrive data reaches negative values of -60 and -30 respectively, while the Delivery and Resident scenarios have the maximum probability. In contrast, the Break-in score reaches a value of +60, with the Burglary scenario being the most probable.
In summary, the presented simulations on synthetic data show that the particle filter can infer the most probable behaviour correctly. Additionally, this can be used to calculate the master threat score, which allows friendly activities to be distinguished from threats.
Some examples of threat attributes will now be given.
Element    | Attribute | Values
Actor      | type       | person, animal, vehicle
Actor      | direction  | enter, leave, passby
Actor      | velocity   | static, slow, normal
Actor      | trajectory | straight, typical, random
Actor      | activity   | move, wait, interact
Actor      | duration   | brief, short, long
Attribute  | person     | face_visible, face_occluded, face_masked, unidentified, adult, child, senior, male, female
Attribute  | animal     | small, medium, large, insect, spider
Attribute  | vehicle    | car, van, truck, bus, bike, motorcycle, utility, service
Object     | type       | package, bag, tool_small, tool_large, prop
Zones      |            | none, street, entry, path, non-path, driveway, vegetation, door, window, wall, outbuilding
TimeOfDay  |            | morning, afternoon, evening, night

Table 2: Attributes Used in Determining Risk of Threat

Various example scenarios are shown in Figures 11 to 20. For example, Figure 11 is a schematic of a postman/delivery scenario. Figure 12 is a schematic of a courier scenario. Figure 13 is a schematic of a visitor scenario. Figure 14 is a schematic of a resident arriving scenario, and so on. By default, the states in the scenarios are neutral. Threatening states (such as an uncooperative person at the door) are shown with a thick dotted outline. States which are deemed friendly (such as a package being left at the door) are shown with a thin line with a dash and two dots.
Figure 15 is a schematic of a window cleaner behaviour scenario. Figure 16 is a schematic of a burglar behaviour scenario. Figure 17 is a schematic of a gardener behaviour scenario. Figure 18 is a schematic of a rubbish collector behaviour scenario. Figure 19 is a schematic of a builder behaviour scenario, and Figure 20 is a schematic of an animal behaviour scenario. (In the case of Figure 19, the scenario is not marked as a threat, though it will be noted that it has many similarities to the burglar scenario, and careful discrimination is required between the two.)

Figure 21 is a flowchart of a variant of the threat assessment system. In step S2100, video of the scene is captured. The video is processed (S2102) to identify objects in the scene, and the movement of objects in the scene is tracked (S2104). Next, a probabilistic inference algorithm is applied as aforesaid (S2106), and a threat assessment is output (S2108).
Figure 22 is a screenshot showing the threat assessment system in operation (in a debug mode). This screenshot shows first order image analysis data overlaid on a single frame of image data from a camera feed, from a camera mounted on a house and monitoring the scene in front of the house (comprising a front garden, an outbuilding, a path, a porch, a gate, and so on). Here it appears that a delivery man is bringing items to the front door. The person and the package are identified as objects, and outlined. Characteristics of the person are identified (wearing cap) and output. The confidence of the estimations is also overlaid (0.99 for the person and 0.97 for the package). In the normal output of the system, a track is shown in two different colours, showing the most recent movement and recent historic movement. Any other appropriate display method (besides using different colours) may be used. Though the movement tracking requires at least one previous image frame in order to provide useful results, the output shown on this screenshot is essentially instantaneous on contemporary computing hardware.
Figure 23 is a further screenshot showing the threat assessment system in operation. In this screenshot, the same information is shown as for Figure 22, but additional threat information is overlaid. At the right hand side, a threat score of +25.6 (out of -100 to +100) is displayed. At the top left, a 'master threat score' is shown, in this case comprising +13.4 from motion ("Motion: level=95%; duration=90"), +5.6 from the objects, +7.3 from the estimated behaviour, and -0.2 for the estimated current action(s). As more frames are recorded, the behaviour element (appropriately weighted) will begin to dominate with a negative threat score, as the 'delivery' behaviour is recognised with greater certainty. At this point, however, the behaviour analysis is not yet completed ("Behaviour: confidence=0%; action: unknown, probability = 0%"), so the other aspects of the master threat score provide a provisional analysis.
Other scenarios and types of scenario are of course possible. It will be appreciated that additional pre-processing modules may be provided as appropriate, and other modules (such as the motion tracking module) may be removed if appropriate. Any reference herein to particle filter may where appropriate be replaced by any other appropriate kind of probabilistic inference algorithm.
Any appropriate camera can be used, but it is preferred to use a high definition resolution such as 1080p to allow the detection of small objects, and to use a frame rate of ideally at least 15 frames per second (fps) to track fast movements. Lastly, a viewing angle of at least 90 degrees is preferred in order to capture the entirety of a scene, but this is of course location specific.
Although the present invention has been described above with reference to specific embodiments, it will be apparent to a person skilled in the art that modifications lie within the spirit and scope of the present invention.

Claims (20)

  1. A method of determining a degree of threat associated with activity in a scene observed by a camera, the method comprising: receiving image data from the camera; processing the image data to generate first order image analysis data, including at least one of: object data identifying at least one object in the scene, and tracking data encoding the movement of said at least one object within the scene; processing the first order image analysis data to generate second order image analysis data dependent on the evolution of the first order image analysis data over time; estimating a first threat level associated with the first order image analysis data; estimating a second threat level associated with the second order image analysis data; generating threat assessment data indicative of the degree of threat associated with activity in the scene in dependence on both the first and second threat levels; and outputting the threat assessment data.
  2. A method according to Claim 1, wherein generating the second order image analysis data comprises processing the first order image analysis data using a probabilistic inference algorithm operating on at least one model of behaviour.
  3. A method according to Claim 2, further comprising receiving scenario data encoding a plurality of probabilistic behaviour scenarios, and wherein each scenario is associated with a respective threat level, and wherein generating the second order image analysis data comprises: generating scenario probability data encoding the probability of occurrence of each scenario; and processing the scenario probability data in conjunction with the threat levels associated with the scenarios, whereby the second threat level depends at least in part on the scenario probability data.
  4. A method according to Claim 3, further comprising processing the scenario probability data to determine the most probable behaviour scenario, and outputting an indication of the determined most probable behaviour scenario.
  5. A method according to Claim 4, further comprising generating an estimate of uncertainty of the estimate of the most probable behaviour scenario.
  6. A method according to any one of Claims 3 to 5, wherein each behaviour scenario is encoded as a directed graph having nodes representing states and edges representing transitions between the states, wherein each state may be associated with a respective threat level, and wherein generating the second order image analysis data further comprises: generating state probability data encoding the probability of occurrence of each state within a scenario; and processing the state probability data in conjunction with the threat levels associated with the states, whereby the second threat level depends at least in part on the state probability data.
  7. A method according to Claim 6, wherein the probabilistic inference algorithm uses a Hidden Markov Model, and more preferably a Hidden Semi-Markov Model, to make predictions based on a history of observations.
  8. A method according to any preceding claim, further comprising receiving object type data encoding a plurality of object types.
  9. A method according to Claim 8, further comprising receiving scene element weighting data encoding the importance to be placed on interactions between first and second said object types.
  10. A method according to Claim 8 or 9, further comprising receiving location weighting data encoding the importance to be placed on the location of said at least one object within the scene.
  11. A method according to any one of Claims 8 to 10, further comprising receiving attribute weighting data encoding the importance to be placed on detected attributes of said at least one object.
  12. A method according to any preceding claim, further comprising generating a rapid threat score in dependence on at least one of: current interactions between identified objects in the scene, locations of identified objects, and attributes of identified objects.
  13. A method according to any preceding claim, further comprising generating a master threat score based on at least one and preferably two, three or four of: a motion intensity, a rapid threat score, a threat level associated with the probability of a behaviour scenario occurring, and a threat level associated with the probability of a state occurring within a behaviour scenario.
  14. A method according to Claim 13, further comprising generating a confidence level associated with the master threat score.
  15. A method according to Claim 14, further comprising selecting an action in dependence on both the master threat score and confidence level, and causing the action to be carried out.
  16. A method according to Claim 15, wherein the action is selected from at least one of: waiting for significant motion, analysing existing motion, obtaining further information from the scene, alerting a user of possible suspicious behaviour, challenging a suspicious actor, and generating an alarm signal based on an imminent threat.
  17. A method according to any preceding claim, further comprising routing the output data into a machine learning system and updating data accessed when processing the object data and tracking data in dependence on the output of the machine learning system.
  18. A method according to Claim 17 when dependent on Claim 16, further comprising inputting user feedback into the machine learning system in response to alerting the user or generating an alarm signal.
  19. An alarm system, comprising at least one processor and associated memory, said memory storing computer program code which, when executed by said at least one processor, causes the alarm system to carry out the method of any one of Claims 1 to 18.
  20. A computer readable medium tangibly embodying computer program code which, when executed on a processor of a computer, causes the computer to carry out the method of any one of Claims 1 to 18.
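The following non-limiting sketch illustrates one possible encoding of a behaviour scenario as a directed graph of states with associated threat levels, together with an HMM-style forward update over its state probabilities, in the sense of Claims 6 and 7; the scenario, states, transition probabilities and threat levels shown are assumptions chosen purely for illustration.

```python
# Illustrative encoding of a behaviour scenario as a directed graph: nodes are
# states, edges carry transition probabilities, and each state has a threat level.
BURGLAR_SCENARIO = {
    "states": ["approach", "loiter", "try_door", "leave"],
    "threat": {"approach": 5, "loiter": 30, "try_door": 80, "leave": 0},
    "transitions": {
        "approach": {"loiter": 0.6, "try_door": 0.3, "leave": 0.1},
        "loiter":   {"loiter": 0.5, "try_door": 0.4, "leave": 0.1},
        "try_door": {"try_door": 0.7, "leave": 0.3},
        "leave":    {"leave": 1.0},
    },
}


def forward_update(belief, scenario, state_likelihoods):
    """One HMM-style forward step over the scenario's states.

    `belief` maps state -> probability and `state_likelihoods` maps
    state -> likelihood of the latest observation given that state.
    Returns the updated belief and a probability-weighted threat level.
    """
    new_belief = {}
    for state in scenario["states"]:
        prior = sum(belief.get(prev, 0.0) * probs.get(state, 0.0)
                    for prev, probs in scenario["transitions"].items())
        new_belief[state] = prior * state_likelihoods.get(state, 0.0)
    total = sum(new_belief.values()) or 1.0
    new_belief = {s: p / total for s, p in new_belief.items()}
    threat = sum(new_belief[s] * scenario["threat"][s] for s in new_belief)
    return new_belief, threat
```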
GB2109885.0A 2021-07-08 2021-07-08 Threat assessment system Pending GB2608639A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
GB2109885.0A GB2608639A (en) 2021-07-08 2021-07-08 Threat assessment system
EP22740468.8A EP4367653A1 (en) 2021-07-08 2022-07-08 Threat assessment system
PCT/GB2022/051775 WO2023281278A1 (en) 2021-07-08 2022-07-08 Threat assessment system
PE2024000046A PE20240511A1 (en) 2021-07-08 2022-07-08 THREAT ASSESSMENT SYSTEM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2109885.0A GB2608639A (en) 2021-07-08 2021-07-08 Threat assessment system

Publications (2)

Publication Number Publication Date
GB202109885D0 GB202109885D0 (en) 2021-08-25
GB2608639A true GB2608639A (en) 2023-01-11

Family

ID=77354001

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2109885.0A Pending GB2608639A (en) 2021-07-08 2021-07-08 Threat assessment system

Country Status (4)

Country Link
EP (1) EP4367653A1 (en)
GB (1) GB2608639A (en)
PE (1) PE20240511A1 (en)
WO (1) WO2023281278A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116486272B (en) * 2023-06-14 2023-09-05 南京理工大学 Multi-dimensional index damage evaluation method and system based on complex scene

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180300557A1 (en) * 2017-04-18 2018-10-18 Amazon Technologies, Inc. Object analysis in live video content

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10572737B2 (en) * 2018-05-16 2020-02-25 360Ai Solutions Llc Methods and system for detecting a threat or other suspicious activity in the vicinity of a person
EP3996058B1 (en) * 2018-10-29 2024-03-06 Hexagon Technology Center GmbH Facility surveillance systems and methods

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180300557A1 (en) * 2017-04-18 2018-10-18 Amazon Technologies, Inc. Object analysis in live video content

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LEONARD E. BAUM; TED PETRIE: "Statistical Inference for Probabilistic Functions of Finite State Markov Chains", ANN. MATH. STATIST., vol. 37, no. 6, December 1966 (1966-12-01), pages 1554 - 1563
SHUN-ZHENG YU: "Hidden semi-Markov models", ARTIFICIAL INTELLIGENCE, vol. 174, no. 2, 2010, pages 215 - 243

Also Published As

Publication number Publication date
GB202109885D0 (en) 2021-08-25
EP4367653A1 (en) 2024-05-15
WO2023281278A1 (en) 2023-01-12
PE20240511A1 (en) 2024-03-15

Similar Documents

Publication Publication Date Title
US11626008B2 (en) System and method providing early prediction and forecasting of false alarms by applying statistical inference models
US9852342B2 (en) Surveillance system
US9451214B2 (en) Indoor surveillance system and indoor surveillance method
US20180278894A1 (en) Surveillance system
JP5224401B2 (en) Monitoring system and method
KR20210149169A (en) Anomaly detection method, system and computer readable medium
KR101720781B1 (en) Apparatus and method for prediction of abnormal behavior of object
CN114973140A (en) Dangerous area personnel intrusion monitoring method and system based on machine vision
Ahmed et al. Surveillance scene representation and trajectory abnormality detection using aggregation of multiple concepts
CN110674761A (en) Regional behavior early warning method and system
CN107122743A (en) Security-protecting and monitoring method, device and electronic equipment
KR101454644B1 (en) Loitering Detection Using a Pedestrian Tracker
GB2608639A (en) Threat assessment system
AU2007311761A1 (en) Improvements relating to event detection
KR102556447B1 (en) A situation judgment system using pattern analysis
CN114511978B (en) Intrusion early warning method, device, vehicle and computer readable storage medium
JP2013125469A (en) Security device and security action switching method
CN117746338B (en) Property park safety management method and system based on artificial intelligence
AU2018286587A1 (en) Surveillance system
Islam et al. Carts: Constraint-based analytics from real-time system monitoring
Taj et al. Recognizing interactions in video
US11574461B2 (en) Time-series based analytics using video streams
Feizi et al. Application of combined local object based features and cluster fusion for the behaviors recognition and detection of abnormal behaviors
KR20220031270A (en) Method for providing active security control consulting service
KR20220031316A (en) A recording medium in which an active security control service provision program is recorded

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20231109 AND 20231115