GB2554948A - Video monitoring using machine learning - Google Patents

Video monitoring using machine learning

Info

Publication number
GB2554948A
Authority
GB
United Kingdom
Prior art keywords
cctv
event recognition
stream
event
recognition model
Prior art date
Legal status: Granted
Application number
GB1617566.3A
Other versions
GB2554948B (en)
GB201617566D0 (en)
GB2554948B8 (en)
Inventor
Rashid Mohammed
Ploix Boris
Current Assignee
Calipsa Ltd
Original Assignee
Calipsa Ltd
Priority date
Filing date
Publication date
Application filed by Calipsa Ltd filed Critical Calipsa Ltd
Priority to GB1617566.3A priority Critical patent/GB2554948B8/en
Publication of GB201617566D0 publication Critical patent/GB201617566D0/en
Publication of GB2554948A publication Critical patent/GB2554948A/en
Publication of GB2554948B publication Critical patent/GB2554948B/en
Application granted granted Critical
Publication of GB2554948B8 publication Critical patent/GB2554948B8/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 CCTV systems for receiving images from a plurality of remote sources
    • G06N3/02 Neural networks
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/44 Event detection
    • G06T2207/10016 Video; Image sequence (indexing scheme for image analysis or image enhancement; image acquisition modality)

Abstract

A method of training a CCTV event recognition model using machine learning. The technique comprises the steps of observing the event recognition activity 17 of an operator (7, Fig. 1) monitoring a CCTV stream, and developing, using a machine learning approach, an event recognition model 21 based on the said activity of the operator. The video stream may be monitored via a graphical user interface (GUI). The video stream may be a live stream and may be provided by one (1, Fig. 1) or more cameras, which can be internet protocol (IP) cameras. Developing the event recognition model may comprise the use of Deep Convolutional Networks, Recurrent Neural Networks or Reinforced Learning. Potentially relevant events may be highlighted 25 to the operator. The method may comprise the step of receiving feedback 25 from the user. This feedback can be used to refine 29 the model.

Description

(54) Title of the Invention: Video monitoring using machine learning Abstract Title: Machine learning-enhanced CCTV monitoring.
Figure 1
Figure 2
VIDEO MONITORING USING MACHINE LEARNING
Field
The present invention relates to video monitoring using machine learning techniques. More particularly, the present invention relates to monitoring CCTV video streams using machine learning techniques in order to identify events.
Background
CCTV surveillance is often performed by human operators in industries ranging from transport, security, manufacturing, the military, the police and sea ports to critical infrastructure projects. Such CCTV systems are often monitored 24/7 by human operators sitting in control rooms. However, these systems generate a vast volume of data, estimated worldwide to be in the region of 566 petabytes daily.
It is not feasible for humans to review and evaluate this amount of data, both because of its sheer scale and because of physical limitations such as loss of attention and tiredness in the operators. Furthermore, operators in different industries have different needs and requirements, making the monitoring difficult to automate without using task-specific or static solutions, which are not scalable or portable between applications. These automated systems are often used in combination with dedicated hardware that does not adapt to the required applications or surroundings.
Summary of Invention
Aspects and/or embodiments seek to provide an improved method of monitoring CCTV systems that is scalable, portable and application agnostic.
Described herein is a method of training a CCTV event recognition model using machine learning, the method comprising the steps of: monitoring event recognition activity of an operator monitoring a CCTV stream; and developing, using a machine learning approach, an event recognition model based on the event recognition activity of the operator.
Compared to existing solution providers, who provide static systems to work with specific use cases, aspects and/or embodiments provide a trainable virtual assistant that can learn to perform CCTV-based video monitoring tasks with help from human supervisors. Users do not have to individually programme a new use case; instead, the system learns new use cases by monitoring or “watching” the human operator. The system can be more accurate and efficient than a typical human operator, can be applied across several industry verticals, and can adapt to its environment in the style of a human operator.
Optionally, the operator monitors the CCTV stream through a graphical user interface. The operator can interact with the CCTV stream through the graphical user interface, providing a convenient source of data for developing the event recognition model.
Optionally, the event recognition activity comprises identifying one or more events in the CCTV stream. For example, the one or more events may comprise at least one of: an accident; the presence of an intruder; a traffic violation; a vehicle passing; vehicle tracking; and vehicle identification.
Optionally, the CCTV stream is a live video stream, allowing a rapid response to any event that has been recognised.
Optionally, the CCTV stream comprises input video streams from a plurality of CCTV cameras, allowing a wide area to be monitored.
Optionally, the CCTV stream originates from one or more IP cameras. This allows the method to be used without the need for dedicated hardware.
Optionally, the step of developing the event recognition model comprises the use of at least one of: Deep Convolutional Networks; Recurrent Neural Networks; and Reinforced Learning. These methods provide a fast and accurate way of training event recognition models.
Optionally, the method further comprises the step of recommending identified events in the CCTV stream to the user. The method may further comprise the step of receiving feedback from the user relating to the identified event, the feedback being received through a graphical user interface.
Optionally, the method may further comprise the step of seeking the feedback from the user. Such a step may comprise “actively” seeking feedback, for example by sending a prompt or similar to the user to request feedback, optionally through the graphical user interface, with the user notified accordingly using conventional notification means such as visual, sound and vibration alerts.
Optionally, the method further comprises the step of refining the event recognition model based on the received feedback. Receiving feedback from a user relating to potentially recognised events provides an additional source of data for training the event recognition models, which can be used to refine the model after an initial period of training by observation only.
Optionally, the step of refining the event recognition model comprises the use of machine learning, preferably using at least one of: Deep Convolutional Networks; Recurrent Neural Networks; and Reinforced Learning. These methods provide a fast and accurate way of refining the event recognition models.
Optionally, the method further comprises the step of detecting a general event category based on the event recognition activity. Detecting a general event category can increase the efficiency of the training process.
Optionally, the step of detecting a general event category occurs prior to the development of the event recognition model. This can provide a starting point from which the event recognition model can be trained, for example by providing a known initial model for that event category, which can increase the efficiency of the training process.
Optionally, the general event category comprises one or more of: vehicles; pedestrians; and intruders. The general event category, or object category, can be one tailored for common CCTV applications.
Optionally, the event recognition model is developed using a deep learning technique. Deep learning techniques can result in accurate models that run efficiently.
Optionally, the event recognition model comprises at least one of: a Convolutional Neural Network; and a Recurrent Neural Network. Convolutional Neural Networks and Recurrent Neural Networks provide examples of fast and accurate models that can be used with visual data.
The method may be performed in the cloud, or on a system local to the CCTV system.
Also described herein is a system for monitoring CCTV streams, the system comprising: one or more CCTV cameras, operable to provide a CCTV stream; and an event recognition module; wherein the event recognition module is operable to identify one or more event types in the CCTV stream using an event recognition model developed using a machine learning technique.
The system may additionally comprise a user interface (for example, a graphical user interface) that allows an operator to monitor the events recognised by the event recognition algorithm, for example remotely. The event recognition module may be implemented in the cloud, or another distributed computing system, or may be implemented in a system local to the operator.
Optionally, the event recognition model may be developed using the method of any preceding aspect of the invention.
Also described herein is a system and/or a method substantially as herein described and illustrated in the accompanying drawings.
Brief Description of Drawings
Embodiments will now be described, by way of example only and with reference to the accompanying drawings having like-reference numerals, in which:
Figure 1 illustrates an embodiment of a system for monitoring CCTV feeds; and
Figure 2 illustrates an embodiment of a method of training event recognition models for CCTV feeds.
Specific Description
Referring to Figures 1 to 2, an exemplary embodiment will now be described.
Figure 1 illustrates an embodiment of a system for monitoring CCTV feeds. One or more CCTV cameras 1 are used to monitor one or more scenes 3, which in this example comprise a road. The images recorded by the CCTV camera are output to a computing device 5, and can be monitored by an operator 7. The output is further monitored by an event recognition module 9, on which an event recognition model runs. The event recognition model operates to identify events 11 in the CCTV stream, in this case the passage of cars 13 through the scene 3, and presents the recognised events 11 to the operator 7 via a user interface on the operator computing device. The event recognition model can be described as providing a “virtual agent/assistant” to assist the human operator 7 of the CCTV system.
The CCTV cameras 1 can be part of a network of CCTV cameras that cover multiple scenes or views, and which provide video streams, which can be live, to the CCTV system. The CCTV cameras are, in some embodiments, IP cameras. The video streams output by the CCTV cameras can be in any format, but are preferably in MJPEG or an H264-compatible format. In general, the training method is hardware agnostic - it can be applied to a video stream originating from any device.
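By way of illustration only, the following is a minimal sketch of such hardware-agnostic stream ingestion, assuming OpenCV is available and that the camera exposes its MJPEG or H264 stream over an RTSP or HTTP URL; the camera address in the usage comment is hypothetical and not taken from this document.

```python
# Minimal sketch: hardware-agnostic frame ingestion with OpenCV.
# Assumption: the camera exposes an RTSP/HTTP endpoint (URL below is hypothetical).
import cv2

def frames_from_stream(url):
    """Yield decoded frames from an MJPEG or H264 video stream."""
    capture = cv2.VideoCapture(url)
    if not capture.isOpened():
        raise RuntimeError("Could not open stream: %s" % url)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:          # stream dropped or ended
                break
            yield frame
    finally:
        capture.release()

# Usage (hypothetical camera address):
# for frame in frames_from_stream("rtsp://192.168.0.10/stream1"):
#     handle(frame)
```

The same loop works unchanged whether the URL points at an IP camera, a video file or a local capture device, which is the sense in which the ingestion is device agnostic.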
The event recognition model is trained to identify particular types of event in the CCTV streams as described below in relation to Figure 2. Examples of the events that the event recognition model can be trained to identify include: an accident; the presence of an intruder; a traffic violation; and a vehicle passing. The event recognition model can further be configured to maintain a record of the identified events, which may include counting the number of events recognised of particular event types.
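As a purely illustrative sketch of such a record, the structure below keeps a per-type count alongside a timestamped log; the event-type strings are assumptions chosen to match the examples above, not identifiers defined here.

```python
# Illustrative event record: per-type counts plus a timestamped log.
from collections import Counter
from datetime import datetime, timezone

class EventLog:
    def __init__(self):
        self.counts = Counter()     # event type -> number of occurrences
        self.entries = []           # (timestamp, event type) pairs

    def record(self, event_type):
        """Log one recognised event and update the running count."""
        self.counts[event_type] += 1
        self.entries.append((datetime.now(timezone.utc), event_type))

log = EventLog()
log.record("vehicle passing")
log.record("traffic violation")
print(log.counts)   # Counter({'vehicle passing': 1, 'traffic violation': 1})
```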
The event recognition module 9 may run on a system local to the CCTV monitoring system, or may alternatively run on a remote or distributed system, such as the cloud.
Figure 2 illustrates an embodiment of a method of training event recognition models for CCTV feeds.
Users connect their existing CCTV cameras to a training platform/system and provide the system with access to the video streams output by the CCTV cameras. The training process is then initialised. The system starts by detecting general categories of events 15 that are present in the CCTV streams, such as vehicles or pedestrians. The training system, or module, can be located with the CCTV system itself, or alternatively may run remotely in a cloud or distributed computing system.
By detecting the general category of events, the system can be substantially ready “out of the box” to perform general object detection and tracking. Optimisation by watching the human operator can then be used to fine-tune the object detection and tracking, providing adaptability and flexibility to the event recognition models.
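One way the out-of-the-box general category detection could look, as a sketch under stated assumptions: a torchvision detector pre-trained on COCO stands in for the initial object detector, and the mapping from COCO labels to the coarse categories named above (vehicles, pedestrians) is an assumption made for illustration.

```python
# Sketch of general-category detection using a pre-trained torchvision detector
# as a stand-in for the out-of-the-box model (an assumption, not the claimed
# implementation). COCO label ids: 1 person, 3 car, 6 bus, 8 truck.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

COARSE_CATEGORIES = {1: "pedestrians", 3: "vehicles", 6: "vehicles", 8: "vehicles"}

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def general_categories(frame, score_threshold=0.6):
    """frame: RGB image as a NumPy array (H, W, 3) or PIL image.
    Returns the coarse categories (vehicles/pedestrians) present in the frame."""
    with torch.no_grad():
        prediction = detector([to_tensor(frame)])[0]
    present = set()
    for label, score in zip(prediction["labels"], prediction["scores"]):
        if score >= score_threshold and int(label) in COARSE_CATEGORIES:
            present.add(COARSE_CATEGORIES[int(label)])
    return present
```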
A graphical user interface (GUI) is provided that displays the camera feed and additional controls to the user, allowing the human CCTV operator to perform his job as usual while the training system observes in the background, effectively monitoring the user event recognition activity passively 17. The user interacts with the CCTV stream as normal, flagging events that the user recognises through the GUI. This information is fed back into the training system as training data, and machine learning techniques are applied to the data to develop the event recognition model 21 based on the user event recognition activity. Examples of the machine learning methods used include, but are not limited to, Deep Convolutional Networks, preferably in combination with appearance- and motion-based models to track moving objects.
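The following sketch illustrates, under assumptions, how passively observed operator flags could be turned into supervised training data: frames around a flagged moment are labelled with the flagged event type and other frames as background, and a small convolutional classifier is fitted to them. The network and helper names (SmallEventNet, train_from_operator_flags) are invented for illustration and are not the architecture or pipeline claimed here.

```python
# Sketch (illustrative only): operator flags treated as labels for a small
# convolutional classifier trained with PyTorch.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SmallEventNet(nn.Module):
    def __init__(self, num_event_types):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_event_types)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train_from_operator_flags(frames, labels, num_event_types, epochs=5):
    """frames: (N, 3, H, W) tensor; labels: (N,) event ids, 0 = background."""
    model = SmallEventNet(num_event_types)
    loader = DataLoader(TensorDataset(frames, labels), batch_size=32, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for batch_frames, batch_labels in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(batch_frames), batch_labels)
            loss.backward()
            optimiser.step()
    return model
```

In practice the flags would be harvested continuously from the GUI rather than collected into in-memory tensors, but the labelling principle is the same.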
Following a period of time developing the model based on passive observation of the user event recognition activity, for example a week of passive watching, the event recognition model will have been trained to focus on objects of interest by evaluating the human operator. The event recognition model will then be applied to the input CCTV stream 23. The model will identify potentially interesting events from CCTV camera feeds, including events from other camera feeds that it has not been trained on, and recommend these events to the user 27. These are identified using unsupervised machine learning methods, such as anomaly detection. The supervisor can dismiss or interact with identified events to provide feedback 27, providing additional data that can be fed back into the model as training data, thereby allowing the model to be refined 29.
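As an example of what such unsupervised anomaly detection could look like, the sketch below scores each frame's feature embedding against a running estimate of what has been seen so far and proposes frames whose score is unusually high; the embedding source, the warm-up count and the threshold are all assumptions for illustration.

```python
# Sketch of simple online anomaly scoring over per-frame feature embeddings.
# Assumptions: embeddings come from an upstream model; a 3-standard-deviation
# threshold and a 30-frame warm-up are illustrative choices.
import numpy as np

class RunningAnomalyScorer:
    def __init__(self, dim, threshold=3.0, warmup=30):
        self.mean = np.zeros(dim)
        self.var = np.ones(dim)
        self.count = 0
        self.threshold = threshold
        self.warmup = warmup

    def score(self, embedding):
        """Mean per-dimension z-score of the embedding against history."""
        return float(np.mean(np.abs(embedding - self.mean) / np.sqrt(self.var + 1e-8)))

    def update(self, embedding):
        """Incremental update of the running mean and variance."""
        self.count += 1
        delta = embedding - self.mean
        self.mean = self.mean + delta / self.count
        self.var = self.var + (delta * (embedding - self.mean) - self.var) / self.count

    def observe(self, embedding):
        """Score first, then fold the frame into the running statistics."""
        anomalous = self.count > self.warmup and self.score(embedding) > self.threshold
        self.update(embedding)
        return anomalous
```

Frames for which observe() returns True would be the ones recommended to the operator; a dismissal then feeds back as a negative training example.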
After sufficient training, the event recognition model is able to plug into multiple camera feeds and actively show actionable alerts relating to events to the human user/operator. Further feedback can be received from the user, and the model is rewarded for correctly recognised events and penalised for incorrectly recognised events. Thus, the event recognition model (and/or the system) may be trained further by the user after deployment. For example, the event recognition model may be trained to a level of around 90% before use, with the remaining 10% or so of the training being carried out (e.g. “on site”) using user feedback. Thus, the model may be further trained in an interactive manner by actively seeking feedback from human operators.
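A minimal sketch of how the reward/penalty on operator feedback could be realised, assuming a confirmed alert can be treated as a positive label and a dismissed alert as a background label; it reuses the illustrative SmallEventNet sketch above and is not the refinement procedure defined here.

```python
# Sketch: one refinement step per item of operator feedback (illustrative).
import torch
import torch.nn.functional as F

def refine_with_feedback(model, frame, predicted_event, confirmed,
                         background_class=0, lr=1e-4):
    """Nudge the model towards confirmed alerts and away from dismissed ones."""
    optimiser = torch.optim.SGD(model.parameters(), lr=lr)
    target_class = predicted_event if confirmed else background_class
    target = torch.tensor([target_class])
    loss = F.cross_entropy(model(frame.unsqueeze(0)), target)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return float(loss)
```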
Examples of events that the model can be trained to recognise include: traffic violations; suspected intruders; or accidents. The model can also be trained to track objects within the CCTV stream, for example following a vehicle as it drives through a scene. The tracking of the objects may extend across CCTV streams provided by multiple cameras, allowing an object to be tracked over a larger area.
The process of refining the model continues while the model is in use by the operator. Eventually, the model will be able to offload most of the human operator’s work, though his input will still be required to process the alerts. In a sense, the model is trained in a similar way to training a newly hired camera operator; the experienced human operator trains the model, which eventually learns to outperform the operators who trained it.
Examples of the situations to which the event recognition model can be applied include, but are not limited to: traffic violations, such as driving in the bus lane or taking a banned turn at a junction; and intruder detection, for example where a system monitoring a fence has learned over time that pedestrians are rarely detected in the vicinity, and will flag any pedestrian that it does spot.
The event recognition model can comprise a Deep Convolutional Network and/or a Recurrent Network. Convolutional Networks can be used for visual object detection, and Recurrent Networks can be used for motion tracking of objects within the CCTV stream.
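Purely as an illustration of how the convolutional and recurrent parts could be combined, the sketch below encodes each frame with a small CNN, aggregates the encoded sequence with an LSTM, and scores the clip per event type; the layer sizes are arbitrary assumptions, not the claimed architecture.

```python
# Sketch of a combined convolutional/recurrent event model (illustrative sizes).
import torch
import torch.nn as nn

class ConvRecurrentEventModel(nn.Module):
    def __init__(self, num_event_types, hidden_size=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # per frame -> 32 features
        )
        self.recurrent = nn.LSTM(input_size=32, hidden_size=hidden_size,
                                 batch_first=True)
        self.head = nn.Linear(hidden_size, num_event_types)

    def forward(self, clip):
        """clip: (batch, time, 3, H, W) -> (batch, num_event_types) scores."""
        batch, time = clip.shape[:2]
        per_frame = self.encoder(clip.flatten(0, 1)).view(batch, time, -1)
        _, (hidden, _) = self.recurrent(per_frame)
        return self.head(hidden[-1])
```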
The event recognition model, once trained, can run locally within the CCTV system's computers. Alternatively, it could run in the cloud or any distributed computing system.
Any system feature as described herein may also be provided as a method feature, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure.
In some embodiments, the event recognition model can be trained to count the number of vehicles passing a section of road.
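One illustrative way such counting could be implemented on top of tracked object positions is to count a vehicle whenever its tracked centroid crosses a virtual line laid across the carriageway; the tracker supplying the centroids and the line position are assumptions made for the sketch.

```python
# Sketch: count vehicles by virtual-line crossing of tracked centroids
# (track data and line position are illustrative assumptions).
def count_line_crossings(tracks, line_y):
    """tracks: dict of track id -> list of (x, y) centroids over time."""
    crossings = 0
    for centroids in tracks.values():
        for (_, y_prev), (_, y_curr) in zip(centroids, centroids[1:]):
            if y_prev < line_y <= y_curr:      # crossed the line travelling "down"
                crossings += 1
                break                          # count each track at most once
    return crossings

# Hypothetical example: one of two tracked vehicles crosses the line at y = 100.
example_tracks = {1: [(50, 80), (52, 95), (54, 110)],
                  2: [(200, 40), (201, 60)]}
print(count_line_crossings(example_tracks, line_y=100))   # -> 1
```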
Any feature in one aspect may be applied to other aspects, in any appropriate combination. In particular, method aspects may be applied to system aspects, and vice versa. Furthermore, any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination.
It should also be appreciated that particular combinations of the various features described and defined in any aspects can be implemented and/or supplied and/or used independently.

Claims (22)

CLAIMS:
1. A method of training a CCTV event recognition model using machine learning, the method comprising the steps of:
monitoring event recognition activity of an operator monitoring a CCTV stream; and developing, using a machine learning approach, an event recognition model based on the event recognition activity of the operator.
2. The method of claim 1, wherein the operator monitors the CCTV stream through a graphical user interface.
3. The method of claim 1 or 2, wherein the event recognition activity comprises identifying one or more events in the CCTV stream.
4. The method of claim 3, wherein the one or more events comprise at least one of: an accident; the presence of an intruder; a traffic violation; a vehicle passing; vehicle tracking; and vehicle identification.
5. The method of any preceding claim, wherein the CCTV stream is a live video stream.
6. The method of any preceding claim, wherein the CCTV stream comprises input video streams from a plurality of CCTV cameras.
7. The method of any preceding claim, wherein the CCTV stream originates from one or more IP cameras.
8. The method of any preceding claim, wherein the step of developing the event recognition model comprises the use of at least one of: Deep Convolutional Networks; Recurrent Neural Networks; and Reinforced Learning.
9. The method of any preceding claim, further comprising the step of recommending identified events in the CCTV stream to the user.
10. The method of claim 9, further comprising the step of receiving feedback from the user relating to the identified event, the feedback being received through a graphical user interface.
11. The method of claim 10, comprising the step of seeking the feedback from the user.
12. The method of claim 10 or 11, further comprising the step of refining the event recognition model based on the received feedback.
13. The method of claim 12, wherein the step of refining the event recognition model comprises the use of machine learning, preferably using at least one of: Deep Convolutional Networks; Recurrent Neural Networks; and Reinforced Learning.
14. The method of any preceding claim, further comprising the step of detecting a general event category based on the event recognition activity.
15. The method of claim 14, wherein the step of detecting a general event category occurs prior to the development of the event recognition model.
16. The method of claim 14 or 15, wherein the general event category comprises one or more of: vehicles; pedestrians; and intruders.
17. The method of any preceding claim, wherein the event recognition model is developed using a deep learning technique.
18. The method of any preceding claim, wherein the event recognition model comprises at least one of: a Convolutional Neural Network; and a Recurrent Neural Network.
19. A system for monitoring CCTV streams, the system comprising:
one or more CCTV cameras, operable to provide a CCTV stream; and an event recognition module;
wherein the event recognition module is operable to identify one or more event types in the CCTV stream using an event recognition model developed using a machine learning technique.
20. The system of claim 19, wherein the event recognition model was developed using the method of any of claims 1 to 18.
21. A system substantially as hereinbefore described in relation to the Figures.
22. A method substantially as hereinbefore described in relation to the Figures.
Intellectual Property Office. Application No: GB1617566.3. Examiner: Dr Fabio Noviello
GB1617566.3A 2016-10-17 2016-10-17 Video monitoring using machine learning Active GB2554948B8 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1617566.3A GB2554948B8 (en) 2016-10-17 2016-10-17 Video monitoring using machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1617566.3A GB2554948B8 (en) 2016-10-17 2016-10-17 Video monitoring using machine learning

Publications (4)

Publication Number Publication Date
GB201617566D0 (en) 2016-11-30
GB2554948A 2018-04-18
GB2554948B (en) 2022-01-19
GB2554948B8 (en) 2022-02-09

Family

ID=57680634

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1617566.3A Active GB2554948B8 (en) 2016-10-17 2016-10-17 Video monitoring using machine learning

Country Status (1)

Country Link
GB (1) GB2554948B8 (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995016252A1 (en) * 1993-12-08 1995-06-15 Minnesota Mining And Manufacturing Company Method and apparatus for machine vision classification and tracking
WO2009070560A1 (en) * 2007-11-29 2009-06-04 Nec Laboratories America, Inc. Efficient multi-hypothesis multi-human 3d tracking in crowded scenes
US20110052000A1 (en) * 2009-08-31 2011-03-03 Wesley Kenneth Cobb Detecting anomalous trajectories in a video surveillance system
WO2015000192A1 (en) * 2013-07-02 2015-01-08 深圳市华星光电技术有限公司 Air floatation guide wheel conveyer for liquid crystal panel
CN105160313A (en) * 2014-09-15 2015-12-16 中国科学院重庆绿色智能技术研究院 Method and apparatus for crowd behavior analysis in video monitoring
WO2016102733A1 (en) * 2014-12-23 2016-06-30 Universidad De Málaga Computer vision system and method for the detection of anomalous objects (pedestrians or animals) on roads or motorways
CN105184271A (en) * 2015-09-18 2015-12-23 苏州派瑞雷尔智能科技有限公司 Automatic vehicle detection method based on deep learning
CN105447458A (en) * 2015-11-17 2016-03-30 深圳市商汤科技有限公司 Large scale crowd video analysis system and method thereof

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019043406A1 (en) * 2017-08-31 2019-03-07 Calipsa Limited Anomaly detection from video data from surveillance cameras
US11232327B2 (en) 2019-06-19 2022-01-25 Western Digital Technologies, Inc. Smart video surveillance system using a neural network engine
US11875569B2 (en) 2019-06-19 2024-01-16 Western Digital Technologies, Inc. Smart video surveillance system using a neural network engine

Also Published As

Publication number Publication date
GB2554948B (en) 2022-01-19
GB201617566D0 (en) 2016-11-30
GB2554948B8 (en) 2022-02-09

Similar Documents

Publication Publication Date Title
Laufs et al. Security and the smart city: A systematic review
US11328163B2 (en) Methods and apparatus for automated surveillance systems
US10977519B2 (en) Generating event definitions based on spatial and relational relationships
US7944468B2 (en) Automated asymmetric threat detection using backward tracking and behavioral analysis
JP2022095617A (en) Video processing system, video processing method and video processing program
EP2980767B1 (en) Video search and playback interface for vehicle monitor
DE102017129076A1 (en) AUTONOMOUS SCHOOLBUS
US11037604B2 (en) Method for video investigation
CN108230669B (en) Road vehicle violation detection method and system based on big data and cloud analysis
CN110390232A (en) Confirm method, apparatus, server and the system of irregular driving
CN111523362A (en) Data analysis method and device based on electronic purse net and electronic equipment
Casado et al. Multi‐agent system for knowledge‐based event recognition and composition
CN110895663B (en) Two-wheel vehicle identification method and device, electronic equipment and monitoring system
GB2554948A (en) Video monitoring using machine learning
Van Rest et al. Requirements for multimedia metadata schemes in surveillance applications for security
Gadgil et al. A web-based video annotation system for crowdsourcing surveillance videos
Keval Effective design, configuration, and use of digital CCTV
Barnard et al. Field Operational Tests: challenges and methods
Pavletic The Fourth Amendment in the age of persistent aerial surveillance
Davies et al. Integrating body-worn cameras, drones, and AI: A framework for enhancing police readiness and response
Ferryman Video surveillance standardisation activities, process and roadmap
Zhu Jr Study of key technologies for intelligent monitoring and face recognition systems
BG4804U1 (en) INTELLIGENT SECURITY SYSTEM THROUGH VIDEO SURVEILLANCE
Guo On the Effect of Ranger Patrols on Deterring Poaching: A Bayesian Approach for Causal Inference Using Field Tests as an Instrument
Rahman Development and evaluation of a smartphone-based system for inspection of road maintenance work