GB2557920A - Learning analytics - Google Patents

Learning analytics

Info

Publication number
GB2557920A
GB2557920A (application GB1621479.3A / GB201621479A; granted publication GB2557920B)
Authority
GB
United Kingdom
Prior art keywords
pattern
stored
detected object
movement
detected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1621479.3A
Other versions
GB201621479D0 (en)
GB2557920B (en)
Inventor
Villy Todorov Kliment
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Europa NV
Original Assignee
Canon Europa NV
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Europa NV
Priority to GB1621479.3A
Publication of GB201621479D0
Publication of GB2557920A
Application granted
Publication of GB2557920B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 Burglar, theft or intruder alarms
    • G08B13/18 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 Image analysis to detect motion of the intruder, e.g. by frame subtraction

Abstract

A method of performing image analysis on a video stream comprises: detecting an object 503 within a captured image in the video stream; tracking movement of the object within the video stream; determining whether the tracked movement 505 of the detected object corresponds to a pre-stored pattern 601; and when the tracked movement does not correspond to a pre-stored pattern, notifying a user of a video surveillance system accordingly. The method could also give the user the option of storing and labelling (e.g. as suspicious or safe) a detected track as a new pre-stored pattern in a database (240). The user notification may be given by e-mail, SMS message or a dialogue box on a display. Tracks may be stored as co-ordinate points. Independent claims are also included for an apparatus and computer program.

Description

(54) Title of the Invention: Learning analytics
Abstract Title: Comparing tracked movements of objects in a video stream with pre-stored patterns
[FIG. 1: example of a surveillance system 100, with client terminal 110, recording server 130 and cameras 140, 150]
[FIG. 2: management server modules and databases, including object database 230, pattern similarity module 250 and temporary pattern database 260]
[FIG. 3: hardware configuration of a server/client computer, including CPU 330]
[FIG. 4: hardware configuration of a camera 400: lens 410, image capture sensor 420, internal memory 430, processor 440, input/output 450]
[FIG. 5: behavioural pattern of a detected object within a captured scene 501]
[FIG. 6: behavioural pattern of a detected object compared with a pre-stored pattern]
[FIG. 7: second example of a behavioural pattern, scene 701]
[FIG. 8: flowchart of processing performed for object detection and object tracking]
[FIG. 9: flowchart of processing performed for pattern creation]
[FIG. 10: example of stored movement points obtained for a detected object; table 1000, columns 1010 to 1012:]

    Pattern ID | Point No. | Coordinates
    28         | 1         | 2,4
    28         | 2         | 6,5
    28         | 3         | 8,12
    28         | Xn        | 15,32
    28         | Xn+1      | 16,39

[FIG. 11: flowchart of pattern recognition]
[FIG. 12: example of a GUI 1200 used in the video surveillance system]
LEARNING ANALYTICS
FIELD OF THE INVENTION
The present invention relates to a surveillance system. In particular, the present invention relates to image analysis within a video surveillance system, and behavioural patterns of objects that are detected in captured image data.
DESCRIPTION OF THE RELATED ART
Object recognition in video streams within a video surveillance system is known. Video surveillance systems can be configured to identify when an object detected within a series of captured images follows a pre-stored pattern of behaviour. For example, such pre-stored patterns of behaviour might include recognising that a detected object such as a backpack or suitcase remains in a stationary position for an extended period of time, or recognising that a detected object, such as a human, follows a pre-stored movement path.
Such video surveillance systems typically require behaviours or patterns to be defined by a vendor and configured in the video surveillance software before the video surveillance system is installed on site. Accordingly, the behaviour patterns defined in advance might not be suitable for each and every site on which a video surveillance system is installed.
SUMMARY OF THE INVENTION
According to a first aspect of the invention there is provided a method for performing image analysis on a video stream as provided in claims 1 to 9.
According to a second aspect of the invention there is provided an apparatus for performing image analysis on a video stream as provided in claim 10.
According to a third aspect of the invention there is provided a program for performing image analysis on a video stream as provided in claim 11.
According to a fourth aspect of the invention there is provided a computer-readable storage medium storing a program for performing image analysis on a video stream as provided in claim 12.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings in which:
Figure 1 illustrates an example of a surveillance system,
Figure 2 illustrates an example of a management server comprising modules and databases,
Figure 3 illustrates an example of a hardware configuration of a server and client device of the surveillance system,
Figure 4 illustrates a hardware configuration of a surveillance camera,
Figure 5 illustrates an example of a behavioural pattern of a detected object within a scene captured by a video camera of a surveillance system,
Figure 6 illustrates an example of a behavioural pattern of a detected object and a pre-stored pattern of a surveillance system,
Figure 7 illustrates a second example of a behavioural pattern,
Figure 8 is a flowchart of processing performed for object detection and object tracking,
Figure 9 is a flowchart of processing performed for pattern creation,
Figure 10 illustrates an example of stored movement points obtained for a detected object,
Figure 11 is a flowchart of pattern recognition, and
Figure 12 illustrates an example of a GUI used in the video surveillance system.
DESCRIPTION OF THE EMBODIMENTS
First Embodiment
A video surveillance system 100 as illustrated in figure 1 comprises a client terminal 110, a management server 120, a recording server 130, a plurality of cameras 140 - 150 (video sources or capturing means), and a network/bus 160. A video surveillance system 100 will typically include a large number of cameras 140 - 150. The system 100 of figure 1 is for illustrative purposes only and any number of cameras and/or servers may be provided.
The client terminal 110 is provided for use by a user in order to monitor or review the video data, or captured images, of the cameras 140 - 150. The client 110 is configured to communicate via the network/bus 160 with the management server 120, the recording server 130, and the plurality of cameras 140 - 150.
The recording server 130 stores video data that is captured by the cameras 140 - 150. The recording server 130 allows for long-term storage or the archiving/retention of captured video data.
The management server 120 stores settings which are to be applied to the video surveillance system 100. For example, the management server 120 might store settings that determine how the recording server 130 should operate. Further, the management server 120 is able to control user log-in and access to the video surveillance system 100.
Additionally, the management server 120 includes software modules and databases as illustrated in figure 2. Figure 2 illustrates that the management server 120 comprises a pattern engine 210, an object detection module 220, a pattern similarity module 250, and databases such as an object database 230, a pattern database 240, and a temporary pattern database 260. The pattern engine 210 is configured to generate new patterns based on the movement of an object within captured images within which an object has been detected by the object detection module 220. The object database 230 is a database that holds a number of known objects and can be used to verify whether or not an object detected by the object detection module 220 is known. The pattern database 240 is a database that is configured to hold a number of pre-stored patterns that can be used as a reference for the behaviour of an object that has been detected within a captured image. The pattern similarity module 250 is configured to identify a similarity of the behavioural pattern of a detected object with a pre-stored pattern that is held in the pattern database 240. The temporary pattern database 260 is configured to temporarily store a number of points at which a moving object is detected as the object moves within captured video data. Each of the pattern engine 210, object detection module 220, object database 230, pattern database 240 and temporary pattern database 260 will be discussed in greater detail subsequently.
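By way of a non-limiting illustration only (this data layout is an editorial assumption and is not defined by the patent), the three databases of figure 2 might be modelled along the following lines in Python:

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Point = Tuple[int, int]  # (x, y) coordinates of a detected object within a frame

    @dataclass
    class ManagementServerStores:
        # Known objects used to verify detections (object database 230).
        object_database: Dict[str, dict] = field(default_factory=dict)
        # Pre-stored behavioural patterns, each a labelled sequence of points (pattern database 240).
        pattern_database: List[dict] = field(default_factory=list)
        # In-progress tracks keyed by object id (temporary pattern database 260).
        temporary_pattern_database: Dict[int, List[Point]] = field(default_factory=dict)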
The client terminal 110, the management server 120, and the recording server 130 of figure 1 have a system architecture consistent with the computer shown in figure 3. The architecture shown in figure 3 is greatly simplified and any suitable computer architecture may be used.
Figure 3 illustrates a typical arrangement for a computer 300. A processor 310 is configured to communicate via a bus 320 with a central processing unit (CPU) 330, a hard disk 340, and a display 350. An input/output port 360 is configured so that the computer 300 can communicate with external devices.
The processor 310 is used to control the analysis of data performed by the CPU 330. Data is stored in the hard disk 340. The display 350 is used to convey information to the user, which is achieved using a monitor for example. The input/output port 360 receives data from other devices, transmits data via the network, and allows a user to give instructions to the computer 300 using a mouse and a keyboard.
The plurality of cameras 140 - 150 have a hardware configuration generally described in connection with figure 4. Figure 4 shows an example arrangement for a camera 400, which comprises a lens 410, an image capture sensor 420, an internal memory 430, a processor 440, and an input/output port 450.
The lens 410 is used to transmit light to the image capture sensor 420, and the resulting image is stored in the memory 430. The processor 440 is used to instruct the camera 400 to capture the image, modify the image, and store the image in the internal memory 430. The processor 440 is also used to transmit and receive data via the input/output port 450.
The camera types may vary. For example, the plurality of cameras 140 - 150 might be any one of, or a combination of, a pan-tilt-zoom (PTZ) camera, a 360 degree camera, or any other type of camera known in the art. The main feature is that the cameras 140 - 150 can send a stream of video data to at least the recording server 130, via the management server 120, for storage.
Cameras 140 - 150 capture images of a scene and the captured images are sent via network/bus 160 to the management server 120. The management server 120, as discussed, can comprise a number of modules and/or databases configured to perform specific processing on the video data captured by the cameras 140 - 150.
The management server 120 comprises an object detection module 220 to perform detection of objects identified within the video data captured by the cameras 140 - 150. Object detection module 220 detects or identifies objects in image data or video data such as, for example, a video sequence. Such object detection modules can perform detection of objects in a multitude of ways. Examples of methods that are employed to perform object detection include the following: approaches based on CAD-like object models such as edge detection or primal sketch, appearance based methods such as edge matching or greyscale matching, and feature-based methods such as pose consistency and pose clustering.
The object detection module 220 is employed to detect objects within each frame of image data that is captured by the cameras 140 - 150. Accordingly, an object of interest can be detected and tracked across each of the subsequent video frames captured by the cameras 140 - 150. As an object is detected within each of the video frames, the coordinates of the detected object within the video frame can be identified and recorded such that the detected object can be tracked as it moves through a scene captured by the cameras 140 - 150.
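By way of a non-limiting sketch (the detect_objects interface below is an editorial assumption, not something the patent specifies), the per-frame detect-and-record step could look as follows:

    from typing import Callable, Dict, Iterable, List, Tuple

    Point = Tuple[int, int]

    def update_tracks(frame,
                      detect_objects: Callable[[object], Iterable[Tuple[int, int, int]]],
                      tracks: Dict[int, List[Point]]) -> None:
        # detect_objects stands in for any of the detection approaches named
        # above (edge matching, greyscale matching, pose clustering, ...) and
        # is assumed to yield (object_id, x, y) triples for the current frame.
        for object_id, x, y in detect_objects(frame):
            # Append the object's coordinates so that its path through the
            # scene can be reconstructed later.
            tracks.setdefault(object_id, []).append((x, y))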
For example, figure 5 illustrates a scene of a carpark that is captured by the cameras 140 - 150. Here it can be seen that a human is detected as an object 503 by the object detection module 220. The object detection module 220 first detected the object 503 whilst the object was positioned at a point 504 on the left side of the scene. Point 504 is considered as the point of primary detection. Dashed line 505 illustrates a path that the human 503 has taken whilst moving across the scene 501. The cameras 140 - 150 continually capture video frames whilst the human 503 transitions from the left side of the scene 501 to the right side of the scene 501. The position of the human 503 is detected within a number of the captured video frames such that the movement of the human can be tracked and recorded. It can be seen from figure 5 that the human 503 has visited each of the cars 502 parked in each of the parking bays 506 within the scene.
Accordingly, a pattern of the movement of object 503 has been created by performing detection of the object 503 for each frame of video data captured within a video sequence as the human 503 moves across scene 501 from a left side (starting point 504) to a right side of the scene 501.
The video surveillance system 100 further comprises a pattern database 240 that can store a number of behavioural patterns such that the behaviour of a detected object 503 can be compared to one of the pre-stored behavioural patterns. Accordingly, the behaviour of a detected object 503 can be tracked and the detected behavioural pattern 505 associated with that detected object 503 can be referenced against the pattern database 240 such that the video surveillance system 100 can analyse the movement of the detected object 503 and determine whether or not the behavioural pattern 505 of the detected object 503 is consistent with a pre-stored pattern of behaviour. Some of the pre-stored patterns of behaviour might be preloaded to the pattern database 240 prior to a customer purchasing the video surveillance system 100, or prior to a customer purchasing only the pattern database 240 that is to be installed on an already existing video surveillance system 100. Alternatively, some of the pre-stored patterns may have been configured upon installation of the cameras 140 - 150 based on the scenes captured by the cameras 140 - 150, or may have been added to the pattern database 240 subsequently.
Figure 5 illustrates the behavioural pattern 505 that has been detected for object 503 as it transitioned from a left side of the scene 501 to a right side of the scene 501. Figure 6 depicts the same scene 501 and the same behavioural pattern 505 as that illustrated in figure 5; however, figure 6 further illustrates a pre-stored pattern 601, represented by the line of dots. The pre-stored pattern 601 is held in the pattern database 240 as a sequence of coordinates and, for the purpose of illustration, the pre-stored pattern 601 is overlaid on scene 501 to illustrate a similarity between a pre-stored pattern and the behavioural pattern 505 of object 503. Here it can be seen that the two patterns correlate quite closely to one another and, in this instance, the pattern similarity module 250 of the video surveillance system 100 would return a match between the two patterns. In the event that a match is determined by the pattern similarity module 250, a user of the video surveillance system 100 would typically be alerted to the fact that movement of a detected object 503 has been matched to a pre-stored pattern 601 stored in the pattern database 240. As can be seen from figure 6, it is not necessary for an exact match to occur; rather, a match within a predetermined tolerance can be sufficient to trigger a notification to a user of the video surveillance system 100. For example, a comparison of the positional coordinates of the stored movement points that are associated with each of the current behavioural pattern 505 and the pre-stored pattern 601 can be performed as a method of determining whether or not the respective patterns are similar. A similarity within a predetermined number of pixels might be enough to trigger a match between a behavioural pattern and a pre-stored pattern. Alternatively, pattern matching might be performed according to image scaling. Further still, the coordinates of the tracked object over time form a time series, and the similarity of the series may be measured by, for example, cross-correlation. Such notification of a match between the pre-stored pattern 601 and the behavioural pattern 505 of an object 503 allows a user of the video surveillance system 100 to analyse the behavioural pattern of object 503 and determine whether or not the detected object is behaving suspiciously, and whether or not the behaviour of the detected object warrants further investigation. Pattern matching is further described below with reference to figure 11.
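For concreteness, a minimal sketch of such a point-wise comparison follows; the resampling step and the 20-pixel tolerance are illustrative assumptions rather than values taken from the patent, which also mentions cross-correlation of the coordinate time series as an alternative measure:

    import math
    from typing import List, Tuple

    Point = Tuple[float, float]

    def patterns_match(track: List[Point], stored: List[Point],
                       tolerance_px: float = 20.0) -> bool:
        # Resample both patterns to the same number of points so that tracks
        # of different lengths can be compared, then declare a match when the
        # mean point-to-point distance falls within the pixel tolerance.
        n = min(len(track), len(stored))
        if n == 0:
            return False

        def resample(points: List[Point], count: int) -> List[Point]:
            # Pick `count` evenly spaced points along the recorded sequence.
            step = (len(points) - 1) / max(count - 1, 1)
            return [points[round(i * step)] for i in range(count)]

        a, b = resample(track, n), resample(stored, n)
        mean_dist = sum(math.dist(p, q) for p, q in zip(a, b)) / n
        return mean_dist <= tolerance_px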
In the instance illustrated in figures 5 and 6, a pre-stored pattern of behaviour 601 that is consistent with a detected object 503 visiting each of the parking bays 506 might have been stored in the pattern database 240 because such movement or pattern of behaviour might be considered unusual and therefore considered to be consistent with the behaviour of a person that intends to commit a crime. Such a pattern of movement might be considered unusual because the person that represents the detected object might be assessing each of the cars 502 to identify a car 502 that might later become a target of crime. Such movement or pattern of behaviour would be considered to be inconsistent with the movement of a car owner that knows where their car 502 is parked and simply returns to their parked car 502 ready to drive away. The behavioural pattern 505 of figures 5 and 6 might be relatively simple to define as a pre-stored pattern and upload to the pattern database 240.
However, a person 503 who intends to commit a crime might not follow such a simple pattern when visiting cars 502 parked in parking bays 506; as such, a large number of variations in the behavioural pattern, or movement, of an object such as a car thief might exist. Such a large variation in behavioural patterns can cause a problem when pre-stored patterns are uploaded to a pattern database 240 because there is simply too much variation in the behavioural patterns of an object for each of the behavioural patterns to be pre-stored and uploaded to a pattern database 240 by a vendor. Accordingly, a pattern database will typically hold only a limited number of pre-stored patterns, and the pattern database 240 might not include a behavioural pattern that is particularly pertinent to a given scenario.
Figure 7 illustrates how, in the given scenario of a carpark and pre-stored patterns of a car thief, there is scope for a car thief's movement to vary when inspecting cars to target, and how it might be difficult to ensure that a pre-stored pattern corresponds to that movement.
Instead of following a relatively simple pattern of movement such as the one illustrated by behavioural pattern 505, the car thief adopts a more complicated movement pattern 705 as he/she moves between the parked cars 702. It will be understood that many variations of a movement path between parked cars might exist and it cannot be expected that the pattern database 240 will always hold a pre-stored pattern to match each variation of how a user might transition from one side of a scene to another, opposite, side of the scene, for example. Accordingly, the pattern database 240 cannot hold a pre-stored pattern for each of the possible variations of movement by a car thief when the surveillance system is initially configured, for example. As such, a behavioural pattern that is suspicious might not trigger an alert to a user because a pre-stored pattern that corresponds to that detected behavioural pattern does not exist in the pattern database 240 and a match between a behavioural pattern and a pre-stored pattern cannot occur. As mentioned, it might be that the pre-stored patterns are preloaded onto a pattern database at the point that a customer purchases a video surveillance system 100 or a pattern database 240, for example. Accordingly, the number of pre-stored patterns that are preloaded onto the pattern database 240 or video surveillance system 100 is limited.
Figures 8, 9 and 11 illustrate a number of processing steps in the form of flowcharts which, when implemented, allow the above-mentioned limitation to be overcome.
Figure 8 is a flowchart that illustrates processing steps that occur as the cameras 140 - 150 capture new video data of a scene. At S801 the management server 120 is waiting to receive new video data from the cameras 140 - 150. After new video data has been received by the management server 120, processing proceeds to step S802 where objects within the image data are detected using one of the above-mentioned methods of detection. At S802 the object detection module 220 analyses the captured video data to determine whether or not it can identify objects in the scene. As discussed, the object detection module 220 can perform detection of objects based on CAD-like object models such as edge detection or primal sketch, appearance based methods such as edge matching or greyscale matching, and feature-based methods such as pose consistency and pose clustering. The object detection module 220 can be configured to dismiss, during an object detection process, objects such as trees or any other object that would not be of interest to a user of the video surveillance system 100. Such object detection processing can thereby increase the efficiency of object detection because the object detection module can be configured to only detect objects such as humans and other objects that a user of the video surveillance system 100 might be interested in monitoring.
After the object detection module 220 has identified any objects of interest within the newly captured image data, processing proceeds to step S803 where it is determined whether or not any of the detected objects are moving within the captured scene. In the case that there is no movement (NO at S803) then the processing returns to step S801 until the cameras 140 - 150 send new image data to the management server 120 for analysis. In the case that it is determined at step S803 that there is movement in the captured image data (YES at S803) then processing proceeds to step S804 where it is checked whether or not a starting point of the detected object has previously been stored in the temporary pattern database 260. The starting point, or primary detection point, is essential for the creation of a behavioural pattern displayed by the detected object. The starting point corresponds to the first point at which movement of a detected object is detected. It might be at the edge of a frame, or within a frame if an object already in the field of image capture begins to move.
At step S804, in the case that it is determined that a starting point of the detected object cannot be found in the temporary pattern database 260 (NO at step S804) then processing proceeds to step S808. At step S808 the current position of the detected object is stored in the temporary pattern database 260 as the starting point, or primary detection point. The determined starting point and any subsequent points that are recorded in order to track movement of the detected object are recorded using a standard coordinate system or grid typically associated with image data. For example, an X and Y coordinate system ordered from top to bottom and left to right.
In the case that it is determined that a starting point of the detected object can be determined (YES at step S804) then the found starting point is considered the starting point of the current behavioural pattern and the process proceeds to S805. After the starting point, or primary detection point, has been established at either of step S804 or step S808, processing proceeds to step S805 where object tracking of the detected object commences. At step S805 the object is tracked by analysing sequential video frames and the position of the object within the sequential video frames is determined; the points or coordinates associated with the object in sequential frames are also detected and stored within the temporary pattern database 260 (this is further described with reference to figure 9). Tracking the object over sequential frames and recording the coordinates or position of the object in those frames allows a behavioural pattern of the detected object to be created as the object moves across the scene captured by the cameras 140 - 150.
A variety of algorithms can be used to perform object tracking. Common examples include target representation and localisation methods such as kernel-based tracking or contour tracking, and filtering and data association methods such as the Kalman filter or particle filter. After object tracking has begun, processing proceeds to step S806.
At step S806 it is determined whether or not the object that is currently tracked is still moving. Determination of movement is not necessarily restricted to a frame-by-frame basis and the determination can be set to span a pre-stored time period or a number of frames that are received by the management server 120. Such a configuration ensures that pattern creation is not stopped in the event that the detected object stops temporarily whilst it moves within the scene captured by the cameras 140 - 150. In the case that it is determined that the detected object does continue to move (YES at S806) within the scene captured by the cameras 140 - 150 then processing proceeds to step S809 where the current tracking of the detected object is marked as a pattern in progress and stored in the temporary pattern database 260 accordingly. From S809 processing further proceeds to step S810 where a behavioural pattern of the detected object continues to be created. The process of pattern creation is further described below with reference to figure 9.
However, and returning to step S806, in the case that it is determined at step S806 that the detected object does not continue to move within the captured scene then processing proceeds to step S807 where pattern recognition is performed. The process of pattern recognition that is performed at step S807 is further described below with reference to figure 11.
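Taken together, the figure 8 flow of steps S801 to S810 might be reconstructed as the following hedged sketch; the function and variable names are editorial assumptions:

    from typing import Callable, Dict, List, Tuple

    Point = Tuple[int, int]

    def still_moving(track: List[Point], min_step: int = 1) -> bool:
        # Treat the object as moving if its last two recorded points differ enough.
        if len(track) < 2:
            return True
        (x0, y0), (x1, y1) = track[-2], track[-1]
        return abs(x1 - x0) + abs(y1 - y0) >= min_step

    def process_new_frame(detections: List[Tuple[int, int, int]],
                          temp_db: Dict[int, List[Point]],
                          on_pattern_complete: Callable[[int, List[Point]], None]) -> None:
        # detections: (object_id, x, y) per detected object (S802);
        # temp_db plays the role of the temporary pattern database 260;
        # on_pattern_complete stands in for the pattern recognition of S807.
        for object_id, x, y in detections:
            track = temp_db.get(object_id)
            if track is None:                          # S804: no starting point stored yet
                temp_db[object_id] = [(x, y)]          # S808: current position becomes starting point
                continue
            track.append((x, y))                       # S805: continue tracking the object
            if still_moving(track):                    # S806: object still moving?
                pass                                   # S809/S810: pattern remains in progress
            else:
                on_pattern_complete(object_id, track)  # S807: perform pattern recognition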
The process of pattern creation (S810) will be first described with reference to figure 9.
At step S901 of figure 9 it is determined whether or not a new movement point can be detected. The new movement point can be determined according to a user's preference included in the settings of the pattern similarity module 250. For example, a new movement point could be detected for each and every frame captured by the cameras 140 - 150 and sent to the management server 120, or a new movement point can be determined according to, for example, a time period which elapses between detection of points, or it can be defined such that a point is detected in the case that the detected object moves by a predetermined number of pixels (i.e. a distance between the points). That is to say that a new movement point can be determined according to a number of frames that are captured and received by the management server 120, for example a movement point is detected every 5 frames, or, alternatively, the management server 120 can be configured to capture a new movement point according to a pre-stored period of time, such as every 2 seconds. After a new movement point is detected, the processing proceeds to step S902 where it is determined whether or not the new movement point is a starting point. A new starting point might be detected, for example, when a new moving object is detected within the scene captured by the cameras 140 - 150. Accordingly, multiple objects can be detected and tracked within the scene so that each of the behavioural patterns associated with each of the moving objects can be compared against pre-stored patterns to determine whether or not each of the behavioural patterns is known. In the case that it is determined at step S902 that the new movement point acquired at step S901 is a starting point then processing proceeds to step S911. At step S911 the position or coordinates of the new movement point within the video data is determined, and the current position is set as the starting point for the current behavioural pattern (S912). Processing proceeds to step S913 where the current behavioural pattern is marked as being in progress and processing returns to step S901, where another new movement point is determined and the processing of steps S901 to S913 can begin again until a behavioural pattern that satisfies the conditions of step S907 is obtained.
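As an illustrative sketch only (the specific thresholds below are assumed defaults, not values prescribed by the patent), such a sampling policy might be expressed as:

    import time

    def should_record_point(frame_index: int,
                            last_point, current_point,
                            last_record_time: float,
                            mode: str = "frames",
                            every_n_frames: int = 5,
                            min_pixels: int = 10,
                            min_seconds: float = 2.0) -> bool:
        # Mirrors the three policies described above: every N frames, a
        # minimum pixel displacement, or a minimum elapsed time between points.
        if mode == "frames":
            return frame_index % every_n_frames == 0
        if mode == "distance":
            dx = current_point[0] - last_point[0]
            dy = current_point[1] - last_point[1]
            return (dx * dx + dy * dy) ** 0.5 >= min_pixels
        if mode == "time":
            return time.monotonic() - last_record_time >= min_seconds
        return False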
However, in the case that it is determined at step S902 that the new movement point is not a starting point (NO at step S902) then processing proceeds to step S903. At step S903 the position of the new movement point within the captured video data is determined according to the typical coordinate system described above. Processing proceeds to step S904 where it is determined whether or not the camera position has changed. Such determination is particularly useful in the case that a camera having pan, tilt and zoom (PTZ) capabilities is used, because the scene that is captured by the cameras 140 - 150 can change whilst the movement of a detected object is tracked. Accordingly, in order to continue to create an accurate behavioural pattern of a detected object, the movement of the camera needs to be compensated for in the case that the camera moves at a time when a behavioural pattern is being created. For example, in the case that the camera position has changed, an initial grid corresponding to an area initially captured by the camera is extended to allow for the new area captured by the cameras 140 - 150. Such compensation can be achieved by expanding an initial grid that is used to determine and record the coordinates of the movement points of the detected object. By way of example, consider that an initial grid is a 20 x 20 grid that is divided into four quadrants, the centre having coordinates (0,0) and each of the four corners of the grid having coordinates (-10,10), (10,10), (10,-10) and (-10,-10). If the camera now captures images of an area outside of the initial 20 x 20 grid, then the grid can be expanded by adding areas equivalent to the size of a quadrant, for example, to the existing grid. Such compensation is performed at step S905 in the case that it is determined at step S904 that the camera position has changed (YES at S904). Processing then proceeds from step S905 to step S906. Further, in the case that it is determined at step S904 that there has not been camera movement (NO at step S904), processing also proceeds to step S906.
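A hedged sketch of that grid expansion, using the 20 x 20 quadrant example above (the data layout is an editorial assumption):

    from dataclasses import dataclass

    @dataclass
    class Grid:
        # The initial 20 x 20 grid with corners (-10,10), (10,10), (10,-10), (-10,-10).
        x_min: int = -10
        x_max: int = 10
        y_min: int = -10
        y_max: int = 10

        def expand_to_include(self, x: int, y: int, quadrant: int = 10) -> None:
            # Grow the grid in quadrant-sized steps until the new camera view
            # fits, so coordinates recorded before the PTZ move remain valid.
            while x < self.x_min:
                self.x_min -= quadrant
            while x > self.x_max:
                self.x_max += quadrant
            while y < self.y_min:
                self.y_min -= quadrant
            while y > self.y_max:
                self.y_max += quadrant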
At step S906 the new movement point as determined in step S901 is stored in the temporary pattern database 260 and associated with the other points that have been previously captured and stored for the current behavioural pattern. Figure 10 illustrates an example of how detected points of a current behavioural pattern and their associated coordinates might be stored in the temporary pattern database 260. In particular, figure 10 illustrates a table 1000 that has columns 1010 to 1012 in which a pattern ID 1010, a point number 1011 and the coordinates 1012 for each point number are stored.
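A minimal storage sketch follows, using an in-memory SQLite table as a stand-in for the temporary pattern database 260; the schema is an assumption based only on the columns shown in figure 10:

    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the temporary pattern database 260
    conn.execute(
        "CREATE TABLE movement_points ("
        "  pattern_id INTEGER,"   # column 1010
        "  point_no   INTEGER,"   # column 1011
        "  x INTEGER, y INTEGER"  # column 1012 (coordinates)
        ")"
    )
    # Rows matching the figure 10 example for pattern 28.
    conn.executemany(
        "INSERT INTO movement_points VALUES (?, ?, ?, ?)",
        [(28, 1, 2, 4), (28, 2, 6, 5), (28, 3, 8, 12)],
    )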
At step S907 of figure 9 it is determined whether or not enough movement points have been determined and stored to consider a behavioural pattern as being complete. Such determination can be made according to, for example, whether or not the detected object is still moving within the captured video data, or whether or not the detected object can no longer be detected within the captured video data (i.e. the detected object has moved beyond the boundaries of the captured video data). Determination could also be made based on the number of movement points that have been detected and stored in the temporary pattern database 260. For example, a user can specify a predetermined number of points that can be considered to correspond to a complete behavioural pattern.
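A sketch of such a completeness check (the point-count default is an assumed, user-configurable value):

    from typing import List, Tuple

    def pattern_complete(track: List[Tuple[int, int]],
                         object_visible: bool,
                         object_moving: bool,
                         min_points: int = 50) -> bool:
        # S907: a pattern can be considered complete when the object has left
        # the captured area, has stopped moving, or when enough movement
        # points have been recorded; min_points is an illustrative default.
        return (not object_visible) or (not object_moving) or len(track) >= min_points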
In the case that it is determined at step S907 that the behavioural pattern is not complete then processing proceeds to step S910 where the pattern is marked as in progress and not yet available. Processing returns to step S901 where another new movement point is determined and the processing of steps S901 to S913 can begin again until a behavioural pattern that satisfies the conditions of step S907 is obtained.
However, in the case that the conditions of step S907 are satisfied and it is determined that a pattern is complete then processing proceeds to step S908 where the pattern is marked as complete and is made available (i.e. the creation of the behavioural pattern is no longer considered to be in progress). After a behavioural pattern is determined to be complete and has been made available, the newly obtained behavioural pattern is added to the temporary pattern database 260 so that it can be referenced for future detection or recognition, and processing proceeds to step S909. Step S909 of figure 9 is further described below with reference to figure 11.
Figure 11 describes a process of checking whether or not the current behavioural pattern is known to the video surveillance system 100. In particular, at step S1101 the complete pattern of the current behavioural pattern is obtained from the temporary pattern database 260 so that it is ready to be cross-referenced against the pre-stored patterns stored in the pattern database 240. At step S1102 the current behavioural pattern is checked against the pre-stored patterns stored in the pattern database 240 to determine whether or not the current behavioural pattern is similar to a pre-stored pattern stored by the video surveillance system 100.
In the case that it is determined at step S1102 that the pattern is similar to a known pattern of the video surveillance system 100 (YES at step S1102) then processing proceeds to step S1107. At step S1107 a further check is conducted to determine whether or not a user of the video surveillance system has labelled the known pattern as being 'dangerous'. A pattern might be labelled as 'dangerous' at the point that the video surveillance system is installed; this will be the case for the pre-stored patterns created by the vendor. A pattern might also be labelled by a user of the video surveillance system 100 in the case that the video surveillance system 100 has detected a new behavioural pattern that is not known to the video surveillance system 100, i.e. it is not stored in the pattern database 240 (this latter scenario will be discussed below in further detail). Further, a pattern might be marked as 'dangerous' because the movement to which it corresponds can be considered as being suspicious or unusual behaviour. For example, and using the example of a car thief that has been previously discussed, a user might wish to label the patterns of figures 5 and 7 as dangerous because they do not correspond to the usual behaviour of a car owner simply returning to their car ready to drive away.
In the case that it is determined that the current behavioural pattern is known (YES at step S1102) and the current behavioural pattern is considered 'dangerous' (YES at step S1107) then processing proceeds to step S1108 where a user of the video surveillance system 100 is alerted to the fact that a dangerous behavioural pattern has been identified. A user of the video surveillance system 100 might be alerted to the dangerous behavioural pattern by means of, for example, an audio alert, a visual alert, an e-mail that can be sent to one or more e-mail addresses, an SMS message that can be sent to one or more mobile devices, or a dialogue box provided on a display screen of a display unit of the client terminal 110. The manner in which a user is alerted to detection of a dangerous behavioural pattern can vary and can take many different forms, but it will be understood that it is important for a user to receive some form of notification or alert that can efficiently warn the user that an object within the captured video data is behaving in a dangerous or unusual manner.
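As a hedged sketch of such an alert fan-out (the channel handlers are hypothetical stand-ins, not APIs defined by the patent):

    from typing import Callable, Dict

    def alert_user(message: str, channels: Dict[str, Callable[[str], None]]) -> None:
        # channels maps a channel name to a send function, e.g.
        # {"email": send_email, "sms": send_sms, "dialogue": show_dialogue_box},
        # where each handler is a placeholder for the alert mechanisms above.
        for send in channels.values():
            send(message)  # S1108: notify the user on every configured channel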
However, in the case that the current behavioural pattern is not labelled as 'dangerous' (NO at step S1107) then processing proceeds to step S1109 where it returns to step S801 of figure 8 until the management server 120 receives new video data.
Returning to step S1102, in the case that it is determined that the current behavioural pattern is not known (NO at step S1102) then processing proceeds to step S1103 where a user is presented with a graphical user interface (GUI) on the client device 110 or a mobile device, for example, thereby informing the user that an unknown behavioural pattern has been detected. This may also trigger an alert similar to those sent in S1108. The reason for this is that a newly detected pattern is likely to correspond to unusual behaviour (in a system that has been in use for a while and has most common behaviours pre-stored in the pattern database). The user of the video surveillance system 100 may wish to be warned of such unknown behaviour.
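The figure 11 decision logic as a whole might be reconstructed as follows; this reuses the patterns_match sketch given earlier, and all names remain editorial assumptions:

    from typing import Callable, List

    def recognise_pattern(track,
                          pattern_db: List[dict],
                          alert_user: Callable[[str], None],
                          ask_user_to_label: Callable[[object], None]) -> None:
        # pattern_db entries are assumed to be dicts with 'points' and 'label'
        # keys, matching the earlier storage sketches.
        for stored in pattern_db:                        # S1102: known pattern?
            if patterns_match(track, stored["points"]):
                if stored["label"] == "dangerous":       # S1107: labelled dangerous?
                    alert_user("Dangerous behavioural pattern detected")  # S1108
                return                                   # S1109: wait for new video data
        ask_user_to_label(track)                         # S1103: unknown pattern, prompt user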
An example of a GUI is provided in figure 12. GUI 1200 can include an illustration of the current behavioural pattern 1202 such that a user can review the new, unrecognised, pattern. The current behavioural pattern, however, does not need to be presented along with the GUI, and a user can instead review the unrecognised pattern at a later date and label it as either dangerous or safe after a review of the unrecognised pattern has taken place. It will be understood that it is important for a user to be alerted to the fact that an unrecognised pattern has been detected, and that a review of the unrecognised pattern needs to take place in order for the unrecognised pattern to be labelled as either a dangerous pattern or a safe pattern. Dialogue box 1201 includes text informing a user that the current behavioural pattern is not recognised and can further include buttons 1201a and 1201b that allow a user to provide an input to indicate whether or not the current behavioural pattern is 'dangerous'. Such labelling of the pattern will be stored along with the unrecognised pattern in the pattern database 240 such that it can be used for future cross-referencing when newly obtained behavioural patterns of an object are cross-referenced against the pattern database 240.
Accordingly, in the case that a user of the video surveillance system 100 selects button 1201a and labels the current behavioural pattern as 'dangerous' then processing proceeds to step S1106 of figure 11 and the current behavioural pattern is added to the pattern database 240 and stored as a 'dangerous' pattern. However, in the case that button 1201b is selected by a user then the current behavioural pattern is added to the pattern database 240 and stored as a 'safe' behavioural pattern (S1105).
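A one-step sketch of how the button choice might be persisted (again an assumption for illustration):

    from typing import List

    def on_user_label(track, choice: str, pattern_db: List[dict]) -> None:
        # choice is "dangerous" (button 1201a) or "safe" (button 1201b); the
        # reviewed pattern is stored in the pattern database 240 (S1105/S1106).
        pattern_db.append({"points": list(track), "label": choice})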
Alternatively, the current behavioural pattern could be ranked according to a level of danger which is allocated to the behavioural pattern by a user. For example, the behavioural pattern could be ranked from 1 to 5, or according to any other method of ranking that is able to indicate a danger level. Such ranking would allow a user to be alerted in the case that a newly obtained behavioural pattern matches a behavioural pattern stored on the pattern database 240 and the ranking of that behavioural pattern is above a predetermined threshold. It should be understood that the method by which a behavioural pattern is labelled is not important, but rather the fact that it is labelled in a manner that allows the behavioural pattern to be identified as dangerous and therefore alerted to a user in future should a similar behavioural pattern be detected by the video surveillance system 100.
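Under the ranking variant, the alert test could be as simple as the following sketch; the threshold value is an assumption:

    DANGER_ALERT_THRESHOLD = 3  # assumed threshold on the illustrative 1-to-5 ranking

    def should_alert(ranking: int) -> bool:
        # Alert only when a matched pattern's danger ranking exceeds the threshold.
        return ranking > DANGER_ALERT_THRESHOLD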
According to the present invention, it is possible for a user to build, and expand upon, a customised database of behavioural patterns. The present invention is able to overcome the limitation of databases storing only a limited number of pre-stored behavioural patterns. Such an arrangement is able to continually expand the number of behavioural patterns stored in the pattern database 240 because, each time an unrecognised behavioural pattern is detected, a user is able to add the behavioural pattern to the pattern database 240. Furthermore, a user is able to mark each of the behavioural patterns in the pattern database 240 according to whether or not that particular behavioural pattern poses a threat. As discussed, each behavioural pattern can be labelled, for example, as either 'dangerous' or 'safe', and such labelling allows the system to alert a user to unusual or threatening behaviour in the case that movement of an object detected within video data has a behavioural pattern that corresponds to one of the stored 'dangerous' patterns.
Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.

Claims (12)

1. A method of performing image analysis on a video stream, the method comprising:
detecting an object within a captured image in the video stream;
tracking movement of a detected object within the captured images of the video stream;
determining whether or not the tracked movement of the detected object corresponds to a pre-stored pattern; and wherein in the case that it is determined that the tracked movement of the detected object does not correspond to a pre-stored pattern, notifying a user of a video surveillance system that the movement of the detected object does not correspond to a pre-stored pattern.
2. A method of performing image analysis according to claim 1, wherein a notification provided to the user provides the user with an option of whether or not they wish to store the pattern corresponding to the movement of the detected object as a pre-stored pattern in a pattern database.
3. A method of performing image analysis according to claim 1 or claim 2, wherein the tracked movement of the detected object that is stored as a pre-stored pattern is labelled according to a user's input.
4. A method of performing image analysis according to claim 3, wherein the tracked movement of the detected object that is stored to be used as a pre-stored pattern is labelled as being either dangerous or safe.
5. A method of performing image analysis processing according to claim 1, wherein the notification provided to a user is any one of an e-mail, an SMS text message, or a dialogue box provided on a display screen of a client terminal in a video surveillance system.
6. A method of performing image analysis processing according to any preceding claim, wherein coordinate points of the detected object are detected and stored during the tracking movement of the detected object.
7. A method of performing image analysis processing according to claim 1, wherein the determining of whether or not the tracked movement of the detected object corresponds to a pre-stored pattern is performed according to a similarity between the tracked movement of the detected object and patterns that have been pre-stored on the pattern database.
8. A method of performing image analysis processing according to claim 7, wherein a similarity between the tracked movement of the detected object and patterns that have been pre-stored on the pattern database is performed according to a similarity between a sequence of coordinate points associated with each of the tracked movement of the detected object and pre-stored patterns.
9. A method of performing image analysis processing according to any preceding claim, wherein a plurality of objects are detected and tracked within the video stream.
10. An apparatus for performing image analysis on a video stream, the apparatus comprising:
detection means configured to detect an object within a captured image in the video stream;
tracking means configured to track movement of a detected object within the captured images of the video stream;
determination means configured to determine whether or not the tracked movement of the detected object corresponds to a pre-stored pattern; and wherein in the case that the determination means determines that the tracked movement of the detected object does not correspond to a pre-stored pattern, notification means provides a user of a video surveillance system with a notification that the movement of the detected object does not correspond to a pre-stored pattern.
11. A program which, when run on a device, causes the device to execute a method according to any one of claims 1 to 9.
12. A computer-readable storage medium storing a program according to claim 11.
GB1621479.3A 2016-12-16 2016-12-16 Learning analytics Active GB2557920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1621479.3A GB2557920B (en) 2016-12-16 2016-12-16 Learning analytics

Publications (3)

Publication Number Publication Date
GB201621479D0 (en) 2017-02-01
GB2557920A (en) 2018-07-04
GB2557920B (en) 2020-03-04

Family

ID=58284320

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1621479.3A Active GB2557920B (en) 2016-12-16 2016-12-16 Learning analytics

Country Status (1)

Country Link
GB (1) GB2557920B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2392033A (en) * 2002-08-15 2004-02-18 Roke Manor Research Video motion anomaly detector
US20090033745A1 (en) * 2002-02-06 2009-02-05 Nice Systems, Ltd. Method and apparatus for video frame sequence-based object tracking
EP2058777A1 (en) * 2007-03-06 2009-05-13 Kabushiki Kaisha Toshiba Suspicious behavior detection system and method
US20120237081A1 (en) * 2011-03-16 2012-09-20 International Business Machines Corporation Anomalous pattern discovery
GB2501542A (en) * 2012-04-28 2013-10-30 Bae Systems Plc Abnormal behaviour detection in video or image surveillance data
US20160210829A1 (en) * 2013-09-06 2016-07-21 Nec Corporation Security system, security method, and non-transitory computer readable medium

Also Published As

Publication number Publication date
GB201621479D0 (en) 2017-02-01
GB2557920B (en) 2020-03-04

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20190429 AND 20190502