GB2614241A - A method of displaying video surveillance data in a video management system - Google Patents


Info

Publication number
GB2614241A
GB2614241A (application GB2118668.9A / GB202118668A)
Authority
GB
United Kingdom
Prior art keywords
video
motion
surveillance data
video surveillance
sample values
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2118668.9A
Inventor
Hojbjerg Kjær Jakobsen Jesper
Bendtson Jimmi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Milestone Systems AS
Original Assignee
Milestone Systems AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Milestone Systems AS filed Critical Milestone Systems AS
Priority to GB2118668.9A priority Critical patent/GB2614241A/en
Priority to PCT/EP2022/083181 priority patent/WO2023117294A1/en
Publication of GB2614241A publication Critical patent/GB2614241A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181: CCTV systems for receiving images from a plurality of remote sources
    • H04N7/188: Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

A computer-implemented method of displaying a visual representation of video surveillance data in a video management system comprises: receiving, in the system, sample values generated by (e.g., frame) sampling video surveillance data and/or metadata associated therewith, the sample values representing at least different levels of motion and/or object detection (e.g. object recognition) in the video surveillance data when motion and/or one or more objects is determined to exist; and displaying, in the video management system, a visual representation of an evolution of the sample values over time. The method may further comprise mapping the sample values to predetermined threshold ranges, with the visual representation further representing an evolution of the sample values over time based on the mapping. Video surveillance data may be downscaled prior to motion and/or object detection. Sampling of frames may only be carried out in relation to frames for which motion is determined to occur based on a motion threshold. Object detection may be based on a speed at which a group of neighbouring pixels has moved between different video frames.

Description

A METHOD OF DISPLAYING VIDEO SURVEILLANCE DATA IN A VIDEO MANAGEMENT SYSTEM
Technical field of the Invention
[0001] The present invention relates to a video surveillance system, a computer-implemented method, a computer program and a storage medium comprising such a program, for displaying a visual representation of video surveillance data in a video management system.
Background of the Invention
[0002] Surveillance systems are typically arranged to monitor surveillance data received from a plurality of data capture devices. A viewer may be overwhelmed by large quantities of data captured by a plurality of cameras. If the viewer is presented with video data from all of the cameras, then the viewer will not know which of the cameras requires the most attention. Conversely, if the viewer is presented with video data from only one of the cameras, then the viewer may miss an event that is observed by another of the cameras. Moreover, even if presented with a single video stream, file or sequence, the viewer may lose time trying to identify which parts of the video stream, file or sequence may be relevant to him/her and/or require further attention.
[0003] The identification of whether information is important is typically made by the viewer, although the viewer can be assisted by a computer-generated alert identifying that the information could be important. Typically, the viewer is interested to view video data that depicts the motion of objects that are of particular interest, such as people or vehicles.
[0004] The XProtect® Video Management Software (VMS) developed and distributed by the Applicant is a video management system which can be used to retrieve and play live and recorded video surveillance data from different video cameras and recording servers. Figure 2 (further described below) illustrates a typical view in the VMS, wherein video data 201, 202, 203 is displayed along two coloured-timelines 210, 220. The start and end times of the timelines 210, 220 can usually be adjusted to represent a period of time of particular interest for the viewer.
The first timeline 210 shows the recording period(s) of a selected video camera (video stream 201 of a video camera 110a, shown in Figure 1 which will further be described below) and the second timeline 220 shows the recording period(s) for all the video cameras whose video streams 201, 202, 203 are simultaneously displayed to the viewer (video streams of video cameras 110a-c shown in Figure 1), including the video stream 201 from the selected video camera (video camera 110a). More precisely, each timeline indicates whether video has been recorded (e.g. light-red 222) and whether motion has been detected in the recorded video (e.g. red 230). Alternatively, the second timeline 220 may be configured to show whether there is motion in any one of the recorded video streams, regardless of whether these video streams are displayed. However, the current system does not allow the viewer to identify the most relevant and/or active periods within those already identified as including motion, because the coloured-timeline(s) will only display (with the colour red 230) when motion has been detected, regardless of the amount of motion detected in the video stream(s), file(s) or sequence(s) above a motion threshold. In other words, the current system only indicates whether motion occurs or not, based on a motion threshold above which motion is considered to occur in the recorded video.
[0005] It is also known from PCT publication WO 91/03053 A1 to analyse different components of recorded audio material associated with recorded video material, and to subsequently augment a timeline associated with the video material with information derived from components of the audio material. However, such a solution may not be suitable for noisy environments under surveillance, and has the drawbacks usually associated with solutions relying on video and audio data, such as for instance audio and video synchronisation issues, the need to analyse and process audio data in addition to video data, and a reliance on audio capturing devices that can fail. Moreover, such a solution may not be suitable for detecting motion which is not associated with audio and/or not associated with audio within predetermined frequency ranges. Thus, there is a need for a non-audio-based solution which allows detected motion to be given priority if it is identified as being more important than other motion that has been detected.
Summary of the Invention
[0006] The present invention addresses at least some of the above-mentioned issues.
[0007] The invention provides a computer-implemented method of displaying a visual representation of video surveillance data in a video management system, according to claim 1.
[0008] The invention further provides a computer program according to claim 24.
[0009] The invention further provides a video surveillance system according to claim 25.
[0010] Aspects of the present invention are set out by the independent claims 1, 24 and 25 and preferred features of the invention are set out in the dependent claims 2-23.
[0011] In particular, the invention achieves the aim of displaying a visual representation of video surveillance data in a video management system by sampling video surveillance data and/or metadata associated therewith, the sample values representing at least different levels of motion and/or object detection in the video surveillance data when motion and/or one or more objects is determined to exist. An evolution of the sample values over time can then be displayed with or along one or more timelines of the video surveillance data. Alternatively, the sample values can be mapped to predetermined ranges or values (e.g. RGB colour values) and the said visual representation can be based on the mapping, as will be described below.
[0012] Additional features of the present invention will become apparent from the following description of embodiments with reference to the attached drawings.
Brief description of the drawings
[0013] Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings in which:
[0014] Figure 1 shows a diagram of a video surveillance system in which the present invention can be implemented;
[0015] Figure 2 shows a diagram illustrating a view in a VMS, according to the prior art;
[0016] Figures 3A and 3B illustrate flowcharts of computer-implemented methods of displaying a visual representation of video surveillance data in a VMS, according to the invention;
[0017] Figures 4 and 6 show diagrams of different visual representations of an evolution of the sample values over time, according to the invention;
[0018] Figure 5 shows an example of a possible mapping between the sample values and colours to be displayed in the visual representation, according to the invention.
Detailed Description of the Invention
[0019] Figure 1 shows an example of a video surveillance management system 100 in which embodiments of the invention can be implemented. The system 100 comprises a management server 130 and a recording server 150. Further servers may also be included, such as further recording servers, archive servers, indexing servers or analytics servers. For example, an archiving server (not illustrated) may be provided for archiving older data stored in the recording server 150 which does not need to be immediately accessible from the recording server 150, but which it is not desired to be deleted permanently. A fail-over recording server (not illustrated) may be provided in case a main recording server fails. Also, a mobile server (not illustrated) may be provided to allow access to the surveillance/monitoring system from mobile devices, such as a mobile phone hosting a mobile client or a laptop accessing the system from a browser using a web client. An analytics server can also run analytics software for image analysis, for example motion or object detection, facial recognition, event detection.
[0020] A plurality of video surveillance cameras 110a, 110b, 110c send video data to the recording server 150. An operator client 120 (client apparatus) provides an interface via which an operator can view live video streams from the video cameras 110a, 110b, 110c, or recorded video data from the recording server 150, on a display 125. The video cameras 110a, 110b, 110c capture image data and send this to the recording server 150 as a plurality of video streams. The recording server 150 stores the video streams captured by the video cameras 110a, 110b, 110c.
[0021] The management server 130 includes management software for managing information regarding the configuration of the surveillance/monitoring system 100, such as conditions for alarms, details of attached peripheral devices (hardware), which data streams are recorded in which recording server, etc. The management server 130 also manages user information such as operator permissions. When an operator client 120 is connected to the system, or a user logs in, the management server 130 determines if the user is authorised to view video data. The management server 130 also initiates an initialisation or set-up procedure during which the management server 130 sends configuration data to the operator client 120. The configuration data defines the video cameras in the system, and which recording server (if there are multiple recording servers) each camera is connected to. The operator client 120 then stores the configuration data in a cache. The configuration data comprises the information necessary for the operator client 120 to identify cameras and obtain data from cameras and/or recording servers.
[0022] The operator client 120 is provided for use by an operator (such as a security guard or other user) in order to monitor or review the outputs of the video cameras 110a, 110b, 110c. The operator client 120 may be a fixed console or could be a mobile device connected to the video management system via a network. The operator client 120 has or is connected to a display 125 which can display an interface for interacting with the management software on the management server 130. The operator client 120 can request video data streams from one or more of the video cameras 110a, 110b, 110c to view video streams in real time, or the operator client 120 can request recorded video data stored in the recording server 150. According to an embodiment of the invention, the video being captured by one of the video cameras as a selected video stream is displayed in a main window, with video streams captured by other video cameras being displayed in smaller windows. In this case, multiple video streams are sent to the operator client 120. In the case of a system with a large number of cameras, even a large display may not be able to show the videos from all of the video cameras, only a selection. Alternatively, the operator client 120 may be arranged to show the video stream being captured by only one of the video cameras 110a, 110b, 110c as a selected video stream on its display. In this case, only one video stream is sent to the operator client 120, this being the stream from the selected camera.
[0023] The operator client may also be connected with an incident response system that can receive commands from the operator to remotely close doors, set or change access control rights, prevent or allow access to certain zones of the physical area, set traffic lights, trigger an alarm or control any devices configured to be remotely controlled via the incident response system. Note that the commands can be security related, emergency related or operations related.
[0024] The operator client 120 is configured to communicate via a first network/bus 121 with the management server 130 and the recording server 150 and the video cameras 110a, 110b, 110c. The recording server 150 communicates with the video cameras 110a, 110b, 110c via a second network/bus 122. The recording server 150 is configured so as to stream video streams from the video cameras 110a, 110b, 110c to the operator client 120.
[0025] The video surveillance system of Figure 1 is an example of a system in which the present invention can be implemented. However, other architectures are possible. For example, the system of Figure 1 is an "on premises" system, but the present invention can also be implemented in a cloud-based system. In a cloud-based system, the video cameras stream data to the cloud, and at least the recording server 150 is in the cloud. Additionally, video analytics may be carried out in the cloud. The client apparatus requests the video data to be viewed by the user from the cloud. The system may also be configured as a hybrid system where, for instance, the data is archived on a cloud-based archive server after having been recorded on an on-premises recording server. Alternatively, an on-premises server may buffer the video data before moving it to a cloud-based recording server.
[0026] Figure 2, partly discussed above, shows a diagram illustrating a view in a VMS, which includes a player interface 200. The interface 200 includes a speed adjustment slider 205 which allows the viewer to adjust a playback speed, typical media player buttons 206 to play, pause, stop and fast forward (etc.) video playback, and a time span slider 207 which allows the viewer to adjust a span of time of the timelines. The timelines can be moved leftward and rightward to move forward and backward in time, respectively, and display corresponding video streams, files and/or sequences. The timelines 210, 220 are colour-coded as described earlier. In addition, dark grey 221 indicates that there are no recordings for the considered time, and a dark chequerboard pattern 212 indicates that no recordings have been requested and therefore it is unknown whether there are recordings. Additional sources of data can optionally be displayed as other colours (not shown) and displayed in a stacked-up fashion in each timeline. A vertical line 214 represents a frame or moment in time along one or more of the one or more timelines of the video surveillance data that is currently being displayed on the display 215, for instance corresponding to a frame or moment in time of the video stream 201 of the video camera 110a that is currently being displayed on the display 215.
[0027] Figure 3A is a flowchart illustrating the steps of the computer-implemented method of displaying a visual representation of video surveillance data in a video management system, according to a first embodiment of the invention.
[0028] In a step S300, the VMS receives sample values generated by sampling video surveillance data (e.g. one or more video streams, files or sequences of video surveillance data) and/or metadata associated therewith. The sample values represent at least motion occurring and/or one or more objects moving in the video surveillance data. The sample values may advantageously correspond to motion and/or object descriptors which describe, for instance, an amount of motion or a number of objects of the same type (i.e. belonging to the same class) in a given frame or at a given point in time, where or when motion and/or one or more objects is determined to exist. In other words, the visual representation according to the invention goes beyond the mere displaying of the prior art of whether motion occurs or not or whether an object is detected or not. The viewer is thus presented with more relevant information allowing him/her to quickly identify periods of interest in the video surveillance data.
[0029] Within the context of the present invention, the term 'video surveillance data' means one or more video streams, files and/or sequences related to one or more surveillance areas (i.e. areas under surveillance). The present invention applies to video streams, files or sequences, which may be used interchangeably in the present disclosure, knowing that a video stream may comprise one or more files, and that a video stream or file may comprise one or more video sequences. A video stream may further be received in a live fashion from a video surveillance camera or received in a playback fashion from one or more recording servers, in the form of one or more video files. Video sequences and video files usually have known start and end times, while video streams may not have a known end time as they are currently being streamed. The present invention preferably applies to video stream(s), file(s) or sequence(s) received from one or more recording servers, as the visual representation(s) is/are easier to represent when the end time(s) is/are known. Moreover, such recorded video streams, files and sequences usually come with existing metadata which may be re-used for the purpose of representing the visual representation(s).
[0030] Within the context of the present invention the term 'sampling' means any kind of computational and/or algorithmic operations that allow deriving, extracting or otherwise obtaining, from video surveillance data and/or associated metadata, quantitative and/or qualitative values representing at least different levels of motion and/or object detection, when motion and/or one or more objects is determined to exist in the video surveillance data. For instance, the XProtect® VMS has the ability to perform motion detection (or Video Motion Detection, also referred to as VMD) in video surveillance data and generate motion metadata which includes sample values representing motion in the video surveillance data.
[0031] The generation of sample values can be performed at a predetermined sampling rate that can be set in the VMS. The sampling rate may advantageously correspond to a predetermined frame interval, e.g. every 10 frames, meaning that a sample value is generated every 10 frames of video surveillance data. However, other configurations are possible, and the VMS can be configured such that an appropriate sampling rate can be set in the VMS. Advantageously, the sampling is carried out outside of the VMS, for instance in one or more recording servers storing the video surveillance data. The sample values may then be received by the VMS from the one or more recording servers. This makes it possible to display a visual representation of an evolution of the sample values over time in the VMS without the need to receive or buffer the full video surveillance data. The viewer may then decide to download relevant parts of the video surveillance data by selecting relevant parts of the visual representation.
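By way of a non-limiting illustration, the frame-interval sampling described above might be sketched as follows (function and parameter names, and the per-frame scoring function, are assumptions for illustration, not part of any VMS API):

```python
from typing import Callable, Iterable, List, Tuple


def sample_motion(frames: Iterable, score: Callable[[object], float],
                  interval: int = 10) -> List[Tuple[int, float]]:
    """Emit one (frame_index, sample_value) pair every `interval` frames."""
    samples = []
    for i, frame in enumerate(frames):
        if i % interval == 0:
            samples.append((i, score(frame)))
    return samples
```

A recording server could run such a loop over stored frames and send only the resulting (frame, value) pairs to the VMS, rather than the full video.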
[0032] Alternatively, motion detection may be carried out as disclosed in the UK patent application GB2569557A, i.e. by determining how many pixels have changed between two captured images in different grid (or cell) elements of a motion grid.
An average amount of change can then be determined for each motion grid and used as a sample value for implementing the present invention. Note that the invention is not limited to this configuration. For instance, it is also possible to generate motion metadata in a continuous fashion (e.g. for every frame), and to sample the metadata at a predetermined sampling rate to thereafter extract sample values from the continuously generated motion metadata. The sample values may thus correspond to the metadata generated by sampling the video surveillance data, and in particular frames of the video surveillance data or parts thereof, or be obtained by sampling the metadata. The present invention is not limited to a particular format of sample values.
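A hedged sketch of the grid-based approach described above: count changed pixels per grid cell between two frames, then average across the cells to obtain a single sample value. The grid dimensions and the per-pixel change test are assumptions for illustration:

```python
def grid_motion_sample(prev, curr, rows=4, cols=4, diff_threshold=25):
    """prev/curr: equally sized 2D lists of greyscale pixel values.
    Returns the fraction of changed pixels, averaged over all grid cells."""
    h, w = len(prev), len(prev[0])
    cell_scores = []
    for r in range(rows):
        for c in range(cols):
            # Cell boundaries for row r, column c of the motion grid.
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            changed = sum(
                1
                for y in range(y0, y1)
                for x in range(x0, x1)
                if abs(curr[y][x] - prev[y][x]) > diff_threshold
            )
            total = (y1 - y0) * (x1 - x0)
            cell_scores.append(changed / total if total else 0.0)
    return sum(cell_scores) / len(cell_scores)
```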
[0033] Motion detection may further be determined based on a sensitivity value representing a sensitivity of a video camera, from which the video surveillance data originates, to motion, and determined based on a motion threshold representing a value above which motion is considered to occur. Such a sensitivity value is usually set within a range from 0 to 100, where a higher value means less sensitivity to changes in the image. The sensitivity value is usually determined automatically in the VMS based on the type of the video camera (i.e. according to the video camera's specifications). The motion threshold is usually set within a range from 0 to 10,000 and determines when motion is considered to occur, where a higher value means that irrelevant motion (e.g. leaves of a tree moving in the wind or raindrops falling in the field of view of the video camera) is more likely to be dismissed, but relevant motion may then also go undetected. The sensitivity value and motion threshold may advantageously be set in the VMS.
[0034] Advantageously, motion detection may only be carried out in relation to frames for which motion is determined to occur based on the said motion threshold. This avoids transmitting sample values (and receiving the same in the VMS) which are not meaningful and could be excluded from the visual representation.
[0035] Motion detection may be determined in different ways. For instance, motion detection may be determined based at least on a number of pixels changed between different frames of the video surveillance data relative to a total number of pixels in these frames (which is in principle the same in each of these frames).
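A minimal sketch of this changed-pixel approach, combined with the sensitivity value and the 0-10,000 motion threshold discussed above. The way the per-pixel cut-off is derived from the sensitivity value is an assumption for illustration:

```python
def motion_score(prev, curr, sensitivity=50):
    """Score motion on a 0-10,000 scale as the number of changed pixels
    relative to the total number of pixels. A pixel counts as changed when
    its greyscale difference exceeds a cut-off derived from the sensitivity
    value (0-100, higher = less sensitive); this derivation is illustrative.
    prev/curr: equally sized 2D lists of greyscale values."""
    cutoff = 255 * sensitivity // 100
    pairs = [(a, b) for row_a, row_b in zip(prev, curr) for a, b in zip(row_a, row_b)]
    changed = sum(1 for a, b in pairs if abs(a - b) > cutoff)
    return round(10000 * changed / len(pairs))


def sample_if_motion(prev, curr, motion_threshold=2000, sensitivity=50):
    """Emit a sample value only when motion is considered to occur."""
    score = motion_score(prev, curr, sensitivity)
    return score if score > motion_threshold else None
```

Returning `None` for sub-threshold frames mirrors the idea of not transmitting meaningless sample values to the VMS.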
[0036] Alternatively, motion detection may be determined based on motion vectors embedded in at least one fully encoded or partly decoded video stream of the video surveillance data (for instance using the technique described in K. Szczerba, S. Forchhammer, J. Støttrup-Andersen and P. T. Eybye, "Fast Compressed Domain Motion Detection in H.264 Video Streams for Video Surveillance Applications," 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance, 2009, pp. 478-483, doi: 10.1109/AVSS.2009.78). This also consumes fewer computing resources, as the sample values (the motion vectors or values derived therefrom) can be obtained without the need to fully decode the video surveillance data when it is supplied in an encoded form.
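A hedged sketch of deriving a motion sample value from decoder-exposed motion vectors, without full pixel decoding. The input format (one (dx, dy) pair per macroblock) and the noise threshold are assumptions for illustration; this is not the cited paper's algorithm:

```python
def motion_from_vectors(vectors, magnitude_threshold=1.0):
    """Sum the magnitudes of motion vectors above a small noise threshold.
    vectors: list of (dx, dy) displacement pairs, e.g. one per macroblock."""
    total = 0.0
    for dx, dy in vectors:
        magnitude = (dx * dx + dy * dy) ** 0.5
        if magnitude >= magnitude_threshold:
            total += magnitude
    return total
```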
[0037] Object detection may also be carried out in different ways. For instance, object detection may be determined based at least on a number of neighbouring pixels moved between different frames of the video surveillance data. For instance, knowing the size of different potential objects in the video surveillance data, it becomes possible to detect an object based on the number of neighbouring pixels moved between different frames of the video surveillance data. When different groups of neighbouring pixels move between different frames of the video surveillance data, it may be advantageous to base any such determination on the biggest group of pixels moved between the said different frames.
Advantageously, only the biggest group of pixels may be used in the visual representation.
[0038] It is also possible to calculate a speed of the group of neighbouring pixels which moves between the said different frames, and to calculate a speed of the biggest group of pixels when several groups of pixels move between the said different frames. Such a speed may then be used in the visual representation. In such a case, the visual representation may correspond to a graph representing the speed of the biggest group of pixels moved between the said different frames as a function of time.
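A hedged sketch of this speed calculation: given the groups of changed pixels found in two frames (the grouping itself, e.g. connected-component labelling, is outside this snippet), pick the biggest group in each frame and estimate its speed from the displacement of its centroid. Matching groups by size rank is an assumption for illustration:

```python
def centroid(group):
    """group: non-empty list of (x, y) pixel coordinates."""
    xs = [x for x, _ in group]
    ys = [y for _, y in group]
    return sum(xs) / len(group), sum(ys) / len(group)


def biggest_group_speed(groups_prev, groups_curr, frame_interval_s):
    """Return the centroid displacement of the biggest group, in pixels/second."""
    x0, y0 = centroid(max(groups_prev, key=len))
    x1, y1 = centroid(max(groups_curr, key=len))
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / frame_interval_s
```

The per-frame speed values produced this way could feed the speed-versus-time graph described above.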
[0039] The sample values may include numerical quantitative values (e.g. real numbers between 0 and 10), but may also be in the form of non-numerical quantitative values (e.g. 'low motion', 'high motion') or in the form of qualitative values (e.g. 'non-important event', 'important event'). The sample values may be expressed as (x,y) points or pairs, where y may represent a measured, estimated or otherwise assessed value (such as an amount of motion) and x may represent a point in time or a frame number.
[0040] Within the context of the present invention the expression 'representing at least different levels of motion and/or object detection' means that the sample values may represent anything else in addition to the detected motion and/or the detected object(s) and/or comprise additional data. Accordingly, the said levels may only correspond to actual motion and/or existing objects, not to values indicating that there is no motion and/or no object in the video surveillance data.
[0041] The sample values may further represent at least one image characteristic of the frames of the video surveillance data. This image characteristic may for instance be a luminance value which represents an intensity of light in a given image, which may also help to detect motion and/or objects, e.g. a very bright object and/or activity of interest such as a fire.
[0042] Within the context of the present invention object detection is to be construed as covering object detection per se but also object recognition, identification, and the like.
[0043] Advantageously, metadata may exist which is indicative of whether at least one recognised object is present in a given frame. When such an object is recognised, it becomes possible to calculate, for instance, a speed of the object by sampling the metadata to determine a time taken by the object to cross an area of known length. In such a case, the visual representation may be in the form of a graph representing a speed of a known object as a function of time in the video surveillance data.
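An illustrative sketch of this speed estimate (function and parameter names are assumptions): the metadata is sampled for the timestamps at which a recognised object enters and leaves an area of known length, and the speed follows directly:

```python
def object_speed(enter_time_s, exit_time_s, area_length_m):
    """Return speed in metres per second over an area of known length."""
    elapsed = exit_time_s - enter_time_s
    if elapsed <= 0:
        raise ValueError("exit time must be after entry time")
    return area_length_m / elapsed
```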
[0044] Metadata indicative of whether several objects of the same type or class are present in a given frame may also exist. In such a case, the visual representation may be in the form of a graph representing a number of objects belonging to the same type or class as a function of time in the video surveillance data.
[0045] In a step S310, a visual representation of an evolution of the sample values over time is displayed in the VMS. This visual representation may be in any appropriate form, such as in the form of a bar chart as shown in Figure 4.
Advantageously, the visual representation represents a combination of different types of sample values, for instance a combination of numerical quantitative values (e.g. values tracking an amount of motion) and qualitative values (e.g. values tracking different types of events of interest). In such a case, the visual representation may represent a first type of sample values in the form of a graph (e.g. bar chart) and a second type of sample values with colours on that graph. For instance, assume that a height of each bar on the chart of Figure 4 is proportional to a level (or an amount) of motion in the video surveillance data and that a colour of each bar represents one or more types of detected events. By looking at the chart, the viewer is able to detect a first period of time 410 during which little activity occurs and a second period of time 420 with more activity. The second period of time 420 includes high bars 424 indicating a high amount of motion and bars of intermediate height 422 indicating an intermediate amount of motion. The bars of intermediate height 422 have a first colour (e.g. red) while the higher bars 424 have a second colour (e.g. green), which indicates to the viewer that the intermediate bars 422 relate to a type of event that is different from the higher bars. As the viewer and/or an administrator of the VMS is able to map different colours to different types of events, it becomes possible for the viewer to quickly identify whether an amount of movement is related to a particular, and possibly more relevant, type of event. Thus, with the visual representation, the viewer may be able to quickly assess whether the intermediate bars 422 are more relevant to him/her than the higher bars 424. The graph of Figure 6 shows another possible visual representation in the form of a graph plotting (x,y) points or pairs along a timeline in the VMS.
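A toy sketch of this combined representation: each bar's height tracks the level of motion while its colour encodes the type of detected event. The event types, palette and data layout are assumptions for illustration:

```python
# Hypothetical mapping from event type to bar colour; unknown types fall
# back to a neutral colour.
EVENT_COLOURS = {"intrusion": "red", "vehicle": "green"}


def build_bars(samples):
    """samples: (time, motion_level, event_type) triples.
    Returns (time, bar_height, bar_colour) triples ready for charting."""
    return [(t, level, EVENT_COLOURS.get(event, "grey"))
            for t, level, event in samples]
```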
The visual representation may also be in the form of a coloured-timeline such as the timelines 210, 220 in Figure 2, where the different levels of motion and/or object detection may be represented by different colours and/or nuances thereof (e.g. dark blue and pale blue). For instance, using a colour scale 500 such as the one shown in Figure 5, it is possible to map different sample values to different RGB values. An entire spectrum of colours may also be used. Advantageously, the viewer may also be allowed to switch between different visual representations in the VMS, and/or adjust the sensitivity and/or motion threshold to update the visual representation(s) as need be. Other configurations are of course possible without departing from the scope of the present invention.
[0046] Figure 3B is a flowchart illustrating the steps of a computer-implemented method of displaying a visual representation of video surveillance data in a video management system, according to a second embodiment of the invention.
[0047] This second embodiment includes the previously described step S300, which is followed by a step S305. In the step S305, the sample values received in the step S300 are mapped to predetermined ranges.
[0048] These predetermined ranges may correspond to percentage ranges associated with RGB colours. For example, the following threshold ranges and RGB colours could be used:
* sample values from 0% to 25% (excluded) - 0,0,255 (Blue);
* sample values from 25% to 50% (excluded) - 0,255,255 (Cyan);
* sample values from 50% to 75% (excluded) - 0,255,0 (Green);
* sample values from 75% to 100% (excluded) - 255,255,0 (Yellow);
* sample values of 100% - 255,0,0 (Red);
each colour being indicative, for instance, of an amount of motion in the video surveillance data.
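The threshold mapping of paragraph [0048] can be sketched as a simple function (this is an illustration, not part of the patent disclosure; the function name is assumed). Each range excludes its upper bound, and only an exact value of 100% maps to red:

```python
def percentage_to_rgb(value):
    """Map a sample value in percent (0-100) to one of five fixed RGB
    colours, each range excluding its upper bound as in paragraph [0048]."""
    if value < 25:
        return (0, 0, 255)      # Blue
    if value < 50:
        return (0, 255, 255)    # Cyan
    if value < 75:
        return (0, 255, 0)      # Green
    if value < 100:
        return (255, 255, 0)    # Yellow
    return (255, 0, 0)          # Red (exactly 100%)
```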
[0049] Percentages may advantageously be used when the minimum and maximum sample values are known, such that the minimum value may be associated with 0%, the maximum value with 100%, and the intermediate values with corresponding intermediate percentages.
[0050] The predetermined threshold ranges need not be percentage ranges and may be ranges specific to the sample values. For instance, assuming the sample values correspond to speeds of vehicles in kilometres per hour, the following threshold ranges and RGB colours could be used:
* sample values of 0 kph (standstill) - 0,0,255 (Blue);
* sample values above 0 kph and up to 50 kph (included) (city limit) - 0,255,255 (Cyan);
* sample values above 50 kph and up to 80 kph (excluded) (country road limit) - 0,255,0 (Green);
* sample values between 80 kph and 130 kph (excluded) (highway limit) - 255,255,0 (Yellow);
* sample values above 130 kph (over the maximum speed limit) - 255,0,0 (Red).
[0051] Alternatively, knowing the minimum and maximum values that can be taken by the sample values, it is possible to associate the minimum sample value with a lower boundary 510 of the colour scale 500 shown in Figure 5, and the maximum sample value with an upper boundary 520 of the colour scale 500. Intermediate sample values may then be mapped to intermediate colours (colours between the lower and upper boundaries 510, 520) as explained earlier.
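The mapping of paragraph [0051] can be sketched as a linear interpolation between the colours at the lower boundary 510 and upper boundary 520 of the colour scale. The boundary colours below (blue and red) are illustrative; the patent does not fix them:

```python
def lerp_colour(value, vmin, vmax, lower=(0, 0, 255), upper=(255, 0, 0)):
    """Map a sample value in [vmin, vmax] to an RGB colour lying linearly
    between the lower- and upper-boundary colours of a colour scale.

    `lower` and `upper` stand in for the colours at boundaries 510 and
    520 of Figure 5; the defaults are assumptions."""
    t = (value - vmin) / (vmax - vmin)  # 0.0 at the minimum, 1.0 at the maximum
    return tuple(round(lo + t * (hi - lo)) for lo, hi in zip(lower, upper))
```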
[0052] Note that the ranges need not necessarily be associated with colours and may be associated with any other visual elements, e.g. visual patterns, signs, and the like.
[0053] Then, in a step S310', a visual representation of an evolution of the sample values over time based on the mapping is displayed in the VMS, such as the colour evolution shown in Figure 4. Thus, steps S310 and S310' essentially differ in that the sample values are directly (step S310) or indirectly (step S310') represented in the visual representation (i.e. indirectly after being mapped to threshold ranges associated with visual elements).
[0054] Note that the minimum and maximum values may change if a viewer adjusts a span of time, for instance via a time span slider 207 as described above, which represents a period of time of particular interest for the viewer for which he/she wishes to know whether motion occurs. In such a case, the method further comprises adjusting a span of time of the video surveillance data in the video management system, carrying out the mapping again (with the minimum and maximum values in the adjusted span of time) and updating the visual representation (previously displayed in step S310') accordingly. Note that the mapping is not carried out when the sample values have been directly represented (step S310) in the visual representation, and the visual representation can be updated directly without any mapping.
[0055] The viewer may, additionally or alternatively, move one or more timelines 210, 220 leftward or rightward while the vertical line 214 remains at a fixed position (or, according to an alternative embodiment, move the vertical line 214 along one or more timelines 210, 220 which remain(s) at a fixed position or at their fixed position(s)) in order to change a frame or moment in time of the video surveillance data that is currently being displayed on the display 215. For instance, if the viewer sees that some activity occurred some time earlier along the one or more timelines 210, 220, the viewer may want to go back to a point in time corresponding to that activity or to a particular period of that activity (e.g. a period of high and relevant motion intensity). The viewer may then decide to resume a live view by returning the vertical line or the timeline(s) to its or their original position(s) or to a position or positions corresponding to one or more live views. The viewer may alternatively or additionally resume a live view by using the above-mentioned media player buttons 206.
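Re-running the mapping over an adjusted span of time, as described in paragraph [0054], amounts to recomputing the minimum and maximum over the new window and normalising the samples within it. The sketch below is an illustration under that reading; the function and parameter names are assumptions:

```python
def remap_for_span(samples, start, end):
    """samples: dict mapping timestamp -> sample value.

    Restrict the samples to the adjusted span [start, end], recompute the
    minimum and maximum within that span, and return each timestamp mapped
    to its normalised value in [0.0, 1.0], ready for a colour mapping."""
    window = {t: v for t, v in samples.items() if start <= t <= end}
    vmin, vmax = min(window.values()), max(window.values())
    span = (vmax - vmin) or 1  # avoid division by zero on flat data
    return {t: (v - vmin) / span for t, v in window.items()}
```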
[0056] Note that the invention is not limited to the particular structures of the above-described buttons, timelines, and visual elements in the player interface 200, such as the speed adjustment slider 205, media player buttons 206, time span slider 207, timelines 210, 220, vertical line 214 and colour-coding 212, 221, 222, 230 (etc.). On the contrary, the skilled person will appreciate that the invention can be implemented with means for carrying out the functions of the said above-described buttons, timelines, and visual elements in the player interface 200 which are structurally different from those described in the present application.
[0057] Advantageously, the method according to the present invention comprises downscaling the video surveillance data prior to carrying out motion and/or object detection in the video surveillance data. This makes it possible to consume fewer computing resources (e.g. CPU resources) when carrying out the motion and/or object detection, as motion and/or object detection may achieve satisfactory results more quickly with downscaled video data than with video data with a higher frame rate and/or a higher resolution.
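As an illustration of the downscaling mentioned in paragraph [0057] (the patent does not specify a particular algorithm), a greyscale frame can be reduced by 2x2 block averaging before being handed to a motion detector; even dimensions are assumed for brevity:

```python
def downscale_2x2(frame):
    """frame: list of equal-length rows of pixel intensities
    (even width and height assumed for brevity).

    Returns a frame with half the width and height, each output pixel
    being the average of a 2x2 block, so a detector has a quarter of
    the pixels to process."""
    out = []
    for r in range(0, len(frame), 2):
        row = []
        for c in range(0, len(frame[0]), 2):
            block = (frame[r][c] + frame[r][c + 1]
                     + frame[r + 1][c] + frame[r + 1][c + 1])
            row.append(block // 4)
        out.append(row)
    return out
```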
[0058] The invention also provides a computer program which, when run on a computer, causes the computer to carry out the method according to any one of the previous embodiments and features.
[0059] The invention also provides a computer-readable data carrier having stored thereon the above-mentioned computer program.
[0060] The invention also provides a video surveillance system for carrying out the method according to any one of the previous embodiments and features. This video surveillance system may comprise (or alternatively consist of) the above-mentioned client apparatus having a display. This apparatus comprises one or more processors configured to receive sample values generated by sampling video surveillance data and/or metadata associated therewith, the sample values representing at least different levels of motion and/or object detection in the video surveillance data when motion and/or one or more objects is determined to exist, and cause to display, on the display, a visual representation of an evolution of the sample values over time.
[0061] The client apparatus may be configured to run the above-mentioned computer program, preferably from the above-mentioned computer-readable data carrier.
[0062] While the present invention has been described with reference to embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. The present invention can be implemented in various forms without departing from the principal features of the present invention as defined by the claims.

Claims (25)

  1. A computer-implemented method of displaying a visual representation of video surveillance data in a video management system, the method comprising: receiving, in the video management system, sample values generated by sampling video surveillance data and/or metadata associated therewith, the sample values representing at least different levels of motion and/or object detection in the video surveillance data when motion and/or one or more objects is determined to exist; displaying, in the video management system, a visual representation of an evolution of the sample values over time.
  2. The method according to claim 1, further comprising mapping, in the video management system, the sample values to predetermined threshold ranges, wherein the visual representation further represents an evolution of the sample values over time based on the mapping.
  3. The method according to claim 1 or 2, further comprising downscaling the video surveillance data prior to carrying out motion and/or object detection in the video surveillance data.
  4. The method according to any one of claims 1 to 3, wherein the object detection further includes object recognition.
  5. The method according to any one of claims 1 to 4, wherein the sample values are generated by sampling frames of video surveillance data or parts thereof.
  6. The method according to claim 5, wherein motion detection is determined based on a sensitivity value representing a sensitivity of a video camera, from which the video surveillance data originates, to motion, and determined based on a motion threshold representing a value above which motion is considered to occur.
  7. The method according to claim 6, further comprising setting the sensitivity and/or motion threshold in the video management system.
  8. The method according to claim 6 or 7, wherein the sampling of frames of video surveillance data is only carried out in relation with frames for which motion is determined to occur based on the said motion threshold.
  9. The method according to any one of claims 5 to 8, wherein motion detection is determined based at least on a number of pixels changed between different frames of the video surveillance data relative to a total number of pixels in these frames.
  10. The method according to any one of claims 1 to 5, wherein object detection is determined based at least on a number of neighbouring pixels moved between different frames of the video surveillance data.
  11. The method according to any one of claims 1 to 5, or 10, wherein object detection is determined based at least on a movement distance of a group of neighbouring pixels moved between different frames of the video surveillance data.
  12. The method according to the preceding claim, wherein object detection is further determined based on a speed of the group of neighbouring pixels moved between different frames of the video surveillance data.
  13. The method according to any one of claims 1 to 4, wherein motion detection is determined based on motion vectors embedded in at least one fully encoded or partly decoded video stream of the video surveillance data.
  14. The method according to any one of the preceding claims in combination with claim 2, the method further comprising adjusting a span of time of the video surveillance data in the video management system, carrying out the mapping again based on the adjusted span of time and updating the visual representation accordingly.
  15. The method according to any one of the preceding claims, wherein the sample values further represent at least one image characteristic of the frames of the video surveillance data.
  16. The method according to any one of claims 1 to 4, wherein the sample values are generated by sampling the metadata associated with the frames.
  17. The method according to claim 16, wherein the metadata represents at least one recognised object in the video surveillance data.
  18. The method according to claim 17, wherein the sample values further represent a number of recognised objects in the frames.
  19. The method according to claim 16, wherein the sample values further represent a speed of at least one recognised object in the frames.
  20. The method according to any one of the preceding claims, further comprising setting a sampling rate in the video management system, carrying out the sampling based on the sampling rate in one or more recording servers and receiving the video surveillance data and sample values from the one or more servers in the video management system.
  21. The method according to any one of the preceding claims, wherein the visual representation is displayed in the video management system with or along a timeline of the video surveillance data.
  22. The method according to the preceding claim, wherein the visual representation comprises a graph representing the said evolution along the timeline.
  23. The method according to any one of the previous claims, wherein the visual representation represents the said evolution with at least one colour or nuance thereof.
  24. A computer program which, when run on a computer, causes the computer to carry out the method according to any one of the preceding claims.
  25. A video surveillance system comprising a display and one or more processors configured to: receive sample values generated by sampling video surveillance data and/or metadata associated therewith, the sample values representing at least different levels of motion and/or object detection in the video surveillance data when motion and/or one or more objects is determined to exist; and cause to display, on the display, a visual representation of an evolution of the sample values over time.
GB2118668.9A 2021-12-21 2021-12-21 A method of displaying video surveillance data in a video management system Pending GB2614241A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2118668.9A GB2614241A (en) 2021-12-21 2021-12-21 A method of displaying video surveillance data in a video management system
PCT/EP2022/083181 WO2023117294A1 (en) 2021-12-21 2022-11-24 A method of displaying video surveillance data in a video management system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2118668.9A GB2614241A (en) 2021-12-21 2021-12-21 A method of displaying video surveillance data in a video management system

Publications (1)

Publication Number Publication Date
GB2614241A true GB2614241A (en) 2023-07-05

Family

ID=79601702

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2118668.9A Pending GB2614241A (en) 2021-12-21 2021-12-21 A method of displaying video surveillance data in a video management system

Country Status (2)

Country Link
GB (1) GB2614241A (en)
WO (1) WO2023117294A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1991003053A1 (en) 1989-08-18 1991-03-07 Jemani Ltd. A method of and apparatus for assisting in editing recorded audio material
GB2409123A (en) * 2003-12-03 2005-06-15 Safehouse Internat Inc Tracking foreground objects and displaying on a timeline
US20160307049A1 (en) * 2015-04-17 2016-10-20 Panasonic Intellectual Property Management Co., Ltd. Flow line analysis system and flow line analysis method
KR101858663B1 (en) * 2018-03-23 2018-06-28 (주)리얼허브 Intelligent image analysis system
GB2569557A (en) 2017-12-19 2019-06-26 Canon Kk Method and apparatus for detecting motion deviation in a video

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
K. SZCZERBAS. FORCHHAMMERJ. STOTTRUP-ANDERSENP. T. EYBYE: "Fast Compressed Domain Motion Detection in H.264 Video Streams for Video Surveillance Applications", 2009 SIXTH IEEE INTERNATIONAL CONFERENCE ON ADVANCED VIDEO AND SIGNAL BASED SURVEILLANCE, 2009, pages 478 - 483, XP031542079

Also Published As

Publication number Publication date
WO2023117294A1 (en) 2023-06-29

Similar Documents

Publication Publication Date Title
EP2283472B1 (en) A system and method for electronic surveillance
CN106327875B (en) Traffic video monitoring management control system
JP6088541B2 (en) Cloud-based video surveillance management system
US7428314B2 (en) Monitoring an environment
US8107680B2 (en) Monitoring an environment
US20150264296A1 (en) System and method for selection and viewing of processed video
JP2017033554A (en) Video data analysis method and device, and parking place monitoring system
CN101860731A (en) Video information processing method, system and server
JP2014512768A (en) Video surveillance system and method
CN112437264B (en) Monitoring video processing method and device
US11575837B2 (en) Method, apparatus and computer program for generating and displaying a heatmap based on video surveillance data
US20160210759A1 (en) System and method of detecting moving objects
GB2552511A (en) Dynamic parametrization of video content analytics systems
US20150125130A1 (en) Method for network video recorder to accelerate history playback and event locking
CN106781167B (en) Method and device for monitoring motion state of object
KR101033238B1 (en) Video surveillance system and recording medium recording in which video surveillance program is recorded
KR20160093253A (en) Video based abnormal flow detection method and system
US20210136327A1 (en) Video summarization systems and methods
CN112449159B (en) Monitoring video display control method and device, electronic equipment and storage medium
CN109544589A (en) A kind of video image analysis method and its system
GB2614241A (en) A method of displaying video surveillance data in a video management system
EP4125002A2 (en) A video processing apparatus, method and computer program
Shih et al. Real-time camera tampering detection using two-stage scene matching
GB2594459A (en) A method, apparatus and computer program for generating and displaying a heatmap based on video surveillance data
JP2022098663A (en) Monitoring system, abnormality sensing detection method of monitoring system, and abnormality sensing detection program of monitoring system