GB2575683A - Method, device, and computer program for identifying relevant video processing modules in video management systems
- Publication number
- GB2575683A (Application GB1811870.3A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- video
- video source
- pairs
- processing module
- pair
- Prior art date
- Legal status
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
- G06V20/625—License plates
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Configuring a video management system comprising a plurality of video processing or content analytics modules VCAk, the video management system receiving video streams from a plurality of video sources VSDi. The method comprises: determining a set of pairs of video source and video processing module, the pairs being different from one another, the set of pairs comprising at least one pair for each video source and, among the pairs directed to one video source, one pair for each of several video processing modules; for each pair, processing a video stream sample obtained from the corresponding video source and, in response, determining a relevance indication for the referenced video processing module to process streams from the referenced video source; and, for each video source, determining a list of processing modules relevant to process video streams obtained from that video source, the determination being based on the relevance indications. In examples, the video management system receives security camera streams from a variety of contexts, and the processing modules may include vehicle number plate recognition or human face recognition. Processing a stream may determine its relevance to the number plate recognition processor by assigning a number plate recognition score to the stream or input feed.
Description
METHOD, DEVICE, AND COMPUTER PROGRAM FOR IDENTIFYING RELEVANT VIDEO PROCESSING MODULES IN VIDEO MANAGEMENT SYSTEMS
FIELD OF THE INVENTION
The present invention relates generally to video management systems and more particularly to a method, a device, and a computer program for identifying relevant video processing modules such as video content analytics in video management systems.
BACKGROUND OF THE INVENTION
The present invention deals with a video management system (VMS) which manages video source devices (VSD) that can stream video data.
Typically, a network camera belonging to a video surveillance system is a video source device as it can capture video and stream it. As another example, a device that obtains a video stream from another device (e.g. a network camera) and that can stream it may be considered as a video source device. This may be the case of a server that obtains streams from a set of network cameras and that merges them to generate an output stream.
Generally speaking, a video surveillance system can discover video source devices, obtain their streamed data and store these data. It also enables a user to display the obtained data (both live streams and previously captured streams).
A video surveillance system may have various additional features. For instance, it may enable the settings of the cameras to which it is connected to be changed. It may also enable it to be defined that video content analytics (VCA) should be run on certain streams, and allow a user to search for a specific video stream, e.g. based on time. For instance, a video content analytics may be dedicated to determining the presence of various objects such as vehicles, human beings or animals, or to recognition tasks such as license plate recognition or face recognition. Other examples of use of video content analytics include intrusion detection, abandoned object detection, or object counting (e.g. car or people counting).
In most cases, a single video surveillance system is used at a given site (e.g. a company facility). However, if a company has various sites, one video surveillance system may be installed on each site (multi-VMS architecture). For security purposes, a central office of the company may control and monitor all the remote VMSs. Therefore, the number of video source devices to control and monitor may be high and the use of video content analytics may help to detect events of interest in the videos. Configuration of the video content analytics is generally performed by dedicated personnel having good expertise and knowledge of the whole system.
It is also to be noted that multiple video surveillance systems belonging to different companies may also be connected to third parties, for example to video protection service providers or to law enforcement agencies. Such third parties and/or agencies should also configure their own video content analytics to detect additional behavior and/or to analyze video stream contents according to their own settings. In such cases, there is no dedicated personnel with knowledge of all the remote systems and thus, configuring video content analytics may be difficult.
In the context of video surveillance, video management systems generally record large amounts of video data as recordings. To help in the filtering of the recordings or to search recordings of interest, many video content analytics may be used, for example to identify objects or persons, to detect suspect behavior or to identify properties (e.g. color of object, number of persons, etc.), or to detect emergency situations (e.g. flame detection). However, running many video content analytics on a high number of recordings has a significant computing cost and requires expensive servers. As a consequence, such filtering or searching can be done only for video management systems having few video source devices.
It is also to be observed that different video content analytics target different video contents and thus, each of the video content analytics should preferably be executed only for a sub-set of the video source devices. For the sake of illustration, video content analytics directed to the detection of cars are advantageously used for analyzing video streams received from video cameras located outdoors (e.g. in parking areas or streets). Indeed, it would be useless as well as resource and time consuming to use such video content analytics for analyzing video streams received from video cameras located indoors. Accordingly, each of the video content analytics is generally configured manually for each of the video source devices knowing, in particular, their location, which is time-consuming and error-prone. This can be done only for video management systems handling a small number of video cameras, by people having a good knowledge of the video management system considered.
Therefore, there is a need for improving the setting of video management systems, in particular to optimize resources.
SUMMARY OF THE INVENTION
The present invention has been devised to address one or more of the foregoing concerns.
According to a first aspect of the invention, there is provided a method of configuring at least one video management system comprising a plurality of video processing modules, the at least one video management system receiving video streams from a plurality of video sources, the method comprising:
determining a set of pairs of video source and video processing module, the pairs being different from each other, the set of pairs comprising at least one pair for each video source of the plurality of video sources and among the pairs directed to one video source, one pair for each of several video processing modules;
for each pair of the set of pairs, processing at least one video stream sample obtained from the video source referenced by the pair;
in response to the processing, determining a relevance indication for the video processing module referenced by the pair to process video streams obtained from the video source referenced by the pair;
for each video source of the plurality of video sources, if at least one video processing module is relevant to process video streams obtained from the video source, determining a list of at least one video processing module that is relevant to process video streams obtained from the video source, the determination being based on the relevance indication.
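By way of illustration only, the behaviour of this method may be sketched in Python as follows; all names (video_sources, processing_modules, get_sample, relevance, threshold) are hypothetical placeholders and the threshold-based relevance test is merely one possible way of deriving the list from the relevance indications:

```python
from itertools import product

def configure(video_sources, processing_modules, get_sample, relevance, threshold=2.0):
    """Hypothetical sketch of the configuration method.

    get_sample(source) -> one video stream sample obtained from the source.
    relevance(module, sample) -> a relevance indication (e.g. detections per minute).
    """
    # Determine the set of distinct (video source, processing module) pairs:
    # at least one pair per source and, for a given source, one pair per module.
    pairs = list(product(video_sources, processing_modules))

    # For each pair, process a sample and derive a relevance indication.
    scores = {(src, mod): relevance(mod, get_sample(src)) for src, mod in pairs}

    # For each source, keep the modules deemed relevant; a list is only
    # produced when at least one module is relevant for the source.
    relevant = {}
    for src in video_sources:
        modules = [mod for mod in processing_modules if scores[(src, mod)] >= threshold]
        if modules:
            relevant[src] = modules
    return relevant
```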
Accordingly, the method of the invention makes it possible to automatically determine which video content analytics should be used in relation with which video streams in a multi-camera system of any size. The method of the invention also makes it possible to assess whether the video content analytics used are still appropriate in relation with the actual context of the video management system (health monitoring), for instance when new video cameras are added, or in case of accidental or voluntary modification of camera configurations.
The method of the invention may be implemented in almost any video management system, independently of the number of video source devices. It makes it possible to use video content analytics efficiently (the video content analytics being used in relation with video source devices for which reliable results may be obtained) and to configure video content analytics easily, in particular when video source devices of local video management systems are used by third parties such as law enforcement agencies or external security agencies. Moreover, the method of the invention makes it possible to select video content analytics as a function of available processing resources, with a limited impact on efficiency since the selected video content analytics run in relation with the most accurate video source devices.
According to a second aspect of the invention, there is provided a device for configuring at least one video management system comprising a plurality of video processing modules, the at least one video management system receiving video streams from a plurality of video sources, the device comprising a microprocessor configured for: determining a set of pairs of video source and video processing module, the pairs being different from each other, the set of pairs comprising at least one pair for each video source of the plurality of video sources and among the pairs directed to one video source, one pair for each of several video processing modules;
for each pair of the set of pairs, processing at least one video stream sample obtained from the video source referenced by the pair;
in response to the processing, determining a relevance indication for the video processing module referenced by the pair to process video streams obtained from the video source referenced by the pair;
for each video source of the plurality of video sources, if at least one video processing module is relevant to process video streams obtained from the video source, determining a list of at least one video processing module that is relevant to process video streams obtained from the video source, the determination being based on the relevance indication.
The advantages of such a device are similar to those described above in relation with the method of the invention.
Optional features of the invention are further defined in the dependent appended claims.
Since the present invention may be implemented in software, the present invention may be embodied as computer readable code for provision to a programmable apparatus on any suitable carrier medium, and in particular a suitable tangible carrier medium or suitable transient carrier medium. A tangible carrier medium may comprise a storage medium such as a floppy disk, a CD-ROM, a hard disk drive, a magnetic tape device or a solid state memory device or the like. A transient carrier medium may include a signal such as an electrical signal, an electronic signal, an optical signal, an acoustic signal, a magnetic signal or an electromagnetic signal, e.g. a microwave or RF signal.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments of the invention will now be described, by way of example only, and with reference to the following drawings in which:
Figure 1a illustrates an example of the architecture of a video management system implementing embodiments of the invention;
Figure 1b illustrates an example of the architecture of a multi-video management system implementing embodiments of the invention;
Figure 2 illustrates an example of steps for building a score table, processing such a table, and identifying video content analytics to be used in relation with video source devices based on this score table;
Figure 3 illustrates an example of steps for characterizing the efficiency of a video content analytics for processing video stream samples obtained from a video source device;
Figure 4 illustrates an example of steps of an assessment process for verifying and adapting the selection of VCAs for the VSDs, according to changes in the VMS(s);
Figure 5 illustrates an example of associating video content analytics to video source devices; and
Figure 6 is a schematic block diagram of a computing device for implementing embodiments of the invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Embodiments of the present invention relate to identification of relevant video processing modules such as video content analytics for each of a set of video source devices in a video management system. This may be done from video stream samples obtained from these video source devices and from characteristics of these video source devices, for example the video resolution. According to embodiments, after a sub-set of video source devices has been selected based on their characteristics, video stream samples are obtained from the video source devices of the sub-set to estimate a score for each pair of video source device and video processing module, so that relevant and reliable results may be provided. The most relevant video processing modules may then be identified based on the estimated scores. The relevance of video processing modules may be evaluated periodically to optimize the video management system.
Figure 1a illustrates an example of the architecture of a video management system implementing embodiments of the invention.
In this example, the video management system 100 is configured to manage a plurality of video source devices 105. Using recording service 110, video management system 100 obtains and stores media data (for example video data and possibly audio data) as well as metadata obtained from video source devices 105. For the sake of illustration, these media data (or recordings) and the associated metadata are stored in dedicated database 115. According to embodiments, at least a part of the metadata is related to video source devices 105. Such a part of the metadata may comprise a device identifier, a device location, and/or device configuration parameters.
As illustrated, video management system 100 further comprises a video content analytics classification service 120 that makes it possible to identify, if any, one or more video content analytics that may prove to be useful for each of the video source devices. To that end, video content analytics classification service 120 evaluates the efficiency or relevance of video content analytics when they are executed in relation to a video source device, for example, when they are executed on video stream samples obtained from this video source device. In order to evaluate the efficiency or relevance, video content analytics classification service 120 interacts with video source devices 105 and/or with database 115 of recordings and metadata obtained from these video source devices in order to compute a score for each of the video content analytics executed on video stream samples obtained from a considered video source device. The scores may be stored as metadata in database 115.
Such scores may be used to determine which video content analytics should be used in relation to which video source devices so as to associate a set of video content analytics to each of the video source devices. Such sets of video content analytics may be empty or may comprise from one to several video content analytics.
When sets of video content analytics have been identified for the video source devices, a selective execution of the video content analytics may be performed, depending on the video source devices, as illustrated with video content analytics selective execution service 125, to process new video streams. Running the identified video content analytics makes it possible to analyze recordings so as to extract information that may be recorded in database 115 in relation with the corresponding recordings. Such an analysis may consist in detecting predetermined objects such as faces or license plates, in counting the detected objects, and/or in recognizing them.
Execution of the identified video content analytics may be further based on available processing power. It may be carried out during recording or later on.
It is to be noted that the architecture of a video management system implementing embodiments of the invention is not limited to the one illustrated in Figure 1a, and other components and/or services may be used. For the sake of illustration, the VCA scores associated with video source devices may be stored in a dedicated database (distinct from the database used to store recordings and metadata). Likewise, other services such as a configuration service and/or an administration service may be implemented.
Figure 1b illustrates an example of the architecture of a multi-video management system implementing embodiments of the invention.
As illustrated, the whole system 150 comprises a set of n video management systems, denoted 155-1 to 155-n. For the sake of illustration, it is considered that each of these video management systems is made accessible through a VMS gateway denoted 160-1 to 160-n, respectively. A gateway may be a part of the considered video management system that makes it possible to access the VMS features from a remote system or may be implemented as a service running on a cloud or at a specific remote site from which all video management systems are interconnected.
The whole system 150 further comprises a video content analytics classification service 165 that is similar to the video content analytics classification service 120 in Figure 1a. A main difference between video content analytics classification service 165 and video content analytics classification service 120 is that video content analytics classification service 165 can interact with the video source devices associated with different video management systems, whereas video content analytics classification service 120 typically interacts with the video source devices associated with a single video management system. Therefore, video content analytics classification service 120 can be implemented as a local service, whereas video content analytics classification service 165 is more likely to be implemented as a shared service, for example a cloud service. However, it is observed that a cloud service may also be used in the context of a single video management system and local services may be used in the context of multi-video management systems.
As illustrated, the whole system 150 further comprises a video content analytics selective execution service 170 that is similar to video content analytics selective execution service 125 in Figure 1a. Again, running the identified video content analytics makes it possible to analyze recordings corresponding to the associated video source devices in order to extract the most valuable information for further improved use of the recordings.
This may be used, for example, when a law enforcement agency seeks to identify video sequences containing cars, by selecting the video content analytics that are configured for carrying out such identification. Based on the scores associated with the video content analytics and video source devices, one can identify the relevant video source devices from the selected video content analytics and then conduct searches for video sequences output from the identified video source devices.
It is observed that in a multi-VMS architecture such as the one illustrated in Figure 1b, the video content analytics classification service and the video content analytics selective execution service may take advantage of the distributed processing capabilities of the VMS gateways: indeed, all the service operations may be performed using resources from the central device accessing all the VMS video source devices, or the central device may manage the use of each of the VMS gateway resources for local execution of the classification service and of the VCA selective execution service.
According to embodiments and as set forth above, identification (or selection) of video content analytics to be executed for a given video source device is based on scores, for example a table of scores for a plurality of video content analytics that may be executed by a video management system (a single video management system or a multi-video management system), a plurality of video source devices of this video management system, and a plurality of environment conditions.
Table 1 in the Appendix illustrates an example of a score table. For the sake of illustration, the table illustrates the scores relating to four video content analytics (VCA1 to VCA4), n video source devices (C1 to Cn), and two different environment conditions (Env1 and Env2), for example two periods of the day such as night and day. Each line of the score table contains the scores obtained for the corresponding video source device, for all the video content analytics and all the environment conditions used to build the table for which results are obtained, or an empty value if no result has been obtained.
Still for the sake of illustration, it is assumed in this example that
- VCA1 is directed to car detection and the score result that is stored in Table 1 represents a number of cars detected per time period, for example a number of cars per minute,
- VCA2 is directed to license plate recognition and the score result that is stored in Table 1 represents a number of license plates identified per time period, for example a number of license plates per minute,
- VCA3 is directed to person detection and the score result that is stored in Table 1 represents a number of persons detected per time period, for example a number of persons per minute, and
- VCA4 is directed to face detection and the score result that is stored in Table 1 represents a number of faces detected per time period, for example a number of faces per minute.
Likewise, it is assumed that video source device C1 is operable for capturing video streams of a first parking area by using a large field of view camera, video source device C2 is operable for capturing high-resolution video streams of a building entrance, and video source device Cn is operable for capturing low-resolution video streams of a second parking area.
As illustrated in Table 1, VCA1 makes it possible to detect a large number of cars from images obtained from video source device C1 during any period of the day (scores of 10.5 and 7.3 for the day and night periods, respectively) while VCA2 makes it possible to recognize license plates mostly during the day period (scores of 8.4 and 0.5 for the day and night periods, respectively). This may result from low lighting of the parking area during the night period. Regarding detection of persons from images obtained from video source device C1, VCA3 makes it possible to detect a high number of persons during any period of the day (scores of 5.2 and 6.8 for the day and night periods, respectively). Finally, VCA4 is not used for processing images obtained from video source device C1, for example because the large field of view of the camera used to obtain these images does not provide enough detail for face recognition.
Regarding video source device C2, VCA1 makes it possible to detect a low number of cars during any period of the day (scores of 1.5 and 0.2 for the day and night periods, respectively) while VCA2 does not make it possible to recognize any license plate (scores of 0 for both the day and night periods). This may result from the location of the video camera corresponding to video source device C2 (e.g. cars only appear in the background of the images). Regarding detection of persons from images obtained from video source device C2, VCA3 makes it possible to detect a large number of persons during the day period (score of 8.2) and a small number of persons during the night period (score of 2.4). Similarly, VCA4 makes it possible to detect a large number of faces during the day period (score of 7.5) and a small number of faces during the night period (score of 1.8).
Regarding video source device Cn, VCA1 makes it possible to detect a high number of cars during the day period but none during the night period (scores of 5 and 0 for the day and night periods, respectively). Likewise, VCA3 makes it possible to detect a large number of persons during the day period but none during the night period (scores of 9.4 and 0 for the day and night periods, respectively). VCA2 and VCA4 are not used for processing images obtained from video source device Cn, for example because of the low resolution of the images.
From a score table such as Table 1, it may be decided which video content analytics should be used and which should not be used for each video source device. For example, for each video source device and each environment condition, the two video content analytics providing the best results above a threshold (that may be set, for example, to two detections per minute) may be selected, as sketched in the code example after this list:
- for video source device C1: VCA1 and VCA2 for the day period; VCA1 and VCA3 for the night period;
- for video source device C2: VCA3 and VCA4 for the day period; VCA3 for the night period;
- for video source device Cn: VCA1 and VCA3 for the day period; none for the night period.
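For the sake of illustration, this selection rule may be sketched in Python as follows; the score table mirrors the values of Table 1 discussed above, while the threshold of two detections per minute and the limit of two VCAs per environment condition are taken from this example:

```python
# Hypothetical score table mirroring Table 1: scores[source][env][vca] = detections/min.
scores = {
    "C1": {"day":   {"VCA1": 10.5, "VCA2": 8.4, "VCA3": 5.2},
           "night": {"VCA1": 7.3,  "VCA2": 0.5, "VCA3": 6.8}},
    "C2": {"day":   {"VCA1": 1.5,  "VCA2": 0.0, "VCA3": 8.2, "VCA4": 7.5},
           "night": {"VCA1": 0.2,  "VCA2": 0.0, "VCA3": 2.4, "VCA4": 1.8}},
    "Cn": {"day":   {"VCA1": 5.0,  "VCA3": 9.4},
           "night": {"VCA1": 0.0,  "VCA3": 0.0}},
}

def select_vcas(scores, threshold=2.0, max_per_source=2):
    """Keep, per source and environment condition, the best VCAs above the threshold."""
    selection = {}
    for source, envs in scores.items():
        selection[source] = {}
        for env, per_vca in envs.items():
            candidates = [(vca, s) for vca, s in per_vca.items() if s >= threshold]
            candidates.sort(key=lambda item: item[1], reverse=True)
            selection[source][env] = [vca for vca, _ in candidates[:max_per_source]]
    return selection

print(select_vcas(scores))
# e.g. C1/day -> ['VCA1', 'VCA2'], C2/night -> ['VCA3'], Cn/night -> []
```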
This makes it possible to select and to execute the most appropriate video content analytics in relation with each video source device. As a result, the use of the resources of the video management system(s) is optimized.
Of course, other criteria may be used to select video content analytics, targeting for example the use of only two video content analytics per period of the day, for all the video source devices. According to the example illustrated in Table 1, this would lead to use of VCA1 and VCA3 for each day period.
Figure 2 illustrates an example of steps for building a score table, processing such a table, and identifying video content analytics to be used in relation with video source devices based on this score table.
For the sake of illustration, these steps may be executed by video content classification service 120 or 165 described by reference to Figures 1a and 1b.
As illustrated, a first step is directed to the selection of a set of video source devices for which video content analytics are to be identified (step 200). As described above, identification of the video content analytics to be executed in relation with a given video source device is based on the relevance of each of the video content analytics to process video streams obtained from this video source device.
When considering a single video management system, the set of video source devices may contain one, several, or all the video source devices of the video management system. The VMS owner may also specify, for sub-sets of the video source devices, a set of VCAs comprising one, several, or all the VCAs the video management system is able to execute. This makes it possible to target VCAs for sub-sets of video source devices using the user’s knowledge of the VMS. For example, all the outdoor cameras may be selected to analyze the relevance of VCAs directed to person detection, to car detection, and to license plate recognition and all the indoor cameras may be selected to analyze only the relevance of VCAs directed to person and face detection.
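For illustration, such operator-supplied targeting may be expressed as a simple mapping from camera groups to candidate VCAs; the group names and VCA identifiers below are hypothetical:

```python
# Hypothetical per-group targeting of the VCAs whose relevance is to be analyzed.
candidate_vcas = {
    "outdoor": ["person_detection", "car_detection", "license_plate_recognition"],
    "indoor":  ["person_detection", "face_detection"],
}
```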
When considering a multi-VMS architecture, the set of selected video source devices may contain all the video source devices from one VMS or from several VMSs. This makes it possible for a user to configure on a step-by-step basis the use of all the VCAs that may be executed. This may be of special interest, for example, when adding a new VMS location to a law enforcement agency (LEA) global system. In such a case, all the video source devices of the new VMS are selected and all the VCAs are selected to determine automatically the most relevant configuration of VCAs per video source device, without requiring the LEA to know the use of each video source device.
After having selected a set of video source devices, indexes i and j, representing indexes on video source devices and on video content analytics, respectively, are initialized to zero (step 205).
Next, properties P(i) of the video source device having index i, denoted VSD(i), are obtained (step 210). These properties may comprise a type of the device, for example thermal camera or video camera, a color and/or pixel resolution, a frame rate, a video coding configuration, and/or a field of view.
Next, the VSD properties PR(j) required by the video content analytics having index J, denoted VCA(j), for conducting a relevant analysis are determined (step 215). For example, for a face detection analysis, a video camera having a minimum pixel resolution and a specific video encoding may be required. Still for the sake of illustration, a video camera having a large field of view, a minimum frame rate, and a minimum color resolution may be required to provide relevant results for a flame detection analysis.
Next, the required VSD properties PR(j) are compared with the properties P(i) of the current video source device (step 220) to determine whether or not the properties of the current video source device (having index i) comply with the VSD properties required by the current video content analytics (having index j).
If the properties of the current video source device do not comply with the VSD properties required by the current video content analytics, the value of the score associated with the current video source device (having index i) and with the current video content analytics (having index j), denoted S(i,j), is set to empty (step 225). This means that the current video content analytics should not be used to process video streams obtained from the current video source device. This results from inappropriate characteristics or settings of the video source device. For example, it has been observed that performing a face detection analysis on images obtained from a thermal camera or from a low-resolution camera gives no result or poor results. The score value S(i,j) is stored to be used later.
On the contrary, if the properties of the current video source device comply with the VSD properties required by the current video content analytics, the score information S(i,j) associated with the current video source device and with the current video content analytics is determined (step 230). This step, which can be carried out as a background process, is based on an analysis by the current video content analytics of video stream samples acquired from the current video source device. If the determination is made as a background process, scheduling information directed to the analysis may be stored. Such information may comprise a time period duration, denoted TD, for estimating the score information, a sample timing interval (specifying a date for sample validity), denoted TI, or a minimum number of samples to study, denoted Nsample, for estimating the score information.
According to embodiments, score information comprises at least one evaluation result R that may indicate, for example, a number of objects or events detected by period of time in the analyzed video stream samples (e.g. 10 cars/min or 5 persons/min). Score information may comprise additional information such as:
- a total number of video stream samples that have been processed by the current VCA, denoted S,
- a total number of results obtained by the VCA for all the video stream samples, denoted Nresult, that may correspond to the number of video stream samples in which objects have been detected,
- a cumulative duration of all the video stream samples that have been processed by the current VCA, denoted D,
- the average and maximum numbers of objects identified by the current VCA for all the video stream samples, denoted Oaverage and Omax, respectively,
- an average confidence value of object detection by the current VCA, denoted paverage, and/or
- the time (or date) indicating the end time of the score evaluation process, denoted Tend.
Still according to embodiments, such information may be obtained for different periods of time, such as specific days or week periods (e.g. days of the week or of the week-end), for specific periods of the day (e.g. day or night period), and/or for specific weather conditions (e.g. rain or sunshine). At the initialization of the score evaluation, most of the items of the score information, such as the total number of video stream samples processed by the current VCA and the total number of results obtained by the VCA for all the video stream samples, are initialized (for example set to zero), except some of them such as the time (or date) indicating the end time of the score evaluation process, which is set to the value Tcurrent + TD.
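A minimal sketch of such score information and of its initialization is given below; the field names follow the notations above, while the Python representation and the time base (seconds since the epoch) are assumptions:

```python
from dataclasses import dataclass
import time

@dataclass
class ScoreInfo:
    """Hypothetical container for the per-(VSD, VCA) score information."""
    S: int = 0              # total number of processed video stream samples
    N_result: int = 0       # samples for which the VCA produced detections
    D: float = 0.0          # cumulative duration of the processed samples
    O_average: float = 0.0  # average number of objects per useful sample
    O_max: int = 0          # maximum number of objects seen in one sample
    p_average: float = 0.0  # average detection confidence
    T_end: float = 0.0      # end time of the score evaluation process

def init_score_info(TD_seconds: float) -> ScoreInfo:
    # Counters start at zero; the end time is set to Tcurrent + TD.
    return ScoreInfo(T_end=time.time() + TD_seconds)
```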
Step 230 is described in more detail by reference to Figure 3.
As described above, step 230 may be executed as a background process. Accordingly, the next step (i.e. step 235, for proceeding to the evaluation of the efficiency of the next VCA for the current VSD (if any) or for determining the efficiency of VCAs for a next VSD (if any)) may be executed before the end of step 230.
As illustrated, index j is increased by one (step 235) and a test is performed to determine whether or not index j is equal to the number of VCAs to be evaluated (step 240). If index j is not equal to the number of VCAs to be evaluated, steps 215 to 240 are repeated to evaluate the efficiency of the next VCA for the current video source device.
On the contrary, if index j is equal to the number of VCAs to be evaluated, index i is increased by one and index j is set to zero (step 245). Next, a test is performed to determine whether or not index i is equal to the number of VSDs to be tested (step 250). If index i is not equal to the number of VSDs to be tested, steps 210 to 250 are repeated to evaluate the efficiency of the VCAs for the next video source device.
Accordingly, steps 215 to 240 are performed for each VCA and steps 210 to 250 are performed for each VSD.
After having evaluated all the video content analytics for all the video source devices, a score table such as the one given in the Appendix is obtained (wherein only the score results R are given, for the sake of clarity).
Next, the video content analytics that should be executed, for each video source device, are identified or selected (step 255).
According to embodiments, such an identification or selection is based at least on the score results R and on a threshold criterion denoted Th defining a minimal score value that is expected for executing the corresponding VCA.
In such cases, all the VCAs for which score results are greater than the threshold (R > Th) are selected to be executed in relation with the corresponding video source device. However, other criteria may be used to control resource consumption. For example, the maximum number of VCAs that can be executed in relation with a single video source device may be set to a given value, for example to three. Likewise, it is possible to limit the number of VCAs that can be executed in relation with video source devices, for example to limit the number of VCAs that can be used with video source devices of a set of video source devices to twice the number of video source devices in this set. This makes it possible to adapt the selection of the video content analytics to be used according to resource constraints of the video management system(s).
The selection of the VCAs to be executed may be static, that is to say performed once, for example after carrying out steps 210 to 250, and after each execution of these steps. This selection may also be dynamic. In such a case, it may be performed regularly, for example twice a day or twice a week, and may be adapted according to environment conditions, for example taking into account the weather conditions (e.g. through external database information connection and retrieval).
To select one VCA among two VCAs having the same score result R, for example when only one VCA should be applied to one VSD, the selection may give priority to the one having either the highest average confidence (paverage), the highest number of obtained results Nresult, or the highest obtained result ratio Nresult/S.
It is also noted that in a multi-VMS architecture, each VMS can locally execute a VCA in relation with a video source device. Therefore, resource consumption may be optimized by taking advantage of available results (obtained, for example, through metadata associated with video streams) so as to evaluate only complementary VCAs (if any) to obtain further information for third-party users. Such information may enable, for example, more effective investigations for law enforcement agencies or security providers. This may also apply to a standalone VMS, when a VSD embeds a local VCA.
In such cases, additional steps may be executed to refine the list of VCAs to be executed in relation with VSDs (the list of recommended VCAs) that is obtained after executing step 255 in Figure 2.
A first additional step aims at determining, for each VMS, the list of the VCAs that have been executed locally in relation with each video source device and at obtaining the corresponding results.
Next, a list of missing VCAs is determined by comparing the list of recommended VCAs with the list of VCAs locally executed. According to embodiments, this list of missing VCAs comprises:
- the recommended VCA(s) that are never executed by a local VMS, providing complementary analysis (this can be the case, for example, of a VCA directed to face recognition when considering a camera locally running a VCA directed to face detection), and
- the recommended VCA(s) that are also executed by a local VMS but that give score results that are much higher than those obtained locally. This may be due to different configuration settings. In this case, using such recommended VCAs makes it possible for third parties to obtain more reliable results from video source devices by independent use of their own VCA(s).
When the same recommended VCA and locally executed VCA have the same range of score values, the recommended VCA is not added to the list of missing VCA(s). Therefore, the processing power of third parties is used efficiently to provide only additional analysis of streams received from the video source devices.
The list of missing VCAs is then used instead of the list of recommended VCAs.
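A minimal Python sketch of this refinement is given below; the gain factor used to decide that a remote score is "much higher" than the local one is an assumed parameter, not a value taken from the description:

```python
def missing_vcas(recommended_scores, local_scores, gain=2.0):
    """Hypothetical computation of the list of missing VCAs for one video source.

    recommended_scores: {vca: score R} for the recommended VCAs.
    local_scores: {vca: score R} for the VCAs already executed by the local VMS.
    """
    missing = []
    for vca, remote_score in recommended_scores.items():
        if vca not in local_scores:
            # Never executed locally: provides complementary analysis.
            missing.append(vca)
        elif remote_score > gain * local_scores[vca]:
            # Also executed locally, but the remote configuration scores much higher.
            missing.append(vca)
        # Same range of score values: skip, the local results can be reused.
    return missing
```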
Figure 3 illustrates an example of steps for characterizing the efficiency of a video content analytics for processing video stream samples obtained from a video source device. Such steps illustrate an example of implementation of step 230 in Figure 2, which aims at determining score information when executing a VCA for a given video source device using previously recorded video streams or using live video streams. They can be executed as a background process.
For the sake of clarity, it is assumed that a VCA can detect only a single type of object such as cars, persons, or license plates. In the case where a VCA can detect several types of objects, it is considered as a set of several VCAs each being able to detect only a single type of object.
To generate video stream samples, a sampling policy may be used. Such a sampling policy may be used to obtain video stream samples based on video streams recorded from the video source device, according to a scheduled sample time period duration TD and/or to a minimum number of samples Nsample. The sampling policy may also define the portions of the video streams recorded by a video source device that should be analyzed. The sampling may be periodic, for example twice every hour during one week. Naturally, there are many sampling policies that can be used. Still for the sake of illustration, some specific time intervals that should be sampled more than others may be defined (e.g. specific times of the day or specific days of the week). This may be useful to obtain video stream samples corresponding to time intervals that are not represented in actual recordings.
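For illustration, a periodic sampling policy such as the one mentioned above may be sketched as follows; the helper name and the half-hour interval are assumptions:

```python
from datetime import datetime, timedelta

def periodic_sampling_times(start, TD, interval=timedelta(minutes=30), N_sample=None):
    """Hypothetical periodic sampling policy: one sample every `interval`
    over a period of duration TD, optionally capped at N_sample samples."""
    times, t, end = [], start, start + TD
    while t < end and (N_sample is None or len(times) < N_sample):
        times.append(t)
        t += interval
    return times

# e.g. twice every hour during one week:
schedule = periodic_sampling_times(datetime.now(), timedelta(days=7))
```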
Moreover, the sampling should advantageously take into consideration the current workload of the different elements that are involved. For example, getting multiple streams from several VSDs of the same VMS at the same time should be avoided in a multi-VMS architecture.
As illustrated in Figure 3, a first step is directed to obtaining a video stream sample (i.e. a stream of media data) from the considered video source device (step 300). There exist several solutions for obtaining such a stream.
According to particular embodiments, such a video stream sample is obtained from previously recorded video streams that may be stored by the VMS associated with the considered video source device. An advantage of such a solution is that the media data are already available and thus, the process of evaluating the relevance of a VCA may be launched at any time without delay. However, in such a case, it may happen that the available recordings do not satisfy the sampling policy to be used, such as the sample timing interval TI and/or the periodic sampling policy.
According to other embodiments, the video stream sample is obtained from the live video stream recorded by the given video source device, that may be obtained at given sampling times as specified by the sampling policy.
Next, context information relating to the obtained video stream sample is determined (step 305). Context information may be obtained from metadata associated with the obtained media data or via an external source such as weather database resources.
Context information may characterize the relevance of the obtained video stream sample with regard to the defined sample timing interval TI (if any). This could be done, for example, by checking the time duration of the recording, associated with the start time/date and/or end time/date information. If the sample timing interval TI is not defined, then the video stream sample is preferably considered as relevant for the evaluation process. If the video stream sample should be considered as not relevant, for example because the recording is not in the period range, then another video stream sample is preferably retrieved from the video source device (as illustrated with the dashed arrow).
Context information may also be used for classification refinement of the results obtained when evaluating the relevance of a considered video content analytics to process a video stream obtained from a considered video source device with regard, for example, to a period of time of the recording (e.g. the day of the week or the period of the day (day/night)) or to weather conditions (e.g. rainy or foggy).
Next, the considered video content analytics is executed in relation with the video stream sample obtained from the considered video source device (step 310).
Next, a test is performed to determine whether or not metadata have been generated during processing of the obtained video stream sample by the considered video content analytics (step 315).
According to embodiments, this step is carried out by determining whether or not a result file (e.g. an XML file) is provided by the considered VCA. Indeed, most of the VCAs produce an XML file comprising fields providing information about each object or person that has been detected in the frames of the analyzed video stream. These items of information may include a type of the objects, a level of confidence of the detections, the position of the objects in the frames, tracking identification information for the objects, and many others.
Accordingly, when a video stream does not contain any object of interest for a considered VCA, no result file is created or the created result file indicates that no object of interest has been detected.
If the considered video content analytics does not detect any object in the obtained video stream sample, that is to say if no metadata result is obtained (step 315), the obtained video stream sample is considered as useless and an evaluated score variable r is set to zero for this video stream sample (step 320).
On the contrary, if the considered video content analytics detects objects in the obtained video stream sample, that is to say if metadata results are obtained (step 315), these metadata results are analyzed to determine a score information item (step 325). For the sake of illustration, the score information may comprise the following items (see the sketch after this list):
- the number of detections for the considered objects, denoted O,
- the average confidence for the detected objects, denoted p, where p may be computed according to the equation p = (Σk=1..O ck) / O, where ck represents the confidence for the detection of the k-th object,
- the evaluated score, denoted r, that may represent the ratio of the number of objects detected (i.e. O) to the duration of the video stream sample.
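A minimal sketch of this per-sample computation (step 325) is given below, assuming that the detection confidences ck have been extracted from the VCA result file and that the sample duration is expressed in minutes:

```python
def sample_score(confidences, sample_duration_minutes):
    """Hypothetical per-sample score computation.

    confidences: list of confidence values, one per detected object (c_k).
    """
    O = len(confidences)                     # number of detections
    p = sum(confidences) / O if O else 0.0   # average confidence p = (sum of c_k) / O
    r = O / sample_duration_minutes          # evaluated score: objects per minute
    return O, p, r

# e.g. three objects detected in a 2-minute sample:
print(sample_score([0.9, 0.8, 0.7], 2.0))  # (3, 0.8, 1.5)
```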
Next, after performing step 320 or step 325, items of score information that have been previously determined for the considered video content analytics and the considered video source device are retrieved (step 330). These items of information may comprise the total number of video stream samples that have been processed by the considered VCA (S), the total number of results obtained by the considered VCA for all the video stream samples (Nresult), the cumulative duration of all the video stream samples that have been processed by the considered VCA (D), the average and the maximum numbers of objects identified by the considered VCA for all the video stream samples (Oaverage and Omax), the average confidence value of object detection by the considered VCA (paverage), and/or the time (or date) indicating the end time of the score evaluation process (Tend).
As described previously, these results are either classified without taking into account any specific condition or by taking into account refinement information such as that obtained in step 305.
Next, the items of the score information are updated (step 335). Such an updating step may comprise, for example, increasing the number of video stream samples that have been processed by the considered VCA (S = S + 1) and, if the evaluated score r is different from 0, the following operations (see the code sketch after this step):
- adding the duration of the obtained video stream sample to the previous duration (D = D + video stream sample time duration),
- checking the previous maximum number of detected objects (Omax) and, if it is below the current number of detected objects (O), updating the value of the maximum number of detected objects (Omax = O),
- updating the average number of detected objects (Oaverage = (Nresult × Oaverage + O) / (Nresult + 1)),
- updating the average confidence value (paverage = (Nresult × paverage + p) / (Nresult + 1)),
- increasing the number of obtained VCA results (Nresult = Nresult + 1), and
- updating the evaluation value (R = (Nresult × Oaverage) / D).
It is noted that if the evaluated score r is equal to 0, the number S of samples is increased, which results in decreasing the ratio of useful samples (Nresult / S).
The updated items of score information are stored in relation with the considered video content analytics and the considered video source device (step 335).
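A minimal Python sketch of this updating step 335 is given below, using a plain dictionary to hold the items of score information defined above:

```python
def update_score_info(info, O, p, sample_duration, r):
    """Hypothetical implementation of updating step 335.

    info: dict holding S, N_result, D, O_average, O_max, p_average and R.
    """
    info["S"] += 1                             # one more processed sample
    if r != 0:
        n = info["N_result"]
        info["D"] += sample_duration           # D = D + sample time duration
        info["O_max"] = max(info["O_max"], O)  # keep the maximum object count
        info["O_average"] = (n * info["O_average"] + O) / (n + 1)
        info["p_average"] = (n * info["p_average"] + p) / (n + 1)
        info["N_result"] = n + 1
        info["R"] = info["N_result"] * info["O_average"] / info["D"]
    # If r == 0, only S grows, which lowers the useful-sample ratio N_result / S.
    return info
```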
Next, a test is performed to determine whether or not the evaluation process is finished (step 340). Such a step may consist in checking whether or not the current time (or date) value has reached the Tend item of the score information. If the Tend value is greater than the current time (or date) value, steps 300 to 340 are repeated to refine the score information by using another video stream sample.
On the contrary, if the Tend value is lower than or equal to the current time (or date) value, the obtained score information is provided as a notification, transmitted, or stored (step 345).
It is observed that step 340 may also be performed at a regular interval to detect the end of the background evaluation process and trigger step 345, as soon as the sampling time duration TD has elapsed. In addition to the test of time duration, another test may be performed in step 340, for example to compare the number of video stream samples that have been processed by the considered VCA with a predetermined number of video stream samples (i.e. to determine whether or not S < Nsample). In this case, if the sampling time duration TD has elapsed or if the number of video stream samples that have been processed by the considered VCA is equal to or greater than the predetermined number of video stream samples, the background evaluation process is terminated (i.e. step 345 is triggered).
Figure 4 illustrates an example of steps of an assessment process for verifying and adapting the selection of VCAs for the VSDs, according to changes in the VMS(s). It is based on the determination of scores of VCAs that may be used for processing video streams obtained from VSDs, as described by reference to Figures 2 and 3.
This process makes it possible to detect a malfunction of the system following an accidental or incorrect VMS configuration modification. It may be executed on a regular basis, upon detection of particular events (e.g. upon modification of the configuration of a VCA or upon detection of a new VCA), and/or triggered manually by a user, for example to check the correct use of the VCAs with the VSDs of the VMS(s) (i.e. for health monitoring).
As illustrated, a first step (step 400) is directed to determining whether or not the efficiency of video content analytics has been evaluated for video source devices of the considered video management system(s). This step may consist, for example, in determining whether or not a score table such as the one described by reference to Table 1 in the Appendix exists for the considered video management system(s).
If the efficiency of the video content analytics has not yet been evaluated for the video source devices of the considered video management system(s), a score table like Table 1 in the Appendix, denoted T1, is set as empty (step 405).
On the contrary, if the efficiency of the video content analytics has already been evaluated for the video source devices of the considered video management system(s), the items of the score information corresponding to this evaluation (also referred to as the previous score information), that may be stored in a corresponding score table, are stored in the score table denoted T1 (step 410).
Next, the efficiency of the video content analytics for processing video streams obtained from the video source devices of the considered video management system(s) is evaluated (step 415). The items of the corresponding score information are stored in a score table denoted T2. According to embodiments, this evaluation is similar to that described by reference to Figures 2 and 3.
Then, the current score information (stored in table T2) is compared with the previous score information (stored in table T1).
To that end, a first step aims at determining whether or not the current score information is similar to the previous score information (step 420). This can be done by comparing all the values of score tables T1 and T2, i.e. by comparing the score information associated with each pair of VCA and VSD, for example by determining that the current score information and the previous score information are both empty or are both of the same order of value for the score R and, optionally, for the numbers of detected objects (Omax and Oaverage). One or several thresholds may be defined for determining whether two values should be considered as similar or as different.
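For illustration, such a similarity test between tables T1 and T2 may be sketched as follows; the relative tolerance used to decide that two score values are of the same order is an assumed parameter:

```python
def similar(prev, curr, rel_tol=0.5):
    """Hypothetical similarity test for one (VSD, VCA) pair; None marks an empty value."""
    if prev is None and curr is None:      # both empty
        return True
    if (prev is None) != (curr is None):   # one empty, the other not
        return False
    # Same order of value for the score R.
    return abs(curr - prev) <= rel_tol * max(prev, curr, 1e-9)

def tables_similar(T1, T2, rel_tol=0.5):
    """Compare the score information associated with every (VSD, VCA) pair."""
    keys = set(T1) | set(T2)
    return all(similar(T1.get(k), T2.get(k), rel_tol) for k in keys)
```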
If the current score information is similar to the previous score information, this makes it possible to confirm that the selection of VCAs for the VSDs is still valid, that VSDs are still operating with the same level of performance, and that the configuration settings of the VCAs are still correct. In such a case, the current score information is stored in replacement of the previous score information as reference score information (step 445).
On the contrary, if at least one item of the current score information is different from the corresponding item of the previous score information, it is determined whether or not the difference results from new results (step 425), that is to say whether a non-empty value of the current score information (T2) corresponds to an empty value of the previous score information (T1).
As a result of this comparison, it may be determined whether or not the VCA(s) to be used in relation with VSD(s) are to be updated to comply with changes in the VMS(s). This may happen, for example, if a camera of the considered VMS(s) has been replaced with a camera of higher resolution, satisfying the property requirements of a video content analytics. It may also happen when a new camera is added to the considered VMS(s) since it is considered that the initial score value of a new pair of VCA and VSD is an empty value.
If the VCA(s) to be used in relation with VSD(s) are to be updated, the video content analytics that should be executed, for each video source device, are identified (step 430). This step is similar to step 255 in Figure 2.
If the difference between the previous score information and the current score information does not result from new results (which happens when non-empty values of the current score information (T2) correspond to non-empty values of the previous score information (T1)), or after having identified the video content analytics that should be executed for each video source device (step 430), which means that at least one current score result obtained for a (VSD, VCA) pair (in table T2) is significantly different from the corresponding previous result (in table T1), a test is performed to determine whether or not the previous score information is greater than the current score information for at least one pair of VCA and VSD (step 435). This is detected, for example, from a large decrease of the score R or from abnormal object count values (e.g. the current Omax value is substantially lower than the previous Oaverage value). For the sake of illustration, this may result from a deficient camera (e.g. an electronic failure resulting in lower image quality) or a voluntary malicious action (tampering with a camera).
This may also result, for example, from an abnormal increase of the maximum number of identified objects (Omax) combined with a large decrease of the average confidence value (paverage). This may correspond to a modification of the VCA configuration with incorrect settings.
In such cases, users are preferably alerted of the abnormal situation (step 440), for example through a graphical user interface.
Next, the current score information is stored in replacement of the previous score information as reference score information (step 445).
Likewise, if the current score information is greater than or equal to the previous score information for all the pairs of VCA and VSD (step 435), the current score information is stored in replacement of the previous score information as reference score information (step 445).
It is to be noted that steps 435 and 440 and/or step 445 may be optional.
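For the sake of illustration, the overall decision flow of steps 420 to 445 may be sketched as below, reusing the tables_similar() helper, the R_TOLERANCE threshold, and the assumed score-entry layout of the previous sketch; identify_vcas() and alert_users() are hypothetical callbacks standing for steps 430 and 440.

```python
def monitoring_cycle(t1, t2, identify_vcas, alert_users):
    """t1: previous (reference) score table, t2: current score table."""
    if not tables_similar(t1, t2):                          # step 420
        keys = set(t1) | set(t2)
        # step 425: a new result is a non-empty current score whose
        # previous counterpart was empty (e.g. a newly added VSD or VCA)
        if any(t2.get(k) is not None and t1.get(k) is None for k in keys):
            identify_vcas(t2)                               # step 430
        # step 435: did the score of at least one pair decrease, e.g. a
        # large drop of R, or a current Omax below the previous Oaverage?
        degraded = any(
            t1.get(k) is not None and t2.get(k) is not None
            and (t2[k]['R'] < t1[k]['R'] - R_TOLERANCE
                 or t2[k]['Omax'] < t1[k]['Oaverage'])
            for k in keys)
        if degraded:
            alert_users()                                   # step 440
    return dict(t2)  # step 445: the current scores become the reference
```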
Figure 5 illustrates an example of associating video content analytics, or more generally video processing modules, to video source devices, or more generally to video sources.
According to the illustrated example, the system comprises n video sources (denoted VSD1 to VSDn). The video data obtained from these video sources may be processed by video processing modules of a set of p video processing modules (denoted VCA1 to VCAp). For the sake of clarity, a video processing module having a first configuration and the same video processing module having a second configuration, different from the first one, may be considered as two different video processing modules.
As illustrated and according to embodiments, each video source is associated with one or several video processing modules. For example, video source VSDi is associated with a list of video processing modules comprising only video processing module VCAk, and video source VSDj is associated with a list of video processing modules comprising video processing modules VCAk, VCAl, and VCAp.
As described above, determining which video processing module is to be associated with which video source may be based on scores (or relevance indications) determined for each pair of a set of pairs of video source and video processing module, where the pairs are different from each other, the set of pairs comprises at least one pair for each video source, and, among the pairs directed to one video source, there is one pair for each of several video processing modules. A score, or relevance indication, represents the relevance of the video processing module of a pair for processing video streams obtained from the video source referenced by that pair.
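By way of illustration, deriving such per-source lists from per-pair scores may be sketched as follows; the relevance threshold, the cap on the list size, and the identifiers are assumptions (the scores are loosely modeled on the appendix table).

```python
from collections import defaultdict

RELEVANCE_THRESHOLD = 1.0  # assumed minimum score to retain a module
MAX_MODULES = 3            # assumed cap on the list size per video source

def build_associations(pair_scores):
    """pair_scores maps (vsd_id, vca_id) to a relevance score, or None
    when the pair produced no result."""
    candidates = defaultdict(list)
    for (vsd, vca), score in pair_scores.items():
        if score is not None and score >= RELEVANCE_THRESHOLD:
            candidates[vsd].append((score, vca))
    # keep, for each video source, only the most relevant modules
    return {vsd: [vca for _, vca in sorted(pairs, reverse=True)[:MAX_MODULES]]
            for vsd, pairs in candidates.items()}

# Hypothetical example for one video source device:
pairs = {('C2', 'VCA1'): 1.5, ('C2', 'VCA2'): 0.0,
         ('C2', 'VCA3'): 8.2, ('C2', 'VCA4'): 7.5}
print(build_associations(pairs))  # {'C2': ['VCA3', 'VCA4', 'VCA1']}
```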
Figure 6 is a schematic block diagram of a processing device for implementing embodiments of the invention.
The computing device 600 comprises a communication bus connected to:
- a central processing unit 610, such as a microprocessor, denoted CPU;
- an I/O module 620 for receiving data from and sending data to external devices;
- a read only memory 630, denoted ROM, for storing computer programs for implementing embodiments;
- a hard disk 640 denoted HD; and
- a random access memory 650, denoted RAM, for storing the executable code of the method of embodiments of the invention as well as registers adapted to record variables and parameters.
The executable code may be stored either in random access memory 650, in hard disk 640, or in a removable digital medium (not represented) such as a disk or a memory card.
The central processing unit 610 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to embodiments of the invention, which instructions are stored in one of the aforementioned storage means. After powering on, CPU 610 may execute instructions from main RAM memory 650 relating to a software application after those instructions have been loaded, for example, from the program ROM 630 or hard disk 640.
The computing device is adapted for carrying out some of the steps described by reference to Figures 3 to 5.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, the invention not being restricted to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in putting into practice (i.e. performing) the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used. Any reference signs in the claims should not be construed as limiting the scope of the invention.
APPENDIX
VSD | VCA1 Env1 | VCA1 Env2 | VCA2 Env1 | VCA2 Env2 | VCA3 Env1 | VCA3 Env2 | VCA4 Env1 | VCA4 Env2
---|---|---|---|---|---|---|---|---
C1 | 10.5 | 7.3 | 8.4 | 0.5 | 5.2 | 6.8 | / | /
C2 | 1.5 | 0.2 | 0 | 0 | 8.2 | 2.4 | 7.5 | 1.8
… | | | | | | | | |
Cn | 5 | 0 | / | / | 9.4 | 0 | / | /
Table 1: example of a score table for four video content analytics, n video source devices, and two environmental conditions
Claims (20)
1. A method of configuring at least one video management system comprising a plurality of video processing modules, the at least one video management system receiving video streams from a plurality of video sources, the method comprising:
determining a set of pairs of video source and video processing module, the pairs being different from each other, the set of pairs comprising at least one pair for each video source of the plurality of video sources and among the pairs directed to one video source, one pair for each of several video processing modules;
for each pair of the set of pairs, processing at least one video stream sample obtained from the video source referenced by the pair;
in response to the processing, determining a relevance indication for the video processing module referenced by the pair to process video streams obtained from the video source referenced by the pair;
for each video source of the plurality of video sources, if at least one video processing module is relevant to process video streams obtained from the video source, determining a list of at least one video processing module that is relevant to process video streams obtained from the video source, the determination being based on the relevance indication.
2. The method of claim 1, further comprising a step of obtaining at least one item of property of at least one video source and a step of obtaining at least one item of property requirement for at least one video processing module, the set of pairs being determined as a function of the at least one item of property and of the at least one item of property requirement.
3. The method of claim 1 or claim 2, wherein the step of determining a relevance indication comprises a step of determining whether or not the corresponding step of processing at least one video stream sample produced a result.
4. The method of claim 3, wherein the step of determining a relevance indication comprises a step of analyzing an obtained processing result.
5. The method of any one of claims 1 to 4, wherein items of context information are associated with at least one pair of the set of pairs, a relevance indication being determined for each of the items of context information, a list of at least one video processing module being further determined as a function of an item of context information.
6. The method of any one of claims 1 to 5, wherein a list of at least one video processing module is further determined as a function of at least one threshold.
7. The method of claim 6, wherein the maximum number of video processing modules of a list of at least one video processing module is predetermined.
8. The method of any one of claims 1 to 7, further comprising a step of obtaining at least one video stream sample, the at least one video stream sample being obtained from a previously recorded video stream or from a live video stream.
9. The method of any one of claims 1 to 8, wherein the relevance indication comprises a number of predetermined events detected per period of time.
10. The method of any one of claims 1 to 9, wherein video sources of the plurality of video sources belong to different video management systems and wherein a given video processing module can be used in a video management system for processing a video stream received from another video management system if the other video management system does not use the given video processing module.
11. A non-transitory computer-readable storage medium storing instructions of a computer program for implementing the method according to any one of claims 1 to 10.
12. A device for configuring at least one video management system comprising a plurality of video processing modules, the at least one video management system receiving video streams from a plurality of video sources, the device comprising a microprocessor configured for:
determining a set of pairs of video source and video processing module, the pairs being different from each other, the set of pairs comprising at least one pair for each video source of the plurality of video sources and among the pairs directed to one video source, one pair for each of several video processing modules;
for each pair of the set of pairs, processing at least one video stream sample obtained from the video source referenced by the pair;
in response to the processing, determining a relevance indication for the video processing module referenced by the pair to process video streams obtained from the video source referenced by the pair;
for each video source of the plurality of video sources, if at least one video processing module is relevant to process video streams obtained from the video source, determining a list of at least one video processing module that is relevant to process video streams obtained from the video source, the determination being based on the relevance indication.
13. The device of claim 12, wherein the microprocessor is further configured for obtaining at least one item of property of at least one video source and for obtaining at least one item of property requirement for at least one video processing module, and the microprocessor is further configured so that the set of pairs is determined as a function of the at least one item of property and of the at least one item of property requirement.
14. The device of claim 12 or claim 13, wherein the microprocessor is further configured so that determining a relevance indication comprises determining whether or not the corresponding processing of at least one video stream sample has produced a result.
15. The device of claim 14, wherein the microprocessor is further configured so that determining a relevance indication comprises analyzing an obtained processing result.
16. The device of any one of claims 12 to 15, wherein items of context information are associated with at least one pair of the set of pairs, the microprocessor being further configured so that a relevance indication is determined for each of the items of context information, a list of at least one video processing module being further determined as a function of an item of context information.
17. The device of any one of claims 12 to 16, wherein the microprocessor is further configured so that a list of at least one video processing module is further determined as a function of at least one threshold.
18. The device of claim 17, wherein the maximum number of video processing modules of a list of at least one video processing module is predetermined.
19. The device of any one of claims 12 to 18, wherein the microprocessor is further configured for obtaining at least one video stream sample, the at least one video stream sample being obtained from a previously recorded video stream or from a live video stream.
20. The device of any one of claims 12 to 19, wherein the relevance indication comprises a number of predetermined events detected per period of time.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1811870.3A GB2575683A (en) | 2018-07-20 | 2018-07-20 | Method, device, and computer program for identifying relevant video processing modules in video management systems |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201811870D0 GB201811870D0 (en) | 2018-09-05 |
GB2575683A true GB2575683A (en) | 2020-01-22 |
Family
ID=63364364
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2575683A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20220174076A1 (en) * | 2020-11-30 | 2022-06-02 | Microsoft Technology Licensing, Llc | Methods and systems for recognizing video stream hijacking on edge devices |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050232462A1 (en) * | 2004-03-16 | 2005-10-20 | Vallone Robert P | Pipeline architecture for analyzing multiple video streams |
US20110109742A1 (en) * | 2009-10-07 | 2011-05-12 | Robert Laganiere | Broker mediated video analytics method and system |
US20120045090A1 (en) * | 2010-08-17 | 2012-02-23 | International Business Machines Corporation | Multi-mode video event indexing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WAP | Application withdrawn, taken to be withdrawn or refused | ** after publication under section 16(1)