WO2013131189A1 - Cloud-based video analytics with post-processing at the video source-end - Google Patents

Cloud-based video analytics with post-processing at the video source-end

Info

Publication number
WO2013131189A1
WO2013131189A1 PCT/CA2013/050161
Authority
WO
WIPO (PCT)
Prior art keywords
video
video analytics
data
processing
analytics engine
Prior art date
Application number
PCT/CA2013/050161
Other languages
French (fr)
Inventor
Charles Black
Jason Phillips
Robert Laganiere
Pascal Blais
Original Assignee
Iwatchlife Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Iwatchlife Inc. filed Critical Iwatchlife Inc.
Publication of WO2013131189A1 publication Critical patent/WO2013131189A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the instant invention relates generally to video analytics, and more particularly to cloud-based video analytics with post-processing at the video source-end.
  • Video cameras have become ubiquitous in modern society. They are commonly deployed in public and private spaces as part of security and surveillance systems, and increasingly they are appearing in mobile consumer electronic devices, vehicles, etc.
  • captured video data may be compressed and stored for later use or it may be reviewed to identify the occurrence of predetermined events, etc.
  • a predictable result of capturing large amounts of video data is that a considerable amount of time must be expended reviewing it. Humans tend to find the task of reviewing video data to be rather tedious, and as a result the vast majority of captured video data historically has not been subjected to review, or at least not subjected to sufficiently thorough review.
  • IP-based hardware edge devices with built-in video analytics, such as IP cameras and encoders, including passive infrared (PIR) based motion detection, analytics in a box, etc.
  • Video analytics electronically recognizes the significant features within a series of frames of video and allows the system to issue alerts when specific types of events occur, thereby speeding real-time security responses or increasing the frequency of social media updates, etc. Automatically searching captured video for specific content also relieves the user from spending tedious hours reviewing the video, or alternatively decreases the number of people that are required to screen the video data.
  • Once the video data has been moved into the cloud, it may be subjected to complex video analytics processing using video analytics engines that are in execution on powerful, cloud-based servers. Further, cloud-based systems readily support brokering of video analytics processing, in which the video data is passed to one or more video analytics engines in dependence upon the processing that is requested.
  • An example of a brokered video analytics system is described in United States Pre-Grant Publication 2011/0109742-A1, the entire contents of which are incorporated herein by reference.
  • pre-processing may be performed at the source-end including using video analytics to identify portions of the captured video data to be transmitted to the cloud-based system for further processing, as is described in WIPO Publication WO 2011/041903, the entire contents of which are incorporated herein by reference.
  • video analytics capabilities of 'smart' cameras, or of another device that is capable of performing video analytics, located at the source end may not be utilized in a meaningful way.
  • a method comprising: capturing video data at a source end using a video camera that is disposed at the source end, the captured video data including first video data relating to an event of interest; transmitting, via a Wide Area Network (WAN), at least a portion of the first video data from the source end to a first processor of a cloud-based video analytics system; using the first processor, performing first video analytics processing of the at least the portion of the first video data; based on a result of the first video analytics processing, determining control data for affecting second video analytics processing of the captured video data; transmitting, via the WAN, the control data from the first processor to a second processor at the source end; and using the second processor, performing the second video analytics processing of the captured video data based on the control data.
  • WAN Wide Area Network
  • a method comprising: capturing video data using a video camera disposed at a source end; providing at least a portion of the captured video data to a cloud-based video analytics system via a communications network; pre-processing the at least a portion of the captured video data using a first video analytics engine of the cloud-based video analytics system; based on a result of the pre-processing, providing control data via the communications network from the cloud-based video analytics system to a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system; and using the second video analytics engine, processing the captured video data based on the control data.
  • a method comprising: capturing video data using a video camera disposed at a source end; providing at least a portion of the captured video data to a cloud-based video analytics system via a communications network; using a first video analytics engine of the cloud-based video analytics system, performing first video analytics processing of the at least a portion of the captured video data; using a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system, performing second video analytics processing of the at least a portion of the captured video data; and transmitting feedback data between the first video analytics engine and the second video analytics engine via the communications network, the feedback data based on a result of respective video analytics processing by one of the first video analytics engine and the second video analytics engine, and the feedback data for affecting video analytics processing by the other one of the first video analytics engine and the second video analytics engine.
  • a system for performing video analytics processing of video data comprising: a cloud-based first video analytics engine for performing first video analytics processing of video data; a second video analytics engine that is other than a cloud-based video analytics engine for performing second video analytics processing of the video data, the second video analytics engine in communication with the cloud-based first video analytics engine via a communication network; and a source of video data in communication with the cloud-based first video analytics engine and the second video analytics engine via the communication network, wherein, during use, video data is provided from the source of video data to the cloud-based first video analytics engine and to the second video analytics engine, and wherein feedback data is exchanged between the cloud-based first video analytics engine and the second video analytics engine, the feedback data based on a result of video analytics processing by one of the cloud-based first video analytics engine and the second video analytics engine for affecting video analytics processing by the other one of the cloud-based first video analytics engine and the second video analytics engine.
  • FIG. 1 is a simplified block diagram of a system that is suitable for implementing a method according to an embodiment of the instant invention
  • FIG. 2 is a simplified flow diagram of a method according to an embodiment of the instant invention.
  • Fig. 3 is a simplified flow diagram of a method according to an embodiment of the instant invention.
  • Fig. 4 is a simplified flow diagram of a method according to an embodiment of the instant invention.
  • Video analytics is defined as any technology that is used to analyze video for specific data, behavior, objects or attitude.
  • video analytics includes both video content analysis and inference processing.
  • Some specific and non-limiting examples of video analytics applications include: counting the number of pedestrians entering a door or a geographic region; determining the location, speed and direction of travel; identifying suspicious movement of people or assets; vehicle license plate identification; evaluating how long a package has been left in an area; facial recognition; recognition of individuals in a group; recognizing a type of activity; recognizing friends or other contacts of a user, etc.
  • Post-processing is defined as using control data to affect the video analytics processing of video data, wherein the control data is based on a result of previous video analytics processing of the video data. More particularly, the control data affects a parameter of the video analytics processing during post-processing.
  • Cloud computing is a general term for anything that involves delivering hosted services over the Internet.
  • a cloud service has three distinct characteristics that differentiate it from traditional hosting: it is sold on demand, typically by the minute or the hour; it is elastic, a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider, the client needs nothing but a terminal with Internet access. Examples of terminals include IP video cameras, mobile phones, personal computers, IP TVs, etc. Moving the video analytics processing into the cloud may reduce a client's initial capital expenditure, avoid the need for the client to maintain a local server farm, while at the same time providing available additional processing capability to support significant expansion and flexibility of a client's video analytics monitoring system.
  • cloud computing as applied to video analytics supports parallel processing with multiple different video analytics engines and/or hierarchical processing with different video analytics engines. In addition, some video analytics processing may be "farmed out" or brokered to third parties if specialized video analytics engines are required.
  • modern IP network video cameras support high definition video formats that result in very large amounts of video data being captured. Even the amount of video data that is captured by VGA cameras can be significant in a monitoring system of moderate size.
  • the bandwidth that is available across a WAN such as the Internet is limited and cannot be increased easily.
  • a major obstacle to the adoption of cloud computing for video analytics has been the inability to transmit the video data across the WAN to the centralized video analytics processing resources, due to the limited bandwidth of the WAN. That said, once the video data has been moved into the cloud there is for all intents and purposes an unlimited amount of processing resources available.
  • video data that is captured at a source end is transmitted via a communication network to a cloud-based video analytics system.
  • the actual amount of video data that is transmitted depends on a number of factors, including the data transmission capacity of any local area network (LAN) or wide area network (WAN) disposed between the source end and the cloud-based video analytics system, any data limits that are imposed by the cloud-based video analytics system, the resolution and/or compression algorithms utilized at the source end, etc.
  • LAN local area network
  • FIG. 1 is a schematic block diagram of a system 100 including a video source 102 that is in communication with a cloud-based video analytics system 108 via a Wide Area Network (WAN) 106, such as for instance the Internet or the World Wide Web.
  • the video source 102 is disposed at a source end of the system 100.
  • the video source 102 is a network IP camera, such as for instance a Nextiva S2600e Network Camera or another similar device having on-board video analytics capabilities.
  • the video source 102 is a basic IP camera that does not support onboard video analytics processing, but that is in communication with another (not illustrated) device at the source end, which is capable of performing video analytics processing on the video data that is captured using the video source 102.
  • video data captured using the video source 102 are transmitted to the cloud-based video analytics system 108 via gateway 104 and WAN 106.
  • the video source 102 connects to the IP network without a gateway 104.
  • the video source 102 is a mobile device, such as for instance a camera embedded in a smart phone or laptop computer.
  • the cloud-based video analytics system 108 is a broker system comprising at least a central server and one or more video analytics engines in communication therewith.
  • at least some of the one or more video analytics engines are in execution on third party servers, and may be subscription based or pay-per-use based.
  • a video storage device 110 is provided at the source end via a router 116, the video storage device 110 for retrievably storing the captured video data.
  • the video storage device 110 is one of a digital video recorder (DVR), a network video recorder (NVR), and a storage device in a box with a searchable file structure.
  • DVR digital video recorder
  • NVR network video recorder
  • the captured video data is compressed prior to being stored in the video storage device 110.
  • the video storage device supports video analytics processing.
  • the video source 102 is deployed at the acquisition end for monitoring a known field of view (FOV).
  • FOV field of view
  • the video source 102 monitors one of a parking lot, an entry/exit point of a building, and a stack of shipping containers.
  • the video source monitors a room, a workspace, or another area where individuals gather in a social setting, etc.
  • the video source 102 captures video data of the FOV at a known frame rate, such as for instance between about 5 FPS and about 30 FPS, and performs on-board compression of the captured video data using a suitable compression standard such as for instance MPEG-4 or H.264.
  • At least a portion of the captured video data is transmitted from the acquisition end to the cloud-based video analytics system 108 via WAN 106.
  • First video analytics processing of the at least a portion of the captured video data is performed using a first video analytics engine of the cloud-based video analytics system 108.
  • second video analytics processing of the video data is performed using a second video analytics engine, which is other than a cloud- based video analytics engine.
  • the second video analytics processing is performed on the at least a portion of the captured video data that was transmitted to the cloud-based video analytics system 108, as well as additional captured video data that was not transmitted to the cloud-based video analytics system 108.
  • the second video analytics engine is in execution on the video source 102, such as for instance a network IP camera with on-board video analytics capability.
  • Control data is exchanged between the first video analytics engine of the cloud-based video analytics system 108 and the second video analytics engine at the video-source end. The control data is used to affect the second video analytics processing of the video data, based on the result of the first video analytics processing.
  • a synergistic video analytics processing relationship is achieved by transmitting at least a portion of the captured video data to a cloud-based video analytics system 108 for undergoing first video analytics processing, and thereafter affecting second video analytics processing of the captured video data at the video-source end in dependence upon control data that is determined based on a result of the first video analytics processing.
  • complex video analytics processing or parallel video analytics processing of captured video data is performed "in the cloud" where processing resources are large, and the results of the cloud-based video analytics processing are used to affect subsequent video analytics processing of the captured video data at the video source end where processing resources are limited but where the entire set of captured video data is available for processing.
  • the amount of captured video data that is transmitted to the cloud-based video analytics system 108 is optionally minimized, thereby avoiding problems relating to network data capacity limitations.
  • the captured video data that is transmitted to the cloud-based video analytics system 108 can be subjected to video analytics processing that requires greater processing resources than are available at the source end, including parallel video analytics processing.
  • Control data based on a result of the cloud-based video analytics processing, is transmitted via the WAN to the video analytics engine at the video source end, and affects a parameter of the second video analytics processing.
  • the control data supports more sophisticated video analytics processing at the video source end than would otherwise be possible given the available processing capabilities.
  • the video source 102 is capable of performing different video analytics processing in series, but not in parallel due to limited processing capability.
  • the cloud-based video analytics processing determines the occurrences of different types of events of interest, and then provides control data back to the video source 102 at the source end, the control data indicative of the locations of events of interest within the video data. Based on the control data, different video analytics processes and/or different template sets are used to process different locations within the video data. Of course, other parameters of the video analytics performed at the source end may be affected based on the control data that is transmitted from the cloud-based video analytics system 108.
  • video data is captured at a source end using a video camera, the video camera being disposed at the source end.
  • the captured video data includes first video data relating to an event of interest.
  • the event of interest is an intrusion into a monitored area in the case of a surveillance or security application, or the event of interest is a grouping of a predetermined number of friends in the case of a social media application.
  • At 202 at least a portion of the first video data is transmitted, via a Wide Area Network (WAN), from the source end to a first processor of a cloud-based video analytics system.
  • first video analytics processing of the at least the portion of the first video data is performed, using the first processor.
  • a first video analytics engine in execution on the first processor performs requested or default video analytics processing of the at least the portion of the first video data.
  • a result of the first video analytics processing is obtained; for instance, the result is detection of an occurrence of an event of interest within the at least a portion of the first video data.
  • control data is determined at 206 for affecting second video analytics processing of the captured video data.
  • the control data is transmitted, via the WAN, from the first processor to a second processor at the source end.
  • the second video analytics processing of the captured video data is performed based on the control data.
  • video data is captured using a video camera that is disposed at a source end.
  • the captured video data includes video data relating to an event of interest.
  • the event of interest is an intrusion into a monitored area in the case of a surveillance or security application, or the event of interest is a grouping of a predetermined number of friends in the case of a social media application.
  • At 302 at least a portion of the captured video data is provided from the source end to a cloud-based video analytics system via a communications network.
  • the communications network is a Wide Area Network (WAN) such as for instance the Internet or the World Wide Web.
  • the at least a portion of the captured video data is pre-processed using a first video analytics engine of the cloud-based video analytics system.
  • control data is provided via the communications network from the cloud-based video analytics system to a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system.
  • the captured video data is processed, using the second video analytics engine, based on the control data.
  • video data is captured using a video camera disposed at a source end.
  • the captured video data includes video data relating to an event of interest.
  • the event of interest is an intrusion into a monitored area in the case of a surveillance or security application, or the event of interest is a grouping of a predetermined number of friends in the case of a social media application.
  • At 402 at least a portion of the captured video data is provided to a cloud-based video analytics system via a communications network.
  • the communications network is a Wide Area Network (WAN) such as for instance the Internet or the World Wide Web.
  • first video analytics processing of the at least a portion of the captured video data is performed using a first video analytics engine of the cloud-based video analytics system.
  • second video analytics processing of the at least a portion of the captured video data is performed using a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system.
  • feedback data is transmitted between the first video analytics engine and the second video analytics engine via the communications network.
  • the feedback data is based on a result of respective video analytics processing by one of the first video analytics engine and the second video analytics engine.
  • control data relating to a result of cloud-based video analytics processing of captured video data is used to affect the video analytics processing of the captured video data at the source end, such as for instance on a 'smart' camera having built-in video analytics capabilities.
  • the second video analytics processing is performed using a video analytics engine that is in communication with a plurality of video sources 102 via a local area network (LAN) or another video analytics engine that is disposed between the video source 102 and the cloud based video analytics system 108.
  • a plurality of processes are in execution within the cloud for analyzing video data provided thereto. Each process is for identifying one or more trigger events. Upon detecting a trigger event, a process transmits a signal to a control processor, for example within the cloud, for providing the control data therefrom. As such, a plurality of processes is executed in parallel within the cloud to allow selection of a process for execution local to the video data capture device in the form of the video camera or the video capture network.
  • processing local to the video data capture device is performed under the control of the control processor such that local processing switches between pre-processing of video data, post-processing of video data, and series processing of same video data depending on a result of cloud processing of at least some of the captured video data.
  • the cloud processing determines when three or more people are within a video frame and local processing is used to identify the best from a series of video frames including the three or more people for use in an automatically generated album.
  • the method is used for switching between video analytics applications based on cloud processing.
  • a video camera disposed for seeing who is at the door is also useful for viewing the road in front of the building.
  • video analytics for identifying the individual is selected and in the absence of an individual, a process is executed to see if a car is parked in front of the building.
  • Cloud based analytics is used to switch between the two functions and optionally is used as part of the processing.
  • the cloud based analytics determines whether a person is in the frame or not. When a person is in the frame, a local analytics process selects the two best facial images of the person based on angle, lighting, clarity, features, etc.
  • the two best frames are then transmitted to the cloud for identification and archiving purposes.
  • the cloud then transmits further control data to the local analytics engine indicating that the data received was adequate or, alternatively, that more data is required.
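The two-best-faces exchange described in the preceding bullets can be sketched as a simple round trip between a local engine and the cloud. This is an illustrative sketch only: the function names (`score_face`, `local_select_best`, `cloud_review`), the toy quality metric, and the control-data dictionary format are all assumptions, not part of the patent's disclosure.

```python
# Hypothetical sketch of the local/cloud feedback loop: local analytics
# selects the two best facial images, the cloud reviews them and returns
# control data saying whether the data received was adequate.

def score_face(frame):
    """Toy quality score standing in for angle, lighting, clarity, etc."""
    return frame["sharpness"] * frame["face_area"]

def local_select_best(frames, n=2):
    """Local analytics: pick the n best facial images from a frame series."""
    return sorted(frames, key=score_face, reverse=True)[:n]

def cloud_review(best_frames, threshold=0.5):
    """Cloud analytics: return control data for the local engine, indicating
    adequacy of the received frames or a request for more data."""
    adequate = all(score_face(f) >= threshold for f in best_frames)
    return {"adequate": adequate, "action": None if adequate else "send_more"}

# One exchange for a single detected person (frame metadata is fabricated).
frames = [
    {"sharpness": 0.9, "face_area": 0.8},
    {"sharpness": 0.4, "face_area": 0.5},
    {"sharpness": 0.7, "face_area": 0.9},
]
best = local_select_best(frames)   # local post-processing step
control = cloud_review(best)       # control data transmitted back via the WAN
```

In a real deployment the cloud decision would involve identification and archiving rather than a scalar threshold, and the control data would travel over the WAN rather than a function return.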

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A method for performing video analytics includes capturing video data using a video camera disposed at a source end. At least a portion of the captured video data is provided to a cloud-based video analytics system via a communications network, and is pre-processed using a first video analytics engine of the cloud-based video analytics system. Based on a result of the pre-processing, control data is determined and provided via the communications network from the cloud-based video analytics system to a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system. Using the second video analytics engine, the captured video data is processed based on the control data.

Description

CLOUD-BASED VIDEO ANALYTICS WITH POST-PROCESSING AT THE
VIDEO SOURCE-END
FIELD OF THE INVENTION
[001] The instant invention relates generally to video analytics, and more particularly to cloud-based video analytics with post-processing at the video source-end.
BACKGROUND OF THE INVENTION
[002] Video cameras have become ubiquitous in modern society. They are commonly deployed in public and private spaces as part of security and surveillance systems, and increasingly they are appearing in mobile consumer electronic devices, vehicles, etc. Depending on the particular application, captured video data may be compressed and stored for later use or it may be reviewed to identify the occurrence of predetermined events, etc. Of course, a predictable result of capturing large amounts of video data is that a considerable amount of time must be expended reviewing it. Humans tend to find the task of reviewing video data to be rather tedious, and as a result the vast majority of captured video data historically has not been subjected to review, or at least not subjected to sufficiently thorough review.
[003] The market is currently witnessing a migration toward IP-based hardware edge devices with built-in video analytics, such as IP cameras and encoders, including passive infrared (PIR) based motion detection, analytics in a box, etc. Video analytics electronically recognizes the significant features within a series of frames of video and allows the system to issue alerts when specific types of events occur, thereby speeding real-time security responses or increasing the frequency of social media updates, etc. Automatically searching captured video for specific content also relieves the user from spending tedious hours reviewing the video, or alternatively decreases the number of people that are required to screen the video data.
Furthermore, when 'smart' cameras and encoders process the captured images at the edge it becomes possible to record or transmit only important events, for example only when someone enters a predefined area that is under surveillance, such as a perimeter along a fence, thereby reducing storage and/or reviewing requirements.
[004] Unfortunately, the processing power of a typical edge device may be quite limited such that only relatively simple video analytics processes can be carried out at the video source end. Performing highly complex video analytics processes, or performing multiple video analytics processes in parallel, is simply beyond the capability of many commercially available edge devices. As such, an alternative approach involves transmitting some or all of the captured video data to a cloud-based video analytics system for processing. Once the video data has been moved into the cloud, it may be subjected to complex video analytics processing using video analytics engines that are in execution on powerful, cloud-based servers. Further, cloud-based systems readily support brokering of video analytics processing, in which the video data is passed to one or more video analytics engines in dependence upon the processing that is requested. An example of a brokered video analytics system is described in United States Pre-Grant Publication 2011/0109742-A1, the entire contents of which are incorporated herein by reference. While a cloud-based video analytics solution is well suited for use with 'dumb' video cameras that do not possess any video analytics capabilities, clearly it is wasteful of resources in situations that involve video capture using 'smart' cameras, since the video analytics capabilities of the 'smart' cameras may not be utilized in any meaningful way.
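The brokering arrangement described in paragraph [004] amounts to a dispatch table mapping requested processing types to available engines. The sketch below is a minimal illustration under stated assumptions: the engine registry, engine names, and placeholder results are invented for demonstration and do not reflect the referenced brokered system of US 2011/0109742-A1.

```python
# Illustrative broker: video data is passed to one or more video analytics
# engines in dependence upon the processing that is requested. The engines
# here are placeholders; real ones could be third-party, subscription- or
# pay-per-use-based services reached over a network.

ENGINES = {
    "motion": lambda clip: {"motion_detected": True},
    "faces": lambda clip: {"faces": 2},
    "plates": lambda clip: {"plates": ["ABC123"]},
}

def broker(clip, requested):
    """Dispatch the clip to each engine named in the request and collect
    per-engine results keyed by engine name."""
    results = {}
    for name in requested:
        engine = ENGINES.get(name)
        if engine is None:
            raise ValueError(f"no engine registered for {name!r}")
        results[name] = engine(clip)
    return results

out = broker(b"...video bytes...", ["motion", "faces"])
```

A production broker would also handle engine selection by capability and cost, queuing, and failure of individual engines, none of which is shown here.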
[005] Of course, a further drawback that is associated with current cloud-based video analytics systems is that the amount of captured video data may exceed the network capacity of the communication network that is disposed between the source end and the cloud-based system. As such, it is known to transmit only a portion of the captured video data via the communication network. For instance, as is described in WIPO Publication WO 2011/041904, the entire contents of which are incorporated herein by reference, it is known to transmit a plurality of single, non-adjacent frames of video data to a remote server. If an event of interest is detected based on the non-adjacent frames of video data, then transmission of additional video data is triggered and the additional video data is subsequently subjected to video analytics in the cloud. Alternatively, pre-processing may be performed at the source-end including using video analytics to identify portions of the captured video data to be transmitted to the cloud-based system for further processing, as is described in WIPO Publication WO 2011/041903, the entire contents of which are incorporated herein by reference. Of course, in both of the approaches that are mentioned above only a small amount of the video data is transmitted initially to the cloud-based system, and as such it is possible that some events of interest will not be subjected to video analytics processing and may therefore escape detection. Further, the video analytics capabilities of 'smart' cameras, or of another device that is capable of performing video analytics, located at the source end, may not be utilized in a meaningful way.
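The sampling approach described above can be sketched in a few lines. This is an illustrative sketch only, not the implementation of either cited publication; the function names, the frame representation, and the simple activity-threshold "event" test are all invented for illustration.

```python
# Hypothetical sketch: transmit single, non-adjacent frames at a fixed
# interval, and trigger transmission of additional surrounding video data
# when an event of interest is flagged in a sampled frame.

def sample_non_adjacent_frames(frames, interval):
    """Select single, non-adjacent frames at a fixed interval."""
    return frames[::interval]

def detect_event(frame):
    """Stand-in for server-side analytics; flags frames above a threshold."""
    return frame["activity"] > 0.5

def frames_to_transmit(frames, interval, window=2):
    """Return the sampled frames plus, for any sampled frame showing an
    event, the surrounding window of full-rate video to send as well."""
    sampled = sample_non_adjacent_frames(frames, interval)
    extra = []
    for f in sampled:
        if detect_event(f):
            i = f["index"]
            extra.extend(frames[max(0, i - window): i + window + 1])
    return sampled, extra
```

As the text notes, an event falling entirely between sampled frames can escape detection with this scheme, which is the limitation the embodiments below address.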
[006] It would therefore be advantageous to provide a method and system that overcomes at least some of the above-mentioned limitations of the prior art.
SUMMARY OF EMBODIMENTS OF THE INVENTION
[007] In accordance with an aspect of the invention there is provided a method comprising: capturing video data at a source end using a video camera that is disposed at the source end, the captured video data including first video data relating to an event of interest; transmitting, via a Wide Area Network (WAN), at least a portion of the first video data from the source end to a first processor of a cloud-based video analytics system; using the first processor, performing first video analytics processing of the at least the portion of the first video data; based on a result of the first video analytics processing, determining control data for affecting second video analytics processing of the captured video data; transmitting, via the WAN, the control data from the first processor to a second processor at the source end; and using the second processor, performing the second video analytics processing of the captured video data based on the control data.
[008] In accordance with an aspect of the invention there is provided a method comprising: capturing video data using a video camera disposed at a source end; providing at least a portion of the captured video data to a cloud-based video analytics system via a communications network; pre-processing the at least a portion of the captured video data using a first video analytics engine of the cloud-based video analytics system; based on a result of the pre-processing, providing control data via the communications network from the cloud-based video analytics system to a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system; and using the second video analytics engine, processing the captured video data based on the control data.

[009] In accordance with an aspect of the invention there is provided a method comprising: capturing video data using a video camera disposed at a source end; providing at least a portion of the captured video data to a cloud-based video analytics system via a communications network; using a first video analytics engine of the cloud-based video analytics system, performing first video analytics processing of the at least a portion of the captured video data; using a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system, performing second video analytics processing of the at least a portion of the captured video data; and transmitting feedback data between the first video analytics engine and the second video analytics engine via the communications network, the feedback data based on a result of respective video analytics processing by one of the first video analytics engine and the second video analytics engine, and the feedback data for affecting video analytics processing by the other one of the first video analytics engine and the second video analytics engine.
[0010] In accordance with an aspect of the invention there is provided a system for performing video analytics processing of video data, comprising: a cloud-based first video analytics engine for performing first video analytics processing of video data; a second video analytics engine that is other than a cloud-based video analytics engine for performing second video analytics processing of the video data, the second video analytics engine in communication with the cloud-based first video analytics engine via a communication network; and a source of video data in communication with the cloud-based first video analytics engine and the second video analytics engine via the communication network, wherein, during use, video data is provided from the source of video data to the cloud-based first video analytics engine and to the second video analytics engine, and wherein feedback data is exchanged between the cloud-based first video analytics engine and the second video analytics engine, the feedback data based on a result of video analytics processing by one of the cloud-based first video analytics engine and the second video analytics engine for affecting video analytics processing by the other one of the cloud-based first video analytics engine and the second video analytics engine.
BRIEF DESCRIPTION OF THE DRAWINGS

[0011] Exemplary embodiments of the invention will now be described in conjunction with the following drawings, wherein similar reference numerals denote similar elements throughout the several views, in which:
[0012] Fig. 1 is a simplified block diagram of a system that is suitable for implementing a method according to an embodiment of the instant invention;
[0013] Fig. 2 is a simplified flow diagram of a method according to an embodiment of the instant invention;
[0014] Fig. 3 is a simplified flow diagram of a method according to an embodiment of the instant invention; and,

[0015] Fig. 4 is a simplified flow diagram of a method according to an embodiment of the instant invention.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
[0016] The following description is presented to enable a person skilled in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the scope of the invention. Thus, the present invention is not intended to be limited to the embodiments disclosed, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
[0017] Throughout the description of the embodiments of the instant invention, and in the appended claims, the following definitions are to be accorded to the following terms:
[0018] Video analytics is defined as any technology that is used to analyze video for specific data, behavior, objects or attitude. Typically, video analytics includes both video content analysis and inference processing. Some specific and non-limiting examples of video analytics applications include: counting the number of pedestrians entering a door or a geographic region; determining the location, speed and direction of travel; identifying suspicious movement of people or assets; vehicle license plate identification; evaluating how long a package has been left in an area; facial recognition; recognition of individuals in a group; recognizing a type of activity; recognizing friends or other contacts of a user, etc.

[0019] Post-processing is defined as using control data to affect the video analytics processing of video data, wherein the control data is based on a result of previous video analytics processing of the video data. More particularly, the control data affects a parameter of the video analytics processing during post-processing.
[0020] Cloud computing is a general term for anything that involves delivering hosted services over the Internet. A cloud service has three distinct characteristics that differentiate it from traditional hosting: it is sold on demand, typically by the minute or the hour; it is elastic, a user can have as much or as little of a service as they want at any given time; and the service is fully managed by the provider, the client needs nothing but a terminal with Internet access. Examples of terminals include IP video cameras, mobile phones, personal computers, IP TVs, etc. Moving the video analytics processing into the cloud may reduce a client's initial capital expenditure, avoid the need for the client to maintain a local server farm, while at the same time providing available additional processing capability to support significant expansion and flexibility of a client's video analytics monitoring system. Furthermore, cloud computing as applied to video analytics supports parallel processing with multiple different video analytics engines and/or hierarchical processing with different video analytics engines. In addition, some video analytics processing may be "farmed out" or brokered to third parties if specialized video analytics engines are required.
[0021] In many instances, modern IP network video cameras support high definition video formats that result in very large amounts of video data being captured. Even the amount of video data that is captured by VGA cameras can be significant in a monitoring system of moderate size. Unfortunately, the bandwidth that is available across a WAN such as the Internet is limited and cannot be increased easily. A major obstacle to the adoption of cloud computing for video analytics has been the inability to transmit the video data across the WAN to the centralized video analytics processing resources, due to the limited bandwidth of the WAN. That said, once the video data has been moved into the cloud there is for all intents and purposes an unlimited amount of processing resources available.
[0022] In the description that follows, it is to be understood that at least a portion of video data that is captured at a source end is transmitted via a communication network to a cloud-based video analytics system. The actual amount of video data that is transmitted depends on a number of factors, including the data transmission capacity of any local area network (LAN) or wide area network (WAN) disposed between the source end and the cloud-based video analytics system, any data limits that are imposed by the cloud-based video analytics system, the resolution and/or compression algorithms utilized at the source end, etc.
[0023] Referring to FIG. 1, shown is a schematic block diagram of a system 100 including a video source 102 that is in communication with a cloud-based video analytics system 108 via a Wide Area Network (WAN) 106, such as for instance the Internet or the World Wide Web. For certainty, the video source 102 is disposed at a source end of the system 100. In a specific and non-limiting example, the video source 102 is a network IP camera, such as for instance a Nextiva S2600e Network Camera or another similar device having on-board video analytics capabilities.
Alternatively, the video source 102 is a basic IP camera that does not support onboard video analytics processing, but that is in communication with another (not illustrated) device at the source end, which is capable of performing video analytics processing on the video data that is captured using the video source 102.
[0024] During use, video data captured using the video source 102 is transmitted to the cloud-based video analytics system 108 via gateway 104 and WAN 106.
Optionally, the video source 102 connects to the IP network without a gateway 104. Optionally, the video source 102 is a mobile device, such as for instance a camera embedded in a smart phone or laptop computer. Optionally, the cloud-based video analytics system 108 is a broker system comprising at least a central server and one or more video analytics engines in communication therewith. Optionally, at least some of the one or more video analytics engines are in execution on third party servers, and may be subscription based or pay-per-use based. Optionally, a video storage device 110 is provided at the source end via a router 116, the video storage device 110 for retrievably storing the captured video data. By way of a specific and non-limiting example, the video storage device 110 is one of a digital video recorder (DVR), a network video recorder (NVR), and a storage device in a box with a searchable file structure. In general, the captured video data is compressed prior to being stored in the video storage device 110. Optionally, the video storage device supports video analytics processing.
[0025] During use the video source 102 is deployed at the acquisition end for monitoring a known field of view (FOV). For example, the video source 102 monitors one of a parking lot, an entry/exit point of a building, and a stack of shipping containers. Alternatively, the video source monitors a room, a workspace, or another area where individuals gather in a social setting, etc. The video source 102 captures video data of the FOV at a known frame rate, such as for instance between about 5 FPS and about 30 FPS, and performs on-board compression of the captured video data using a suitable compression standard such as for instance MPEG-4 or H.264. As is described in greater detail below, at least a portion of the captured video data is transmitted from the acquisition end to the cloud-based video analytics system 108 via WAN 106. First video analytics processing of the at least a portion of the captured video data is performed using a first video analytics engine of the cloud-based video analytics system 108. Subsequently, second video analytics processing of the video data is performed using a second video analytics engine, which is other than a cloud-based video analytics engine. Optionally, the second video analytics processing is performed on the at least a portion of the captured video data that was transmitted to the cloud-based video analytics system 108, as well as additional captured video data that was not transmitted to the cloud-based video analytics system 108. In a specific and non-limiting example the second video analytics engine is in execution on the video source 102, such as for instance a network IP camera with on-board video analytics capability. Control data is exchanged between the first video analytics engine of the cloud-based video analytics system 108 and the second video analytics engine at the video-source end.
The control data is used to affect the second video analytics processing of the video data, based on the result of the first video analytics processing.
[0026] According to the instant embodiment, a synergistic video analytics processing relationship is achieved by transmitting at least a portion of the captured video data to a cloud-based video analytics system 108 for undergoing first video analytics processing, and thereafter affecting second video analytics processing of the captured video data at the video-source end in dependence upon control data that is determined based on a result of the first video analytics processing. In this way, complex video analytics processing or parallel video analytics processing of captured video data is performed "in the cloud" where processing resources are large, and the results of the cloud-based video analytics processing are used to affect subsequent video analytics processing of the captured video data at the video source end where processing resources are limited but where the entire set of captured video data is available for processing. The amount of captured video data that is transmitted to the cloud-based video analytics system 108 is optionally minimized, thereby avoiding problems relating to network data capacity limitations. At the same time the captured video data that is transmitted to the cloud-based video analytics system 108 can be subjected to video analytics processing that requires greater processing resources than are available at the source end, including parallel video analytics processing. Control data, based on a result of the cloud-based video analytics processing, is transmitted via the WAN to the video analytics engine at the video source end, and affects a parameter of the second video analytics processing. The control data supports more sophisticated video analytics processing at the video source end than would otherwise be possible given the available processing capabilities.
[0027] By way of an example, the video source 102 is capable of performing different video analytics processing in series, but not in parallel due to limited processing capability. The cloud-based video analytics processing determines the occurrences of different types of events of interest, and then provides control data back to the video source 102 at the source end, the control data indicative of the locations of events of interest within the video data. Based on the control data, different video analytics processes and/or different template sets are used to process different locations within the video data. Of course, other parameters of the video analytics performed at the source end may be affected based on the control data that is transmitted from the cloud-based video analytics system 108.
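The example above, in which different locations within the video data are processed with different analytics processes in series, can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the region names, the control-data format, and the two stand-in per-region processes are all invented.

```python
# Hypothetical sketch: control data from the cloud names an event type
# per marked region, and the source end dispatches a different analytics
# process (or template set) to each region, one at a time (in series).

def count_pixels(region):
    """Stand-in process, e.g. a motion template set."""
    return len(region)

def max_pixel(region):
    """Stand-in process, e.g. a face template set."""
    return max(region)

PROCESSES = {
    "motion": count_pixels,
    "face": max_pixel,
}

def apply_control_data(frame_regions, control_data):
    """Run the process named in the control data on each marked region.
    Serial dispatch reflects an edge device that cannot run the
    processes in parallel."""
    results = {}
    for region_name, event_type in control_data.items():
        process = PROCESSES[event_type]
        results[region_name] = process(frame_regions[region_name])
    return results
```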
[0028] A method according to an embodiment of the instant invention is now described with reference to the simplified flow diagram shown in FIG. 2, and also with reference to the system that is shown in FIG. 1. At 200 video data is captured at a source end using a video camera, the video camera being disposed at the source end. In particular, the captured video data includes first video data relating to an event of interest. For instance, the event of interest is an intrusion into a monitored area in the case of a surveillance or security application, or the event of interest is a grouping of a predetermined number of friends in the case of a social media application. At 202 at least a portion of the first video data is transmitted, via a Wide Area Network (WAN), from the source end to a first processor of a cloud-based video analytics system. For instance, single, non-adjacent frames of the captured video data are transmitted at predetermined intervals or in response to a trigger event being detected. Optionally, substantially continuous segments of the captured video data are transmitted via the WAN. At 204 first video analytics processing of the at least the portion of the first video data is performed, using the first processor. For instance, a first video analytics engine in execution on the first processor performs requested or default video analytics processing of the at least the portion of the first video data. A result of the first video analytics processing is obtained, for instance, the result is detecting an occurrence of an event of interest within the at least a portion of the first video data. Based on the result of the first video analytics processing, control data is determined at 206 for affecting second video analytics processing of the captured video data. At 208 the control data is transmitted, via the WAN, from the first processor to a second processor at the source end. 
At 210, using the second processor, the second video analytics processing of the captured video data is performed based on the control data.
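The steps 200 through 210 above can be sketched end to end, with both the cloud side and the source end simulated in a single process. This is a minimal sketch under stated assumptions, not the claimed method itself: the frame representation, the event model, and the control-data format are all hypothetical.

```python
# Hypothetical sketch of the FIG. 2 method: a portion of the captured
# video is sent to the cloud (202), first analytics runs there (204),
# control data is determined (206) and returned (208), and the source
# end performs second analytics on the full capture (210).

def first_video_analytics(sampled_frames):
    """Cloud side (204): indices of sampled frames containing an event."""
    return [f["index"] for f in sampled_frames if f["event"]]

def make_control_data(event_indices):
    """Cloud side (206): control data identifying event locations."""
    return {"event_locations": event_indices}

def second_video_analytics(all_frames, control_data):
    """Source end (210): process only the frames the control data flags,
    where the entire set of captured video data is available."""
    wanted = set(control_data["event_locations"])
    return [f for f in all_frames if f["index"] in wanted]

def run_pipeline(all_frames, sample_interval):
    sampled = all_frames[::sample_interval]             # 202: send a portion
    events = first_video_analytics(sampled)             # 204
    control = make_control_data(events)                 # 206 / 208
    return second_video_analytics(all_frames, control)  # 210
```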
[0029] A method according to an embodiment of the instant invention is now described with reference to the simplified flow diagram shown in FIG. 3, and also with reference to the system that is shown in FIG. 1. At 300 video data is captured using a video camera that is disposed at a source end. In particular, the captured video data includes video data relating to an event of interest. For instance, the event of interest is an intrusion into a monitored area in the case of a surveillance or security application, or the event of interest is a grouping of a predetermined number of friends in the case of a social media application. At 302 at least a portion of the captured video data is provided from the source end to a cloud-based video analytics system via a communications network. By way of a non-limiting example, the communications network is a Wide Area Network (WAN) such as for instance the Internet or the World Wide Web. At 304 the at least a portion of the captured video data is pre-processed using a first video analytics engine of the cloud-based video analytics system. At 306, based on a result of the pre-processing, control data is provided via the communications network from the cloud-based video analytics system to a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system. At 308 the captured video data is processed, using the second video analytics engine, based on the control data.
[0030] A method according to an embodiment of the instant invention is described with reference to the simplified flow diagram shown in FIG. 4, and with reference to the system shown in FIG. 1. At 400 video data is captured using a video camera disposed at a source end. In particular, the captured video data includes video data relating to an event of interest. For instance, the event of interest is an intrusion into a monitored area in the case of a surveillance or security application, or the event of interest is a grouping of a predetermined number of friends in the case of a social media application. At 402 at least a portion of the captured video data is provided to a cloud-based video analytics system via a communications network. By way of a non-limiting example, the communications network is a Wide Area Network (WAN) such as for instance the Internet or the World Wide Web. At 404 first video analytics processing of the at least a portion of the captured video data is performed using a first video analytics engine of the cloud-based video analytics system. At 406 second video analytics processing of the at least a portion of the captured video data is performed using a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system. At 408 feedback data is transmitted between the first video analytics engine and the second video analytics engine via the communications network. In particular, the feedback data is based on a result of respective video analytics processing by one of the first video analytics engine and the second video analytics engine. Further, the feedback data is for affecting video analytics processing by the other one of the first video analytics engine and the second video analytics engine.

[0031] In the methods that are described above with reference to FIGS.
2-4, control data relating to a result of cloud-based video analytics processing of captured video data is used to affect the video analytics processing of the captured video data at the source end, such as for instance on a 'smart' camera having built-in video analytics capabilities. Optionally, the second video analytics processing is performed using a video analytics engine that is in communication with a plurality of video sources 102 via a local area network (LAN) or another video analytics engine that is disposed between the video source 102 and the cloud-based video analytics system 108.
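The FIG. 4 feedback exchange, in which a result from one engine adjusts a parameter of the other, can be sketched as below. This is an illustrative sketch: the per-frame score representation and the threshold-adjustment rule are assumptions, not anything specified by the embodiments above.

```python
# Hypothetical sketch: two engines process the same video data, and a
# result from one is fed back to adjust a processing parameter
# (here, a detection threshold) of the other.

class Engine:
    def __init__(self, name, threshold):
        self.name = name
        self.threshold = threshold

    def process(self, frame_scores):
        """Return indices of frames whose score exceeds the threshold."""
        return [i for i, s in enumerate(frame_scores) if s > self.threshold]

    def apply_feedback(self, feedback):
        """Adjust sensitivity based on the other engine's result."""
        self.threshold = feedback["suggested_threshold"]

def exchange_feedback(cloud_engine, local_engine, frame_scores):
    detections = cloud_engine.process(frame_scores)
    # If the cloud engine found events, make the local engine more
    # sensitive so it does not miss them in the full captured video.
    feedback = {"suggested_threshold": cloud_engine.threshold - 0.2
                if detections else cloud_engine.threshold}
    local_engine.apply_feedback(feedback)
    return local_engine.process(frame_scores)
```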
[0032] In an embodiment, a plurality of processes are in execution within the cloud for analyzing video data provided thereto. Each process is for identifying one or more trigger events. Upon detecting a trigger event, a process transmits a signal to a control processor, for example within the cloud, for providing the control data therefrom. As such, a plurality of processes is executed in parallel within the cloud to allow selection of a process for execution local to the video data capture device in the form of the video camera or the video capture network.
[0033] In another embodiment, processing local to the video data capture device is performed under the control of the control processor such that local processing switches between pre-processing of video data, post-processing of video data, and series processing of same video data depending on a result of cloud processing of at least some of the captured video data. For example, the cloud processing determines when three or more people are within a video frame and local processing is used to identify the best from a series of video frames including the three or more people for use in an automatically generated album.
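The album example above splits naturally into a cloud step and a local step, which can be sketched as follows. The people count and the sharpness score are hypothetical stand-ins for whatever detection and quality metrics a real implementation would use.

```python
# Hypothetical sketch: the cloud determines which frames contain three
# or more people, and local processing selects the best of those frames
# for an automatically generated album.

def cloud_count_people(frames):
    """Cloud side: indices of frames with three or more people."""
    return [i for i, f in enumerate(frames) if f["people"] >= 3]

def local_select_best(frames, candidate_indices):
    """Local side: pick the candidate frame with the highest quality."""
    return max(candidate_indices, key=lambda i: frames[i]["sharpness"])
```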
[0034] In yet another embodiment, the method is used for switching between video analytics applications based on cloud processing. For example, a video camera disposed for seeing who is at the door is also useful for viewing the road in front of the building. When an individual is detected in front of the door, video analytics for identifying the individual is selected and in the absence of an individual, a process is executed to see if a car is parked in front of the building. Cloud-based analytics is used to switch between the two functions and optionally is used as part of the processing. For example, the cloud-based analytics determines whether a person is in the frame or not. When a person is in the frame, a local analytics process selects the two best facial images of the person based on angle, lighting, clarity, features, etc.
The two best frames are then transmitted to the cloud for identification and archiving purposes. Optionally, the cloud then transmits a further control data to the local analytics engine that the data received was adequate or, alternatively, that more data is required.
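The two-best-faces exchange can be sketched as below. This is an illustrative sketch only: the scoring weights, the attribute names, and the adequacy threshold are assumptions rather than anything the embodiment specifies.

```python
# Hypothetical sketch: local analytics scores candidate facial images,
# selects the two best for transmission to the cloud, and the cloud
# replies with further control data indicating whether the received
# data was adequate or more data is required.

def score_face(frame):
    """Combine angle, lighting, and clarity into one quality score."""
    return (0.4 * frame["clarity"]
            + 0.3 * frame["lighting"]
            + 0.3 * frame["angle"])

def select_two_best(frames):
    """Local side: the two highest-scoring facial images."""
    return sorted(frames, key=score_face, reverse=True)[:2]

def cloud_adequacy(selected, threshold=0.6):
    """Cloud side: further control data - adequate, or more required?"""
    adequate = all(score_face(f) >= threshold for f in selected)
    return {"adequate": adequate, "more_data_required": not adequate}
```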
[0035] Numerous other embodiments may be envisaged without departing from the scope of the invention.

CLAIMS

What is claimed is:
1. A method comprising:
capturing video data at a source end using a video camera that is disposed at the source end, the captured video data including first video data relating to an event of interest;
transmitting, via a Wide Area Network (WAN), at least a portion of the first video data from the source end to a first processor of a cloud-based video analytics system;
using the first processor, performing first video analytics processing of the at least the portion of the first video data;
based on a result of the first video analytics processing, determining control data for affecting second video analytics processing of the captured video data; transmitting, via the WAN, the control data from the first processor to a second processor at the source end; and
using the second processor, performing the second video analytics processing of the captured video data based on the control data.
2. The method of claim 1 wherein the second processor is an on-board processor of the video camera.
3. The method of claim 1 wherein the second processor is a processor of a server in communication with the video camera via a Local Area Network (LAN).
4. The method of claim 1 wherein transmitting the at least a portion of the first video data comprises transmitting a plurality of non-adjacent frames of the first video data via the WAN.
5. The method of claim 1 wherein the control data includes an identifier of a location of the event of interest within the first video data.
6. The method of claim 1 wherein the control data identifies a subset of the first video data for being subjected to the second video analytics processing.
7. A method comprising:
capturing video data using a video camera disposed at a source end;
providing at least a portion of the captured video data to a cloud-based video analytics system via a communications network;
pre-processing the at least a portion of the captured video data using a first video analytics engine of the cloud-based video analytics system;
based on a result of the pre-processing, providing control data via the communications network from the cloud-based video analytics system to a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system; and
using the second video analytics engine, processing the captured video data based on the control data.
8. The method of claim 7 wherein the second video analytics engine is in execution on an on-board processor of the video camera.
9. The method of claim 7 wherein the second video analytics engine is in execution on a processor of a server in communication with the video camera via a Local Area Network (LAN).
10. The method of claim 7 wherein providing the at least a portion of the captured video data comprises transmitting a plurality of non-adjacent frames of the captured video data via the communications network, wherein the communications network is a Wide Area Network (WAN).
11. The method of claim 7 wherein the control data includes an identifier of a location of an event of interest within the at least a portion of the captured video data.
12. The method of claim 7 wherein the control data identifies a subset of the at least a portion of the captured video data for being subjected to processing by the second video analytics engine.
13. The method of claim 7 wherein the control data identifies a subset of video data for being captured and subjected to processing by the second video analytics engine.
14. The method of claim 7 wherein the control data identifies a process for use by the second video analytics engine in processing of captured video data.
15. The method of claim 7 wherein the control data identifies a process for use by the second video analytics engine in processing of captured video data, the process selected from a plurality of available processes for execution on the second video analytics engine, some of the plurality of available processes each stored within the second video analytics engine simultaneously.
16. The method of claim 7 wherein the control data identifies a process for use by the second video analytics engine in processing of captured video data, the process selected from a plurality of available processes for execution on the second video analytics engine, the process consuming a significant portion of the processing resources of the second video analytics engine.
17. The method of claim 7 wherein the result is indicative of an identified feature within the at least a portion of the first video data.
18. The method of claim 7 wherein the result is indicative of an identified change within the at least a portion of the first video data.
19. The method of claim 7 wherein the result is indicative of a specific process for execution selected from at least two specific potential processes for execution.
20. The method of claim 7 wherein the result is indicative of a specific process for selection of second data from the captured video data for provision to a cloud based video analytics process, comprising:
processing the at least a portion of the captured video data using a local video analytics engine and the specific process to provide second video data, the second video data provided to a cloud-based video analytics system for further processing thereof.
21. A method comprising:
capturing video data using a video camera disposed at a source end;
providing at least a portion of the captured video data to a cloud-based video analytics system via a communications network;
using a first video analytics engine of the cloud-based video analytics system, performing first video analytics processing of the at least a portion of the captured video data;
using a second video analytics engine that is other than a video analytics engine of the cloud-based video analytics system, performing second video analytics processing of the at least a portion of the captured video data; and
transmitting feedback data between the first video analytics engine and the second video analytics engine via the communications network, the feedback data based on a result of respective video analytics processing by one of the first video analytics engine and the second video analytics engine, and the feedback data for affecting video analytics processing by the other one of the first video analytics engine and the second video analytics engine.
22. A system for performing video analytics processing of video data, comprising:
a cloud-based first video analytics engine for performing first video analytics processing of video data;
a second video analytics engine that is other than a cloud-based video analytics engine for performing second video analytics processing of the video data, the second video analytics engine in communication with the cloud-based first video analytics engine via a communication network; and
a source of video data in communication with the cloud-based first video analytics engine and the second video analytics engine via the communication network,
wherein, during use, video data is provided from the source of video data to the cloud-based first video analytics engine and to the second video analytics engine, and wherein feedback data is exchanged between the cloud-based first video analytics engine and the second video analytics engine, the feedback data based on a result of video analytics processing by one of the cloud-based first video analytics engine and the second video analytics engine for affecting video analytics processing by the other one of the cloud-based first video analytics engine and the second video analytics engine.
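For illustration only, the feedback arrangement recited in claims 21 and 22 can be sketched as a minimal simulation: a cloud-based engine and a source-end engine each process the same captured video data, and a result from one engine is transmitted as feedback that affects processing by the other. This is not the claimed implementation; all names here (`AnalyticsEngine`, `run_feedback_round`, `detection_threshold`) are hypothetical stand-ins.

```python
# Illustrative sketch only -- not the claimed implementation.
from dataclasses import dataclass


@dataclass
class AnalyticsEngine:
    name: str
    detection_threshold: float = 0.5

    def process(self, frame_scores):
        # Report indices of frames whose motion/feature score exceeds this
        # engine's current threshold (a stand-in for "a result of video
        # analytics processing").
        return [i for i, s in enumerate(frame_scores)
                if s > self.detection_threshold]

    def apply_feedback(self, feedback):
        # Feedback from the peer engine affects subsequent processing here.
        self.detection_threshold = feedback["suggested_threshold"]


def run_feedback_round(cloud_engine, local_engine, frame_scores):
    # Both engines receive the same captured video data (claim 21).
    cloud_result = cloud_engine.process(frame_scores)
    # The cloud result becomes feedback transmitted to the source-end engine:
    # sensitize the local engine when the cloud detected something,
    # relax it otherwise.
    feedback = {"suggested_threshold": 0.3 if cloud_result else 0.7}
    local_engine.apply_feedback(feedback)
    return local_engine.process(frame_scores)
```

For example, with per-frame scores `[0.2, 0.6, 0.8]` the cloud engine (threshold 0.5) detects frames 1 and 2, its feedback lowers the source-end threshold to 0.3, and the source-end engine then also reports frames 1 and 2.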
PCT/CA2013/050161 2012-03-08 2013-03-05 Cloud-based video analytics with post-processing at the video source-end WO2013131189A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261608362P 2012-03-08 2012-03-08
US61/608,362 2012-03-08

Publications (1)

Publication Number Publication Date
WO2013131189A1 true WO2013131189A1 (en) 2013-09-12

Family

ID=49115829

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2013/050161 WO2013131189A1 (en) 2012-03-08 2013-03-05 Cloud-based video analytics with post-processing at the video source-end

Country Status (1)

Country Link
WO (1) WO2013131189A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060239645A1 (en) * 2005-03-31 2006-10-26 Honeywell International Inc. Event packaged video sequence
CA2638621A1 (en) * 2007-10-04 2008-11-26 Kd Secure, Llc An alerting system for safety, security, and business productivity having alerts weighted by attribute data
US20090015671A1 (en) * 2007-07-13 2009-01-15 Honeywell International, Inc. Features in video analytics
CA2716705A1 (en) * 2009-10-07 2011-04-07 Telewatch Inc. Broker mediated video analytics method and system

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9959413B2 (en) 2012-09-12 2018-05-01 Sensity Systems Inc. Security and data privacy for lighting sensory networks
US9374870B2 (en) 2012-09-12 2016-06-21 Sensity Systems Inc. Networked lighting infrastructure for sensing applications
US9699873B2 (en) 2012-09-12 2017-07-04 Sensity Systems Inc. Networked lighting infrastructure for sensing applications
US10158718B2 (en) 2013-03-26 2018-12-18 Verizon Patent And Licensing Inc. Sensor nodes with multicast transmissions in lighting sensory network
US9933297B2 (en) 2013-03-26 2018-04-03 Sensity Systems Inc. System and method for planning and monitoring a light sensory network
US9456293B2 (en) 2013-03-26 2016-09-27 Sensity Systems Inc. Sensor nodes with multicast transmissions in lighting sensory network
US9746370B2 (en) 2014-02-26 2017-08-29 Sensity Systems Inc. Method and apparatus for measuring illumination characteristics of a luminaire
US10417570B2 (en) 2014-03-06 2019-09-17 Verizon Patent And Licensing Inc. Systems and methods for probabilistic semantic sensing in a sensory network
US10362112B2 (en) 2014-03-06 2019-07-23 Verizon Patent And Licensing Inc. Application environment for lighting sensory networks
US9582671B2 (en) 2014-03-06 2017-02-28 Sensity Systems Inc. Security and data privacy for lighting sensory networks
US10791175B2 (en) 2014-03-06 2020-09-29 Verizon Patent And Licensing Inc. Application environment for sensory networks
US11544608B2 (en) 2014-03-06 2023-01-03 Verizon Patent And Licensing Inc. Systems and methods for probabilistic semantic sensing in a sensory network
US11616842B2 (en) 2014-03-06 2023-03-28 Verizon Patent And Licensing Inc. Application environment for sensory networks
US11721099B2 (en) 2016-02-19 2023-08-08 Carrier Corporation Cloud based active commissioning system for video analytics
CN109639486A (en) * 2018-12-13 2019-04-16 杭州当虹科技股份有限公司 A kind of cloud host elastic telescopic method based on live streaming

Similar Documents

Publication Publication Date Title
US10123051B2 (en) Video analytics with pre-processing at the source end
WO2013131189A1 (en) Cloud-based video analytics with post-processing at the video source-end
Sultana et al. IoT-guard: Event-driven fog-based video surveillance system for real-time security management
US9704393B2 (en) Integrated intelligent server based system and method/systems adapted to facilitate fail-safe integration and/or optimized utilization of various sensory inputs
JP6088541B2 (en) Cloud-based video surveillance management system
CA2824330C (en) An integrated intelligent server based system and method/systems adapted to facilitate fail-safe integration and/or optimized utilization of various sensory inputs
US20110109742A1 (en) Broker mediated video analytics method and system
AU2009243916B2 (en) A system and method for electronic surveillance
US9143739B2 (en) Video analytics with burst-like transmission of video data
US11335097B1 (en) Sharing video footage from audio/video recording and communication devices
US20140071273A1 (en) Recognition Based Security
US10650247B2 (en) Sharing video footage from audio/video recording and communication devices
US20150161449A1 (en) System and method for the use of multiple cameras for video surveillance
US20160357762A1 (en) Smart View Selection In A Cloud Video Service
US20190370559A1 (en) Auto-segmentation with rule assignment
US20150085114A1 (en) Method for Displaying Video Data on a Personal Device
US20170034483A1 (en) Smart shift selection in a cloud video service
CN107360404A (en) Mobile video monitor system
US20220319171A1 (en) System for Distributed Video Analytics

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 13758188

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 13758188

Country of ref document: EP

Kind code of ref document: A1