US20220312021A1 - Analytics-modulated coding of surveillance video - Google Patents
- Publication number
- US20220312021A1 (U.S. application Ser. No. 17/520,121)
- Authority
- US
- United States
- Prior art keywords
- video frame
- code
- video
- processor
- pixels
- Prior art date
- Legal status: Abandoned (the status listed is an assumption and is not a legal conclusion)
Classifications
- H04N19/167—Position within a video image, e.g. region of interest [ROI]
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T7/194—Segmentation; Edge detection involving foreground-background segmentation
- G06T7/20—Analysis of motion
- G06T7/70—Determining position or orientation of objects or cameras
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06V10/764—Image or video recognition using classification, e.g. of video objects
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
- H04N19/114—Adapting the group of pictures [GOP] structure, e.g. number of B-frames between two anchor frames
- H04N19/115—Selection of the code volume for a coding unit prior to coding
- H04N19/124—Quantisation
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
- H04N19/14—Coding unit complexity, e.g. amount of activity or edge presence estimation
- H04N19/142—Detection of scene cut or scene change
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/149—Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
- H04N19/152—Data rate or code amount at the encoder output by measuring the fullness of the transmission buffer
- H04N19/17—Adaptive coding where the coding unit is an image region, e.g. an object
- H04N19/172—The region being a picture, frame or field
- H04N19/176—The region being a block, e.g. a macroblock
- H04N19/177—The coding unit being a group of pictures [GOP]
- H04N19/61—Transform coding in combination with predictive coding
- H04N19/70—Syntax aspects related to video coding, e.g. related to compression standards
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—CCTV systems for receiving images from a single remote source
- G06V2201/07—Target detection
Definitions
- the systems and methods disclosed relate generally to video processing and more particularly to adaptively compressing video based on video analytics.
- surveillance technology has been increasingly used to monitor people, places and activities. For example, high-quality surveillance video is being used to better monitor events and/or to reduce visually distracting artifacts that may interfere with human recognition. As surveillance video data is retained and archived for longer periods of time, large amounts of data storage space are typically needed. In addition, more innovative applications are emerging in which the streaming of video to wireless and mobile devices is used over ever more bandwidth-constrained networks. Such uses demand not only new surveillance solutions, but also new or enhanced video compression techniques.
- disclosed herein are a method and apparatus for encoding surveillance video in which one or more regions of interest are identified and the encoding parameter values associated with those regions are specified in accordance with intermediate outputs of a video analytics process.
- Such analytics-modulated video compression allows the coding process to adapt dynamically based on the content of the surveillance images. In this manner, the fidelity of the region of interest (ROI) is increased relative to that of a background region such that the coding efficiency is improved, including instances when no target objects appear in the scene. Better compression results can be achieved by assigning different coding priority levels to different types of detected objects.
- classification and tracking modules can be used as well. Because shape information need not be coded, fewer computational resources and/or fewer bits are necessary.
- the analytics-modulated video compression approach is not limited to specific profiles, does not require a new shape-based coding profile, and produces a compressed video stream that is compliant with multiple standards.
- the analytics-modulated video compression approach produces smooth, high-quality video at a low bit rate by adjusting encoding parameters at a finer granularity.
- FIG. 1 is a system block diagram of an MPEG encoder architecture.
- FIG. 2 is a diagram illustrating motion-compensated prediction in a P-frame.
- FIG. 3 is a diagram illustrating motion-compensated bidirectional prediction in a B-frame.
- FIG. 4 is a block diagram illustrating a video analytics processing pipeline, according to an embodiment.
- FIG. 5 illustrates the use of difference image thresholding to obtain foreground pixels, according to an embodiment.
- FIG. 6 illustrates a classifier discriminating between a person and a car, according to an embodiment.
- FIG. 7 is a system block diagram of video analytics and coding modules used in scene-adaptive video coding, according to an embodiment.
- FIG. 8 is a system block diagram of region-based coding by varying quantization parameter (QP), according to an embodiment.
- FIG. 9 is a system block diagram of a region-based coding incorporating rate control (RC), according to an embodiment.
- FIGS. 10A-10B illustrate different approaches to determining a motion vector search range, according to embodiments.
- FIGS. 11A-11E illustrate analytics-modulated coding of video images, according to other embodiments.
- FIGS. 12A-12C illustrate analytics-modulated coding of video images, according to other embodiments.
- FIG. 13 shows various scenes used to illustrate analytics-modulated coding, according to other embodiments.
- Novel techniques can be used for coding objects in a surveillance scene so that a region-of-interest (ROI) can be compressed at higher quality relative to regions that are visually less important, such as the scene background, for example.
- a scene without objects can be encoded at a lower bit rate (e.g., higher compression) than a scene with detected objects.
- a scene with different types of objects as well as regions with different brightness, spatial or temporal activities can have the objects and/or regions encoded at different levels of fidelity. It is desirable that these techniques allow for scaling of various encoding parameter values so as to use fewer bits when appropriate to produce significantly greater compression of the surveillance scene without visual artifacts.
- the Moving Picture Experts Group (MPEG) is a working group of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) that develops standards for coded representation of digital audio and video.
- a benefit of compression is data rate reduction, which lowers transmission and storage costs and, where a fixed transmission capacity is available, results in better video quality.
- Video signals typically contain a significant amount of redundancy; video samples are typically similar to each other so that one sample can be predicted fairly accurately from another, thereby exploiting the correlation among the samples to reduce the video bit rate.
- the MPEG standard(s) achieve high compression rates by removing spatial and temporal redundancy.
- FIG. 1 is a system block diagram of an MPEG encoder architecture that is configured to compress video signals.
- the MPEG encoder includes multiple modules. Each module in the MPEG encoder can be software-based (e.g., set of instructions executable at a processor, software code) and/or hardware-based (e.g., circuit system, processor, application-specific integrated circuit (ASIC), field programmable gate array (FPGA)).
- the MPEG encoder divides a video frame into smaller blocks of pixels that are then operated by a Discrete Cosine Transform (DCT) operation.
- the DCT operation decorrelates the pixel elements in the spatial domain and converts them to independent frequency domain coefficients.
- the process is localized, i.e., the encoder samples an 8 ⁇ 8 spatial window to compute the 64 transform coefficients.
- the DCT coefficients are energy concentrated, i.e., most of the signal energy is concentrated in a few low frequency coefficients such that a few of the coefficients contain most of the information in the frame.
- the DC coefficient that appears as the top left coefficient contains the average energy of the entire signal in that macroblock, while the remaining AC coefficients contain higher frequency information of the macroblock.
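For illustration, the sketch below (not part of the patent; it assumes NumPy and SciPy are available) applies a 2-D DCT to a synthetic 8×8 block and shows the energy concentrating in the DC coefficient:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Illustrative only: apply a 2-D DCT to one 8x8 luma block and inspect
# how the energy concentrates in the low-frequency (top-left) corner.
rng = np.random.default_rng(0)
# A smooth synthetic block: a gentle horizontal ramp plus mild noise.
block = np.tile(np.linspace(100, 140, 8), (8, 1)) + rng.normal(0, 2, (8, 8))

coeffs = dctn(block, norm='ortho')      # the 64 transform coefficients
dc = coeffs[0, 0]                       # DC term: average energy of the block
print(f"DC coefficient: {dc:.1f}")
print(f"fraction of energy in DC term: {dc**2 / np.sum(coeffs**2):.4f}")

# The transform itself is invertible (lossless before quantization):
assert np.allclose(idctn(coeffs, norm='ortho'), block)
```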
- the DCT coefficients are then adaptively quantized.
- a quantization operation involves mapping an input signal with a range of values to a reduced range of output values.
- the quantization operation is generally regarded as the lossy part of video compression.
- the amount of compression at this stage is typically controlled by a quantization parameter (QP).
- a high QP value produces fewer bits used (i.e., greater compression) at the expense of reduced image or scene quality.
- most of the high frequency coefficients (e.g., AC coefficients) are typically quantized to zero, further reducing the number of bits needed to represent the block.
- variable length coding (VLC) is then applied to entropy-code the quantized coefficients, assigning shorter codes to more frequently occurring values.
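A minimal sketch of QP-controlled quantization follows; the linear QP-to-step-size mapping is an assumption for illustration, since the exact mapping varies by standard:

```python
import numpy as np
from scipy.fft import dctn

# Minimal sketch of QP-controlled quantization (the exact step-size
# mapping varies by standard; a simple linear step = 2*QP is assumed here).
def quantize(coeffs: np.ndarray, qp: int) -> np.ndarray:
    step = 2 * qp                       # assumed QP-to-step mapping
    return np.round(coeffs / step).astype(int)

rng = np.random.default_rng(1)
block = np.tile(np.linspace(100, 140, 8), (8, 1)) + rng.normal(0, 2, (8, 8))
coeffs = dctn(block, norm='ortho')

for qp in (4, 16, 31):
    q = quantize(coeffs, qp)
    print(f"QP={qp:2d}: {np.count_nonzero(q)}/64 nonzero coefficients")
# Higher QP -> larger step -> more AC coefficients rounded to zero ->
# fewer bits after entropy coding, at the cost of image quality.
```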
- intra-coded (I) frames, also known as I-frames, contain full frame information that is independent of other frames.
- inter-coded frames, often referred to as predictive-coded (P) or bidirectionally-predictive-coded (B) frames, represent or are associated with image differences.
- FIGS. 2-3 are each a diagram illustrating motion-compensated prediction in a P-frame and a B-frame respectively.
- FIG. 2 shows a P-frame being predicted from a previously encoded I or P-frame (reference frame).
- FIG. 3 shows a B-frame being predicted from both a previous reference (I or P) frame, and a future reference (I or P) frame in both backward and forward directions, respectively.
- the predictive coding process of inter-coded frames typically involves producing or generating motion vectors.
- Motion estimation involves searching for a macroblock (e.g., a 16 ⁇ 16 block of pixels) in the reference frame that best matches the current block in the current frame.
- the residual energy that reflects the difference between the blocks is then quantized and entropy-coded.
- the displacement between the two blocks is represented by a motion vector (MV).
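The following sketch shows full-search block matching using the sum of absolute differences (SAD) over a ±8-pixel window; real encoders use faster search patterns, and the frame content here is synthetic:

```python
import numpy as np

# Minimal full-search block matching (illustrative). Finds the motion
# vector that minimizes the SAD within +/- search_range pixels.
def best_motion_vector(ref, cur, bx, by, n=16, search_range=8):
    block = cur[by:by+n, bx:bx+n].astype(int)
    best_mv, best_sad = (0, 0), np.inf
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue                 # candidate falls outside the frame
            sad = np.abs(ref[y:y+n, x:x+n].astype(int) - block).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dx, dy)
    return best_mv, best_sad

rng = np.random.default_rng(2)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))   # whole scene displaced
mv, sad = best_motion_vector(ref, cur, bx=24, by=24)
print("motion vector:", mv, "SAD:", sad)         # -> (2, -3), SAD 0
```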
- MPEG-1 and MPEG-2 are Emmy Award winning standards that made interactive video on CD-ROM and Digital Television possible.
- the MPEG-1 standard was originally used for digital storage media such as video compact discs (CD) and supports interactivity such as fast forward, fast reverse and random access into stored bitstreams.
- the MPEG-2 standard is the format typically used for DVD and HDTV and for broadcast applications.
- the MPEG-2 standard includes multiplexing schemes for carrying multiple programs in a single stream, as well as mechanisms that offer robustness when delivering compressed video and audio over error prone channels such as coaxial cable television networks and satellite transponders, for example.
- the MPEG-4 standard was originally developed for low bit-rate video communications devices and provides higher compression than its predecessors.
- the MPEG-4 standard later evolved to include means for coding arbitrarily shaped natural and synthetic objects characterized by shape, texture and motion, in addition to frame-based video.
- the standard also enables interactivity, managing and synchronizing of multiple objects in a multimedia presentation.
- H.264 is jointly developed by the International Telecommunications Union (ITU) Video Coding Expert Group (VCEG) and ISO/IEC MPEG to address increasing needs for higher compression.
- compared to MPEG-2 and MPEG-4 Visual, the H.264 standard offers significant improvements in video quality and is currently the standard of choice for the video format for Blu-Ray disc, HDTV services, and mobile applications, for example.
- the H.264 standard is capable of delivering the same high-quality video with savings of between 25% and 50% on bandwidth and storage requirements compared to its predecessors.
- Some of the enhanced encoding features of the H.264 standard include techniques for reducing artifacts that may appear around the boundary of the macroblocks (i.e., reduce “blockiness”), adaptive decomposition of the block into various smaller block sizes for regions with finer spatial details, sampling at less than one integer pixel for higher accuracy, use of integer transform, and improved VLC techniques that may use a fractional number (instead of a series of bits) to represent the data symbol.
- the VLC techniques are typically based on context information (i.e., prior knowledge of how the previous pixels or symbols were encoded).
- Video analytics also known as Video Content Analysis (VCA) or intelligent video, refers to the extraction of meaningful and relevant information from digital video.
- Video analytics builds upon research in computer vision, pattern analysis and machine intelligence.
- video analytics uses computer vision algorithms that allow a system to perceive (e.g., “see”) information associated with the video, and then uses machine intelligence to interpret, learn, and/or draw inferences from the information perceived.
- one aspect of video analytics is scene understanding, that is, understanding the context around an object in the video.
- Other aspects of video analytics include the detection of motion and tracking an object through the scene.
- smart cameras that include or provide video analytics can be used to detect the presence of people and detect suspicious activities such as loitering or motion into an unauthorized area.
- FIG. 4 is a block diagram illustrating a video analytics processing pipeline, according to an embodiment.
- the video analytics processing pipeline consists of a chain of processing blocks or modules including segmentation, classification, tracking, and activity recognition. Each module can be software-based, or software-based and hardware-based. It is desirable for the video analytics processing pipeline to detect changes that occur over successive frames of video, qualify these changes in each frame, correlate qualified changes over multiple frames, and interpret these correlated changes.
- the segmentation module is configured to identify foreground blobs (i.e., associated pixel clusters) using one of multiple segmentation techniques.
- a segmentation technique can use a background subtraction operation to subtract a current frame from a background model.
- the background model is initialized and then updated over time, and is used by the background subtraction operation to detect changes and identify foreground pixels.
- the background model can be constructed using a first frame or the mean image over N frames.
- a terrain map can be used to separate the foreground from the background in a frame. An example of using a terrain map is described in U.S. Pat. No. 6,940,998, entitled “System for Automated Screening of Security Cameras,” which is hereby incorporated herein by reference in its entirety.
- To produce an accurate background model it is desirable to account for changes in illumination and/or changes that result from foreground blobs becoming part of the background.
- the background model adapts to these changes and continues to update the background.
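A minimal running-average background model in this spirit is sketched below; the adaptation rate and threshold values are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Sketch of a running-average background model with background
# subtraction (one of several segmentation techniques mentioned above).
class BackgroundModel:
    def __init__(self, first_frame, alpha=0.02, threshold=25):
        self.bg = first_frame.astype(float)  # initialize from first frame
        self.alpha = alpha                   # adaptation rate (assumed)
        self.threshold = threshold           # difference-image threshold

    def segment(self, frame):
        diff = np.abs(frame.astype(float) - self.bg)
        fg_mask = diff > self.threshold      # qualified foreground pixels
        # Adapt the background so illumination changes and objects that
        # stop moving are gradually absorbed into the model.
        self.bg = (1 - self.alpha) * self.bg + self.alpha * frame
        return fg_mask

rng = np.random.default_rng(3)
frames = rng.integers(100, 110, (10, 48, 64)).astype(np.uint8)
frames[5:, 20:30, 30:40] += 80               # an "object" appears in frame 5
model = BackgroundModel(frames[0])
masks = [model.segment(f) for f in frames[1:]]
print("foreground pixels in last frame:", masks[-1].sum())
```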
- FIG. 5 illustrates the use of difference image thresholding to obtain foreground pixels, according to an embodiment.
- a low threshold value can allow smaller changes to be qualified as foreground pixels, resulting in clutter because of sensor noise, moving foliage, rain or snow, illumination changes, shadows, glare, and reflections, for example. Simple motion detection does not adequately remove clutter and will cause false detections.
- a high threshold value can result in holes and gaps that can be filled using a morphological filter.
- the threshold value and the frequency at which the background is updated can impact the results of the segmentation technique.
- Other embodiments may use adaptive thresholding or gain control, and can be configured such that, for example, the gain is controlled by area of the image. An example of area-based gain control is described in U.S. Pat. No. 7,218,756, entitled “Video Analysis Using Segmentation Gain by Area,” which is hereby incorporated herein by reference in its entirety.
- each connected blob is uniquely labeled to produce foreground blobs.
- Blob labeling can be done by recursively visiting all foreground neighbors of a foreground pixel and labeling them until no unvisited neighbor is available.
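The sketch below labels connected foreground blobs; it uses an explicit queue instead of the recursion described above to avoid recursion-depth limits on large blobs:

```python
import numpy as np
from collections import deque

# Connected-component (blob) labeling sketch over a binary foreground mask.
def label_blobs(fg_mask):
    labels = np.zeros(fg_mask.shape, dtype=int)
    next_label = 0
    h, w = fg_mask.shape
    for sy, sx in zip(*np.nonzero(fg_mask)):
        if labels[sy, sx]:
            continue                      # pixel already visited
        next_label += 1
        queue = deque([(sy, sx)])
        labels[sy, sx] = next_label
        while queue:                      # flood-fill this blob
            y, x = queue.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):     # 8-connected neighbors
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and fg_mask[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
    return labels, next_label

mask = np.zeros((10, 10), bool)
mask[1:4, 1:4] = True                     # blob 1
mask[6:9, 5:9] = True                     # blob 2
labels, n = label_blobs(mask)
print(n, "blobs found")                   # -> 2
```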
- Such segmentation yields fine pixel-level separation between foreground and background as opposed to techniques that use macro-block level motion estimation for this purpose.
- the blobs are classified by, for example, assigning a category to each foreground blob.
- Classification uses image features to discriminate one class from another. For example, classification produces the likelihood of an object belonging to a certain given class.
- Binary classifiers are used to separate object blobs into one of two classes (e.g., object is a person or a non-person).
- Multi-class classifiers separate object blobs into one of multiple classes (e.g., object is a person, a vehicle, or an animal).
- FIG. 6 illustrates a classifier discriminating between a person and a car, according to an embodiment.
- a simple classifier that separates persons from vehicles can be constructed by, for example, examining the aspect ratio of the segmented blob. People tend to be taller than wide, while cars are wider than tall. Other features that can be useful for classification are histograms and outlines.
- FIG. 6 shows two foreground blobs, one classified as a person and the other classified as a car.
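A toy aspect-ratio classifier in this spirit might look as follows; the threshold value is an illustrative assumption:

```python
# Minimal aspect-ratio classifier sketch (people tend to be taller than
# wide; vehicles wider than tall). Real classifiers would add features
# such as histograms and outlines, as noted above.
def classify_blob(width: int, height: int, ratio_threshold: float = 1.2):
    aspect = height / width
    if aspect > ratio_threshold:
        return "person"
    if aspect < 1.0 / ratio_threshold:
        return "vehicle"
    return "unknown"

print(classify_blob(width=20, height=55))   # -> person
print(classify_blob(width=70, height=30))   # -> vehicle
```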
- Other embodiments can use machine learning to classify a test blob after being trained by using positive and negative blob examples.
- Classified objects can be tracked across multiple video frames by establishing a correspondence or association between objects (e.g., blobs) in different video frames. These correspondences can be used for scene interpretation and for behavior or activity recognition. Because an object may change its pose or orientation with respect to the camera, that object may look different over multiple frames. Furthermore, people moving in a scene exhibit articulated motion, which can substantially change the shape of the blob. During tracking it is desirable to be able to identify invariant features or situations when objects occlude each other. For example, it is desirable to handle a situation wherein a person walks behind a tree and re-appears or when the parts of an articulated object occlude one another, such as when swinging arms occlude the torso of a walking person.
- once foreground objects have been segmented, classified, and/or tracked, their motion and behavior can be analyzed and described.
- examples of activities for which such analysis and description can be performed include a loitering person, a fallen person or a slow-moving vehicle.
- body parts can also be analyzed to provide information on human activities such as jumping, crouching, reaching, and bending, for example.
- Gesture recognition techniques can also be used to identify activities such as grasping, pointing and waving.
- surveillance images are encoded for transmission or storage by leveraging the intermediate outputs of video analytics (e.g., outputs from segmentation, classification, tracking, and/or activity recognition) to achieve better coding efficiency.
- in conventional video coding, the encoding parameters are typically fixed over an entire frame or over an entire sequence of frames.
- One aspect of the video compression described herein is to use the intermediate outputs of the video analytics processing pipeline described above with respect to FIG. 4 to produce analytics-modulated coding, also called scene-adaptive video coding. For example, a region-of-interest (ROI) can be identified and one or more encoding parameters, including the QP values, can be varied, adjusted, or modified during the coding process to adapt the coding process based on the content of the surveillance scene.
- Such adaptation can be based on changes that occur when, for example, an object (or different types of objects) enters or leaves a scene, or when the brightness, spatial, and/or temporal activity associated with an object changes.
- Scene semantics based on activity recognition can also be used to adapt the coding process.
- activity recognition in surveillance video can detect a loitering person.
- the series of frames corresponding to the loitering activity can be coded at a higher fidelity compared to other frames.
- Analytics-modulated coding differs, at least in part, from other schemes that update the frame-rate, resolution, or overall compression bit rate (which applies to the whole frame) by applying finer level control at each aspect of the coding process, at a much higher spatial and temporal granularity. This approach provides greater compression at the same quality level, and does not cause objectionable jumps in the frame that may result from a sudden change in full-frame resolution and/or quality.
- when the current MPEG-4 standard handles a video sequence as a composition of one or more objects of arbitrary shape (e.g., an ROI), the shape information, in addition to the image data, is encoded and transmitted as part of the video stream.
- Such an approach can result in added computational burden and can require more memory and bandwidth resources.
- the H.264 standard offers better compression than the MPEG-4 standard.
- one functionality that is absent in the H.264/Advanced Video Coding (AVC) standard is the ability to code arbitrarily shaped objects.
- video analytics modules including segmentation, classification, and tracking, for example, are used for scene-adaptive video coding.
- the encoding parameters can be varied during the coding process to adapt to the content of the scene.
- the shape information of each ROI need not be coded and the coding operation need not be limited to specific profiles.
- the analytics-modulated coding approach described herein produces an encoded video stream decodable by players that do not support shape-based coding.
- FIG. 7 is a system block diagram of video analytics and coding modules used in scene-adaptive video coding, according to an embodiment.
- a foreground alpha mask generated by the segmentation module is used to identify regions of interest (ROI) for region-based coding, as well as to adjust encoding parameters in the coding modules such as the Group-of-Pictures (GOP) size and/or the QP.
- the GOP is a group of successive video frames and defines the arrangement or organization of the I, P, and/or B-frames.
- the GOP includes an I-frame, followed by a series of P and/or B-frames.
- the GOP size is the number of frames between two I-frames.
- the QP is a parameter used in the quantization process and is associated with the amount of compression. The value of the QP influences the perceived quality of the compressed images.
- the segmented objects are assigned to one of N classes of objects through the classification process.
- Weights are assigned to each of the classes to define relative priorities among the classes. These weights determine the relative priorities for bit allocation. For example, blobs belonging to one class (e.g., a person class) can use a greater fraction of the bit budget compared to blobs belonging to another class (e.g., a trees class).
- two or more classes may have the same weights.
- each of the classes may have a unique weight.
- the classified objects are tracked over multiple frames by establishing a correspondence or association between blobs in the frames.
- the tracking module produces motion information that can be utilized to determine a suitable motion vector search range.
- the motion vector search range determines a search window during the Motion Estimation and Compensation (ME/MC) process.
- the search window is used to search for a group or block of pixels (e.g., a macroblock) in the reference frame that best matches a group or block of pixels being considered in the current frame during the ME/MC process.
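One plausible way to derive the search range from tracker output is sketched below; the scale factor and clamping bounds are assumptions for illustration:

```python
# Sketch: derive the motion-estimation search range for a block from the
# tracker's velocity estimate for the object covering it. The slack
# factor and clamping bounds are illustrative assumptions.
def search_range_from_tracking(velocity_px_per_frame: float,
                               slack: float = 1.5,
                               min_range: int = 4,
                               max_range: int = 32) -> int:
    # Search just beyond the predicted displacement instead of always
    # using the encoder's worst-case window.
    r = int(round(slack * velocity_px_per_frame))
    return max(min_range, min(max_range, r))

print(search_range_from_tracking(2.0))    # slow object  -> small window (4)
print(search_range_from_tracking(12.0))   # fast object  -> larger window (18)
```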
- temporal redundancies are used such that, in some instances, only the difference between consecutive frames is encoded.
- FIG. 8 is a system block diagram of region-based coding by varying QP, according to an embodiment.
- the video compression process involves first transforming the image from spatial to frequency domain, employing a transformation such as a DCT or integer transform. The transformed coefficients are then quantized based on the QP, and entropy coded to produce the compressed 2-D signals.
- quantization is the process of mapping a range of input values to a smaller range of output values and is the lossy compression part of video coding.
- the value of the QP is used to specify the extent of compression that is desired. For example, a larger QP value uses fewer bits to code, resulting in more compression and reduced image quality. In another example, a smaller QP value can produce better quality images at lower compression.
- the alpha mask serves as input to a module to compute and derive the coordinates for overlays, which are geometric shapes used for bounding the detected targets. These overlays are often useful in surveillance video to draw attention to objects or activities of interest.
- the alpha mask is used to distinguish foreground objects from background objects. A higher QP value can then be used to encode the background object while a lower QP value can be used to encode the foreground object in such a way that the overall bitrate is reduced without compromising quality.
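The per-macroblock QP assignment could be sketched as follows, with a foreground/background QP pair chosen in the spirit of the experiments described below; the specific values are illustrative:

```python
import numpy as np

# Sketch of region-based QP assignment: macroblocks overlapping the
# foreground alpha mask get a lower QP (higher fidelity); background
# macroblocks get a higher QP. The (24, 28) pair is an illustrative choice.
def qp_map_from_mask(alpha_mask, qp_fg=24, qp_bg=28, mb=16):
    h, w = alpha_mask.shape
    rows, cols = h // mb, w // mb
    qp = np.full((rows, cols), qp_bg, dtype=int)
    for r in range(rows):
        for c in range(cols):
            mb_pixels = alpha_mask[r*mb:(r+1)*mb, c*mb:(c+1)*mb]
            if mb_pixels.any():           # any foreground pixel -> ROI MB
                qp[r, c] = qp_fg
    return qp

mask = np.zeros((64, 96), bool)
mask[20:44, 40:60] = True                 # one foreground object
print(qp_map_from_mask(mask))
```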
- Such analytics-modulated coding (AMC) is applicable, in general, to any standard that is based on block-based video coding scheme, including the H.264 standard where integer and Hadamard transforms are used.
- DCT is the fundamental transformation utilized in most video coding standards such as the MPEG and H.26x standards.
- Tables 1-4 present experimental results associated with FIG. 13 . These results were obtained by encoding several 2-hr long video clips using different combinations of QP values for foreground objects and background objects.
- the video clips (a frame of each is shown in FIG. 13 ) contain both indoor and outdoor surveillance scenes with different levels of motion activity.
- a Joint Model (JM) codec is used as a baseline for comparison.
- JM is the reference software implementation of the H.264 codec adopted by the Joint Video Team (JVT) of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Motion Picture Experts Group (MPEG) and the International Telecommunication Union (ITU-T) Video Coding Experts Group (VCEG) standards committees.
- the tables illustrate the bit rate and the Structural Similarity Index (SSIM) for JM and AMC-encoded video using various sets of QP values ranging from 24 to 31, as well as the percentage bit rate savings.
- SSIM values are used instead of Peak Signal-to-Noise Ratio (PSNR) values because SSIM better correlates to subjective quality.
- in one test, video (the lobby scene in FIG. 13) is encoded using JM with a QP value of 24.
- the resulting bitrate is 111.72 kbps and the corresponding SSIM value is 0.98.
- the same video is encoded using AMC with a higher QP value for the background and a lower QP value for the foreground.
- the video is encoded using AMC with QP values of 24 for the foreground and 25 for the background to derive the same SSIM value (0.98) as the JM encoded video with a QP value of 24.
- the resulting bitrate is 91.61 kbps, giving a bitrate savings of 18% when AMC is used instead of JM.
- Tables 2-4 illustrate the results for the other scenes shown in FIG. 13 .
- FIG. 9 is a block diagram of a system using region-based coding incorporating rate control (RC), according to an embodiment.
- the RC process of video encoding involves adjusting the QP value to meet a target or predetermined bit rate.
- the RC module can be composed of a high-level RC (HLRC) component and a low-level RC (LLRC) component.
- QP is then derived from the Rate-Quantization (RQ) model and used by the quantization module for video compression.
- target bit allocation for each frame can be dependent on the number of remaining frames in the GOP, number of remaining bits available for the GOP as well as scene complexity.
- Scene complexity can be expressed as a function of the number, the size and the type of objects in the scene.
- the bit budget for frame i, $T_i$, is computed by taking a proportion of the bits remaining in the GOP, as shown in equation (1). $R_t$ refers to the total bit budget.
- in equation (3), $s_{g,i}$ refers to the size of object $g$ (in pixels) in frame $i$; $m_i$ is the number of objects in frame $i$, while $S$ and $M$ are the total number of pixels in the video frame and the expected maximum number of objects, respectively.
- the final target bit budget, $T_{i\_final}$, is allocated for each frame as shown in equation (5):
- $T_{i\_final} = \alpha \cdot TB_i + (1-\alpha) \cdot T_i$ (5)
- $T_{i\_final}$ is the weighted sum of the bit budget computed based on buffer occupancy, $TB_i$, and the bit budget predicted based on complexity as derived from equation (4), $T_i$. $\alpha$ indicates how much to weight each component ($TB_i$ and $T_i$) of the sum.
- there are numerous known techniques to compute buffer occupancy. $TB_i$ can be derived by subtracting delta bits, $\Delta p$, from the bits per frame.
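A sketch of this frame-level budgeting follows. Equation (5) is implemented as described; the GOP-proportional form of $T_i$ and the complexity scaling are assumptions consistent with the description of equations (1)-(4):

```python
# Sketch of the frame-level budget of equation (5): a weighted blend of
# a buffer-based budget TB_i and a complexity-based budget T_i.
def frame_bit_budget(bits_left_in_gop: float,
                     frames_left_in_gop: int,
                     complexity_ratio: float,   # scene complexity vs. average
                     tb_i: float,               # buffer-based budget TB_i
                     alpha: float = 0.5) -> float:
    # Equation (1)-style proportional share of the remaining GOP bits,
    # scaled by scene complexity (assumed form).
    t_i = (bits_left_in_gop / frames_left_in_gop) * complexity_ratio
    # Equation (5): T_i_final = alpha*TB_i + (1 - alpha)*T_i
    return alpha * tb_i + (1 - alpha) * t_i

budget = frame_bit_budget(bits_left_in_gop=400_000, frames_left_in_gop=100,
                          complexity_ratio=1.3, tb_i=3_500)
print(f"target bits for this frame: {budget:.0f}")
```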
- target bit allocation for each macroblock can be computed as a fraction of bit budget for the frame that includes the macroblock.
- the bit allocation for the macroblock can be dependent on the macroblock complexity.
- Analytics can be used to calculate the macroblock complexity using the segmentation and/or classification results, in addition to the Human Visual System (HVS) factors.
- $c_j = (1 - g_j) \cdot \dfrac{1}{S_j}$ (9)
- $d_j = (1 - g_j) \cdot \dfrac{1}{M_j}$ (10)
- $e_j = (1 - g_j) \cdot \dfrac{1}{I_j}$ (11)
- the bit budget for each macroblock, $r_j$, can be derived as a fraction of the frame bit budget, $T_{i\_final}$, based on the macroblock complexity, $x_j$, as shown in equation (7), where $p$ is the total number of macroblocks in each frame: $r_j = T_{i\_final} \cdot x_j / \sum_{k=1}^{p} x_k$ (7)
- $x_j$ is expressed as a weighted linear combination of features $f_{k,j}$, where $f_{k,j} \in \{c_j, d_j, e_j, h_j\}$ and $\omega_{k,j}$ are the weights associated with each of these features: $x_j = \sum_k \omega_{k,j} f_{k,j}$ (8)
- the features $c_j$, $d_j$ and $e_j$ capture HVS factors such as brightness, spatial activity and temporal activity, denoted as the normalized quantities $I_j$, $S_j$ and $M_j$ respectively (equations (9)-(11)).
- through $h_j$, analytics are incorporated into the calculation of the macroblock complexity.
- $g_j$, in equation (12), indicates whether the macroblock belongs to a foreground or background object and is derived based on the segmentation module shown in FIG. 9.
- the pixel boundary of the foreground object is divided by the size of the macroblock and rounded down to the nearest integer for mapping to a macroblock unit.
- Each macroblock has a width and height of 16 pixels.
- L is the number of object classes. For example, a larger weight can be assigned to a PERSON object and a smaller weight can be assigned to a CAR/VEHICLE object and/or other objects in an application used to detect target persons in the scene. This directly incorporates weights based on object class into the rate control technique used for compression.
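A sketch combining equations (7)-(12) follows; the exact form of $h_j$ in equation (12) and the feature weights are illustrative assumptions:

```python
import numpy as np

# Sketch of the macroblock complexity of equations (8)-(12): HVS terms
# c_j, d_j, e_j apply only to background MBs (the (1 - g_j) factor),
# while h_j injects the analytics class weight for foreground MBs.
# The exact form of h_j in equation (12) is assumed here.
CLASS_WEIGHTS = {"person": 1.0, "vehicle": 0.6, "other": 0.3}

def mb_complexity(g_j, obj_class, S_j, M_j, I_j, omega=(1.0, 1.0, 1.0, 4.0)):
    c_j = (1 - g_j) * (1.0 / S_j)            # equation (9): spatial activity
    d_j = (1 - g_j) * (1.0 / M_j)            # equation (10): temporal activity
    e_j = (1 - g_j) * (1.0 / I_j)            # equation (11): brightness
    h_j = g_j * CLASS_WEIGHTS.get(obj_class, 0.0)   # assumed eq. (12) form
    return float(np.dot(omega, [c_j, d_j, e_j, h_j]))  # equation (8)

def mb_bit_budgets(t_i_final, complexities):
    x = np.asarray(complexities)
    return t_i_final * x / x.sum()           # equation (7): proportional share

x_fg = mb_complexity(g_j=1, obj_class="person", S_j=2.0, M_j=1.5, I_j=1.2)
x_bg = mb_complexity(g_j=0, obj_class=None,     S_j=2.0, M_j=1.5, I_j=1.2)
print(mb_bit_budgets(5000, [x_fg, x_bg, x_bg]))  # foreground MB gets more bits
```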
- the HVS factors are typically produced from a first pass encoding of the video image using a low complexity encoder as shown in FIG. 9 .
- the HVS factors can be derived from analytics and incorporated during encoding an image during the first single pass through the image (e.g., a single scan of the pixel values). These factors can be obtained from the gradient information.
- the gradient information can be obtained from a degree of slope parameter and a direction of slope parameter as described in U.S. Pat. No. 6,940,998, entitled “System for Automated Screening of Security Cameras,” which is hereby incorporated herein by reference in its entirety.
- the Rate-Quantization (RQ) model defines the relationship between the bitrate, QP, and complexity.
- the mathematical formulation relates the bit budget for macroblock $j$, $r_j$, and the macroblock complexity, $x_j$ (derived from equation (7) and equation (8), respectively), to the QP through the model parameters $K_1$ and $K_2$.
- a combination of segmentation and classification results from analytics, as well as HVS factors, can be used to compute macroblock complexity as shown in equations (8)-(12).
- a QP value for the macroblock can be derived and used in the quantization module for video compression.
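The sketch below derives a QP from an RQ model with parameters $K_1$ and $K_2$; the quadratic form used here is the widely known two-parameter rate model and is an assumption, since the patent's exact formula is not reproduced above:

```python
# Sketch of deriving QP from a Rate-Quantization model. The quadratic
# form r = x*(K1/QP + K2/QP^2) is assumed; it is solved here by simply
# scanning the valid QP range for the best match to the bit budget.
def qp_from_rq_model(r_j: float, x_j: float,
                     k1: float = 50.0, k2: float = 300.0,
                     qp_range=range(1, 52)) -> int:
    def predicted_bits(qp):                   # assumed quadratic RQ model
        return x_j * (k1 / qp + k2 / qp**2)
    # Pick the QP whose predicted bit count best matches the budget r_j.
    return min(qp_range, key=lambda qp: abs(predicted_bits(qp) - r_j))

print(qp_from_rq_model(r_j=400.0, x_j=30.0))  # larger budget -> lower QP
print(qp_from_rq_model(r_j=100.0, x_j=30.0))  # tighter budget -> higher QP
```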
- the GOP is a group of pictures within an MPEG-coded video stream.
- a GOP structure determines the arrangement of I, P and/or B-frames.
- An I-frame contains I macroblocks (MBs), each MB being intra-coded, possibly based on prediction from previously coded blocks within the same frame.
- An I-frame can be inserted whenever a scene change occurs. In scene-adaptive video coding for surveillance applications, for example, this can happen whenever an object enters or leaves the scene or when the scene changes.
- a scheme can be implemented to adaptively change the GOP size and structure depending on the content and/or scene (i.e., content/scene-adaptive).
- a maximum GOP size can also be specified such that an I-frame can be inserted when the period of inactivity exceeds a certain predetermined duration or a predetermined criterion (e.g., number of frames of inactivity).
- a minimum GOP size can be specified such that no two I-frames are less than a certain duration apart.
- instead of having a fixed GOP structure and size (e.g., IPPPIPPP . . . ), the structure can adaptively change based on recognition of activity in the scene and on the number and class of objects in the scene. This allows placement of P-frames up to the moment that an object enters the scene, an object of a specific class enters the scene, or a significant scene change is detected by the analytics.
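Content-adaptive I-frame placement with minimum and maximum GOP sizes could be sketched as follows; the decision logic details are assumptions:

```python
# Sketch of content-adaptive I-frame placement: insert an I-frame when
# analytics report an object entering/leaving or a scene change, subject
# to minimum and maximum GOP sizes (e.g., the 60/250 and 1000/5000
# settings used in the experiments below).
def next_frame_type(frames_since_i: int, analytics_event: bool,
                    min_gop: int = 250, max_gop: int = 5000) -> str:
    if frames_since_i >= max_gop:
        return "I"                        # cap on the inactivity period
    if analytics_event and frames_since_i >= min_gop:
        return "I"                        # event-driven I-frame
    return "P"                            # otherwise keep predicting

print(next_frame_type(300, analytics_event=True))    # -> I
print(next_frame_type(100, analytics_event=True))    # -> P (min GOP not met)
print(next_frame_type(6000, analytics_event=False))  # -> I (max GOP reached)
```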
- Tables 5-7 describe results from experiments conducted on the surveillance scenes shown in FIG. 13 , with low and high activity, using a minimum GOP size of 60 or 250 and a maximum GOP size of 60, 250, 1000 or 5000.
- with a minimum GOP size of 250 and a maximum GOP size of 1000, the bitrate savings varies from 10% to 19% depending on the scene content.
- with a minimum GOP size of 250 and a maximum GOP size of 5000, the bitrate savings varies from 11% to 24% due to the larger maximum GOP size.
- with a minimum GOP size of 60, the performance gain is 39% to 52% using a maximum GOP size of 1000, and 40% to 55% using a maximum GOP size of 5000.
- the bitrate savings is higher for a scene with low activity since there are relatively fewer objects entering and leaving the scene. This results in fewer I-frames.
| min GOP size | max GOP size | bitrate (kbps) | bitrate savings (%) |
|---|---|---|---|
| 250 | 250 | 84.76 | |
| 250 | 1000 | 76.68 | 10 |
| 250 | 5000 | 75.4 | 11 |
| 60 | 60 | 132.04 | |
| 60 | 1000 | 80.44 | 39 |
| 60 | 5000 | 79.17 | 40 |
- segmented objects can be assigned to one of L object classes (e.g., person, animal, automobile, etc.) through the classification process, and weights can be assigned to each of the object classes to establish relative priorities among the object classes.
- the GOP size can be adapted or modified based on the class to which the objects are assigned and on the weighted priorities of those classes.
- with Region-based Coding by Varying QP, a background region is coded at a relatively higher QP value than a foreground object of interest (e.g., a target person) in such a way that fewer bits are allocated to the background than the foreground object. Since the number of pixels of the foreground object is typically smaller than the background region, this significantly reduces the total number of bits used to compress the frame without significantly compromising quality.
- with Adaptive I-frame Placement, a video frame is encoded as an I-frame only when an object is detected entering or leaving the scene. Thus, fewer I-frames are necessary when compressing the image. Using fewer I-frames reduces the overall bitrate without degradation in quality.
- Tables 8-10 illustrate results obtained by combining both Region-based Coding by Varying QP and Adaptive I-frame Placement to achieve a greater reduction in bitrate when compressing the scenes shown in FIG. 13 .
- Tables 8-10 are similar to Tables 5-7 but include an additional column indicating QP values and additional rows displaying results of video encoded using different QP values for the foreground objects and background objects. These results are highlighted in the tables.
- the reduction in bitrate is between 32% and 42%, depending on the content of the scene.
- With a minimum GOP size of 60, a maximum GOP size of 5000, and foreground and background QP values of (28, 29), the bitrate savings is between 54% and 67%.
- GOP Structure Adaptive B-frame Placement (Main, Extended and High Profiles)
- a B-frame provides higher compression at the expense of greater visual distortion (i.e., lower visual quality).
- B-frames typically result in noticeably poorer video quality.
- An adaptive B-frame placement algorithm can be used to vary the number of B-frames.
- the placement of B-frames can change from a high-motion scene (e.g. ESPN sports program) to a low-motion scene (e.g., a news program).
- the placement of B-frames can change from a low-motion scene to a high-motion scene.
- Motion information from the tracking module can be used to indicate the level of motion in the scene.
- For low-motion scenes, B-frames can be included in the GOP structure to benefit from greater bit savings while maintaining reasonably good quality, while for high-motion scenes the number of B-frames can be reduced or B-frames can be omitted entirely.
- a P-frame can include intra-coded (I) macroblocks (MBs), predictive-coded (P) MBs, bidirectionally-predictive-coded (B) MBs or skipped MBs.
- I MBs contain full frame information for an MB that is independent of other frames, while P or B MBs represent or are associated with image differences of an MB across frames.
- a skipped MB contains no information about the MB. As such, if an MB is coded in a frame as a skipped MB, the MB in the frame will be identical to the MB in the previous frame. Note that in the H.264 standard, an I MB can be spatially predicted using intra-prediction from previously encoded blocks within the same frame.
- When an object enters or leaves the scene, instead of coding the entire frame as an I-frame, the picture can be coded as a P-frame with the MBs corresponding to the foreground object coded as one or more I MBs.
- MBs can be encoded as I, P or skipped MBs at regions having substantial changes, minor changes or no changes, respectively. The amount of change can be determined using analytics.
- the frame can be encoded as a P-frame instead of an I-frame.
- MBs in the regions of the background with little or no changes can be encoded as P MBs or skipped MBs, while MBs of a foreground object can be encoded as I MBs. This can reduce the overall bitrate while maintaining the segmented object at a higher visual quality than the background.
- motion information from the tracking module can be used to determine if a background MB should be coded as a P MB or a skipped MB.
- the MBs corresponding to a background region having moving foliage (such as wavering trees) can be coded as P MBs while the MBs corresponding to a static background region can be coded as skipped MBs.
- the foreground MBs can be coded as I MBs when a scene change occurs in a frame.
- the foreground MBs can be coded as I MBs when an object is detected entering or leaving the scene.
- whether to use I MBs, P MBs or skipped MBs can be determined using the Mean Absolute Difference (MAD) between pixels in an original image and pixels in a predicted image.
- the MAD can be compared against a threshold to determine if the MB should be an I MB or a P MB.
- A drawback of using such a threshold is high computational complexity due to the need to compute the MAD for every MB.
- the chosen threshold may not guarantee that all MBs of the foreground object will be coded as I MBs.
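- A minimal sketch of the conventional MAD test described above follows; the 16x16 macroblock size matches the standard, but the threshold values are placeholders chosen for illustration only.

```python
import numpy as np

def mad(original_mb: np.ndarray, predicted_mb: np.ndarray) -> float:
    """Mean Absolute Difference between an original and a predicted macroblock."""
    return float(np.mean(np.abs(original_mb.astype(np.int32) -
                                predicted_mb.astype(np.int32))))

def select_mode_by_mad(original_mb, predicted_mb,
                       i_threshold: float = 12.0,
                       skip_threshold: float = 1.0) -> str:
    """Pick I/P/skipped purely from the MAD, as in the conventional approach.

    Every macroblock requires a full MAD computation (the complexity
    drawback noted above), and a fixed threshold may still miss
    foreground macroblocks.
    """
    d = mad(original_mb, predicted_mb)
    if d > i_threshold:
        return "I"        # prediction failed badly: intra-code the MB
    if d < skip_threshold:
        return "skip"     # effectively unchanged: skip the MB
    return "P"            # residual is small enough to predict

# Toy usage with random 16x16 luma blocks.
cur = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
ref = np.clip(cur + np.random.randint(-3, 4, (16, 16)), 0, 255).astype(np.uint8)
print(select_mode_by_mad(cur, ref))
```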
- segmentation and classification output can be used to directly perform the I/P/skipped mode selection. Encoding the MBs corresponding to the static background regions as skipped MBs reduces the overall bitrate without quality degradation.
- the segmented background MBs can be encoded as skipped MBs except where the tracking module identifies significant motion in the MB. Such motion may be due to, for example, foliage or water, which is a real change, but not a classified foreground object.
- Foliage background MBs can be coded as P MBs while fully static background MBs can be coded as skipped MBs.
- the class of foreground object type (e.g., person, animal, automobile, etc.) can be used to determine the encoding mode.
- MBs that are part of the foreground and classified as a Person might be encoded as I MBs while foreground MBs that are classified as Animals may be encoded as P MBs.
- the region/object properties from the analytics modules can be used instead of block-based local statistics.
- the region and/or object properties reflect the semantics of the video better than the block-based local statistics. In such a manner, MAD computation and thresholding can be avoided, resulting in lower computational overhead and higher accuracy.
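- By contrast, the analytics-driven mode selection can be sketched as a direct mapping from segmentation, classification, and tracking outputs to MB modes, with no MAD computation; the function and flag names below are assumptions, and the person/animal split is the illustrative prioritization mentioned above, not a fixed rule.

```python
from enum import Enum

class MBMode(Enum):
    I = "intra"
    P = "predictive"
    SKIP = "skipped"

def select_mode_from_analytics(is_foreground: bool,
                               object_class: str,
                               background_has_motion: bool,
                               scene_event: bool) -> MBMode:
    """Choose a macroblock mode directly from analytics outputs.

    The segmentation mask decides foreground vs. background, the tracking
    module flags residual background motion (e.g., foliage or water), and
    scene events (an object entering or leaving) trigger intra coding of
    foreground macroblocks.
    """
    if is_foreground:
        if scene_event or object_class == "person":
            return MBMode.I      # highest fidelity for priority objects/events
        return MBMode.P          # e.g., animals: predictive coding suffices
    if background_has_motion:
        return MBMode.P          # wavering trees, water: real but unclassified change
    return MBMode.SKIP           # fully static background costs no bits

# Example: a person MB during an entry event vs. a static background MB.
print(select_mode_from_analytics(True, "person", False, True))   # MBMode.I
print(select_mode_from_analytics(False, "", False, False))       # MBMode.SKIP
```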
- Inter frames, whether B-frames or P-frames, are predicted from reference frames.
- In motion estimation, a search area is defined and a motion estimation algorithm is used to find a prediction block that best matches the current block to produce a motion-compensated prediction (MCP) block, which is then transformed, quantized and entropy coded.
- the vertical and horizontal displacements between the prediction and current block are coded as motion vectors (MVs), which can themselves be predictively coded as well.
- the motion estimation search area is typically determined by the MV search range.
- the vertical MV search range is bounded by the different types of profiles and levels in the H.264 standard.
- Most of the computational complexity of a video encoder typically occurs in the motion estimation.
- a large search area can result in high computational complexity while a small range can restrict or reduce the inter-frame prediction accuracy.
- the scene-adaptive video coding includes methods to find an adequate search range with a good trade-off between accuracy and complexity.
- the motion information or data from the tracking module in the video analytics processing pipeline could be used to select an MV range.
- the tracking module provides the motion trajectory for a foreground blob. This trajectory can be used to select the motion vector search range for all the macroblocks corresponding to the foreground blob.
- FIGS. 10A-10B illustrate different approaches to determine a motion vector search range, according to embodiments.
- the approach described in FIG. 10A includes tracking the centroid of a matched pair of blocks across consecutive frames. The centroids of the matched pair of blocks are compared to determine the range.
- the approach described in FIG. 10B includes considering the neighborhood pixels of the centroid. An N ⁇ N window is centered on the centroid and the displacements over the blocks (e.g., macroblocks) in the window are aggregated.
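- The two approaches can be sketched as follows; the margin added to the measured displacement, the window size, and the use of the maximum as the aggregate are illustrative assumptions (the text above says only that the displacements are aggregated).

```python
import numpy as np

def range_from_centroids(centroid_prev, centroid_cur, margin: int = 4) -> int:
    """FIG. 10A style: derive the MV search range from the displacement of
    a blob's centroid across consecutive frames, plus a small margin."""
    dy = abs(centroid_cur[0] - centroid_prev[0])
    dx = abs(centroid_cur[1] - centroid_prev[1])
    return int(max(dy, dx)) + margin

def range_from_neighborhood(mb_displacements: np.ndarray,
                            centroid_mb, n: int = 3, margin: int = 4) -> int:
    """FIG. 10B style: aggregate per-macroblock displacement magnitudes
    inside an N x N window centered on the centroid's macroblock.

    mb_displacements: (rows, cols) array of displacement magnitudes per MB.
    """
    r, c = centroid_mb
    h = n // 2
    window = mb_displacements[max(0, r - h):r + h + 1,
                              max(0, c - h):c + h + 1]
    return int(np.ceil(window.max())) + margin

# Toy usage: a blob moving roughly 6 pixels between frames.
print(range_from_centroids((120, 80), (126, 82)))          # 10
disp = np.random.rand(10, 10) * 6.0
print(range_from_neighborhood(disp, (5, 5)))
```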
- the MV search range can be scaled based on the weighted priorities of the object classes. As described above, most of the computational complexity of a video encoder occurs in the motion estimation. A large search area results in high computational complexity while a small range restricts or reduces the inter-frame prediction accuracy. The size of the search area can be scaled based on the weight associated with the class assigned to an object such that a higher-priority object is associated with a larger search range. Alternatively, a different set of search ranges can be used for objects corresponding to different classes. For instance, cars move more rapidly than people, hence blobs corresponding to cars would have a larger search range.
- the average of the motion information of objects belonging to the same class is first determined.
- the weighted average of the aggregated motion information of different classes is then used to determine a final search range.
- the final search range can be based on the aggregate motion information of objects belonging to the class having the maximum weight.
- the final search range can be based on a dominant MV determined from a MV histogram.
- the search range can be updated on a frame-by-frame basis, over a window size, or over a GOP, for example.
- the search range can be updated less frequently, i.e., over a larger window size, for lower-priority objects or for slow-moving objects. Based on the motion history, when objects exhibit consistent motion patterns, the search range is unlikely to change from frame to frame.
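- The class-weighted aggregation and dominant-MV options above can be sketched as follows; the data structures, weights, and histogram bin count are hypothetical choices for illustration.

```python
import numpy as np

def final_search_range(motion_by_class: dict, weights: dict,
                       mode: str = "weighted") -> int:
    """Combine per-object motion magnitudes into one search range.

    motion_by_class: e.g. {"car": [12.0, 9.5], "person": [3.0]}, one motion
    magnitude per tracked object; weights holds the per-class priorities.
    """
    class_avg = {c: float(np.mean(v)) for c, v in motion_by_class.items() if v}
    if mode == "weighted":
        # Weighted average of the per-class aggregated motion.
        total_w = sum(weights[c] for c in class_avg)
        return int(np.ceil(sum(weights[c] * m for c, m in class_avg.items()) / total_w))
    if mode == "max_weight":
        # Range follows the class carrying the maximum weight.
        top = max(class_avg, key=lambda c: weights[c])
        return int(np.ceil(class_avg[top]))
    raise ValueError(mode)

def dominant_mv_range(mvs: np.ndarray, bins: int = 16) -> int:
    """Alternative: pick the dominant MV magnitude from an MV histogram."""
    mags = np.hypot(mvs[:, 0], mvs[:, 1])
    hist, edges = np.histogram(mags, bins=bins)
    k = int(np.argmax(hist))               # most populated magnitude bin
    return int(np.ceil(edges[k + 1]))      # upper edge of that bin

print(final_search_range({"car": [12.0, 9.5], "person": [3.0]},
                         {"car": 0.7, "person": 0.3}))
print(dominant_mv_range(np.random.randn(100, 2) * 4))
```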
- a better estimate of an appropriate search range can be obtained by considering the blocks of pixels (e.g., macroblocks) in the lower half of a person, where there is more temporal activity (e.g., moving legs).
- Objects of higher priority can be coded with higher fidelity than others by assigning a QP value based on the weights of the object classes.
- the weights of these object classes, in addition to the HVS factors, can be incorporated into the RC process to modify the QP values as described above.
- FIGS. 11A-11E illustrate analytics-modulated coding of video images, according to other embodiments. Table 11, below, describes preliminary results associated with FIGS. 11A-11E .
TABLE 11
 | FIG. 11A | FIG. 11B | FIG. 11C | FIG. 11D | FIG. 11E
I-frame QP (fg-bg) | fg28-bg28 | fg45-bg45 | fg28-bg45 | fg45-bg28 | fg28-bg40
bits | 22584 | 4896 | 9512 | 18008 | 11224
SNR(Y) | 37.43 | 25.97 | 26.93 | 32.12 | 30.11
SNR(U) | 41.28 | 36.52 | 37.48 | 39.35 | 38.23
SNR(V) | 42.82 | 36.77 | 38.15 | 39.61 | 39.22
- FIGS. 12A-12C illustrate analytics-modulated coding of video images, according to other embodiments. Table 12, below, describes preliminary results associated with FIGS. 12A-12C .
TABLE 12
 | FIG. 12A | FIG. 12B | FIG. 12C
QP (fg-bg) | fg28-bg28 | fg28-bg40 | fg28-bg35
I-frame: bits | 109488 | 32896 | 53800
I-frame: SNR(Y) | 36.81 | 28.18 | 31.3
I-frame: SNR(U) | 40.34 | 36.33 | 37.78
I-frame: SNR(V) | 38.86 | 34.51 | 35.75
1st P-frame: bits | 2448 | 2272 | 2344
1st P-frame: SNR(Y) | 35.41 | 28.1 | 31
1st P-frame: SNR(U) | 39.92 | 36.24 | 37.6
1st P-frame: SNR(V) | 38.25 | 34.36 | 35.54
2nd P-frame: bits | 2584 | 2152 | 2256
2nd P-frame: SNR(Y) | 35.76 | 28.1 | 31.1
2nd P-frame: SNR(U) | 40.14 | 36.34 | 37.75
2nd P-frame: SNR(V) | 38.38 | 34.44 | 35.58
- a method includes assigning a class from multiple classes to a foreground object from a video frame.
- the foreground object has multiple pixels.
- Each class from among the multiple classes has associated therewith a quantization parameter value.
- Multiple discrete cosine transform (DCT) coefficients are produced for pixels from the multiple pixels of the video frame associated with the foreground object.
- the DCT coefficients associated with the foreground object are quantized based on the quantization parameter value associated with the class assigned to the foreground object.
- the method further includes coding the quantized DCT coefficients associated with the foreground object.
- the foreground object can be a first foreground object, the class assigned to the foreground object can be a first class, and the quantization parameter value associated with the first class can be a first quantization parameter value.
- a second class from among multiple classes can be assigned to a second foreground object from the video frame, the second class being different from the first class.
- Multiple DCT coefficients can be produced for pixels from the multiple pixels of the video frame associated with the second foreground object.
- the multiple DCT coefficients associated with the second foreground object can be quantized based on the quantization parameter value associated with the second class assigned to the second foreground object.
- the method further includes coding the quantized DCT coefficients associated with the second foreground object.
- the video frame can include a background portion. Multiple DCT coefficients can be produced for pixels from the multiple pixels of the video frame associated with the background portion of the video frame.
- the multiple DCT coefficients associated with the background portion of the video frame can be quantized based on a quantization parameter value greater than the quantization parameter value associated with each class from among the multiple classes.
- the method further includes coding the quantized DCT coefficients associated with the background portion of the video frame.
- the class assigned to the foreground object can be a first class.
- the multiple classes can include a second class that is different from the first class.
- the first class can have an associated coding priority and an associated quantization parameter value.
- the second class can have an associated coding priority and an associated quantization parameter value.
- the quantization parameter value associated with the first class can be less than the quantization parameter value associated with the second class when the coding priority associated with the first class is greater than the coding priority associated with the second class.
- the multiple pixels of the video frame can be organized into multiple blocks of pixels. Multiple DCT coefficients can be produced for each block of pixels from the multiple blocks of pixels of the video frame associated with the foreground object.
- the multiple DCT coefficients of each block of pixels associated with the foreground object can be quantized based on the quantization parameter value associated with the class assigned to the foreground object.
- the method further includes coding the quantized DCT coefficients associated with the foreground object.
- the foreground object includes at least one block of pixels from multiple blocks of pixels of the video frame.
- the at least one block of pixels associated with the foreground object can define a contour associated with the foreground object.
- In another embodiment, a method includes assigning a class from among multiple classes to a foreground object from a video frame having multiple pixels.
- a quantization parameter value associated with the foreground object is derived based on at least one of a target bit rate, the number and size of objects in the scene and a weight associated with the class assigned to the foreground object, wherein the weight is based on a coding priority associated with the class assigned to the foreground object.
- the adjustment can include scaling the quantization parameter value associated with the foreground object based on at least one of the target bit rate, the number and size of objects in the scene and the weight associated with the class assigned to the foreground object.
- Multiple DCT coefficients are produced for pixels from the plurality of pixels of the video frame associated with the foreground object.
- the DCT coefficients associated with the foreground object are quantized based on the computed quantization parameter value.
- the method further includes coding the quantized DCT coefficients associated with the foreground object.
- the method can include coding the video frame via two-pass encoding.
- a first pass operation can be performed using a low-complexity encoder to produce statistics (e.g., brightness, spatial and temporal frequencies) in order to take into account the characteristics of the Human Visual System (HVS).
- the quantization parameter value associated with the foreground object can be derived based on the target bit rate, the number and size of objects in the scene and the weight associated with the class assigned to the foreground object.
- the method can include generating gradient information associated with the video frame via a single pass through the video frame and deriving a Human Visual System (HVS) factor associated with the video frame using the gradient information.
- the quantization parameter value associated with the foreground object can be computed and/or adjusted based on at least one of the target bit rate, the weight associated with the class assigned to the foreground object, and the Human Visual System factor.
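- A sketch of this single-pass derivation follows; the gradient statistic, the normalization constant, and the linear QP modulation span are illustrative assumptions rather than values taken from the patent.

```python
import numpy as np

def hvs_factor_from_gradients(frame: np.ndarray) -> float:
    """Single-pass gradient statistic used as a stand-in HVS factor.

    A high mean gradient magnitude indicates busy texture, where
    quantization noise is less visible (spatial masking), so a larger QP
    is tolerable. The normalization constant is an illustrative choice.
    """
    gy, gx = np.gradient(frame.astype(np.float32))
    return float(np.mean(np.hypot(gx, gy)) / 32.0)

def adjust_qp(base_qp: int, class_weight: float, hvs: float) -> int:
    """Modulate a base QP by class weight (priority) and the HVS factor.

    Higher-priority classes (weight near 1) pull the QP down for more
    fidelity; strong spatial masking pushes it up. The 6-unit span and the
    linear mixing are assumptions for illustration only.
    """
    qp = base_qp - 6.0 * class_weight + 6.0 * min(hvs, 1.0)
    return int(np.clip(round(qp), 0, 51))   # H.264 QP range is 0..51

frame = np.random.randint(0, 256, (64, 64))
print(adjust_qp(32, class_weight=0.9, hvs=hvs_factor_from_gradients(frame)))
```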
- the foreground object can be a first foreground object, the class assigned to the foreground object can be a first class, the weight associated with the first class can be a first weight, and the quantization parameter value associated with the first foreground object can be a first quantization parameter value.
- a second class from among the multiple classes can be assigned to a second foreground object from the video frame. The second class can be different from the first class.
- a second quantization parameter value associated with the second foreground object can be derived based on at least one of a target bit rate, the number and size of objects in the scene and a second weight associated with the second class assigned to the second foreground object. The second quantization parameter value can be different from the first quantization parameter value and the second weight can be different from the first weight.
- Multiple DCT coefficients can be produced for pixels from the multiple pixels of the video frame associated with the second foreground object.
- the DCT coefficients associated with the second foreground object can be quantized based on the adjusted second quantization parameter value.
- the method further includes coding the quantized DCT coefficients associated with the second foreground object.
- In yet another embodiment, a method includes assigning a class from multiple classes to a foreground object from a first video frame having multiple blocks of pixels.
- the foreground object includes a block of pixels from the multiple blocks of pixels of the first video frame.
- Each class from among the multiple classes has associated therewith a coding priority.
- the method further includes identifying, in a second video frame with multiple blocks of pixels, a prediction block of pixels associated with the block of pixels in the foreground object. The identification is based on a prediction search window that has a search area associated with the coding priority of the class assigned to the foreground object.
- the method also includes coding the first video frame based on the identified prediction block of pixels.
- the search area of the prediction search window can be updated according to tracked motion information associated with the foreground object over multiple video frames including the first video frame.
- the search area of the prediction search window can be adjusted based on moving portions of the foreground object.
- the class assigned to the foreground object can be a first class.
- the multiple classes include a second class different from the first class.
- the first class can have an associated coding priority and an associated prediction search window.
- the second class can have an associated coding priority and an associated prediction search window.
- a search area of the prediction search window associated with the first class can be smaller than a search area of the prediction search window associated with the second class when the coding priority associated with the first class is lower than the coding priority associated with the second class.
- In another embodiment, a method includes tracking motion information associated with a foreground object in a first video frame having multiple blocks of pixels.
- the foreground object includes a block of pixels from the multiple blocks of pixels of the first video frame.
- the method further includes identifying, in a second video frame having multiple blocks of pixels, a prediction block of pixels associated with the block of pixels in the foreground object. The identifying can be based on a prediction search window having a search area associated with the tracked motion information associated with the foreground object.
- the method also includes coding the first video frame based on the identified prediction block of pixels.
- a class from multiple classes can be assigned to the foreground object.
- Each class from among the multiple classes has associated therewith a coding priority.
- the search area of the prediction search window can be updated according to the coding priority associated with the class assigned to the foreground object.
- In yet another embodiment, a method includes assigning a class from multiple classes to a foreground object from a picture in a group of pictures (GOP). Each class from among the multiple classes has associated therewith a coding priority. The method further includes tracking motion information associated with the foreground object over multiple pictures. The method also includes inserting an intra-frame picture in the GOP based on at least one of the tracked motion information associated with the foreground object and the coding priority associated with the class assigned to the foreground object.
- a structure associated with the GOP can be modified based on segmentation results associated with the foreground object and with the coding priority associated with the class assigned to the foreground object.
- a number of pictures associated with the GOP can be modified based on segmentation results and tracked motion information associated with the foreground object as well as based on the coding priority associated with the class assigned to the foreground object.
- In another embodiment, a method includes assigning a class from multiple classes to a foreground object from a picture in a GOP. Each class from among the multiple classes has associated therewith a coding priority. The method further includes tracking motion information associated with the foreground object over multiple pictures. The method also includes selectively replacing a block of pixels in the foreground object with an intra-coded block of pixels based on at least one of the tracked motion information associated with the foreground object and the coding priority associated with the class assigned to the foreground object.
- In another embodiment, a method includes segmenting a foreground object from a background of a picture in a group of pictures (GOP).
- Motion information associated with a block of pixels of the foreground object, a first block of pixels of the background, and a second block of pixels of the background is tracked.
- the block of pixels of the foreground object is encoded as an intra-coded block of pixels based on the motion information associated with the block of pixels of the foreground object.
- the first block of pixels of the background is encoded as a predictive-coded block of pixels based on the motion information associated with the first block of pixels of the background.
- the second block of pixels of the background is encoded as a skipped block of pixels based on the motion information associated with the second block of pixels of the background.
- the tracking of motion information can include detecting motion in the first block of pixels of the background and detecting an absence of motion in the second block of pixels of the background.
- the scene-adaptive video encoding can use a subset of the intermediate outputs produced by the video analytics processing pipeline.
- Some embodiments include a processor and a related processor-readable medium having instructions or computer code thereon for performing various processor-implemented operations.
- processors can be implemented as hardware modules such as embedded microprocessors, microprocessors as part of a computer system, Application-Specific Integrated Circuits (“ASICs”), and Programmable Logic Devices (“PLDs”).
- Such processors can also be implemented as one or more software modules in programming languages such as Java, C++, C, assembly, a hardware description language, or any other suitable programming language.
- a processor includes media and computer code (also can be referred to as code) specially designed and constructed for the specific purpose or purposes.
- processor-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (“CD/DVDs”), Compact Disc-Read Only Memories (“CD-ROMs”), and holographic devices; magneto-optical storage media such as optical disks, and read-only memory (“ROM”) and random-access memory (“RAM”) devices.
- Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as produced by a compiler, and files containing higher-level instructions that are executed by a computer using an interpreter.
- an embodiment of the invention can be implemented using Java, C++, or another object-oriented programming language and development tools.
- Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
Description
- This application is a continuation of U.S. patent application Ser. No. 15/843,430, filed Dec. 15, 2017, entitled “Analytics-Modulated Coding of Surveillance Video,” which is a divisional application of U.S. patent application Ser. No. 14/966,083, filed Dec. 11, 2015, entitled “Analytics-Modulated Coding of Surveillance Video,” which is a continuation of U.S. patent application Ser. No. 12/620,232, filed Nov. 17, 2009, now U.S. Pat. No. 9,215,467, entitled “Analytics-Modulated Coding of Surveillance Video,” which claims priority to, and the benefit of, U.S. Provisional Patent Application Ser. No. 61/115,427, filed on Nov. 17, 2008, entitled “Analytics-Modulated Coding of Surveillance Video,” the disclosures of each of which are incorporated herein by reference in their entireties.
- The systems and methods disclosed relate generally to video processing and more particularly to adaptively compressing video based on video analytics.
- Surveillance technology has been increasingly used to monitor people, places and activities. For example, high-quality surveillance video is being used to better monitor events and/or to reduce visually distracting artifacts that may interfere with human recognition. As surveillance video data is retained and archived for longer periods of time, large amounts of data storage space are typically needed. In addition, more innovative applications are emerging in which the streaming of video to wireless and mobile devices is used over evermore bandwidth-constrained networks. Such uses are demanding not only new surveillance solutions, but also new or enhanced video compression techniques.
- To address the above needs for enhanced compression, it is desirable to have a technique of coding objects in the surveillance scene so that a region-of-interest (ROI) can be compressed at higher quality relative to other regions that are visually less-important such as the scene background, for example. While such techniques have been proposed, they require the use of custom encoders and decoders. The widespread use of video makes the deployment of such devices complicated and expensive; a more desirable solution would be one that permits compressed video streams to be decoded by industry-standard decoders without requiring special plug-ins or customization. It is furthermore desirable to have an encoder that produces bit streams that are compliant with the MPEG-4 or H.264 compression standards. Within these standards, it is also desirable to selectively allocate bits to portions of the scene that are deemed to be important; scene analysis using video analytics (also called “video content analysis”) can be a powerful tool for performing this function.
- In one or more embodiments, a method and apparatus for encoding surveillance video where one or more regions of interest are identified and the encoding parameter values associated with those regions are specified in accordance with intermediate outputs of a video analytics process. Such analytics-modulated video compression allows the coding process to adapt dynamically based on the content of the surveillance images. In this manner, the fidelity of the region of interest (ROI) is increased relative to that of a background region such that the coding efficiency is improved, including instances when no target objects appear in the scene. Better compression results can be achieved by assigning different coding priority levels to different types of detected objects. In addition to segmentation, classification and tracking modules can be used as well. Because shape information need not be coded, fewer computational resources and/or fewer bits are necessary. The analytics-modulated video compression approach is not limited to specific profiles, does not require a new shape-based coding profile, and produces a compressed video stream that is compliant with multiple standards. In contrast to other approaches where varying the frame rate and frame size (i.e., temporal and spatial resolution) may result in noticeable discontinuity in perceptual quality, the analytics-modulated video compression approach produces smooth, high-quality video at a low bit rate by adjusting encoding parameters at a finer granularity.
-
FIG. 1 is a system block diagram of an MPEG encoder architecture. -
FIG. 2 is a diagram illustrating motion-compensated prediction in a P-frame. -
FIG. 3 is a diagram illustrating motion-compensated bidirectional prediction in a B-frame. -
FIG. 4 is a block diagram illustrating a video analytics processing pipeline, according to an embodiment. -
FIG. 5 illustrates the use of difference image thresholding to obtain foreground pixels, according to an embodiment. -
FIG. 6 illustrates a classifier discriminating between a person and a car, according to an embodiment. -
FIG. 7 is a system block diagram of video analytics and coding modules used in scene-adaptive video coding, according to an embodiment. -
FIG. 8 is a system block diagram of region-based coding by varying quantization parameter (QP), according to an embodiment. -
FIG. 9 is a system block diagram of a region-based coding incorporating rate control (RC), according to an embodiment. -
FIGS. 10A-10B illustrate different approaches to determining a motion vector search range, according to embodiments. -
FIGS. 11A-11E illustrate analytics-modulated coding of video images, according to other embodiments. -
FIGS. 12A-12C illustrate analytics-modulated coding of video images, according to other embodiments. -
FIG. 13 shows various scenes used to illustrate analytics-modulated coding, according to other embodiments.
- Novel techniques can be used for coding objects in a surveillance scene so that a region-of-interest (ROI) can be compressed at higher quality relative to other regions that are visually less-important such as the scene background, for example. A scene without objects can be encoded at a lower bit rate (e.g., higher compression) than a scene with detected objects. A scene with different types of objects as well as regions with different brightness, spatial or temporal activities can have the objects and/or regions encoded at different levels of fidelity. It is desirable that these techniques allow for scaling of various encoding parameter values so as to use fewer bits when appropriate to produce significantly greater compression of the surveillance scene without visual artifacts.
- MPEG Video Compression
- The Moving Picture Expert Group (MPEG) is a working group of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) that develops standards for coded representation of digital audio and video. A benefit of compression is in data rate reduction that lowers transmission and storage cost, and where a fixed transmission capacity is available, results in better video quality. Video signals typically contain a significant amount of redundancy; video samples are typically similar to each other so that one sample can be predicted fairly accurately from another, thereby exploiting the correlation among the samples to reduce the video bit rate. The MPEG standard(s) achieve high compression rates by removing spatial and temporal redundancy.
-
FIG. 1 is a system block diagram of an MPEG encoder architecture that is configured to compress video signals. The MPEG encoder includes multiple modules. Each module in the MPEG encoder can be software-based (e.g., set of instructions executable at a processor, software code) and/or hardware-based (e.g., circuit system, processor, application-specific integrated circuit (ASIC), field programmable gate array (FPGA)). - To remove spatial redundancy, the MPEG encoder divides a video frame into smaller blocks of pixels that are then operated by a Discrete Cosine Transform (DCT) operation. The DCT operation decorrelates the pixel elements in the spatial domain and converts them to independent frequency domain coefficients. The process is localized, i.e., the encoder samples an 8×8 spatial window to compute the 64 transform coefficients. The DCT coefficients are energy concentrated, i.e., most of the signal energy is concentrated in a few low frequency coefficients such that a few of the coefficients contain most of the information in the frame. For a particular macroblock (e.g., a block of pixels), the DC coefficient that appears as the top left coefficient contains the average energy of the entire signal in that macroblock, while the remaining AC coefficients contain higher frequency information of the macroblock. The DCT coefficients are then adaptively quantized. A quantization operation involves mapping an input signal with a range of values to a reduced range of output values. The quantization operation is generally regarded as the lossy part of video compression. The amount of compression at this stage is typically controlled by a quantization parameter (QP). A high QP value produces fewer bits used (i.e., greater compression) at the expense of reduced image or scene quality. After quantization, most of the high frequency coefficients (e.g., AC coefficients) are reduced to zeros. The quantized DCT coefficients are subjected to run-length coding to generate (run, level) pairs that indicate the number of zero coefficients and the amplitude of the following non-zero coefficient. These (run, level) pairs are then variable-length coded. Variable length coding (VLC) is typically used to further compress the representation of the signal by assigning shorter code words to more frequently occurring symbols (pairs) and longer code words to those that appear less frequently in the sequence.
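- The quantization and run-length stages just described can be sketched as follows; the zigzag scan and coarse quantization are standard ingredients of this pipeline, but the helper names and the example step size are illustrative.

```python
import numpy as np

def zigzag_indices(n: int = 8):
    """Generate (row, col) pairs in zigzag scan order for an n x n block."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def run_level_pairs(block: np.ndarray, qp_step: float):
    """Quantize an 8x8 coefficient block and emit (run, level) pairs:
    the number of zeros preceding each non-zero quantized coefficient."""
    q = np.round(block / qp_step).astype(int)
    pairs, run = [], 0
    for r, c in zigzag_indices(q.shape[0]):
        if q[r, c] == 0:
            run += 1                 # count zeros along the zigzag scan
        else:
            pairs.append((run, int(q[r, c])))
            run = 0
    return pairs

# With a coarse step, most high-frequency coefficients quantize to zero,
# leaving only a few (run, level) pairs to entropy-code.
block = np.outer(np.linspace(200, 0, 8), np.linspace(1, 0, 8))
print(run_level_pairs(block, qp_step=16.0))
```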
- To take advantage of temporal correlation, instead of a full frame, it is desirable to encode the differences that exist between images. Intra-coded (I) frames, also known as I-frames, contain full frame information that is independent of other frames, while inter-coded frames, often referred to as predictive-coded (P) or bidirectionally-predictive-coded (B) frames, represent or are associated with image differences.
-
FIGS. 2-3 are each a diagram illustrating motion-compensated prediction in a P-frame and a B-frame respectively. FIG. 2 shows a P-frame being predicted from a previously encoded I or P-frame (reference frame). FIG. 3 shows a B-frame being predicted from both a previous reference (I or P) frame, and a future reference (I or P) frame in both backward and forward directions, respectively. The predictive coding process of inter-coded frames typically involves producing or generating motion vectors. Motion estimation involves searching for a macroblock (e.g., a 16×16 block of pixels) in the reference frame that best matches the current block in the current frame. The residual energy that reflects the difference between the blocks is then quantized and entropy-coded. The displacement between the two blocks is represented by a motion vector (MV). The difference between this MV and that of a predicted block is then coded and transmitted as part of the compressed video stream.
- Video Compression Standards
- MPEG-1 and MPEG-2 are Emmy Award winning standards that made interactive video on CD-ROM and Digital Television possible. The MPEG-1 standard was originally used for digital storage media such as video compact discs (CD) and supports interactivity such as fast forward, fast reverse and random access into stored bitstreams. The MPEG-2 standard, on the other hand, is the format typically used for DVD and HDTV and for broadcast applications. The MPEG-2 standard includes multiplexing schemes for carrying multiple programs in a single stream, as well as mechanisms that offer robustness when delivering compressed video and audio over error prone channels such as coaxial cable television networks and satellite transponders, for example. The MPEG-4 standard was originally developed for low bit-rate video communications devices and provides higher compression than its predecessors. The MPEG-4 standard later evolved to include means for coding arbitrarily shaped natural and synthetic objects characterized by shape, texture and motion, in addition to frame-based video. The standard also enables interactivity, managing and synchronizing of multiple objects in a multimedia presentation.
- One of the latest compression standards is H.264, which is jointly developed by the International Telecommunications Union (ITU) Video Coding Expert Group (VCEG) and ISO/IEC MPEG to address increasing needs for higher compression. Built on the concepts of earlier standards such as MPEG-2 and MPEG-4 Visual, the H.264 standard offers significant improvements in video quality and is currently the standard of choice for the video format for Blu-Ray disc, HDTV services, and mobile applications, for example. The H.264 standard is capable of delivering the same high-quality video with savings of between 25% and 50% on bandwidth and storage requirements compared to its predecessors. Some of the enhanced encoding features of the H.264 standard include techniques for reducing artifacts that may appear around the boundary of the macroblocks (i.e., reduce “blockiness”), adaptive decomposition of the block into various smaller block sizes for regions with finer spatial details, sampling at less than one integer pixel for higher accuracy, use of integer transform, and improved VLC techniques that may use a fractional number (instead of a series of bits) to represent the data symbol. The VLC techniques are typically based on context information (i.e., prior knowledge of how the previous pixels or symbols were encoded).
- Video Analytics
- Video analytics, also known as Video Content Analysis (VCA) or intelligent video, refers to the extraction of meaningful and relevant information from digital video. Video analytics builds upon research in computer vision, pattern analysis and machine intelligence. For example, video analytics uses computer vision algorithms that allow a system to perceive (e.g., “see”) information associated with the video, and then uses machine intelligence to interpret, learn, and/or draw inferences from the information perceived. One aspect of video analytics is scene understanding, that is, understand the context around an object in the video. Other aspects of video analytics include the detection of motion and tracking an object through the scene. For example, smart cameras that include or provide video analytics can be used to detect the presence of people and detect suspicious activities such as loitering or motion into an unauthorized area.
-
FIG. 4 is a block diagram illustrating a video analytics processing pipeline, according to an embodiment. The video analytics processing pipeline consists of a chain of processing blocks or modules including segmentation, classification, tracking, and activity recognition. Each module can be software-based, or software-based and hardware-based. It is desirable for the video analytics processing pipeline to detect changes that occur over successive frames of video, qualify these changes in each frame, correlate qualified changes over multiple frames, and interpret these correlated changes. - The segmentation module is configured to identify foreground blobs (i.e., associated pixel clusters) using one of multiple segmentation techniques. A segmentation technique can use a background subtraction operation to subtract a current frame from a background model. The background model is initialized and then updated over time, and is used by the background subtraction operation to detect changes and identify foreground pixels. In one embodiment, the background model can be constructed using a first frame or the mean image over N frames. In one embodiment, a terrain map can be used to separate the foreground from the background in a frame. An example of using a terrain map is described in U.S. Pat. No. 6,940,998, entitled “System for Automated Screening of Security Cameras,” which is hereby incorporated herein by reference in its entirety. To produce an accurate background model, it is desirable to account for changes in illumination and/or changes that result from foreground blobs becoming part of the background. The background model adapts to these changes and continues to update the background.
-
FIG. 5 illustrates the use of difference image thresholding to obtain foreground pixels, according to an embodiment. A low threshold value can allow smaller changes to be qualified as foreground pixels, resulting in clutter because of sensor noise, moving foliage, rain or snow, illumination changes, shadows, glare, and reflections, for example. Simple motion detection does not adequately remove clutter and will cause false detections. A high threshold value can result in holes and gaps that can be filled using a morphological filter. The threshold value and the frequency at which the background is updated can impact the results of the segmentation technique. Other embodiments may use adaptive thresholding or gain control, and can be configured such that, for example, the gain is controlled by area of the image. An example of area-based gain control is described in U.S. Pat. No. 7,218,756, entitled “Video Analysis Using Segmentation Gain by Area,” which is hereby incorporated herein by reference in its entirety. - During segmentation, each connected blob is uniquely labeled to produce foreground blobs. Blob labeling can be done by recursively visiting all foreground neighbors of a foreground pixel and labeling them until no unvisited neighbor is available. Such segmentation yields fine pixel-level separation between foreground and background as opposed to techniques that use macro-block level motion estimation for this purpose.
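- A minimal sketch of this background-subtraction segmentation follows, assuming a running-mean background model; the threshold and update-rate values are illustrative.

```python
import numpy as np

def segment_foreground(frame: np.ndarray, background: np.ndarray,
                       threshold: float = 25.0, alpha: float = 0.02):
    """Threshold the difference image against a background model, then
    fold the current frame slowly back into the model so the background
    adapts to illumination changes and to objects that come to rest.

    Returns (foreground_mask, updated_background).
    """
    diff = np.abs(frame.astype(np.float32) - background)
    mask = diff > threshold                        # qualified foreground pixels
    updated = (1.0 - alpha) * background + alpha * frame
    return mask, updated

# Initialize the model from the first frame, as described above.
frames = [np.random.randint(0, 256, (120, 160)).astype(np.float32)
          for _ in range(3)]
bg = frames[0].copy()
for f in frames[1:]:
    fg_mask, bg = segment_foreground(f, bg)
```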
- Once the image is segmented, the blobs are classified by, for example, assigning a category to each foreground blob. Classification uses image features to discriminate one class from another. For example, classification produces the likelihood of an object belonging to a certain given class. Binary classifiers are used to separate object blobs into one of two classes (e.g., object is a person or a non-person). Multi-class classifiers separate object blobs into one of multiple classes (e.g., object is a person, a vehicle, or an animal).
-
FIG. 6 illustrates a classifier discriminating between a person and a car, according to an embodiment. A simple classifier that separates persons from vehicles can be constructed by, for example, examining the aspect ratio of the segmented blob. People tend to be taller than wide, while cars are wider than tall. Other features that can be useful for classification are histograms and outlines.FIG. 6 shows two foreground blobs, one classified as a person and the other classified as a car. Other embodiments can use machine learning to classify a test blob after being trained by using positive and negative blob examples. - Classified objects can be tracked across multiple video frames by establishing a correspondence or association between objects (e.g., blobs) in different video frames. These correspondences can be used for scene interpretation and for behavior or activity recognition. Because an object may change its pose or orientation with respect to the camera, that object may look different over multiple frames. Furthermore, people moving in a scene exhibit articulated motion, which can substantially change the shape of the blob. During tracking it is desirable to be able to identify invariant features or situations when objects occlude each other. For example, it is desirable to handle a situation wherein a person walks behind a tree and re-appears or when the parts of an articulated object occlude one another, such as when swinging arms occlude the torso of a walking person.
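- The aspect-ratio cue described above can be sketched as a toy binary classifier; the 1.2 decision margin is an assumed value for illustration, not one from the patent.

```python
def classify_blob(width: int, height: int) -> str:
    """Toy binary classifier from the aspect-ratio cue: people tend to be
    taller than wide, while cars are wider than tall."""
    if height > 1.2 * width:
        return "person"
    if width > 1.2 * height:
        return "car"
    return "unknown"   # ambiguous blobs would go to a richer classifier

print(classify_blob(30, 80))   # person
print(classify_blob(90, 40))   # car
```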
- Once foreground objects have been segmented, classified, and/or tracked, their motion and behavior can be analyzed and described. Examples of activities in which such analysis and description can be performed include a loitering person, a fallen person or a slow-moving vehicle. In addition, body parts can also be analyzed to provide information on human activities such as jumping, crouching, reaching, and bending, for example. Gesture recognition techniques can also be used to identify activities such as grasping, pointing and waving.
- In some embodiments, surveillance images are encoded for transmission or storage by leveraging the intermediate outputs of video analytics (e.g., outputs from segmentation, classification, tracking, and/or activity recognition) to achieve better coding efficiency. In traditional coding methods, the encoding parameters are typically fixed over an entire frame or over an entire sequence of frames. One aspect of the video compression described herein is to use the intermediate outputs of the video analytics processing pipeline described above with respect to
FIG. 4 to produce analytics-modulated coding, also called scene-adaptive video coding. For example, a region-of-interest (ROI) can be identified and one or more encoding parameters, including the QP values, can be varied, adjusted, or modified during the coding process to adapt the coding process based on the content of the surveillance scene. Such adaptation can be based on changes that occur when, for example, an object (or different types of objects) enters or leaves a scene, or when the brightness, spatial, and/or temporal activity associated with an object changes. Scene semantics based on activity recognition can also be used to adapt the coding process. For example, activity recognition in surveillance video can detect a loitering person. The series of frames corresponding to the loitering activity can be coded at a higher fidelity compared to other frames. Analytics-modulated coding differs, at least in part, from other schemes that update the frame-rate, resolution, or overall compression bit rate (which applies to the whole frame) by applying finer level control at each aspect of the coding process, at a much higher spatial and temporal granularity. This approach provides greater compression at the same quality level, and does not cause objectionable jumps in the frame that may result from a sudden change in full-frame resolution and/or quality. - While the current MPEG-4 standard handles a video sequence as a composition of one or more objects of arbitrary shape (e.g., ROI), the shape information, in addition to the image data, are encoded and transmitted as part of the video stream. Such an approach can result in added computational burden and can require more memory and bandwidth resources. The H.264 standard offers better compression than the MPEG-4 standard. However, one functionality that is absent in the H.264/Advanced Video Coding (AVC) standard is the ability to code arbitrary shaped objects. Some recent work done in this area has resulted in progress in incorporating shape-coding functionality in the H.264/AVC standard. For example, certain proposed techniques encode the shape information and use a non-standard-based player/decoder, while other proposed techniques support certain profiles of the H.264/AVC standard, or limit the use of specific frame types or encoding parameters. None of this recent work, however, fully exploits the use of other encoding parameters and/or the outputs produced by the video analytics modules such as object class, track history and activity recognition
- In some embodiments, several video analytics modules including segmentation, classification, and tracking, for example, are used for scene-adaptive video coding. Based on analytics output, the encoding parameters can be varied during the coding process to adapt to the content of the scene. When coding ROIs the shape information of each ROI need not be coded and the coding operation need not be limited to specific profiles. Furthermore, the analytics-modulated coding approach described herein produces an encoded video stream decodable by players that do not support shape-based coding.
-
FIG. 7 is a system block diagram of video analytics and coding modules used in scene-adaptive video coding, according to an embodiment. A foreground alpha mask generated by the segmentation module is used to identify regions of interest (ROI) for region-based coding, as well as to adjust encoding parameters in the coding modules such as the Group-of-Pictures (GOP) size and/or the QP. The GOP is a group of successive video frames and defines the arrangement or organization of the I, P, and/or B-frames. The GOP includes an I-frame, followed by a series of P and/or B-frames. The GOP size is the number of frames between two I-frames. The QP is a parameter used in the quantization process and is associated with the amount of compression. The value of the QP influences the perceived quality of the compressed images. - The segmented objects are classified as assigned to or belonging to 1 of N classes of objects through the classification process. Weights are assigned to each of the classes to define relative priorities among the classes. These weights determine the relative priorities for bit allocation. For example, blobs belonging to one class (e.g., person class), using a greater fraction of the bit budget compared to blobs belonging to another class (e.g., trees class). In some embodiments, two or more classes may have the same weights. In other embodiments, each of the classes may have a unique weight. The classified objects are tracked over multiple frames by establishing a correspondence or association between blobs in the frames. The tracking module produces motion information that can be utilized to determine a suitable motion vector search range. The motion vector search range determines a search window during the Motion Estimation and Compensation (ME/MC) process. The search window is used to search for a group or block of pixels (e.g., a macroblock) in the reference frame that best matches a group or block of pixels being considered in the current frame during the ME/MC process. In this manner, temporal redundancies are used such that, in some instances, only the difference between consecutive frames is encoded.
- Region Based Coding by Varying QP
-
FIG. 8 is a system block diagram of region-based coding by varying QP, according to an embodiment. The video compression process involves first transforming the image from spatial to frequency domain, employing a transformation such as a DCT or integer transform. The transformed coefficients are then quantized based on the QP, and entropy coded to produce the compressed 2-D signals. As described above, quantization is the process of mapping a range of input values to a smaller range of output values and is the lossy compression part of video coding. The value of the QP is used to specify the extent of compression that is desired. For example, a larger QP value uses fewer bits to code, resulting in more compression and reduced image quality. In another example, a smaller QP value can produce better quality images at lower compression. The alpha mask serves as input to a module to compute and derive the coordinates for overlays, which are geometric shapes used for bounding the detected targets. These overlays are often useful in surveillance video to draw attention to objects or activities of interest. The alpha mask is used to distinguish foreground objects from background objects. A higher QP value can then be used to encode the background object while a lower QP value can be used to encode the foreground object in such a way that the overall bitrate is reduced without compromising quality. Such analytics-modulated coding (AMC) is applicable, in general, to any standard that is based on block-based video coding scheme, including the H.264 standard where integer and Hadamard transforms are used. As previously discussed, DCT is the fundamental transformation utilized in most video coding standards such as the MPEG and H.26x standards. - Tables 1-4 present experimental results associated with
FIG. 13 . These results were obtained by encoding several 2-hr long video clips using different combinations of QP values for foreground objects and background objects. The video clips (a frame of each is shown inFIG. 13 ) contain both indoor and outdoor surveillance scenes with different levels of motion activity. A Joint Model (JM) codec is used as a baseline for comparison. JM is the reference software implementation of the H.264 codec adopted by the Joint Video Team (JVT) of the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) Motion Picture Experts Group (MPEG) and the International Telecommunication Union (ITU-T) Video Coding Experts Group (VCEG) standards committees. - The tables illustrate the bit rate and the Structural Similarity Index (SSIM) for JM and AMC-encoded video using various sets of QP values ranging from 24 to 31, as well as the percentage bit rate savings. SSIM values are used instead of Peak Signal-to-Noise Ratio (PSNR) values because SSIM better correlates to subjective quality. In Table 1, video (the lobby scene in
FIG. 13 ) is encoded using JM with a QP value of 24. The resulting bitrate is 111.72 kbps and the corresponding SSIM value is 0.98. The same video is encoded using AMC with a higher QP value for the background and a lower QP value for the foreground. As shown in Table 1, the video is encoded using AMC with QP values of 24 for the foreground and 25 for the background to derive the same SSIM value (0.98) as the JM encoded video with a QP value of 24. The resulting bitrate is 91.61 kbps, giving a bitrate savings of 18% when AMC is used instead of JM. Tables 2-4 illustrate the results for the other scenes shown inFIG. 13 . The highest bitrate savings is achieved when video encoded using AMC with a foreground QP value of 28 and a background QP value of 29 (see e.g., Table 1) or a foreground QP value of 27 and a background QP value of 29 (see e.g., Table 2) is compared to video encoded using JM with a QP value of 28. Using these QP values, Region-based Coding by Varying QP provides performance/compression gains between 13% and 26.5% for the scenes shown inFIG. 13 . -
TABLE 1: Lobby
JM QP | JM Bitrate (kbps) | JM SSIM | AMC QP (fg, bg) | AMC Bitrate (kbps) | AMC SSIM | Bitrate Savings (%)
24 | 111.72 | 0.98 | 24, 25 | 91.61 | 0.98 | 18
26 | 65.81 | 0.98 | 26, 27 | 51.45 | 0.98 | 21.8
28 | 37 | 0.97 | 28, 29 | 27.21 | 0.97 | 26.5
30 | 25.56 | 0.96 | 30, 31 | 20.65 | 0.96 | 19
TABLE 2: Scene with car parked
JM QP | JM Bitrate (kbps) | JM SSIM | AMC QP (fg, bg) | AMC Bitrate (kbps) | AMC SSIM | Bitrate Savings (%)
24 | 54.14 | 0.96 | 24, 25 | 45.73 | 0.96 | 14.8
26 | 37.72 | 0.94 | 25, 27 | 31.94 | 0.94 | 15
28 | 25.26 | 0.93 | 27, 29 | 21.99 | 0.93 | 13
30 | 19.19 | 0.91 | 29, 31 | 17.66 | 0.91 | 8
TABLE 3: Entrance of building
JM QP | JM Bitrate (kbps) | JM SSIM | AMC QP (fg, bg) | AMC Bitrate (kbps) | AMC SSIM | Bitrate Savings (%)
24 | 276.64 | 0.98 | 23, 25 | 224.84 | 0.98 | 18.7
26 | 150.3 | 0.97 | 25, 27 | 118.65 | 0.97 | 21
28 | 84.2 | 0.96 | 27, 29 | 64.35 | 0.96 | 24
30 | 56.92 | 0.94 | 29, 31 | 46.47 | 0.94 | 18.4
TABLE 4: Trailer
JM QP | JM Bitrate (kbps) | JM SSIM | AMC QP (fg, bg) | AMC Bitrate (kbps) | AMC SSIM | Bitrate Savings (%)
24 | 84.21 | 0.98 | 24, 25 | 64.34 | 0.98 | 23
26 | 55.01 | 0.98 | 26, 27 | 43.13 | 0.98 | 21.5
28 | 31.33 | 0.97 | 28, 29 | 23.85 | 0.97 | 23
30 | 25.92 | 0.97 | 30, 31 | 22.57 | 0.97 | 13
- Region Based Coding Incorporating Rate Control (RC)
FIG. 9 is a block diagram of a system using region-based coding incorporating rate control (RC), according to an embodiment. The RC process of video encoding involves adjusting the QP value to meet a target or predetermined bit rate. The RC module can be composed of a high-level RC (HLRC) component and a low-level RC (LLRC) component. At the HLRC, a bit budget is computed for each frame, given the target bitrate. This frame bit budget serves as input to the LLRC, where a bit budget is then computed for each macroblock (MB bit budget), taking into consideration several features from a number of external modules. The corresponding quantization parameter, QP, is then derived from the Rate-Quantization (RQ) model and used by the quantization module for video compression. - At the HLRC, the target bit allocation for each frame can depend on the number of remaining frames in the GOP and the number of remaining bits available for the GOP, as well as scene complexity. Scene complexity can be expressed as a function of the number, the size and the type of objects in the scene. These three quantities can be derived from the analytics module. According to an embodiment, the mathematical formulations for calculating the frame budget at the HLRC, while incorporating analytics, can take the following form (with u_j denoting the bits actually used to encode frame j):
$$T_i = c\,\frac{X_i}{\sum_{k=i}^{N} X_k}\left(R_t - \sum_{j=1}^{i-1} u_j\right) \qquad (1)$$

$$X_i = \sum_{k} \alpha_{k,i}\, F_{k,i} = \alpha_{1,i}\, a_i + \alpha_{2,i}\, b_i \qquad (2)$$

$$a_i = \frac{1}{S}\sum_{g=1}^{m_i} s_{g,i}, \qquad b_i = \frac{m_i}{M} \qquad (3)$$

- For a GOP with N frames, the bit budget for frame i, T_i, is computed by taking a proportion of the bits remaining in the GOP, as shown in equation (1). R_t refers to the total bit budget, whereas the sum over u_j gives the total bits used to encode frame 1 up to the previous frame i-1, and c is a constant. The proportion is based on the complexity, X_i, due to multiple features. For example, two features are used in equations (2) and (3): the normalized size of objects, a_i, and the normalized number of objects, b_i, in the scene. α_{k,i} is the weight associated with F_{k,i}, which denotes feature k of frame i. In equation (3), s_{g,i} refers to the size of object g (in pixels) in frame i; m_i is the number of objects in frame i; and S and M are the total number of pixels in the video frame and the expected maximum number of objects, respectively.
- Substituting (2) into (1) gives:

$$T_i = c\,\frac{\alpha_{1,i}\, a_i + \alpha_{2,i}\, b_i}{\sum_{k=i}^{N} X_k}\left(R_t - \sum_{j=1}^{i-1} u_j\right) \qquad (4)$$
- And finally:

$$T_{i\_final} = \gamma \cdot TB_i + (1-\gamma) \cdot T_i \qquad (5)$$
- The final target bit budget, T_i_final, is allocated for each frame as shown in equation (5). T_i_final is the weighted sum of the bit budget computed based on buffer occupancy, TB_i, and the bit budget predicted based on complexity as derived from equation (4), T_i. γ indicates how much to weight each component (TB_i and T_i) of the sum. There are numerous known techniques to compute buffer occupancy. In some embodiments, for example, TB_i can be derived by subtracting delta bits, Δ_p, from the bits per frame (e.g., the GOP average R_t/N), as shown in equation (6):

$$TB_i = \frac{R_t}{N} - \Delta_p \qquad (6)$$
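As a numerical illustration of this HLRC allocation, the sketch below implements equations (1)-(5) in the forms given above. All names and the toy numbers are illustrative, and the buffer-occupancy budget TB_i is passed in precomputed.

```python
import numpy as np

def frame_complexity(obj_sizes, frame_pixels, max_objects,
                     alpha=(0.5, 0.5)):
    """Scene complexity X_i from analytics features (equations (2)-(3))."""
    a_i = sum(obj_sizes) / frame_pixels        # normalized object size
    b_i = len(obj_sizes) / max_objects         # normalized object count
    return alpha[0] * a_i + alpha[1] * b_i

def frame_bit_budget(i, X, total_bits, bits_used, buffer_budget,
                     gamma=0.5, c=1.0):
    """Frame-level bit budget following equations (1) and (5).

    X            : complexities X_k for the frames of the GOP.
    total_bits   : R_t, the total bit budget for the GOP.
    bits_used    : bits already spent on the frames before frame i.
    buffer_budget: TB_i from buffer occupancy (equation (6)).
    """
    remaining = total_bits - bits_used
    t_i = c * X[i] / X[i:].sum() * remaining              # equation (1)
    return gamma * buffer_budget + (1.0 - gamma) * t_i    # equation (5)

# Example: 5-frame GOP in a CIF-size scene (101376 pixels), budget for frame 2.
X = np.array([frame_complexity([400], 101376, 10),
              frame_complexity([400, 900], 101376, 10),
              frame_complexity([400, 900], 101376, 10),
              frame_complexity([900], 101376, 10),
              frame_complexity([], 101376, 10) + 1e-6])   # avoid a zero tail
print(frame_bit_budget(2, X, total_bits=500_000, bits_used=210_000,
                       buffer_budget=95_000))
```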
- At the LLRC, the target bit allocation for each macroblock can be computed as a fraction of the bit budget for the frame that includes the macroblock. The bit allocation for the macroblock can depend on the macroblock complexity. Analytics can be used to calculate the macroblock complexity using the segmentation and/or classification results, in addition to Human Visual System (HVS) factors. The mathematical formulation can take the following form, consistent with the description below:

$$r_j = T_{i\_final} \cdot \frac{x_j}{\sum_{k=1}^{p} x_k} \qquad (7)$$

$$x_j = \sum_{k} \lambda_{k,j}\, f_{k,j}, \qquad f_{k,j} \in \{c_j, d_j, e_j, h_j\} \qquad (8)$$

- The bit budget for each macroblock, r_j, can be derived as a fraction of the frame bit budget, T_i_final, based on the macroblock complexity, x_j, as shown in equation (7), where p is the total number of macroblocks in each frame. x_j is expressed as a weighted linear combination of features f_{k,j}, where f_{k,j} ∈ {c_j, d_j, e_j, h_j} and λ_{k,j} are the weights associated with each of these features (equation (8)). These features include HVS factors such as brightness, spatial activity and temporal activity, denoted as normalized quantities I_j, S_j and M_j, respectively (equations (9)-(11)), e.g.,

$$c_j = I_j \quad (9), \qquad d_j = S_j \quad (10), \qquad e_j = M_j \quad (11)$$

Using h_j, analytics are incorporated into the calculation of the macroblock complexity. g_j, in equation (12), indicates whether the macroblock belongs to a foreground or background object and is derived based on the segmentation module shown in FIG. 9, e.g.,

$$h_j = w_{c(j)} \cdot g_j, \qquad g_j = 1 \text{ for foreground, } 0 \text{ for background} \qquad (12)$$

where w_{c(j)} is the normalized weight of the object class of the object containing macroblock j. According to an embodiment, the pixel boundary of the foreground object is divided by the size of the macroblock and rounded down to the nearest integer for mapping to a macroblock unit. Each macroblock has a width and height of 16 pixels. The classification module in FIG. 9 is used to compute the normalized weight for each object class, w_j, j ∈ {1, . . . , L}, where L is the number of object classes. For example, a larger weight can be assigned to a PERSON object and a smaller weight can be assigned to a CAR/VEHICLE object and/or other objects in an application used to detect target persons in the scene. This directly incorporates weights based on object class into the rate control technique used for compression. - A corresponding normalized quantity can be computed by multiplying the raw quantity by a normalization constant, e.g., normalized spatial activity S_j = C_m·s_j, where the normalization constant can be chosen, for example, so that the normalized activities average to one over the p macroblocks of the frame:

$$C_m = \frac{p}{\sum_{j=1}^{p} s_j}$$
- Because the human eye is less sensitive to distortion in regions that are bright, in regions that include many spatial details, or in regions where there is motion, fewer bits are allocated to these regions so that more bits are available to code the foreground region. The HVS factors are typically produced from a first-pass encoding of the video image using a low-complexity encoder, as shown in FIG. 9. Alternatively, the HVS factors can be derived from analytics and incorporated while encoding the image in a single pass (e.g., a single scan of the pixel values). These factors can be obtained from gradient information. For example, the gradient information can be obtained from a degree-of-slope parameter and a direction-of-slope parameter as described in U.S. Pat. No. 6,940,998, entitled "System for Automated Screening of Security Cameras," which is hereby incorporated herein by reference in its entirety. - The Rate-Quantization (RQ) model defines the relationship between the bitrate, the QP and the complexity. The mathematical formulation can, for example, take the standard quadratic form:
$$r_j = \left(\frac{K_1}{QP_j} + \frac{K_2}{QP_j^{2}}\right) x_j$$

- The bit budget for macroblock j, r_j, and the macroblock complexity, x_j, are derived from equation (7) and equation (8), respectively. K_1 and K_2 are model parameters. In contrast to schemes that use a Mean Absolute Difference (MAD) between pixels in an original image and pixels in a predicted image, in some embodiments a combination of segmentation and classification results from analytics can be used, as well as HVS factors, to compute the macroblock complexity as shown in equations (8)-(12). From the RQ model, a QP value for the macroblock can be derived and used in the quantization module for video compression.
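The sketch below ties equations (7) and (8) to a quadratic RQ model of the form given above and solves it for the macroblock QP. The K_1 and K_2 values, feature weights and budgets are illustrative, and the quadratic form itself is an assumption rather than a formula reproduced from the figures.

```python
import numpy as np

def mb_complexity(brightness, spatial, temporal, analytics_weight,
                  lambdas=(0.25, 0.25, 0.25, 0.25)):
    """Macroblock complexity x_j as a weighted sum of HVS factors and the
    analytics feature h_j (equation (8)); feature values are illustrative."""
    feats = (brightness, spatial, temporal, analytics_weight)
    return sum(lam * f for lam, f in zip(lambdas, feats))

def macroblock_qp(frame_budget, x, j, k1=1e5, k2=2e6, qp_min=1, qp_max=51):
    """Bit budget for macroblock j (equation (7)) and its QP from a quadratic
    RQ model r_j = (K1/QP + K2/QP**2) * x_j, solved for the positive root."""
    x = np.asarray(x, dtype=float)
    r_j = frame_budget * x[j] / x.sum()                  # equation (7)
    a, b, c = k2 * x[j], k1 * x[j], -r_j                 # quadratic in 1/QP
    inv_qp = (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)
    return r_j, int(np.clip(round(1.0 / inv_qp), qp_min, qp_max))

x = [mb_complexity(0.4, 0.2, 0.1, 1.0),   # foreground (person) macroblock
     mb_complexity(0.6, 0.5, 0.3, 0.0),   # busy background macroblock
     mb_complexity(0.3, 0.1, 0.0, 0.0)]   # flat background macroblock
for j in range(3):
    print(macroblock_qp(6000, x, j))
# The proportional split of equation (7) gives complex macroblocks more
# bits; the analytics feature h_j is what raises x_j for foreground MBs.
```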
- GOP Size and Structure: Adaptive I-Frame Placement
- The GOP is a group of pictures within an MPEG-coded video stream. A GOP structure determines the arrangement of I, P and/or B-frames. An I-frame contains intra-coded macroblocks (MBs); each MB is coded without reference to other frames, although it can be predicted from previously coded blocks within the same frame. An I-frame can be inserted whenever a scene change occurs. In scene-adaptive video coding for surveillance applications, for example, this can happen whenever an object enters or leaves the scene or when the scene otherwise changes. In one embodiment, a scheme can be implemented to adaptively change the GOP size and structure depending on the content and/or scene (i.e., content/scene-adaptive). A maximum GOP size can also be specified such that an I-frame is inserted when the period of inactivity exceeds a certain predetermined duration or a predetermined criterion (e.g., a number of frames of inactivity). A minimum GOP size can be specified such that no two I-frames are less than a certain duration apart. Thus, instead of having a fixed GOP structure and size (e.g., IPPPIPPP . . . ), the structure can adaptively change based on recognition of activity in the scene and on the number and class of objects in the scene. This allows placement of P-frames up to the moment that an object enters the scene, an object of a specific class enters the scene, or a significant scene change is detected by the analytics.
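A minimal sketch of this frame-type decision, assuming the analytics provide a per-frame object-enter/leave flag (names and thresholds are illustrative):

```python
def choose_frame_type(frame_no, last_i_frame, analytics_event,
                      min_gop=60, max_gop=1000):
    """Decide I vs. P per the adaptive placement described above.

    analytics_event: True when the analytics detect an object entering or
    leaving the scene, or a significant scene change.  An I-frame is forced
    once max_gop frames have elapsed and suppressed within min_gop frames
    of the previous I-frame.
    """
    gap = frame_no - last_i_frame
    if gap >= max_gop:
        return "I"                     # cap the GOP length
    if analytics_event and gap >= min_gop:
        return "I"                     # scene event outside the minimum GOP
    return "P"

# Events at frame 30 (too soon after an I-frame) and 400; nothing at 1400.
last_i = 0
for n, event in [(30, True), (400, True), (1400, False)]:
    ftype = choose_frame_type(n, last_i, event)
    if ftype == "I":
        last_i = n
    print(n, ftype)                    # 30 P, 400 I, 1400 I (max GOP hit)
```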
- Tables 5-7 describe results from experiments conducted on the surveillance scenes shown in FIG. 13, with low and high activity, using a minimum GOP size of 60 or 250 and a maximum GOP size of 60, 250, 1000 or 5000. Using a minimum GOP size of 250 and a maximum GOP size of 1000, the bitrate savings varies from 10% to 19% depending on the scene content. Using a minimum GOP size of 250 and a maximum GOP size of 5000, the bitrate savings varies from 11% to 24% due to the larger maximum GOP size. Using a minimum GOP size of 60, the performance gain is 39% to 52% with a maximum GOP size of 1000, and 40% to 55% with a maximum GOP size of 5000. The bitrate savings is higher for a scene with low activity since there are relatively fewer objects entering and leaving the scene, which results in fewer I-frames.
TABLE 5. Entrance of building

| Min GOP size | Max GOP size | Bitrate (kbps) | Bitrate savings (%) |
|---|---|---|---|
| 250 | 250 | 84.76 | |
| 250 | 1000 | 76.68 | 10 |
| 250 | 5000 | 75.4 | 11 |
| 60 | 60 | 132.04 | |
| 60 | 1000 | 80.44 | 39 |
| 60 | 5000 | 79.17 | 40 |
TABLE 6. Trailer

| Min GOP size | Max GOP size | Bitrate (kbps) | Bitrate savings (%) |
|---|---|---|---|
| 250 | 250 | 26.74 | |
| 250 | 1000 | 21.56 | 19 |
| 250 | 5000 | 20.07 | 24 |
| 60 | 60 | 48.12 | |
| 60 | 1000 | 22.9 | 52 |
| 60 | 5000 | 21.47 | 55 |
TABLE 7. Scene with moving foliage and cars

| Min GOP size | Max GOP size | Bitrate (kbps) | Bitrate savings (%) |
|---|---|---|---|
| 250 | 250 | 50.81 | |
| 250 | 1000 | 42.74 | 15 |
| 250 | 5000 | 40.57 | 20 |
| 60 | 60 | 83 | |
| 60 | 1000 | 43.28 | 47 |
| 60 | 5000 | 41.25 | 50 |

- As described above, segmented objects can be classified as belonging to one of L object classes (e.g., person, animal, automobile, etc.) through the classification process, and weights can be assigned to each of the object classes to establish relative priorities among the object classes. When a scene includes multiple objects belonging to different classes, the GOP size can be adapted or modified based on the classes to which the objects are assigned and on the weighted priorities of those classes.
- Combining Region-based Coding by Varying QP and Adaptive I-frame Placement
- Using Region-based Coding by Varying QP, a background region is coded at a relatively higher QP value than a foreground object of interest (e.g., a target person), such that fewer bits are allocated to the background than to the foreground object. Since the foreground object typically covers fewer pixels than the background region, this significantly reduces the total number of bits used to compress the frame without significantly compromising quality. Using Adaptive I-frame Placement, a video frame is encoded as an I-frame only when an object is detected entering or leaving the scene. Thus, fewer I-frames are needed when compressing the sequence, which reduces the overall bitrate without degradation in quality.
- Tables 8-10 illustrate results obtained by combining both Region-based Coding by Varying QP and Adaptive I-frame Placement to achieve a greater reduction in bitrate when compressing the scenes shown in FIG. 13. Tables 8-10 are similar to Tables 5-7 but include an additional column indicating QP values and additional rows displaying results of video encoded using different QP values for the foreground objects and background objects; these are the rows with two QP values listed. As shown, with a minimum GOP size of 250, a maximum GOP size of 5000, and foreground and background QP values of (28, 29), the reduction in bitrate is between 32% and 42%, depending on the content of the scene. With a minimum GOP size of 60, a maximum GOP size of 5000, and foreground and background QP values of (28, 29), the bitrate savings is between 54% and 67%.
TABLE 8. Entrance of building

| Min GOP size | Max GOP size | QP (fg, bg) | Bitrate (kbps) | Bitrate savings (%) |
|---|---|---|---|---|
| 250 | 250 | 28 | 84.76 | |
| 250 | 1000 | 28 | 76.68 | 10 |
| 250 | 5000 | 28 | 75.4 | 11 |
| 250 | 5000 | 28, 29 | 57.54 | 32 |
| 60 | 60 | 28 | 132.04 | |
| 60 | 1000 | 28 | 80.44 | 39 |
| 60 | 5000 | 28 | 79.17 | 40 |
| 60 | 5000 | 28, 29 | 60.86 | 54 |
TABLE 9. Trailer

| Min GOP size | Max GOP size | QP (fg, bg) | Bitrate (kbps) | Bitrate savings (%) |
|---|---|---|---|---|
| 250 | 250 | 28 | 26.74 | |
| 250 | 1000 | 28 | 21.56 | 19 |
| 250 | 5000 | 28 | 20.07 | 24 |
| 250 | 5000 | 28, 29 | 15.6 | 42 |
| 60 | 60 | 28 | 48.12 | |
| 60 | 1000 | 28 | 22.9 | 52 |
| 60 | 5000 | 28 | 21.47 | 55 |
| 60 | 5000 | 28, 29 | 15.89 | 67 |
TABLE 10. Scene with moving foliage and cars

| Min GOP size | Max GOP size | QP (fg, bg) | Bitrate (kbps) | Bitrate savings (%) |
|---|---|---|---|---|
| 250 | 250 | 28 | 50.81 | |
| 250 | 1000 | 28 | 42.74 | 15 |
| 250 | 5000 | 28 | 40.57 | 20 |
| 250 | 5000 | 28, 29 | 32.56 | 36 |
| 60 | 60 | 28 | 83 | |
| 60 | 1000 | 28 | 43.28 | 47 |
| 60 | 5000 | 28 | 41.25 | 50 |
| 60 | 5000 | 28, 29 | 32.99 | 60 |

- GOP Structure: Adaptive B-frame Placement (Main, Extended and High Profiles)
- A B-frame provides higher compression at the expense of greater visual distortion (i.e., lower visual quality). In high-motion scenes, B-frames typically result in noticeably poorer video quality. An adaptive B-frame placement algorithm can be used to vary the number of B-frames. For example, the placement of B-frames can change from a high-motion scene (e.g., an ESPN sports program) to a low-motion scene (e.g., a news program), or from a low-motion scene to a high-motion scene. Motion information from the tracking module can be used to indicate the level of motion in the scene. In low-motion scenes, for example, B-frames can be included in the GOP structure to benefit from greater bit savings while maintaining reasonably good quality, while for high-motion scenes the number of B-frames can be reduced or B-frames can be omitted altogether.
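A minimal sketch of motion-adaptive B-frame counting, assuming the tracking module supplies a normalized motion level per scene (thresholds and counts are illustrative):

```python
def b_frames_between_anchors(motion_level, low=0.2, high=0.6):
    """Pick the number of B-frames between anchor frames from the tracked
    motion level (0..1): more B-frames for low-motion scenes, none for
    high-motion scenes."""
    if motion_level < low:
        return 3          # near-static scene: e.g., an IBBBP... pattern
    if motion_level < high:
        return 1          # moderate motion: a single B-frame
    return 0              # high motion: anchor frames only

for level in (0.05, 0.4, 0.8):
    print(level, b_frames_between_anchors(level))   # 3, 1, 0
```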
- I/P/Skipped Mode Decision Based on Video Analytics Results
- In some embodiments, a P-frame can include intra-coded (I) macroblocks (MBs), predictive-coded (P) MBs, bidirectionally-predictive-coded (B) MBs, or skipped MBs. An I MB contains the full image information for that MB, independent of other frames, while P and B MBs represent or are associated with image differences of an MB across frames. A skipped MB contains no information about the MB; as such, if an MB is coded in a frame as a skipped MB, the MB in that frame will be identical to the MB in the previous frame. Note that in the H.264 standard, an I MB can be spatially predicted using intra-prediction from previously encoded blocks within the same frame.
- In some embodiments, for example, when an object enters or leaves the scene, instead of coding the entire frame as an I-frame, the picture can be coded as a P-frame with the MBs corresponding to the foreground object coded as one or more I MBs. MBs can be encoded as I, P or skipped MBs at regions having substantial changes, minor changes or no changes, respectively. The amount of change can be determined using analytics. When an object enters or leaves a scene, the background likely includes little or no change. Accordingly, in some embodiments, the frame can be encoded as a P-frame instead of an I-frame. Further, MBs in the regions of the background with little or no change can be encoded as P MBs or skipped MBs, while MBs of a foreground object can be encoded as I MBs. This can reduce the overall bitrate while maintaining the segmented object at a higher visual quality than the background.
- In some embodiments, motion information from the tracking module can be used to determine if a background MB should be coded as a P MB or a skipped MB. For example, the MBs corresponding to a background region having moving foliage (such as wavering trees) can be coded as P MBs while the MBs corresponding to a static background region can be coded as skipped MBs. In some embodiments, the foreground MBs can be coded as I MBs when a scene change occurs in a frame. For example, the foreground MBs can be coded as I MBs when an object is detected entering or leaving the scene.
- In some embodiments, whether to use I MBs, P MBs or skipped MBs can be determined using the Mean Absolute Difference (MAD) between pixels in an original image and pixels in a predicted image. The MAD can be compared against a threshold to determine whether the MB should be an I MB or a P MB. Such an approach, however, incurs high computational complexity due to the need to compute the MAD. Furthermore, the chosen threshold may not guarantee that all MBs of the foreground object will be coded as I MBs.
- In other embodiments, segmentation and classification output can be used to directly perform the I/P/skipped mode selection. Encoding the MBs corresponding to static background regions as skipped MBs reduces the overall bitrate without quality degradation. In one embodiment, for example, the segmented background MBs can be encoded as skipped MBs except where the tracking module identifies significant motion in the MB. Such motion may be due to, for example, foliage or water, which is a real change but not a classified foreground object. Foliage background MBs can be coded as P MBs while fully static background MBs can be coded as skipped MBs. In another embodiment, the class of the foreground object type (e.g., person, animal, automobile, etc.) can be used to determine the encoding mode: MBs that are part of the foreground and classified as a Person might be encoded as I MBs, while foreground MBs that are classified as Animals may be encoded as P MBs. This increases the compression efficiency and the compression gain for Standard-Definition (SD) video; the gain is higher still for High-Definition (HD) video. In such embodiments, the region/object properties from the analytics modules can be used instead of block-based local statistics. The region and/or object properties reflect the semantics of the video better than the block-based local statistics. In such a manner, MAD computation and thresholding can be avoided, resulting in lower computational overhead and higher accuracy.
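The mode decision described above reduces to a simple lookup on the analytics outputs. A sketch, with hypothetical class names and flags:

```python
def mb_mode(is_foreground, object_class, background_motion, scene_event):
    """Select the macroblock coding mode directly from analytics results,
    mirroring the logic above (class names and policy are illustrative)."""
    if is_foreground:
        if object_class == "person":
            return "I"                  # high-priority class: intra-code
        if object_class == "animal":
            return "P"                  # lower-priority class
        return "I" if scene_event else "P"
    # Background: predict regions with real motion, skip static regions.
    return "P" if background_motion else "SKIP"

print(mb_mode(True,  "person", False, scene_event=True))    # I
print(mb_mode(True,  "animal", False, scene_event=True))    # P
print(mb_mode(False, None,     True,  scene_event=False))   # P (foliage)
print(mb_mode(False, None,     False, scene_event=False))   # SKIP
```

No MAD is computed or thresholded anywhere in this path, which is the source of the complexity savings noted above.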
- Modify MV Search Range with Constraints Bounded by Selected Profile
- Inter frames, whether B-frames or P-frames, are predicted from reference frames. In motion estimation, a search area is defined and a motion estimation algorithm is used to find a prediction block that best matches the current block; the residual between the current block and the resulting motion-compensated prediction (MCP) block is then transformed, quantized and entropy coded. The vertical and horizontal displacements between the prediction block and the current block are coded as motion vectors (MVs), which can themselves be predictively coded as well.
- The motion estimation search area is typically determined by the MV search range. The vertical MV search range is bounded by the different types of profiles and levels in the H.264 standard. Most of the computational complexity of a video encoder typically occurs in motion estimation. A large search area can result in high computational complexity, while a small range can restrict or reduce the inter-frame prediction accuracy. In some embodiments, the scene-adaptive video coding includes methods to find an adequate search range with a good trade-off between accuracy and complexity. For example, the motion information from the tracking module in the video analytics processing pipeline can be used to select an MV search range. The tracking module provides the motion trajectory for a foreground blob. This trajectory can be used to select the motion vector search range for all the macroblocks corresponding to the foreground blob. This approach saves computation because the same motion estimate can be re-used for all macroblocks corresponding to that blob. Further, MVs can be predicted in this way for foreground blobs that have been classified as rigid objects, such as cars, because all macroblocks corresponding to the car typically move together.
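A sketch of this trajectory reuse, assuming the tracking module reports recent centroid displacements for a blob (names and the window representation are illustrative):

```python
import numpy as np

def blob_search_windows(blob_mbs, trajectory, base_range=8):
    """Reuse one motion estimate for every macroblock of a tracked blob.

    blob_mbs   : list of (row, col) macroblock coordinates of the blob.
    trajectory : recent per-frame centroid displacements (dx, dy) in pixels
                 from the tracking module.
    Returns a dict mapping each macroblock to a search window centered on
    the predicted blob displacement.
    """
    dx, dy = np.mean(trajectory, axis=0)       # predicted blob motion
    center = (int(round(dx)), int(round(dy)))
    window = (center, base_range)              # same window for the whole blob
    return {mb: window for mb in blob_mbs}

blob = [(4, 7), (4, 8), (5, 7), (5, 8)]        # a 2x2-macroblock car blob
print(blob_search_windows(blob, trajectory=[(6, -2), (7, -1), (6, -2)]))
```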
FIGS. 10A-10B illustrate different approaches to determining a motion vector search range, according to embodiments. The approach described in FIG. 10A includes tracking the centroid of a matched pair of blocks across consecutive frames; the centroids of the matched pair of blocks are compared to determine the range. The approach described in FIG. 10B includes considering the neighborhood pixels of the centroid: an N×N window is centered on the centroid, and the displacements over the blocks (e.g., macroblocks) in the window are aggregated. - Using Classification Results
- The MV search range can be scaled based on the weighted priorities of the object classes. As described above, most of the computational complexity of a video encoder occurs in motion estimation: a large search area results in high computational complexity, while a small range restricts or reduces the inter-frame prediction accuracy. The size of the search area can be scaled based on the weight associated with the class assigned to an object, such that a higher-priority object is associated with a larger search range. Alternatively, a different set of search ranges can be used for objects corresponding to different classes. For instance, cars move more rapidly than people; hence, blobs corresponding to cars would have a larger search range. For a scene containing a number of objects belonging to different classes, the average of the motion information of objects belonging to the same class is first determined, and the weighted average of the aggregated motion information of the different classes is then used to determine a final search range. Alternatively, the final search range can be based on the aggregate motion information of objects belonging to the class having the maximum weight. Moreover, the final search range can be based on a dominant MV determined from an MV histogram.
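A sketch of the class-weighted range computation described above (the weights, motion units and clipping bounds are illustrative assumptions):

```python
import numpy as np

def final_search_range(objects, class_weights, base_range=8, max_range=32):
    """Combine per-class motion into one search range.

    objects       : list of (class_name, motion_magnitude_in_pixels).
    class_weights : normalized priority weight per class (assumed given).
    The per-class average motion is formed first, then a weighted average
    across the classes present scales the base range.
    """
    per_class = {}
    for cls, motion in objects:
        per_class.setdefault(cls, []).append(motion)
    averages = {cls: np.mean(m) for cls, m in per_class.items()}
    weighted = sum(class_weights[cls] * avg for cls, avg in averages.items())
    total_w = sum(class_weights[cls] for cls in averages)
    return int(np.clip(base_range + weighted / total_w, base_range, max_range))

objects = [("car", 14.0), ("car", 10.0), ("person", 3.0)]
weights = {"car": 0.7, "person": 0.3}
print(final_search_range(objects, weights))   # larger range, driven by cars
```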
- The search range can be updated on a frame-by-frame basis, over a window of frames, or over a GOP, for example. The search range can be updated less frequently (i.e., over a larger window) for lower-priority objects or for slow-moving objects. Based on the motion history, when objects exhibit a consistent motion pattern, the search range is unlikely to change from frame to frame.
- For objects classified as persons, a better estimate of an appropriate search range can be obtained by considering the blocks of pixels (e.g., macroblocks) in the lower half of the person, where there is more temporal activity (e.g., moving legs). Objects of higher priority can be coded with higher fidelity than others by assigning a QP value based on the weights of the object classes. The weights of these object classes, in addition to the HVS factors, can be incorporated into the RC process to modify the QP values as described above.
FIGS. 11A-11E illustrate analytics-modulated coding of video images, according to other embodiments. Table 11, below, describes preliminary results associated with FIGS. 11A-11E, obtained with the following encoding configuration:
- Resolution: 176×144
- Frame rate: 30 fps
- Encoded video: H.264 Baseline Profile
- Slice group map type: 2
- Slice group config file: sg2conf.cfg
-
TABLE 11

| | FIG. 11A | FIG. 11B | FIG. 11C | FIG. 11D | FIG. 11E |
|---|---|---|---|---|---|
| I-frame QP (fg-bg) | fg28-bg28 | fg45-bg45 | fg28-bg45 | fg45-bg28 | fg28-bg40 |
| Bits | 22584 | 4896 | 9512 | 18008 | 11224 |
| SNR(Y) (dB) | 37.43 | 25.97 | 26.93 | 32.12 | 30.11 |
| SNR(U) (dB) | 41.28 | 36.52 | 37.48 | 39.35 | 38.23 |
| SNR(V) (dB) | 42.82 | 36.77 | 38.15 | 39.61 | 39.22 |
FIGS. 12A-12C illustrate analytics-modulated coding of video images, according to other embodiments. Table 12, below, describes preliminary results associated with FIGS. 12A-12C.
TABLE 12

| | FIG. 12A | FIG. 12B | FIG. 12C |
|---|---|---|---|
| I-frame QP (fg-bg) | fg28-bg28 | fg28-bg40 | fg28-bg35 |
| I-frame bits | 109488 | 32896 | 53800 |
| I-frame SNR(Y) (dB) | 36.81 | 28.18 | 31.3 |
| I-frame SNR(U) (dB) | 40.34 | 36.33 | 37.78 |
| I-frame SNR(V) (dB) | 38.86 | 34.51 | 35.75 |
| 1st P-frame bits | 2448 | 2272 | 2344 |
| 1st P-frame SNR(Y) (dB) | 35.41 | 28.1 | 31 |
| 1st P-frame SNR(U) (dB) | 39.92 | 36.24 | 37.6 |
| 1st P-frame SNR(V) (dB) | 38.25 | 34.36 | 35.54 |
| 2nd P-frame bits | 2584 | 2152 | 2256 |
| 2nd P-frame SNR(Y) (dB) | 35.76 | 28.1 | 31.1 |
| 2nd P-frame SNR(U) (dB) | 40.14 | 36.34 | 37.75 |
| 2nd P-frame SNR(V) (dB) | 38.38 | 34.44 | 35.58 |

- In one embodiment, a method includes assigning a class from multiple classes to a foreground object from a video frame. The foreground object has multiple pixels. Each class from among the multiple classes has associated therewith a quantization parameter value. Multiple discrete cosine transform (DCT) coefficients are produced for pixels from the multiple pixels of the video frame associated with the foreground object. The DCT coefficients associated with the foreground object are quantized based on the quantization parameter value associated with the class assigned to the foreground object. The method further includes coding the quantized DCT coefficients associated with the foreground object.
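A self-contained sketch of this class-keyed quantization on a single 8×8 block follows. The class-to-QP table and the H.264-style QP-to-step mapping (step doubling every 6 QP) are illustrative assumptions; a real codec adds per-frequency scaling matrices and entropy coding afterwards.

```python
import numpy as np

CLASS_QP = {"person": 24, "vehicle": 28, "background": 32}   # illustrative

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def code_block(block, object_class):
    """Transform an 8x8 pixel block and quantize its DCT coefficients with
    the QP assigned to the block's object class (uniform quantizer)."""
    d = dct_matrix()
    coeffs = d @ (block - 128.0) @ d.T                    # forward 2-D DCT
    step = 2.0 ** ((CLASS_QP[object_class] - 24) / 6.0)   # QP-to-step size
    return np.round(coeffs / step).astype(int)            # levels to code

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)
fg = code_block(block, "person")       # finer steps, more nonzero levels
bg = code_block(block, "background")   # coarser steps, fewer bits needed
print(np.count_nonzero(fg), np.count_nonzero(bg))
```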
- The foreground object can be a first foreground object, the class assigned to the foreground object can be a first class, and the quantization parameter value associated with the first class can be a first quantization parameter value. A second class from among multiple classes can be assigned to a second foreground object from the video frame, the second class being different from the first class. Multiple DCT coefficients can be produced for pixels from the multiple pixels of the video frame associated with the second foreground object. The multiple DCT coefficients associated with the second foreground object can be quantized based on the quantization parameter value associated with the second class assigned to the second foreground object. The method further includes coding the quantized DCT coefficients associated with the second foreground object.
- The video frame can include a background portion. Multiple DCT coefficients can be produced for pixels from the multiple pixels of the video frame associated with the background portion of the video frame. The multiple DCT coefficients associated with the background portion of the video frame can be quantized based on a quantization parameter value greater than the quantization parameter associated with each class from among the multiple classes. The method further includes coding the quantized DCT coefficients associated with the background portion of the video frame.
- The class assigned to the foreground object can be a first class. The multiple classes can include a second class that is different from the first class. The first class can have an associated coding priority and an associated quantization parameter value. The second class can have an associated coding priority and an associated quantization parameter value. The quantization parameter value associated with the first class can be less than the quantization parameter value associated with the second class when the coding priority associated with the first class is greater than the coding priority associated with the second class.
- The multiple pixels of the video frame can be organized into multiple blocks of pixels. Multiple DCT coefficients can be produced for each block of pixels from the multiple blocks of pixels of the video frame associated with the foreground object. The multiple DCT coefficients of each block of pixels associated with the foreground object can be quantized based on the quantization parameter value associated with the class assigned to the foreground object. The method further includes coding the quantized DCT coefficients associated with the foreground object.
- The foreground object includes at least one block of pixels from multiple blocks of pixels of the video frame. The at least one block of pixels associated with the foreground object can define a contour associated with the foreground object.
- In another embodiment, a method includes assigning a class from among multiple classes to a foreground object from a video frame having multiple pixels. A quantization parameter value associated with the foreground object is derived based on at least one of a target bit rate, the number and size of objects in the scene, and a weight associated with the class assigned to the foreground object, wherein the weight is based on a coding priority associated with the class assigned to the foreground object. Deriving the quantization parameter value can include scaling it based on at least one of the target bit rate, the number and size of objects in the scene, and the weight associated with the class assigned to the foreground object. Multiple DCT coefficients are produced for pixels from the multiple pixels of the video frame associated with the foreground object. The DCT coefficients associated with the foreground object are quantized based on the computed quantization parameter value. The method further includes coding the quantized DCT coefficients associated with the foreground object.
- The method can include coding the video frame via two-pass encoding. A first-pass operation can be performed using a low-complexity encoder to produce statistics (e.g., brightness, spatial and temporal frequencies) in order to take into account the characteristics of the Human Visual System (HVS). In addition to these HVS factors, the quantization parameter value associated with the foreground object can be derived based on the target bit rate, the number and size of objects in the scene, and the weight associated with the class assigned to the foreground object. In other embodiments, the method can include generating gradient information associated with the video frame via a single pass through the video frame and deriving a Human Visual System (HVS) factor associated with the video frame using the gradient information. In such embodiments, the quantization parameter value associated with the foreground object can be computed and/or adjusted based on at least one of the target bit rate, the weight associated with the class assigned to the foreground object, and the HVS factor.
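A sketch of single-pass, gradient-derived HVS factors follows. The specific factor definitions below are illustrative assumptions, not the degree-of-slope and direction-of-slope parameters of the patent cited above.

```python
import numpy as np

def hvs_factors(frame, mb=16):
    """Per-macroblock brightness and spatial-activity maps from one pass
    over the pixels, using gradient magnitude as the activity measure."""
    gy, gx = np.gradient(frame.astype(float))
    magnitude = np.hypot(gx, gy)               # gradient (slope) strength
    h, w = frame.shape[0] // mb, frame.shape[1] // mb
    def per_mb(a):                              # mean over each mb x mb tile
        return a[:h * mb, :w * mb].reshape(h, mb, w, mb).mean(axis=(1, 3))
    brightness = per_mb(frame.astype(float)) / 255.0
    activity = per_mb(magnitude)
    return brightness, activity / (activity.mean() + 1e-9)

frame = np.random.default_rng(1).integers(0, 256, (64, 64))
b, s = hvs_factors(frame)
print(b.shape, s.shape)   # (4, 4) factor maps, one value per macroblock
```

These maps can then feed the macroblock complexity of equation (8) in place of statistics from a first encoding pass.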
- The foreground object can be a first foreground object, the class assigned to the foreground object can be a first class, the weight associated with the first class can be a first weight, and the quantization parameter value associated with the first foreground object can be a first quantization parameter value. A second class from among the multiple classes can be assigned to a second foreground object from the video frame. The second class can be different from the first class. A second quantization parameter value associated with the second foreground object can be derived based on at least one of a target bit rate, the number and size of objects in the scene and a second weight associated with the second class assigned to the second foreground object. The second quantization parameter value can be different from the first quantization parameter value and the second weight can be different from the first weight. Multiple DCT coefficients can be produced for pixels from the multiple pixels of the video frame associated with the second foreground object. The DCT coefficients associated with the second foreground object can be quantized based on the adjusted second quantization parameter value. The method further includes coding the quantized DCT coefficients associated with the second foreground object.
- In yet another embodiment, a method includes assigning a class from multiple classes to a foreground object from a first video frame having multiple blocks of pixels. The foreground object includes a block of pixels from the multiple blocks of pixels of the first video frame. Each class from among the multiple classes has associated therewith a coding priority. The method further includes identifying in a second video frame with multiple blocks of pixels a prediction block of pixels associated with the block of pixels in the foreground object. The identification is based on a prediction search window that has a search area associated with the coding priority of the class assigned to the foreground object. The method also includes coding the first video frame based on the identified prediction block of pixels.
- The search area of the prediction search window can be updated according to tracked motion information associated with the foreground object over multiple video frames including the first video frame. The search area of the prediction search window can be adjusted based on moving portions of the foreground object.
- The class assigned to the foreground object can be a first class. The multiple classes include a second class different from the first class. The first class can have an associated coding priority and an associated prediction search window. The second class can have an associated coding priority and an associated prediction search window. A search area of the prediction search window associated with the first class can be smaller than a search area of the prediction search window associated with the second class when the coding priority associated with the first class is lower than the coding priority associated with the second class.
- In another embodiment, a method includes tracking motion information associated with a foreground object in a first video frame having multiple blocks of pixels. The foreground object includes a block of pixels from the multiple blocks of pixels of the first video frame. The method further includes identifying in a second video frame having multiple blocks of pixels a prediction block of pixels associated with the block of pixels in the foreground object. The identifying can be based on a prediction search window having a search area associated with the tracked motion information associated with the foreground object. The method also includes coding the first video frame based on the identified prediction block of pixels.
- A class from multiple classes can be assigned to the foreground object. Each class from among the multiple classes has associated therewith a coding priority. The search area of the prediction search window can be updated according to the coding priority associated with the class assigned to the foreground object.
- In yet another embodiment, a method includes assigning a class from multiple classes to a foreground object from a picture in a group of pictures (GOP). Each class from among the multiple classes has associated therewith a coding priority. The method further includes tracking motion information associated with the foreground object over multiple pictures. The method also includes inserting an intra-frame picture in the GOP based on at least one of the tracked motion information associated with the foreground object and the coding priority associated with the class assigned to the foreground object.
- A structure associated with the GOP can be modified based on segmentation results associated with the foreground object and with the coding priority associated with the class assigned to the foreground object. A number of pictures associated with the GOP can be modified based on segmentation results and tracked motion information associated with the foreground object as well as based on the coding priority associated with the class assigned to the foreground object.
- In another embodiment, a method includes assigning a class from multiple classes to a foreground object from a picture in a GOP. Each class from among the multiple classes has associated therewith a coding priority. The method further includes tracking motion information associated with the foreground object over multiple pictures. The method also includes selectively replacing a block of pixels in the foreground object with an intra-coded block of pixels based on at least one of the tracked motion information associated with the foreground object and the coding priority associated with the class assigned to the foreground object.
- In another embodiment, a method includes segmenting a foreground object from a background of a picture in a group of pictures (GOP). Motion information associated with a block of pixels of the foreground object, a first block of pixels of the background, and a second block of pixels of the background is tracked. The block of pixels of the foreground object is encoded as an intra-coded block of pixels based on the motion information associated with the block of pixels of the foreground object. The first block of pixels of the background is encoded as a predictive-coded block of pixels based on the motion information associated with the first block of pixels of the background. The second block of pixels of the background is encoded as a skipped block of pixels based on the motion information associated with the second block of pixels of the background. In some embodiments, the tracking of motion information can include detecting motion in the first block of pixels of the background and detecting an absence of motion in the second block of pixels of the background.
- While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. For example, the scene-adaptive video encoding can include a subset of the intermediate outputs produced by the video analytics processing pipeline.
- Some embodiments include a processor and a related processor-readable medium having instructions or computer code thereon for performing various processor-implemented operations. Such processors can be implemented as hardware modules such as embedded microprocessors, microprocessors as part of a computer system, Application-Specific Integrated Circuits (“ASICs”), and Programmable Logic Devices (“PLDs”). Such processors can also be implemented as one or more software modules in programming languages such as Java, C++, C, assembly, a hardware description language, or any other suitable programming language.
- A processor according to some embodiments includes media and computer code (also can be referred to as code) specially designed and constructed for the specific purpose or purposes. Examples of processor-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (“CD/DVDs”), Compact Disc-Read Only Memories (“CD-ROMs”), and holographic devices; magneto-optical storage media such as optical disks; and read-only memory (“ROM”) and random-access memory (“RAM”) devices. Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions such as produced by a compiler, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, an embodiment of the invention can be implemented using Java, C++, or another object-oriented programming language and development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
Claims (8)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/520,121 US20220312021A1 (en) | 2008-11-17 | 2021-11-05 | Analytics-modulated coding of surveillance video |
US18/144,627 US12051212B1 (en) | 2008-11-17 | 2023-05-08 | Image analysis and motion detection using interframe coding |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11542708P | 2008-11-17 | 2008-11-17 | |
US12/620,232 US9215467B2 (en) | 2008-11-17 | 2009-11-17 | Analytics-modulated coding of surveillance video |
US14/966,083 US20160337647A1 (en) | 2008-11-17 | 2015-12-11 | Analytics-modulated coding of surveillance video |
US15/843,430 US11172209B2 (en) | 2008-11-17 | 2017-12-15 | Analytics-modulated coding of surveillance video |
US17/520,121 US20220312021A1 (en) | 2008-11-17 | 2021-11-05 | Analytics-modulated coding of surveillance video |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/843,430 Continuation US11172209B2 (en) | 2008-11-17 | 2017-12-15 | Analytics-modulated coding of surveillance video |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/144,627 Continuation US12051212B1 (en) | 2008-11-17 | 2023-05-08 | Image analysis and motion detection using interframe coding |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220312021A1 true US20220312021A1 (en) | 2022-09-29 |
Family
ID=42170411
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/620,232 Active 2031-09-09 US9215467B2 (en) | 2008-11-17 | 2009-11-17 | Analytics-modulated coding of surveillance video |
US14/966,083 Abandoned US20160337647A1 (en) | 2008-11-17 | 2015-12-11 | Analytics-modulated coding of surveillance video |
US15/843,430 Active US11172209B2 (en) | 2008-11-17 | 2017-12-15 | Analytics-modulated coding of surveillance video |
US17/520,121 Abandoned US20220312021A1 (en) | 2008-11-17 | 2021-11-05 | Analytics-modulated coding of surveillance video |
US18/144,627 Active US12051212B1 (en) | 2008-11-17 | 2023-05-08 | Image analysis and motion detection using interframe coding |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/620,232 Active 2031-09-09 US9215467B2 (en) | 2008-11-17 | 2009-11-17 | Analytics-modulated coding of surveillance video |
US14/966,083 Abandoned US20160337647A1 (en) | 2008-11-17 | 2015-12-11 | Analytics-modulated coding of surveillance video |
US15/843,430 Active US11172209B2 (en) | 2008-11-17 | 2017-12-15 | Analytics-modulated coding of surveillance video |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/144,627 Active US12051212B1 (en) | 2008-11-17 | 2023-05-08 | Image analysis and motion detection using interframe coding |
Country Status (2)
Country | Link |
---|---|
US (5) | US9215467B2 (en) |
WO (1) | WO2010057170A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12051212B1 (en) | 2008-11-17 | 2024-07-30 | Check Video LLC | Image analysis and motion detection using interframe coding |
Families Citing this family (113)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9446305B2 (en) | 2002-12-10 | 2016-09-20 | Sony Interactive Entertainment America Llc | System and method for improving the graphics performance of hosted applications |
WO2007014216A2 (en) | 2005-07-22 | 2007-02-01 | Cernium Corporation | Directed attention digital video recordation |
CN101616310B (en) * | 2009-07-17 | 2011-05-11 | 清华大学 | Target image stabilizing method of binocular vision system with variable visual angle and resolution ratio |
EP2514207A2 (en) * | 2009-12-14 | 2012-10-24 | Thomson Licensing | Object-aware video encoding strategies |
US9082278B2 (en) * | 2010-03-19 | 2015-07-14 | University-Industry Cooperation Group Of Kyung Hee University | Surveillance system |
US20110235706A1 (en) * | 2010-03-25 | 2011-09-29 | Texas Instruments Incorporated | Region of interest (roi) video encoding |
US8917765B2 (en) * | 2010-07-20 | 2014-12-23 | Vixs Systems, Inc. | Video encoding system with region detection and adaptive encoding tools and method for use therewith |
FR2965354B1 (en) * | 2010-09-28 | 2012-10-12 | France Etat Ponts Chaussees | METHOD AND DEVICE FOR DETECTING FOG, NIGHT |
US8902970B1 (en) * | 2010-12-01 | 2014-12-02 | Amazon Technologies, Inc. | Altering streaming video encoding based on user attention |
US8498444B2 (en) | 2010-12-13 | 2013-07-30 | Texas Instruments Incorporated | Blob representation in video processing |
US9282333B2 (en) * | 2011-03-18 | 2016-03-08 | Texas Instruments Incorporated | Methods and systems for masking multimedia data |
US20140307798A1 (en) * | 2011-09-09 | 2014-10-16 | Newsouth Innovations Pty Limited | Method and apparatus for communicating and recovering motion information |
US8953044B2 (en) | 2011-10-05 | 2015-02-10 | Xerox Corporation | Multi-resolution video analysis and key feature preserving video reduction strategy for (real-time) vehicle tracking and speed enforcement systems |
EP2582134A1 (en) * | 2011-10-12 | 2013-04-17 | Thomson Licensing | Saliency value determination of predictively encoded video streams |
JP6034010B2 (en) * | 2011-10-24 | 2016-11-30 | ソニー株式会社 | Encoding apparatus, encoding method, and program |
DE102012009876A1 (en) * | 2011-11-10 | 2013-05-16 | Audi Ag | Method for processing an image sequence and test device for a motor vehicle |
EP2795899A4 (en) * | 2011-12-23 | 2016-01-27 | Intel Corp | Content adaptive high precision macroblock rate control |
US10205953B2 (en) * | 2012-01-26 | 2019-02-12 | Apple Inc. | Object detection informed encoding |
US9094681B1 (en) * | 2012-02-28 | 2015-07-28 | Google Inc. | Adaptive segmentation |
WO2013148595A2 (en) * | 2012-03-26 | 2013-10-03 | Onlive, Inc. | System and method for improving the graphics performance of hosted applications |
US9317751B2 (en) * | 2012-04-18 | 2016-04-19 | Vixs Systems, Inc. | Video processing system with video to text description generation, search system and methods for use therewith |
IL219795A0 (en) | 2012-05-15 | 2012-08-30 | D V P Technologies Ltd | Detection of foreign objects in maritime environments |
US9532080B2 (en) | 2012-05-31 | 2016-12-27 | Sonic Ip, Inc. | Systems and methods for the reuse of encoding information in encoding alternative streams of video data |
US11284133B2 (en) * | 2012-07-10 | 2022-03-22 | Avago Technologies International Sales Pte. Limited | Real-time video coding system of multiple temporally scaled video and of multiple profile and standards based on shared video coding information |
FR3000350A1 (en) * | 2012-12-21 | 2014-06-27 | France Telecom | METHOD AND DEVICE FOR TRANSMITTING AN IMAGE SEQUENCE, METHOD AND DEVICE FOR RECEIVING, CORRESPONDING COMPUTER PROGRAM AND RECORDING MEDIUM. |
US9357210B2 (en) | 2013-02-28 | 2016-05-31 | Sonic Ip, Inc. | Systems and methods of encoding multiple video streams for adaptive bitrate streaming |
US10097851B2 (en) * | 2014-03-10 | 2018-10-09 | Euclid Discoveries, Llc | Perceptual optimization for model-based video encoding |
US10091507B2 (en) * | 2014-03-10 | 2018-10-02 | Euclid Discoveries, Llc | Perceptual optimization for model-based video encoding |
US20150264357A1 (en) * | 2014-03-11 | 2015-09-17 | Stmicroelectronics S.R.L. | Method and system for encoding digital images, corresponding apparatus and computer program product |
US9589363B2 (en) * | 2014-03-25 | 2017-03-07 | Intel Corporation | Object tracking in encoded video streams |
FR3022095B1 (en) | 2014-06-06 | 2017-09-01 | Daniel Elalouf | DEVICE AND METHOD FOR TRANSMITTING MULTIMEDIA DATA |
TWI586176B (en) * | 2014-10-01 | 2017-06-01 | 大猩猩科技股份有限公司 | Method and system for video synopsis from compressed video images |
CN105554436B (en) * | 2014-10-31 | 2018-12-18 | 鸿富锦精密工业(深圳)有限公司 | monitoring device and dynamic object monitoring method |
TWI594211B (en) * | 2014-10-31 | 2017-08-01 | 鴻海精密工業股份有限公司 | Monitor device and method for monitoring moving object |
US10121080B2 (en) * | 2015-01-15 | 2018-11-06 | vClick3d, Inc. | Systems and methods for controlling the recording, storing and transmitting of video surveillance content |
US10825310B2 (en) * | 2015-01-15 | 2020-11-03 | vClick3d, Inc. | 3D monitoring of sensors physical location in a reduced bandwidth platform |
EP3239946A1 (en) * | 2015-03-16 | 2017-11-01 | Axis AB | Method and system for generating an event video se-quence, and camera comprising such system |
USD803241S1 (en) | 2015-06-14 | 2017-11-21 | Google Inc. | Display screen with animated graphical user interface for an alert screen |
US9361011B1 (en) | 2015-06-14 | 2016-06-07 | Google Inc. | Methods and systems for presenting multiple live video feeds in a user interface |
US10133443B2 (en) | 2015-06-14 | 2018-11-20 | Google Llc | Systems and methods for smart home automation using a multifunction status and entry point icon |
USD812076S1 (en) | 2015-06-14 | 2018-03-06 | Google Llc | Display screen with graphical user interface for monitoring remote video camera |
EP3319317B1 (en) * | 2015-07-30 | 2021-04-28 | Huawei Technologies Co., Ltd. | Video encoding and decoding method and device |
CN106470341B (en) * | 2015-08-17 | 2020-10-02 | 恩智浦美国有限公司 | Media display system |
US10951914B2 (en) * | 2015-08-27 | 2021-03-16 | Intel Corporation | Reliable large group of pictures (GOP) file streaming to wireless displays |
CN105721740B (en) * | 2016-01-25 | 2018-12-07 | 四川长虹电器股份有限公司 | The compensation method of flat panel TV moving image |
JP6665611B2 (en) * | 2016-03-18 | 2020-03-13 | 富士通株式会社 | Encoding processing program, encoding processing method, and encoding processing device |
US20170359575A1 (en) * | 2016-06-09 | 2017-12-14 | Apple Inc. | Non-Uniform Digital Image Fidelity and Video Coding |
US10783397B2 (en) * | 2016-06-29 | 2020-09-22 | Intel Corporation | Network edge device with image thresholding |
US10666909B2 (en) * | 2016-06-29 | 2020-05-26 | Intel Corporation | Methods and apparatus to perform remote monitoring |
USD882583S1 (en) | 2016-07-12 | 2020-04-28 | Google Llc | Display screen with graphical user interface |
US9916493B2 (en) | 2016-08-03 | 2018-03-13 | At&T Intellectual Property I, L.P. | Method and system for aggregating video content |
US10452951B2 (en) * | 2016-08-26 | 2019-10-22 | Goodrich Corporation | Active visual attention models for computer vision tasks |
JP6884856B2 (en) | 2016-09-26 | 2021-06-09 | ドルビー ラボラトリーズ ライセンシング コーポレイション | Streaming of video data based on content |
CN109496431A (en) * | 2016-10-13 | 2019-03-19 | 富士通株式会社 | Image coding/decoding method, device and image processing equipment |
WO2018072675A1 (en) | 2016-10-18 | 2018-04-26 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for video processing |
US10386999B2 (en) | 2016-10-26 | 2019-08-20 | Google Llc | Timeline-video relationship presentation for alert events |
US11238290B2 (en) | 2016-10-26 | 2022-02-01 | Google Llc | Timeline-video relationship processing for alert events |
USD843398S1 (en) | 2016-10-26 | 2019-03-19 | Google Llc | Display screen with graphical user interface for a timeline-video relationship presentation for alert events |
EP3328051B1 (en) | 2016-11-29 | 2019-01-02 | Axis AB | Method for controlling an infrared cut filter of a video camera |
KR102636099B1 (en) | 2016-12-22 | 2024-02-13 | 삼성전자주식회사 | Apparatus and method for encoding video adjusting quantization parameter |
US10999602B2 (en) | 2016-12-23 | 2021-05-04 | Apple Inc. | Sphere projected motion estimation/compensation and mode decision |
US10405003B2 (en) | 2017-01-20 | 2019-09-03 | Google Llc | Image compression based on semantic relevance |
US11259046B2 (en) | 2017-02-15 | 2022-02-22 | Apple Inc. | Processing of equirectangular object data to compensate for distortion by spherical projections |
US10924747B2 (en) | 2017-02-27 | 2021-02-16 | Apple Inc. | Video coding techniques for multi-view video |
EP3376766B1 (en) | 2017-03-14 | 2019-01-30 | Axis AB | Method and encoder system for determining gop length for encoding video |
US10728616B2 (en) * | 2017-04-19 | 2020-07-28 | Intel Corporation | User interest-based enhancement of media quality |
EP3396954A1 (en) * | 2017-04-24 | 2018-10-31 | Axis AB | Video camera and method for controlling output bitrate of a video encoder |
EP3396961A1 (en) | 2017-04-24 | 2018-10-31 | Axis AB | Method and rate controller for controlling output bitrate of a video encoder |
US10683962B2 (en) | 2017-05-25 | 2020-06-16 | Google Llc | Thermal management for a compact electronic device |
US10819921B2 (en) | 2017-05-25 | 2020-10-27 | Google Llc | Camera assembly having a single-piece cover element |
US10972685B2 (en) | 2017-05-25 | 2021-04-06 | Google Llc | Video camera assembly having an IR reflector |
US11093752B2 (en) | 2017-06-02 | 2021-08-17 | Apple Inc. | Object tracking in multi-view video |
US10754242B2 (en) | 2017-06-30 | 2020-08-25 | Apple Inc. | Adaptive resolution and projection format in multi-direction video |
US11330804B2 (en) | 2017-08-07 | 2022-05-17 | The Jackson Laboratory | Long-term and continuous animal behavioral monitoring |
US10586302B1 (en) * | 2017-08-23 | 2020-03-10 | Meta View, Inc. | Systems and methods to generate an environmental record for an interactive space |
KR102543444B1 (en) | 2017-08-29 | 2023-06-13 | 삼성전자주식회사 | Video encoding apparatus |
US10582147B2 (en) | 2017-12-28 | 2020-03-03 | Ademco Inc. | Systems and methods for intelligently recording video data streams |
CN110324622B (en) | 2018-03-28 | 2022-09-23 | 腾讯科技(深圳)有限公司 | Video coding rate control method, device, equipment and storage medium |
CN108900849B (en) * | 2018-07-26 | 2021-07-27 | 苏州科达科技股份有限公司 | Video data encryption method, system, device and storage medium |
CN109005409B (en) * | 2018-07-27 | 2021-04-09 | 浙江工业大学 | Intelligent video coding method based on target detection and tracking |
KR102090785B1 (en) * | 2018-07-30 | 2020-03-18 | 이노뎁 주식회사 | syntax-based method of providing inter-operative processing with video analysis system of compressed video |
US10893281B2 (en) * | 2018-10-12 | 2021-01-12 | International Business Machines Corporation | Compression of a video stream having frames with relatively heightened quality parameters on blocks on an identified point of interest (PoI) |
CN110072103A (en) * | 2019-03-15 | 2019-07-30 | 西安电子科技大学 | Video Fast Compression method, HD video system, 4K video system based on ROI |
US10659848B1 (en) | 2019-03-21 | 2020-05-19 | International Business Machines Corporation | Display overlays for prioritization of video subjects |
US11223838B2 (en) | 2019-05-26 | 2022-01-11 | Alibaba Group Holding Limited | AI-assisted programmable hardware video codec |
FR3102026B1 (en) * | 2019-10-14 | 2022-06-10 | Awecom Inc | SEMANTICALLY SEGMENTED VIDEO IMAGE COMPRESSION |
US10999582B1 (en) * | 2019-10-14 | 2021-05-04 | Awecom, Inc. | Semantically segmented video image compression |
US11496770B2 (en) * | 2019-12-19 | 2022-11-08 | Apple Inc. | Media object compression/decompression with adaptive processing for block-level sub-errors and/or decomposed block-level sub-errors |
US11430136B2 (en) * | 2019-12-19 | 2022-08-30 | Intel Corporation | Methods and apparatus to improve efficiency of object tracking in video frames |
FR3105905B1 (en) * | 2019-12-26 | 2022-12-16 | Thales Sa | Stream transmission and reception methods, devices and computer program thereof |
KR20210092588A (en) * | 2020-01-16 | 2021-07-26 | 삼성전자주식회사 | Image processing apparatus and method thereof |
CN113301337A (en) * | 2020-02-24 | 2021-08-24 | 北京三星通信技术研究有限公司 | Coding and decoding method and device |
CN111464834B (en) * | 2020-04-07 | 2023-04-07 | 腾讯科技(深圳)有限公司 | Video frame processing method and device, computing equipment and storage medium |
US11558548B2 (en) * | 2020-05-04 | 2023-01-17 | Ademco Inc. | Systems and methods for encoding regions containing an element of interest in a sequence of images with a high resolution |
WO2021248349A1 (en) | 2020-06-10 | 2021-12-16 | Plantronics, Inc. | Combining high-quality foreground with enhanced low-quality background |
CN111479112B (en) * | 2020-06-23 | 2020-11-03 | 腾讯科技(深圳)有限公司 | Video coding method, device, equipment and storage medium |
US11711638B2 (en) | 2020-06-29 | 2023-07-25 | The Nielsen Company (Us), Llc | Audience monitoring systems and related methods |
CN111901603B (en) * | 2020-07-28 | 2023-06-02 | 上海工程技术大学 | Coding method and decoding method for static background video |
CN112004114B (en) * | 2020-08-31 | 2022-07-05 | 广州市百果园信息技术有限公司 | Video processing method and device, readable storage medium and electronic equipment |
EP4009635A1 (en) * | 2020-12-07 | 2022-06-08 | Axis AB | Method and system for producing streams of image frames |
CN114650421A (en) * | 2020-12-18 | 2022-06-21 | 中兴通讯股份有限公司 | Video processing method and device, electronic equipment and storage medium |
US11800056B2 (en) | 2021-02-11 | 2023-10-24 | Logitech Europe S.A. | Smart webcam system |
US11800048B2 (en) | 2021-02-24 | 2023-10-24 | Logitech Europe S.A. | Image generating system with background replacement or modification capabilities |
US11496738B1 (en) * | 2021-03-24 | 2022-11-08 | Amazon Technologies, Inc. | Optimized reduced bitrate encoding for titles and credits in video content |
US20220335656A1 (en) | 2021-04-14 | 2022-10-20 | Tencent America LLC | Adaptive neural image compression with smooth quality control by meta-learning |
CN113132757B (en) * | 2021-04-21 | 2022-07-05 | 北京汇钧科技有限公司 | Data processing method and device |
CN113079375B (en) * | 2021-06-03 | 2022-03-08 | 浙江智慧视频安防创新中心有限公司 | Method and device for determining video coding and decoding priority order based on correlation comparison |
US11860704B2 (en) | 2021-08-16 | 2024-01-02 | The Nielsen Company (Us), Llc | Methods and apparatus to determine user presence |
WO2023055266A1 (en) * | 2021-09-28 | 2023-04-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Rate-control using machine vision performance |
US11758223B2 (en) | 2021-12-23 | 2023-09-12 | The Nielsen Company (Us), Llc | Apparatus, systems, and methods for user presence detection for audience monitoring |
US11997283B2 (en) * | 2022-03-30 | 2024-05-28 | Sony Group Corporation | Machine learning based content-aware image frame encoding |
WO2023203509A1 (en) | 2022-04-19 | 2023-10-26 | Instituto De Telecomunicações | Image data compression method and device using segmentation and classification |
US12088882B2 (en) | 2022-08-26 | 2024-09-10 | The Nielsen Company (Us), Llc | Systems, apparatus, and related methods to estimate audience exposure based on engagement level |
Family Cites Families (126)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2049273A1 (en) | 1990-10-25 | 1992-04-26 | Cindy E. Daniell | Self adaptive hierarchical target identification and recognition neural network |
US5579471A (en) | 1992-11-09 | 1996-11-26 | International Business Machines Corporation | Image query system and method |
US6182069B1 (en) | 1992-11-09 | 2001-01-30 | International Business Machines Corporation | Video query system and method |
EP0669034B1 (en) | 1992-11-10 | 1997-01-15 | Siemens Aktiengesellschaft | Process for detecting and eliminating the shadow of moving objects in a sequence of digital images |
WO1994017636A1 (en) | 1993-01-29 | 1994-08-04 | Bell Communications Research, Inc. | Automatic tracking camera control system |
KR100292138B1 (en) * | 1993-07-12 | 2002-06-20 | 이데이 노부유끼 | Transmitter and Receiver for Digital Video Signal |
US5434927A (en) | 1993-12-08 | 1995-07-18 | Minnesota Mining And Manufacturing Company | Method and apparatus for machine vision classification and tracking |
US6122411A (en) | 1994-02-16 | 2000-09-19 | Apple Computer, Inc. | Method and apparatus for storing high and low resolution images in an imaging device |
US6628887B1 (en) | 1998-04-17 | 2003-09-30 | Honeywell International, Inc. | Video security system |
GB2305061B (en) | 1994-07-26 | 1998-12-09 | Maxpro Systems Pty Ltd | Text insertion system |
US5455561A (en) | 1994-08-02 | 1995-10-03 | Brown; Russell R. | Automatic security monitor reporter |
EP1098527A1 (en) | 1994-11-04 | 2001-05-09 | Matsushita Electric Industrial Co., Ltd. | Picture coding apparatus and decoding apparatus |
CA2155719C (en) | 1994-11-22 | 2005-11-01 | Terry Laurence Glatt | Video surveillance system with pilot and slave cameras |
KR960028217A (en) | 1994-12-22 | 1996-07-22 | 엘리 웨이스 | Motion Detection Camera System and Method |
JP3258840B2 (en) | 1994-12-27 | 2002-02-18 | シャープ株式会社 | Video encoding device and region extraction device |
US5886743A (en) | 1994-12-28 | 1999-03-23 | Hyundai Electronics Industries Co. Ltd. | Object-by information coding apparatus and method thereof for MPEG-4 picture instrument |
US5689442A (en) | 1995-03-22 | 1997-11-18 | Witness Systems, Inc. | Event surveillance system |
US5850352A (en) | 1995-03-31 | 1998-12-15 | The Regents Of The University Of California | Immersive video, including video hypermosaicing to generate from multiple video views of a scene a three-dimensional video mosaic from which diverse virtual video scene images are synthesized, including panoramic, scene interactive and stereoscopic images |
US6415057B1 (en) * | 1995-04-07 | 2002-07-02 | Sony Corporation | Method and apparatus for selective control of degree of picture compression |
US5724475A (en) | 1995-05-18 | 1998-03-03 | Kirsten; Jeff P. | Compressed digital video reload and playback system |
JP3309642B2 (en) | 1995-05-31 | 2002-07-29 | ソニー株式会社 | Image information recording method and system |
US7386372B2 (en) | 1995-06-07 | 2008-06-10 | Automotive Technologies International, Inc. | Apparatus and method for determining presence of objects in a vehicle |
US5809200A (en) | 1995-08-07 | 1998-09-15 | Victor Company Of Japan, Ltd. | Video signal recording apparatus |
US5959672A (en) | 1995-09-29 | 1999-09-28 | Nippondenso Co., Ltd. | Picture signal encoding system, picture signal decoding system and picture recognition system |
JP3788823B2 (en) * | 1995-10-27 | 2006-06-21 | 株式会社東芝 | Moving picture encoding apparatus and moving picture decoding apparatus |
US5825413A (en) | 1995-11-01 | 1998-10-20 | Thomson Consumer Electronics, Inc. | Infrared surveillance system with controlled video recording |
GB2308262B (en) | 1995-12-16 | 1999-08-04 | Paul Gordon Wilkins | Method for analysing the content of a video signal |
US6148030A (en) | 1996-02-07 | 2000-11-14 | Sharp Kabushiki Kaisha | Motion picture coding and decoding apparatus |
US5982418A (en) | 1996-04-22 | 1999-11-09 | Sensormatic Electronics Corporation | Distributed video data storage in video surveillance system |
JP3202606B2 (en) | 1996-07-23 | 2001-08-27 | キヤノン株式会社 | Imaging server and its method and medium |
KR100501902B1 (en) | 1996-09-25 | 2005-10-10 | 주식회사 팬택앤큐리텔 | Image information encoding / decoding apparatus and method |
US6055330A (en) * | 1996-10-09 | 2000-04-25 | The Trustees Of Columbia University In The City Of New York | Methods and apparatus for performing digital image and video segmentation and compression using 3-D depth information |
KR100211055B1 (en) * | 1996-10-28 | 1999-07-15 | 정선종 | Scarable transmitting method for divided image objects based on content |
US6031573A (en) | 1996-10-31 | 2000-02-29 | Sensormatic Electronics Corporation | Intelligent video information management system performing multiple functions in parallel |
US5969764A (en) * | 1997-02-14 | 1999-10-19 | Mitsubishi Electric Information Technology Center America, Inc. | Adaptive video coding method |
US6249613B1 (en) * | 1997-03-31 | 2001-06-19 | Sharp Laboratories Of America, Inc. | Mosaic generation and sprite-based coding with automatic foreground and background separation |
ATE208114T1 (en) | 1997-06-04 | 2001-11-15 | Ascom Systec Ag | Method for monitoring a specified monitoring area
US6215505B1 (en) | 1997-06-20 | 2001-04-10 | Nippon Telegraph And Telephone Corporation | Scheme for interactive video manipulation and display of moving object on background image |
JP3870491B2 (en) | 1997-07-02 | 2007-01-17 | 松下電器産業株式会社 | Inter-image correspondence detection method and apparatus |
US6573907B1 (en) | 1997-07-03 | 2003-06-03 | Obvious Technology | Network distribution and management of interactive video and multi-media containers |
US6233356B1 (en) | 1997-07-08 | 2001-05-15 | At&T Corp. | Generalized scalability for video coder based on video objects |
KR100251051B1 (en) | 1997-07-14 | 2000-04-15 | 윤종용 | An arbitrary shape coding method |
US6097429A (en) | 1997-08-01 | 2000-08-01 | Esco Electronics Corporation | Site control unit for video security system |
US6069655A (en) | 1997-08-01 | 2000-05-30 | Wells Fargo Alarm Services, Inc. | Advanced video security system |
EP0903742B1 (en) | 1997-09-17 | 2003-03-19 | Matsushita Electric Industrial Co., Ltd | Video data editing apparatus, optical disc for use as a recording medium of a video data editing apparatus, and computer-readable recording medium |
JP3528548B2 (en) | 1997-11-18 | 2004-05-17 | トヨタ自動車株式会社 | Vehicle moving image processing method and vehicle moving image processing device |
JP4261630B2 (en) * | 1998-02-04 | 2009-04-30 | キヤノン株式会社 | Image encoding apparatus and method, and computer-readable recording medium on which an image encoding program is recorded |
US7196720B2 (en) | 1998-03-06 | 2007-03-27 | Intel Corporation | Method and apparatus for powering on an electronic device with a video camera that detects motion |
DE69937816T2 (en) * | 1998-04-28 | 2008-12-24 | Canon K.K. | Data processing device and method |
US7630570B1 (en) * | 1998-05-06 | 2009-12-08 | At&T Intellectual Property Ii, L.P. | Method and apparatus to prioritize video information during coding and decoding |
US6826228B1 (en) | 1998-05-12 | 2004-11-30 | Stmicroelectronics Asia Pacific (Pte) Ltd. | Conditional masking for video encoder |
US7576770B2 (en) | 2003-02-11 | 2009-08-18 | Raymond Metzger | System for a plurality of video cameras disposed on a common network |
US6542621B1 (en) | 1998-08-31 | 2003-04-01 | Texas Instruments Incorporated | Method of dealing with occlusion when tracking multiple objects and people in video sequences |
US6301386B1 (en) | 1998-12-09 | 2001-10-09 | Ncr Corporation | Methods and apparatus for gray image based text identification |
US6233226B1 (en) * | 1998-12-14 | 2001-05-15 | Verizon Laboratories Inc. | System and method for analyzing and transmitting video over a switched network |
KR20010108159A (en) * | 1999-01-29 | 2001-12-07 | 다니구찌 이찌로오, 기타오카 다카시 | Method of image feature encoding and method of image search |
US6493022B1 (en) | 1999-03-05 | 2002-12-10 | Biscom, Inc. | Security system for notification of an undesired condition at a monitored area with minimized false alarms |
US6330025B1 (en) | 1999-05-10 | 2001-12-11 | Nice Systems Ltd. | Digital video logging system |
US20040075738A1 (en) | 1999-05-12 | 2004-04-22 | Sean Burke | Spherical surveillance system architecture |
JP3531532B2 (en) * | 1999-05-18 | 2004-05-31 | 日本電気株式会社 | Video encoding apparatus and method |
US6591006B1 (en) | 1999-06-23 | 2003-07-08 | Electronic Data Systems Corporation | Intelligent image recording system and method |
US6437819B1 (en) | 1999-06-25 | 2002-08-20 | Rohan Christopher Loveland | Automated video person tracking system |
US6879705B1 (en) | 1999-07-14 | 2005-04-12 | Sarnoff Corporation | Method and apparatus for tracking multiple objects in a video sequence |
US6707486B1 (en) | 1999-12-15 | 2004-03-16 | Advanced Technology Video, Inc. | Directional motion estimator |
GB0001591D0 (en) | 2000-01-24 | 2000-03-15 | Technical Casino Services Ltd | Casino video security system |
US6940998B2 (en) * | 2000-02-04 | 2005-09-06 | Cernium, Inc. | System for automated screening of security cameras |
US7307652B2 (en) | 2000-03-10 | 2007-12-11 | Sensormatic Electronics Corporation | Method and apparatus for object tracking and detection |
US6901110B1 (en) * | 2000-03-10 | 2005-05-31 | Obvious Technology | Systems and methods for tracking objects in video sequences |
US20010046262A1 (en) | 2000-03-10 | 2001-11-29 | Freda Robert M. | System and method for transmitting a broadcast television signal over broadband digital transmission channels |
US6563874B1 (en) * | 2000-06-23 | 2003-05-13 | Hitachi America, Ltd. | Fast search method for motion estimation |
US6504479B1 (en) | 2000-09-07 | 2003-01-07 | Comtrak Technologies Llc | Integrated security system |
US6680745B2 (en) | 2000-11-10 | 2004-01-20 | Perceptive Network Technologies, Inc. | Videoconferencing method with tracking of face and dynamic bandwidth allocation |
US7020335B1 (en) | 2000-11-21 | 2006-03-28 | General Dynamics Decision Systems, Inc. | Methods and apparatus for object recognition and compression |
US8374237B2 (en) * | 2001-03-02 | 2013-02-12 | Dolby Laboratories Licensing Corporation | High precision encoding and decoding of video images |
US7173650B2 (en) | 2001-03-28 | 2007-02-06 | Koninklijke Philips Electronics N.V. | Method for assisting an automated video tracking system in reaquiring a target |
US6771306B2 (en) | 2001-03-28 | 2004-08-03 | Koninklijke Philips Electronics N.V. | Method for selecting a target in an automated video tracking system |
CN1554193A (en) | 2001-07-25 | 2004-12-08 | | A camera control apparatus and method
US7940299B2 (en) | 2001-08-09 | 2011-05-10 | Technest Holdings, Inc. | Method and apparatus for an omni-directional video surveillance system |
US6980485B2 (en) | 2001-10-25 | 2005-12-27 | Polycom, Inc. | Automatic camera tracking using beamforming |
US7650058B1 (en) | 2001-11-08 | 2010-01-19 | Cernium Corporation | Object selective video recording |
US20040064838A1 (en) | 2002-01-08 | 2004-04-01 | Lykke Olesen | Method and device for viewing a live performance |
US20060165386A1 (en) | 2002-01-08 | 2006-07-27 | Cernium, Inc. | Object selective video recording |
JP3870124B2 (en) | 2002-06-14 | 2007-01-17 | キヤノン株式会社 | Image processing apparatus and method, computer program, and computer-readable storage medium |
US20040022322A1 (en) * | 2002-07-19 | 2004-02-05 | Meetrix Corporation | Assigning prioritization during encode of independently compressed objects |
US6839067B2 (en) | 2002-07-26 | 2005-01-04 | Fuji Xerox Co., Ltd. | Capturing and producing shared multi-resolution video |
US7321386B2 (en) | 2002-08-01 | 2008-01-22 | Siemens Corporate Research, Inc. | Robust stereo-driven video-based surveillance |
US20040143602A1 (en) | 2002-10-18 | 2004-07-22 | Antonio Ruiz | Apparatus, system and method for automated and adaptive digital image/video surveillance for events and configurations using a rich multimedia relational database |
US7385626B2 (en) | 2002-10-21 | 2008-06-10 | Sarnoff Corporation | Method and system for performing surveillance |
US7680393B2 (en) | 2002-11-13 | 2010-03-16 | Sony Corporation | Content editing assistance system, video processing apparatus, playback apparatus, editing apparatus, computer program, and content processing method |
US20040100563A1 (en) | 2002-11-27 | 2004-05-27 | Sezai Sablak | Video tracking system and method |
US7151454B2 (en) | 2003-01-02 | 2006-12-19 | Covi Technologies | Systems and methods for location of objects |
US6954501B2 (en) * | 2003-02-17 | 2005-10-11 | Xvd Corporation | Method and apparatus for object based motion compensation |
US20040186813A1 (en) | 2003-02-26 | 2004-09-23 | Tedesco Daniel E. | Image analysis method and apparatus in a network that is structured with multiple layers and differentially weighted neurons |
WO2004088858A2 (en) * | 2003-03-29 | 2004-10-14 | Regents Of University Of California | Method and apparatus for improved data transmission |
US7528881B2 (en) | 2003-05-02 | 2009-05-05 | Grandeye, Ltd. | Multiple object processing in wide-angle video camera |
JP2004354420A (en) * | 2003-05-27 | 2004-12-16 | Fuji Photo Film Co Ltd | Automatic photographing system |
US7956889B2 (en) | 2003-06-04 | 2011-06-07 | Model Software Corporation | Video surveillance system |
KR20050000276A (en) | 2003-06-24 | 2005-01-03 | 주식회사 성진씨앤씨 | Virtual joystick system for controlling the operation of a security camera and controlling method thereof |
US7428000B2 (en) | 2003-06-26 | 2008-09-23 | Microsoft Corp. | System and method for distributed meetings |
US20050012817A1 (en) | 2003-07-15 | 2005-01-20 | International Business Machines Corporation | Selective surveillance system with active sensor management policies |
US7525570B2 (en) | 2003-07-17 | 2009-04-28 | Igt | Security camera interface |
US20050104958A1 (en) | 2003-11-13 | 2005-05-19 | Geoffrey Egnal | Active camera video-based surveillance systems and methods |
US7106193B2 (en) | 2003-12-23 | 2006-09-12 | Honeywell International, Inc. | Integrated alarm detection and verification device |
US7447331B2 (en) | 2004-02-24 | 2008-11-04 | International Business Machines Corporation | System and method for generating a viewable video index for low bandwidth applications |
JP4819380B2 (en) | 2004-03-23 | 2011-11-24 | キヤノン株式会社 | Surveillance system, imaging setting device, control method, and program |
WO2005096215A1 (en) * | 2004-03-24 | 2005-10-13 | Cernium, Inc. | Improvement in video analysis using segmentation gain by area |
EP1762114B1 (en) | 2004-05-24 | 2015-11-04 | Google, Inc. | Location based access control in a wireless network |
US7447337B2 (en) | 2004-10-25 | 2008-11-04 | Hewlett-Packard Development Company, L.P. | Video content understanding through real time video motion analysis |
EP1825684A1 (en) * | 2004-12-10 | 2007-08-29 | Koninklijke Philips Electronics N.V. | Wireless video streaming using single layer coding and prioritized streaming |
US7457433B2 (en) | 2005-01-20 | 2008-11-25 | International Business Machines Corporation | System and method for analyzing video from non-static camera |
GB0502371D0 (en) * | 2005-02-04 | 2005-03-16 | British Telecomm | Identifying spurious regions in a video frame |
US7944469B2 (en) | 2005-02-14 | 2011-05-17 | Vigilos, Llc | System and method for using self-learning rules to enable adaptive security monitoring |
AR052601A1 (en) * | 2005-03-10 | 2007-03-21 | Qualcomm Inc | Classification of contents for multimedia processing
WO2007014216A2 (en) * | 2005-07-22 | 2007-02-01 | Cernium Corporation | Directed attention digital video recordation |
US8019170B2 (en) * | 2005-10-05 | 2011-09-13 | Qualcomm, Incorporated | Video frame motion-based automatic region-of-interest detection |
US7437755B2 (en) | 2005-10-26 | 2008-10-14 | Cisco Technology, Inc. | Unified network and physical premises access control server |
KR100750138B1 (en) * | 2005-11-16 | 2007-08-21 | 삼성전자주식회사 | Method and apparatus for image encoding and decoding considering the characteristic of human visual system |
US7672524B2 (en) * | 2006-03-02 | 2010-03-02 | Mitsubishi Electric Research Laboratories, Inc. | Quality control for image transcoding |
US8077769B2 (en) * | 2006-03-28 | 2011-12-13 | Sony Corporation | Method of reducing computations in transform and scaling processes in a digital video encoder using a threshold-based approach |
US8848053B2 (en) * | 2006-03-28 | 2014-09-30 | Objectvideo, Inc. | Automatic extraction of secondary video streams |
WO2007130425A2 (en) * | 2006-05-01 | 2007-11-15 | Georgia Tech Research Corporation | Expert system and method for elastic encoding of video according to regions of interest |
WO2008057285A2 (en) * | 2006-10-27 | 2008-05-15 | Vidient Systems, Inc. | An apparatus for image capture with automatic and manual field of interest processing with a multi-resolution camera |
US20080181507A1 (en) * | 2007-01-29 | 2008-07-31 | Intellivision Technologies Corp. | Image manipulation for videos and still images |
FR2914124B1 (en) * | 2007-03-21 | 2009-08-28 | Assistance Tech Et Etude De Ma | Method and device for controlling the rate of encoding video picture sequences to a target rate
US20080279279A1 (en) | 2007-05-09 | 2008-11-13 | Wenjin Liu | Content adaptive motion compensated temporal filter for video pre-processing |
WO2010057170A1 (en) | 2008-11-17 | 2010-05-20 | Cernium Corporation | Analytics-modulated coding of surveillance video |
2009
- 2009-11-17 WO PCT/US2009/064759 patent/WO2010057170A1/en active Application Filing
- 2009-11-17 US US12/620,232 patent/US9215467B2/en active Active
2015
- 2015-12-11 US US14/966,083 patent/US20160337647A1/en not_active Abandoned
2017
- 2017-12-15 US US15/843,430 patent/US11172209B2/en active Active
2021
- 2021-11-05 US US17/520,121 patent/US20220312021A1/en not_active Abandoned
2023
- 2023-05-08 US US18/144,627 patent/US12051212B1/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060140279A1 (en) * | 1996-08-15 | 2006-06-29 | Tokumichi Murakami | Image coding apparatus with segment classification and segmentation-type motion prediction circuit |
US6356664B1 (en) * | 1999-02-24 | 2002-03-12 | International Business Machines Corporation | Selective reduction of video data using variable sampling rates based on importance within the image |
US20030128298A1 (en) * | 2002-01-08 | 2003-07-10 | Samsung Electronics Co., Ltd. | Method and apparatus for color-based object tracking in video sequences |
US20050175251A1 (en) * | 2004-02-09 | 2005-08-11 | Sanyo Electric Co., Ltd. | Image coding apparatus, image decoding apparatus, image display apparatus and image processing apparatus |
US20090087027A1 (en) * | 2007-09-27 | 2009-04-02 | John Eric Eaton | Estimator identifier component for behavioral recognition system |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US12051212B1 (en) | 2008-11-17 | 2024-07-30 | Check Video LLC | Image analysis and motion detection using interframe coding |
Also Published As
Publication number | Publication date |
---|---|
US20100124274A1 (en) | 2010-05-20 |
US11172209B2 (en) | 2021-11-09 |
US9215467B2 (en) | 2015-12-15 |
US12051212B1 (en) | 2024-07-30 |
WO2010057170A1 (en) | 2010-05-20 |
US20160337647A1 (en) | 2016-11-17 |
US20180139456A1 (en) | 2018-05-17 |
Similar Documents
Publication | Title |
---|---|
US12051212B1 (en) | Image analysis and motion detection using interframe coding |
Babu et al. | A survey on compressed domain video analysis techniques | |
US8750372B2 (en) | Treating video information | |
US9426475B2 (en) | Scene change detection using sum of variance and estimated picture encoding cost | |
Doulamis et al. | Low bit-rate coding of image sequences using adaptive regions of interest | |
US8139883B2 (en) | System and method for image and video encoding artifacts reduction and quality improvement | |
US7894531B1 (en) | Method of compression for wide angle digital video | |
US20100303150A1 (en) | System and method for cartoon compression | |
US8179961B2 (en) | Method and apparatus for adapting a default encoding of a digital video signal during a scene change period | |
Poppe et al. | Moving object detection in the H.264/AVC compressed domain for video surveillance applications |
Gao et al. | The IEEE 1857 standard: Empowering smart video surveillance systems | |
EP1022667A2 (en) | Methods of feature extraction of video sequences | |
US11095899B2 (en) | Image processing apparatus, image processing method, and storage medium | |
US7031388B2 (en) | System for and method of sharpness enhancement for coded digital video | |
KR101149522B1 (en) | Apparatus and method for detecting scene change | |
GB2459671A (en) | Scene Change Detection For Use With Bit-Rate Control Of A Video Compression System | |
US20060109902A1 (en) | Compressed domain temporal segmentation of video sequences | |
Tong et al. | Human centered perceptual adaptation for video coding | |
Yang et al. | MPEG-7 descriptors based shot detection and adaptive initial quantization parameter estimation for the H.264/AVC |
Kim et al. | Moving Object Detection Using Syntax Elements | |
Perera et al. | Evaluation of compression schemes for wide area video | |
WO1999059342A1 (en) | Method and system for MPEG-2 encoding with frame partitioning |
Grecos et al. | An improved rate control algorithm based on a novel shot detection scheme for the H.264/AVC standard |
Akram | Surveillance centric coding | |
JP2002369206A (en) | Device and method for selective encoding of dynamic region and static region |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment
Owner name: CHECKVIDEO LLC, VIRGINIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CERNIUM CORPORATION;REEL/FRAME:058058/0118
Effective date: 20130618
Owner name: CERNIUM CORPORATION, VIRGINIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHEOK, LAI-TEE;GAGVANI, NIKHIL;REEL/FRAME:058058/0109
Effective date: 20091203
STPP | Information on status: patent application and granting procedure in general
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general
Free format text: NON FINAL ACTION MAILED
STCB | Information on status: application discontinuation
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION