US20120057633A1 - Video Classification Systems and Methods - Google Patents
- Publication number
- US20120057633A1 (application number US13/225,202)
- Authority: United States
- Prior art keywords
- macroblock
- distortion
- frame
- encoding
- video
- Legal status: Abandoned
Classifications
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/115—Selection of the code volume for a coding unit prior to coding
- H04N19/124—Quantisation
- H04N19/164—Feedback from the receiver or from the transmission channel
- H04N19/198—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters including smoothing of a sequence of encoding parameters, e.g. by averaging, by choice of the maximum, minimum or median value
- H04N19/51—Motion estimation or motion compensation
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H04N5/145—Movement estimation
- a ρ-domain deviation metric θ may be defined as a recursive weighted ratio between the theoretical NZ_QP curve and the actual NZ_QP curve. Normalized θ typically fluctuates around 1.0. A value of θ smaller than 1.0 can indicate that the actual encoded bit rate is larger than expected, implying that more complicated motion contextual content has been encountered. In contrast, a value of θ larger than 1.0 indicates that the actual encoded bit rate is smaller than expected, implying that smoother motion content has been encountered. Therefore, the ρ-domain deviation θ can be used as an indicator to classify video content into high, medium, medium-low and low motion complexity categories. Based on the motion complexity classification, a fast mode decision algorithm can be employed.
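As a rough sketch of how θ might be maintained and used: the exponential-style recursion, the smoothing factor `alpha`, and the class thresholds below are illustrative assumptions; the text only specifies that θ is a recursive weighted ratio between the theoretical and actual NZ_QP curves and that smaller θ indicates higher motion complexity.

```python
def update_theta(theta_prev, nz_theoretical, nz_actual, alpha=0.5):
    """Recursive weighted ratio between theoretical and actual NZ counts.
    alpha (the smoothing weight) and this exact recursion are assumptions."""
    ratio = nz_theoretical / max(nz_actual, 1)
    return alpha * theta_prev + (1.0 - alpha) * ratio

def motion_complexity_class(theta):
    """theta < 1.0 suggests more bits than expected (higher motion complexity);
    theta > 1.0 suggests smoother content. Thresholds are assumed."""
    if theta < 0.85:
        return "high"
    if theta < 1.0:
        return "medium"
    if theta < 1.15:
        return "medium-low"
    return "low"
```

For instance, a frame that produces more non-zero coefficients than the theoretical curve predicts (ratio below 1.0) pulls θ downward, which eventually moves the content into a higher motion complexity class.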
- a content classification based mode decision algorithm is illustrated.
- the algorithm may be embodied in a combination of hardware and software and may be deployed as instructions and data stored in a non-transitory computer-readable medium. It will be appreciated that the instructions and data may be configured and/or adapted such that execution of the instructions by a processor causes the processor to perform the method described in FIG. 2 .
- NZ_QP deviation θ is calculated based on the encoded frame NZ and QP information.
- the motion complexity index based on T_n is then recalculated based on deviation θ.
- a table lookup from QP_P_T_n tables may be performed to find P for the current frame, based on the previous frame's weighted QP value and content classification index T_n, before performing step 206 .
- deviation Δ is calculated with respect to distortion D based on the tangent relationship of ΔR and D, the distribution frequency of D, and the location of the P-point.
- a mathematical model can be established as a function of P-point, D and QP for each motion complexity class to represent the cost deviation Δ for each MB.
- one example of a QP_P_T_n table is shown in Table 1, here below:
- mode decisions for each MB of the current frame can then be made.
- Inter Mode RD cost J_inter can be replaced by D as shown in equation (2), and
- Intra Mode cost J_intra can be replaced by D+Δ, where Δ is derived from the experimental model as described at step 206 .
- a winning mode may be selected as the mode which yields the minimum mode cost J_min. The process is typically repeated until it is determined at step 210 that the encoding of the current frame is finished.
- the mode-decision algorithm, QP_P_T_n table and deviation model are built offline from experimental results.
- motion classification index T_n and its corresponding methods are described in a related, concurrently filed application titled "ρ-domain metric θ and its applications."
- the video classification based mode decision algorithms, systems and methods described herein can provide a cost-efficient, fast and robust alternative to conventional approaches, which tend to be computationally costly and usually involve dual-pass encoding mode decision algorithms.
- a fast table-lookup method is used to get a P-point value. From the P-point, QP and content classification index T_n, the MB cost deviation Δ can be obtained from a selected experimental model. Mode decisions can be made efficiently by inserting Δ into equation (2).
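The per-MB decision flow described above can be sketched as follows. The QP_P_T_n table entries, the linear stand-in for the experimental deviation model, and its scale factor are all hypothetical values chosen for illustration, not values from the patent:

```python
# Hypothetical QP_P_T_n table: P-point per (motion class, QP). Values invented.
QP_P_TABLE = {
    ("high", 26): 1400,
    ("medium", 26): 1100,
    ("medium-low", 26): 900,
    ("low", 26): 700,
}

def cost_deviation(d_intra, p_point, scale=0.1):
    """Stand-in for the experimental deviation model: the deviation grows as
    distortion falls below the P-point, where Intra Mode costs extra bits."""
    return max(0.0, scale * (p_point - d_intra))

def choose_mode(d_intra, d_inter, motion_class, qp):
    """Fast mode decision per equation (2): J_inter = D_inter and
    J_intra = D_intra + delta; pick whichever cost is smaller."""
    p_point = QP_P_TABLE[(motion_class, qp)]
    delta = cost_deviation(d_intra, p_point)
    return "intra" if d_intra + delta < d_inter else "inter"
```

For D_intra=800 and D_inter=850 the raw distortions favor Intra, but under the "high" complexity class (P-point 1400 in this toy table) the added deviation flips the decision to Inter, avoiding the wasted Intra bits described earlier.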
- computing system 30 may be a commercially available system that executes commercially available operating systems such as Microsoft Windows®, UNIX or a variant thereof, Linux, a real-time operating system and/or a proprietary operating system.
- the architecture of the computing system may be adapted, configured and/or designed for integration in the processing system, for embedding in one or more of an image capture system, communications device and/or graphics processing systems.
- computing system 30 comprises a bus 302 and/or other mechanisms for communicating between processors, whether those processors are integral to the computing system 30 (e.g., processors 304 and/or 305 ) or external to it.
- processor 304 and/or 305 comprises a CISC or RISC computing processor and/or one or more digital signal processors.
- processor 304 and/or 305 may be embodied in a custom device and/or may perform as a configurable sequencer.
- Device drivers 303 may provide output signals used to control internal and external components and to communicate between processors 304 and 305 .
- Computing system 30 also typically comprises memory 306 that may include one or more of random access memory (“RAM”), static memory, cache, flash memory and any other suitable type of storage device that can be coupled to bus 302 .
- Memory 306 can be used for storing instructions and data that can cause one or more of processors 304 and 305 to perform a desired process.
- Main memory 306 may be used for storing transient and/or temporary data such as variables and intermediate information generated and/or used during execution of the instructions by processor 304 or 305 .
- Computing system 30 also typically comprises non-volatile storage such as read only memory (“ROM”) 308 , flash memory, memory cards or the like; non-volatile storage may be connected to the bus 302 , but may equally be connected using a high-speed universal serial bus (USB), Firewire or other such bus that is coupled to bus 302 .
- Non-volatile storage can be used for storing configuration, and other information, including instructions executed by processors 304 and/or 305 .
- Non-volatile storage may also include mass storage device 310 , such as a magnetic disk, optical disk, flash disk that may be directly or indirectly coupled to bus 302 and used for storing instructions to be executed by processors 304 and/or 305 , as well as other information.
- computing system 30 may be communicatively coupled to a display system 312 , such as an LCD flat panel display, including touch panel displays, electroluminescent display, plasma display, cathode ray tube or other display device that can be configured and adapted to receive and display information to a user of computing system 30 .
- device drivers 303 can include a display driver, graphics adapter and/or other modules that maintain a digital representation of a display and convert the digital representation to a signal for driving a display system 312 .
- Display system 312 may also include logic and software to generate a display from a signal provided by system 300 . In that regard, display 312 may be provided as a remote terminal or in a session on a different computing system 30 .
- An input device 314 is generally provided locally or through a remote system and typically provides for alphanumeric input as well as cursor control 316 input, such as a mouse, a trackball, etc. It will be appreciated that input and output can be provided to a wireless device such as a PDA, a tablet computer or other system suitably equipped to display the images and provide user input.
- computing system 30 may be embedded in a system that captures and/or processes images, including video images.
- computing system may include a video processor or accelerator 317 , which may have its own processor, non-transitory storage and input/output interfaces.
- video processor or accelerator 317 may be implemented as a combination of hardware and software operated by the one or more processors 304 , 305 .
- computing system 30 functions as a video encoder, although other functions may be performed by computing system 30 .
- a video encoder that comprises computing system 30 may be embedded in another device such as a camera, a communications device, a mixing panel, a monitor, a computer peripheral, and so on.
- portions of the described invention may be performed by computing system 30 .
- Processor 304 executes one or more sequences of instructions. For example, such instructions may be stored in main memory 306 , having been received from a computer-readable medium such as storage device 310 . Execution of the sequences of instructions contained in main memory 306 causes processor 304 to perform process steps according to certain aspects of the invention.
- functionality may be provided by embedded computing systems that perform specific functions wherein the embedded systems employ a customized combination of hardware and software to perform a set of predefined tasks. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software.
- Non-volatile storage may be embodied on media such as optical or magnetic disks, including DVD, CD-ROM and Blu-ray. Storage may be provided locally and in physical proximity to processors 304 and 305 or remotely, typically by use of a network connection. Non-volatile storage may be removable from computing system 30 , as in the example of Blu-ray, DVD or CD storage or memory cards or sticks that can be easily connected or disconnected from a computer using a standard interface, including USB, etc.
- computer-readable media can include floppy disks, flexible disks, hard disks, magnetic tape, any other magnetic medium, CD-ROMs, DVDs, BluRay, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH/EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read.
- Transmission media can be used to connect elements of the processing system and/or components of computing system 30 .
- Such media can include twisted pair wiring, coaxial cables, copper wire and fiber optics.
- Transmission media can also include wireless media such as radio, acoustic and light waves. In particular radio frequency (RF), fiber optic and infrared (IR) data communications may be used.
- Various forms of computer readable media may participate in providing instructions and data for execution by processor 304 and/or 305 .
- the instructions may initially be retrieved from a magnetic disk of a remote computer and transmitted over a network or modem to computing system 30 .
- the instructions may optionally be stored in a different storage or a different part of storage prior to or during execution.
- Computing system 30 may include a communication interface 318 that provides two-way data communication over a network 320 that can include a local network 322 , a wide area network or some combination of the two.
- Network link 320 typically provides data communication through one or more networks to other data devices.
- network link 320 may provide a connection through local network 322 to a host computer 324 or to a wide area network such as the Internet 328 .
- Local network 322 and Internet 328 may both use electrical, electromagnetic or optical signals that carry digital data streams.
- Computing system 30 can use one or more networks to send messages and data, including program code and other information.
- a server 330 might transmit a requested code for an application program through Internet 328 and may receive in response a downloaded application that provides or augments functional modules such as those described in the examples above.
- the received code may be executed by processor 304 and/or 305 .
- Certain embodiments of the invention provide video encoder systems and methods.
- the encoder systems employ content classification. Some of these embodiments comprise maintaining one or more tables relating quantization parameters and P-points for a frame of video.
- the frame comprises one or more macroblocks. Some of these embodiments comprise calculating a deviation representative of a difference between original and decoded versions of a macroblock. Some of these embodiments comprise calculating a deviation representative of a distribution frequency of the value of a distortion. Some of these embodiments comprise calculating a deviation representative of the location of a P-point. In some of these embodiments, the P-point corresponds to a distortion value that is associated with a minimum rate difference between encoding modes for a macroblock.
- Some of these embodiments comprise updating a motion complexity index using a quantization parameter and a number of non-zero coefficients of the encoded frame. Some of these embodiments comprise selecting an encoding mode for the macroblock using the motion complexity index to reference mode information maintained in the one or more tables.
- the selected mode yields a least cost encoding.
- the deviation comprises a weighted difference of estimated distortion and measured distortion for a selected quantization parameter value.
- the deviation is normalized.
- calculating the deviation representative of the difference between original and decoded versions of a macroblock is based on a tangential relationship between the distortion and a rate difference between the encoding modes.
- each P-point corresponds to a distortion value that is associated with no rate difference between encoding modes for the macroblock.
- the motion complexity index is initiated during receipt of an initial number of frames in a video sequence. In some of these embodiments, there are at least 5 frames in the initial number of frames in the video sequence.
- Some of these embodiments comprise modeling cost of deviation for each motion complexity class for each macroblock as a function of P-point, distortion and quantization parameter. Some of these embodiments comprise looking up a P-point for a current frame using a weighted quantization parameter value of a previous frame.
- the encoding modes comprise an inter-prediction mode and an intra-prediction mode. In some of these embodiments, the encoding modes are defined by the H.264 video standard.
- Certain embodiments of the invention provide a video encoder 317 (see FIG. 3 ). Some of these embodiments comprise a plurality of tables relating quantization parameters and encoding modes for a video frame. Some of these embodiments comprise a content classifier that selects an encoding mode for a macroblock of the video frame from the plurality of tables using a deviation representative of difference between original and decoded versions of the macroblock. Some of these embodiments comprise a processor that maintains a motion complexity index using a quantization parameter and non-zero coefficients of the encoded frame. In some of these embodiments, the motion complexity index is operable to select an encoding mode based on the motion complexity of the frame. In some of these embodiments, the selected mode yields a least cost encoding for the frame. In some of these embodiments, the selected mode yields a least cost encoding for the macroblock. In some of these embodiments, each P-point corresponds to a distortion value that is associated with a minimum rate difference between encoding modes for a macroblock.
Abstract
Video encoder systems and methods are described that employ table-based content classification. One or more tables relate quantization parameters and P-points for a frame of video that typically comprises macroblocks. A deviation representative of a difference between original and decoded versions of a macroblock is determined, the deviation being further representative of a distribution frequency of the value of a distortion for a P-point. The P-point corresponds to a distortion value that is associated with a minimum rate difference between encoding modes for a macroblock. A motion complexity index is updated using a quantization parameter and non-zero coefficients of the encoded frame. An encoding mode for the macroblock can be retrieved from the tables using the motion complexity index to reference mode information maintained in the tables.
Description
- The present application claims priority from PCT/CN2010/076569 (title: “Video Classification Systems and Methods”) which was filed in the Chinese Receiving Office on Sep. 2, 2010, from PCT/CN2010/076564 (title: “Rho-Domain Metrics”) which was filed in the Chinese Receiving Office on Sep. 2, 2010, from PCT/CN2010/076555 (title: “Video Analytics for Security Systems and Methods”) which was filed in the Chinese Receiving Office on Sep. 2, 2010, and from PCT/CN2010/076567 (title: “Systems And Methods for Video Content Analysis”) which was filed in the Chinese Receiving Office on Sep. 2, 2010, each of these applications being hereby incorporated herein by reference. The present application is also related to concurrently filed U.S. Patent non-provisional applications entitled “Rho-Domain Metrics” (attorney docket no. 043497-0393276), “Video Analytics for Security Systems and Methods” (attorney docket no. 043497-0393277) and “Systems And Methods for Video Content Analysis” (attorney docket no. 043497-0393278), which are expressly incorporated by reference herein.
FIG. 1 illustrates the relationship of distortion and rate difference between Intra and Inter modes for a given quantization parameter.

FIG. 2 is a flowchart illustrating a content classification based mode decision method.

FIG. 3 is a simplified block schematic illustrating a processing system employed in certain embodiments of the invention.

- Embodiments of the present invention will now be described in detail with reference to the drawings, which are provided as illustrative examples so as to enable those skilled in the art to practice the invention. Notably, the figures and examples below are not meant to limit the scope of the present invention to a single embodiment, but other embodiments are possible by way of interchange of some or all of the described or illustrated elements. Wherever convenient, the same reference numbers will be used throughout the drawings to refer to same or like parts. Where certain elements of these embodiments can be partially or fully implemented using known components, only those portions of such known components that are necessary for an understanding of the disclosed embodiments will be described, and detailed descriptions of other portions of such known components will be omitted so as not to obscure the disclosed embodiments. In the present specification, an embodiment showing a singular component should not be considered limiting; rather, the invention is intended to encompass other embodiments including a plurality of the same component, and vice-versa, unless explicitly stated otherwise herein. Moreover, applicants do not intend for any term in the specification or claims to be ascribed an uncommon or special meaning unless explicitly set forth as such. Further, certain embodiments of the present invention encompass present and future known equivalents to the components referred to herein by way of illustration.
- Video standards such as H.264/AVC employ mode decision as an encoding decision process to determine whether a macroblock (“MB”) is encoded using an intra-prediction mode (“Intra Mode”) or an inter-prediction mode (“Inter Mode”). Rate-distortion optimization techniques are commonly applied in various implementations. When encoding a MB, a rate-distortion cost is calculated for both Intra Modes and Inter Modes. The minimum cost mode is selected as the final encoding mode. Depending on the video standard, multiple Intra Modes and Inter Modes are applied. For example, in the H.264 standard, there are 4 Intra 16×16 Modes and 9 Intra 4×4 Modes for each MB, as well as a skip mode, an Inter 16×16 Mode, and Inter 16×8, 8×16, 8×8, 8×4, 4×8 and 4×4 Modes for each MB. Rate-distortion cost J is defined as

J = D + λ*R,  (1)

where distortion D is defined as the difference between the reconstructed MB and the original MB, where rate R represents the bits used to encode the current MB, and where coefficient λ is a weighting factor. In one example, the sum of absolute differences (SAD) can be used to quantify distortion.
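Equation (1) can be illustrated with a short sketch. Here the macroblock pixels are flat lists, and the λ value and bit counts in the usage below are made-up placeholders (in a real encoder they come from the rate-control and entropy-coding stages):

```python
def sad(original, reconstructed):
    """Distortion D: sum of absolute differences between the original and
    reconstructed macroblock pixel values."""
    return sum(abs(o - r) for o, r in zip(original, reconstructed))

def rd_cost(distortion, rate_bits, lam):
    """Rate-distortion cost J = D + lambda * R from equation (1)."""
    return distortion + lam * rate_bits

def best_mode(mode_costs):
    """Select the candidate mode with the minimum cost J."""
    return min(mode_costs, key=mode_costs.get)
```

For each MB the encoder would evaluate `rd_cost` for every candidate Intra and Inter mode and keep the minimum; e.g. `best_mode({"intra16x16": 140.0, "inter16x16": 120.0})` selects the Inter mode.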
- Rate-distortion optimization (RDO) techniques can provide a balance of encoding quality and compression ratio. An accurate calculation of rate R in equation (1) is computationally costly and generally involves a dual-pass encoding process which requires the use of hardware resources and which introduces additional delays. Research has been conducted to optimize the calculation of R and to provide a fast rate-distortion balanced mode decision algorithm. However, estimation of bit rate R per MB is generally very costly due to tight pipeline architectures employed in hardware embodiments that provide real-time encoding and multiple-channel encoding.
- Accordingly, in certain embodiments, distortion D alone is used to determine the mode decision, with R omitted from equation (1). Mode optimization typically cannot be achieved using D alone, however, without considering the bit-rate side of encoding. For example, in low-complexity background cases, Intra Mode SAD values for background MBs can be smaller than Inter Mode SAD values; therefore, Intra Mode would typically be selected for background MBs. However, Intra Mode encoding usually consumes many more bits than Inter Mode encoding and, consequently, encoding bits may be wasted and background blocky artifacts can be observed.
- Certain embodiments employ a comparison of the rate-distortion costs J_intra and J_inter. Based on equation (1), this comparison can be taken as equivalent to the comparison of D_intra+λ*(ΔR) with D_inter shown in Equation (2), where λ*(ΔR) (denoted hereafter τ) is the rate-difference weighting factor between Intra Mode and Inter Mode:
-
J_inter = D_inter
J_intra = D_intra + τ (2)
- Experimental results show there is a pseudo-tangent relationship between ΔR and distortion for a given quantization parameter (“QP”), as shown in FIG. 1. -
FIG. 1 shows the relationship of ΔR and D for a given QP (in FIG. 1, QP=26); in FIG. 1, SAD is used as the distortion and ΔR = R_intra − R_inter. For the purposes of this description, R_intra represents the number of bits used by the intra-mode encoder to encode the current macroblock, and R_inter represents the number of bits used by the inter-mode encoder to encode the current macroblock. A point P is defined as the point on the D axis at which ΔR is equal to zero. Points with D values less than P will consume more bits with Intra Mode encoding (ΔR = R_intra − R_inter > 0), while points with D values larger than P will consume fewer bits with Intra Mode, as shown in the drawing. The location of the P-point is a function of QP and video motion complexity; the P-point increases with increasing QP and motion complexity. After a P-point is located, deviation τ can be estimated and the Intra Mode/Inter Mode decision can be reached quickly and easily, based on the tangent curve and the distribution frequency of D values. - Certain embodiments of the invention use Rho-domain (“ρ-domain”) content classification, and certain embodiments provide an innovative ρ-domain metric “θ” and employ systems and methods that apply the metric. In some embodiments, ρ in the ρ-domain can be taken to be the number of non-zero coefficients after transform and quantization in a video encoding process. Additionally, the term “NZ” will be used herein to represent ρ, where NZ can be understood as the number of non-zero coefficients after quantization of each macroblock in video standards such as the H.264 video standard. For the purposes of this description, a ρ-domain deviation metric θ may be defined as a recursive weighted ratio between the theoretical NZ_QP curve and the actual NZ_QP curve. Normalized θ typically fluctuates around 1.0. 
A value of θ smaller than 1.0 can indicate that the actual encoded bit rate is larger than expected, implying that more complicated motion content has been encountered. In contrast, a value of θ larger than 1.0 indicates that the actual encoded bit rate is smaller than expected, implying that smoother motion content has been encountered. Therefore, the ρ-domain deviation θ can be used as an indicator to classify video content into high, medium, medium-low and low motion complexity categories. Based on this motion complexity classification, a fast mode decision algorithm can be employed.
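The θ-based classification described above can be sketched as follows. This is a simplified illustration: the plain expected/actual ratio below omits the recursive weighting of the full metric, and the threshold values are assumptions patterned after the T1/T2/T3 columns of Table 1:

```python
# Sketch of ρ-domain content classification using the deviation metric θ.
# θ is approximated here as the ratio of expected to actual non-zero
# coefficient (NZ) counts; θ < 1.0 suggests more complex motion than
# expected (more bits spent), θ > 1.0 suggests smoother content.
# The thresholds T1 < T2 < T3 are illustrative assumptions.

def theta(expected_nz, actual_nz):
    """Simplified deviation: ratio of expected to actual NZ counts."""
    return expected_nz / actual_nz

def classify_motion(theta_value, t1=0.8, t2=1.1, t3=2.0):
    """Map a θ value to a motion-complexity category."""
    if theta_value < t1:
        return "high"        # actual bit rate well above expectation
    elif theta_value < t2:
        return "medium"
    elif theta_value < t3:
        return "medium-low"
    else:
        return "low"         # actual bit rate well below expectation
```

For instance, a frame whose actual NZ count exceeds the theoretical count by 40% yields θ ≈ 0.71 and would be classified as high motion complexity under these assumed thresholds.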
- In the example of
FIG. 2, a content-classification-based mode decision algorithm is illustrated. The algorithm may be embodied in a combination of hardware and software and may be deployed as instructions and data stored in a non-transitory computer-readable medium. It will be appreciated that the instructions and data may be configured and/or adapted such that execution of the instructions by a processor causes the processor to perform the method described in FIG. 2. - At
step 200, offline-trained quantization parameter (QP) and P-point tables QP_P_Tn are built based on ρ-domain content classifications, where Tn (Tn=1, 2, 3, . . . 51) denotes different motion complexity classifications. If at step 202 it is determined that the current frame belongs to the first 5 frames of a video sequence, then step 204 is performed next; otherwise step 203 is performed next. At step 204, the motion complexity index Tn is initialized based on initial QP and complexity information, and the P-point can be found from the QP_P_Tn tables. Step 206 can then be performed. - If at
step 202 it is identified that the current frame does not belong to the first 5 frames of a video sequence then, at step 203, the NZ_QP deviation θ is calculated based on the encoded frame's NZ and QP information. At step 205, the motion complexity index Tn is then recalculated based on deviation θ. A table lookup from the QP_P_Tn tables may be performed to find P for the current frame, based on the weighted previous frame's QP value and the content classification index Tn, before performing step 206. - At
step 206, deviation τ is calculated with respect to distortion D based on the tangent relationship between τ and D, the distribution frequency of D, and the location of the P-point. A mathematical model φ can be established as a function of P-point, D and QP for each motion complexity class to represent the cost deviation τ for each MB. - One example of a QP_P_Tn table is shown in Table 1 below:
-
TABLE 1 QP_P_Tn table
QP_P_Tn:
static double MD_P_TABLE[ ][ ]={
//{T1, T2, T3, P_point_T1, P_point_T2, P_point_T3}
{0.8, 1.1, 2, 4, 6, 6}, //QP = 14
{0.8, 1.1, 2, 4, 6, 6}, //QP = 15
{0.8, 1.1, 2, 5, 7, 7}, //QP = 16
{0.8, 1.1, 2, 5, 7, 7}, //QP = 17
{0.8, 1.1, 2, 6, 8, 8}, //QP = 18
{0.8, 1.1, 2, 6, 8, 8}, //QP = 19
{0.8, 1.1, 2, 7, 9, 9}, //QP = 20
{0.8, 1.1, 2, 8, 9, 9}, //QP = 21
....
};
//Listed in the table are relative values. The array is declared as double
//because the T1/T2/T3 columns hold fractional values.
//From QP and the content classification index Tn, the P-point can be obtained from the MD_P_TABLE. - At
step 208, mode decisions for each MB of the current frame can be taken. The Inter Mode RD cost J_inter can be replaced by D as shown in equation (2), and the Intra Mode cost J_intra can be replaced by D+τ, where τ is derived from the experimental model φ as described at step 206. A winning mode may be selected as the mode that yields the minimum mode cost J_min. The process is typically repeated until it is determined at step 210 that the encoding of the current frame is finished. - In certain embodiments, the mode-decision algorithm, QP_P_Tn table and deviation model φ are built offline from experimental results. The motion classification index Tn and its corresponding methods are described in a related, concurrently filed application titled “ρ-domain metrics θ and its applications.” The video-classification-based mode decision algorithms, systems and methods described herein can provide a very cost-efficient, fast and robust alternative to conventional systems, which tend to be computationally costly and usually involve dual-pass encoding mode decision algorithms. In certain embodiments of the present invention, a fast table-lookup method is used to obtain a P-point value. From the P-point, QP and content classification index Tn, the MB cost deviation τ can be obtained from a selected experimental model φ. Mode decisions can be made efficiently by inserting τ into equation (2).
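The table lookup and per-macroblock decision of steps 206-210 can be sketched end-to-end as follows. The table rows mirror Table 1; the `tau_model` function is a hypothetical stand-in for the experimental model φ, whose actual form is not given in this description, so its linear shape and constants are assumptions for illustration only:

```python
# Sketch of the fast, table-driven mode decision (steps 206-210).
# MD_P_TABLE rows follow Table 1's layout {T1, T2, T3, P_T1, P_T2, P_T3},
# with the first row at QP = 14. tau_model is a hypothetical stand-in
# for the experimental model phi.

MD_P_TABLE = [
    (0.8, 1.1, 2, 4, 6, 6),  # QP = 14
    (0.8, 1.1, 2, 4, 6, 6),  # QP = 15
    (0.8, 1.1, 2, 5, 7, 7),  # QP = 16
    (0.8, 1.1, 2, 5, 7, 7),  # QP = 17
    (0.8, 1.1, 2, 6, 8, 8),  # QP = 18
    (0.8, 1.1, 2, 6, 8, 8),  # QP = 19
    (0.8, 1.1, 2, 7, 9, 9),  # QP = 20
    (0.8, 1.1, 2, 8, 9, 9),  # QP = 21
]
QP_BASE = 14  # QP of the first table row

def lookup_p_point(qp, tn):
    """P-point for quantization parameter qp and class index tn (1..3)."""
    row = MD_P_TABLE[qp - QP_BASE]
    return row[2 + tn]  # columns 3..5 hold P_point_T1..P_point_T3

def tau_model(p_point, d, qp):
    """Hypothetical stand-in for model phi: cost deviation tau.

    Penalizes Intra Mode for MBs with distortion below the P-point,
    and vanishes above it, mimicking the behavior described for FIG. 1.
    """
    return max(0.0, p_point - d) * qp

def mode_decision(d_intra, d_inter, qp, tn):
    """Equation (2): J_inter = D_inter, J_intra = D_intra + tau."""
    p = lookup_p_point(qp, tn)
    tau = tau_model(p, d_intra, qp)
    return "intra" if d_intra + tau < d_inter else "inter"
```

For a low-complexity background MB with D_intra = 3 and D_inter = 4 at QP = 20, class Tn = 1 (P-point 7), the assumed model gives τ = (7 − 3)·20 = 80, so Inter Mode wins even though the Intra distortion is smaller, avoiding the wasted bits discussed above.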
- Turning now to
FIG. 3, certain embodiments of the invention employ a processing system that includes at least one computing system 30 deployed to perform certain of the steps described above. Computing system 30 may be a commercially available system that executes commercially available operating systems such as Microsoft Windows®, UNIX or a variant thereof, Linux, a real-time operating system and/or a proprietary operating system. The architecture of the computing system may be adapted, configured and/or designed for integration in the processing system, for embedding in one or more of an image capture system, communications device and/or graphics processing system. In one example, computing system 30 comprises a bus 302 and/or other mechanisms for communicating between processors, whether those processors are integral to the computing system 30 (e.g. 304, 305) or located in different, perhaps physically separated computing systems 300. Typically, processor 304 and/or 305 comprises a CISC or RISC computing processor and/or one or more digital signal processors. In some embodiments, processor 304 and/or 305 may be embodied in a custom device and/or may perform as a configurable sequencer. Device drivers 303 may provide output signals used to control internal and external components and to communicate between processors. -
Computing system 30 also typically comprises memory 306 that may include one or more of random access memory (“RAM”), static memory, cache, flash memory and any other suitable type of storage device that can be coupled to bus 302. Memory 306 can be used for storing instructions and data that can cause one or more of processors 304 and/or 305 to perform desired processes. Main memory 306 may be used for storing transient and/or temporary data such as variables and intermediate information generated and/or used during execution of the instructions by processor 304 and/or 305. Computing system 30 also typically comprises non-volatile storage such as read-only memory (“ROM”) 308, flash memory, memory cards or the like; non-volatile storage may be connected to the bus 302, but may equally be connected using a high-speed universal serial bus (USB), Firewire or other such bus that is coupled to bus 302. Non-volatile storage can be used for storing configuration and other information, including instructions executed by processors 304 and/or 305. Non-volatile storage may also include mass storage device 310, such as a magnetic disk, optical disk or flash disk that may be directly or indirectly coupled to bus 302 and used for storing instructions to be executed by processors 304 and/or 305, as well as other information. - In some embodiments,
computing system 30 may be communicatively coupled to a display system 312, such as an LCD flat panel display, including touch panel displays, electroluminescent displays, plasma displays, cathode ray tubes or other display devices that can be configured and adapted to receive and display information to a user of computing system 30. Typically, device drivers 303 can include a display driver, graphics adapter and/or other modules that maintain a digital representation of a display and convert the digital representation to a signal for driving a display system 312. Display system 312 may also include logic and software to generate a display from a signal provided by system 300. In that regard, display 312 may be provided as a remote terminal or in a session on a different computing system 30. An input device 314 is generally provided locally or through a remote system and typically provides for alphanumeric input as well as cursor control 316 input, such as a mouse, a trackball, etc. It will be appreciated that input and output can be provided to a wireless device such as a PDA, a tablet computer or other system suitably equipped to display the images and provide user input. - In certain embodiments,
computing system 30 may be embedded in a system that captures and/or processes images, including video images. In one example, computing system 30 may include a video processor or accelerator 317, which may have its own processor, non-transitory storage and input/output interfaces. In another example, video processor or accelerator 317 may be implemented as a combination of hardware and software operated by the one or more processors 304 and/or 305. In certain embodiments, computing system 30 functions as a video encoder, although other functions may be performed by computing system 30. In particular, a video encoder that comprises computing system 30 may be embedded in another device such as a camera, a communications device, a mixing panel, a monitor, a computer peripheral, and so on. - According to one embodiment of the invention, portions of the described invention may be performed by computing
system 30. Processor 304 executes one or more sequences of instructions. For example, such instructions may be stored in main memory 306, having been received from a computer-readable medium such as storage device 310. Execution of the sequences of instructions contained in main memory 306 causes processor 304 to perform process steps according to certain aspects of the invention. In certain embodiments, functionality may be provided by embedded computing systems that perform specific functions wherein the embedded systems employ a customized combination of hardware and software to perform a set of predefined tasks. Thus, embodiments of the invention are not limited to any specific combination of hardware circuitry and software. - The term “computer-readable medium” is used to define any medium that can store and provide instructions and other data to
processor 304 and/or 305, particularly where the instructions are to be executed by processor 304 and/or 305 and/or another peripheral of the processing system. Such a medium can include non-volatile storage, volatile storage and transmission media. Non-volatile storage may be embodied on media such as optical or magnetic disks, including DVD, CD-ROM and BluRay. Storage may be provided locally and in physical proximity to processors 304 and/or 305 of computing system 30, as in the example of BluRay, DVD or CD storage, or memory cards or sticks that can be easily connected or disconnected from a computer using a standard interface, including USB, etc. Thus, computer-readable media can include floppy disks, flexible disks, hard disks, magnetic tape, any other magnetic medium, CD-ROMs, DVDs, BluRay, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH/EEPROM, any other memory chip or cartridge, or any other medium from which a computer can read. - Transmission media can be used to connect elements of the processing system and/or components of
computing system 30. Such media can include twisted pair wiring, coaxial cables, copper wire and fiber optics. Transmission media can also include wireless media such as radio, acoustic and light waves. In particular radio frequency (RF), fiber optic and infrared (IR) data communications may be used. - Various forms of computer readable media may participate in providing instructions and data for execution by
processor 304 and/or 305. For example, the instructions may initially be retrieved from a magnetic disk of a remote computer and transmitted over a network or modem to computing system 30. The instructions may optionally be stored in a different storage or a different part of storage prior to or during execution. -
Computing system 30 may include a communication interface 318 that provides two-way data communication over a network 320 that can include a local network 322, a wide area network or some combination of the two. For example, an integrated services digital network (ISDN) may be used in combination with a local area network (LAN). In another example, a LAN may include a wireless link. Network link 320 typically provides data communication through one or more networks to other data devices. For example, network link 320 may provide a connection through local network 322 to a host computer 324 or to a wide area network such as the Internet 328. Local network 322 and Internet 328 may both use electrical, electromagnetic or optical signals that carry digital data streams. -
Computing system 30 can use one or more networks to send messages and data, including program code and other information. In the Internet example, a server 330 might transmit a requested code for an application program through Internet 328 and may receive in response a downloaded application that provides or augments functional modules such as those described in the examples above. The received code may be executed by processor 304 and/or 305. - The foregoing descriptions of the invention are intended to be illustrative and not limiting. For example, those skilled in the art will appreciate that the invention can be practiced with various combinations of the functionalities and capabilities described above, and can include fewer or additional components than described above. Certain additional aspects and features of the invention are further set forth below, and can be obtained using the functionalities and components described in more detail above, as will be appreciated by those skilled in the art after being taught by the present disclosure.
- Certain embodiments of the invention provide video encoder systems and methods. In some of these embodiments, the encoder systems employ content classification. Some of these embodiments comprise maintaining one or more tables relating quantization parameters and P-points for a frame of video. In some of these embodiments, the frame comprises one or more macroblocks. Some of these embodiments comprise calculating a deviation representative of a difference between original and decoded versions of a macroblock. Some of these embodiments comprise calculating a deviation representative of a distribution frequency of the value of a distortion. Some of these embodiments comprise calculating a deviation representative of the location of a P-point. In some of these embodiments, the P-point corresponds to a distortion value that is associated with a minimum rate difference between encoding modes for a macroblock. Some of these embodiments comprise updating a motion complexity index using a quantization parameter and a number of non-zero coefficients of the encoded frame. Some of these embodiments comprise selecting an encoding mode for the macroblock using the motion complexity index to reference mode information maintained in the one or more tables.
In some of these embodiments, the selected mode yields a least cost encoding. In some of these embodiments, the deviation comprises a weighted difference of estimated distortion and measured distortion for a selected quantization parameter value. In some of these embodiments, the deviation is normalized. In some of these embodiments, calculating the deviation representative of the difference between original and decoded versions of a macroblock is based on a tangential relationship between the distortion and a rate difference between the encoding modes. In some of these embodiments, each P-point corresponds to a distortion value that is associated with no rate difference between encoding modes for the macroblock. In some of these embodiments, the motion complexity index is initiated during receipt of an initial number of frames in a video sequence. In some of these embodiments, there are at least 5 frames in the initial number of frames in the video sequence.
- Some of these embodiments comprise modeling cost of deviation for each motion complexity class for each macroblock as a function of P-point, distortion and quantization parameter. Some of these embodiments comprise looking up a P-point for a current frame using a weighted quantization parameter value of a previous frame. In some of these embodiments, the encoding modes comprise an inter-prediction mode and an intra-prediction mode. In some of these embodiments, the encoding modes are defined by the H.264 video standard.
- Certain embodiments of the invention provide a video encoder 317 (see
FIG. 3). Some of these embodiments comprise a plurality of tables relating quantization parameters and encoding modes for a video frame. Some of these embodiments comprise a content classifier that selects an encoding mode for a macroblock of the video frame from the plurality of tables using a deviation representative of a difference between original and decoded versions of the macroblock. Some of these embodiments comprise a processor that maintains a motion complexity index using a quantization parameter and non-zero coefficients of the encoded frame. In some of these embodiments, the motion complexity index is operable to select an encoding mode based on the motion complexity of the frame. In some of these embodiments, the selected mode yields a least cost encoding for the frame. In some of these embodiments, the selected mode yields a least cost encoding for the macroblock. In some of these embodiments, each P-point corresponds to a distortion value that is associated with a minimum rate difference between encoding modes for a macroblock. - Although the present invention has been described with reference to specific exemplary embodiments, it will be evident to one of ordinary skill in the art that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the invention. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Claims (20)
1. A method of content classification in a video encoder, comprising:
calculating a deviation representative of a difference between original and decoded versions of a macroblock in a frame of video, a distribution frequency of the value of a distortion and the location of a P-point, wherein the macroblock is associated with a bit rate representing bits used to encode the macroblock, and wherein a P-point represents a point in the frame at which a rate of change of bit rate is equal to zero;
updating a motion complexity index using a quantization parameter and a number of non-zero coefficients in the macroblock when encoded; and
selecting an encoding mode for the macroblock using the motion complexity index to reference mode information maintained in one or more tables relating quantization parameters to one or more P-points for the frame of video, wherein the mode is selected to yield a least cost encoding,
wherein the frame comprises a plurality of macroblocks, each macroblock associated with a bit rate representing bits used to encode the each macroblock, and
wherein each P-point corresponds to a distortion value that is associated with a minimum rate difference between encoding modes for a macroblock.
2. The method of claim 1 , wherein the deviation comprises a weighted difference of estimated distortion and measured distortion for a selected quantization parameter value.
3. The method of claim 1 , wherein the deviation is normalized.
4. The method of claim 1 , wherein calculating the deviation representative of the difference between original and decoded versions of a macroblock is based on a tangential relationship between the distortion and a rate difference between the encoding modes.
5. The method of claim 1 , wherein each P-point corresponds to a distortion value that is associated with no rate difference between encoding modes for the macroblock.
6. The method of claim 1 , wherein the motion complexity index is initiated during receipt of an initial number of frames in a video sequence.
7. The method of claim 6 , wherein the initial number of frames in the video sequence comprises 5 frames.
8. The method of claim 1 , further comprising modeling a cost of deviation for each motion complexity class for each macroblock as a function of P-point, distortion and quantization parameter.
9. The method of claim 1 , further comprising looking up a P-point for a current frame using a weighted quantization parameter value of a previous frame.
10. The method of claim 1 , wherein the encoding modes comprise an inter-prediction mode and an intra-prediction mode.
11. The method of claim 1 , wherein the encoding modes are defined by the H.264 video standard.
12. A video encoder, comprising:
non-transitory storage adapted to maintain a plurality of tables relating quantization parameters and encoding modes for a video frame; and
a content classifier that selects an encoding mode for a macroblock of the video frame from the plurality of tables using a deviation representative of a difference between original and decoded versions of the macroblock; and
wherein the video encoder maintains a motion complexity index corresponding to a quantization parameter and non-zero coefficients of the encoded frame, the motion complexity index being operable to select the encoding mode as a function of the motion complexity of the video frame, wherein the selected encoding mode yields a least-cost encoding.
13. The video encoder of claim 12 , wherein the deviation is represented by a function of a P-point, a distortion and a quantization parameter, wherein each P-point corresponds to a distortion value that is associated with a minimum rate difference between encoding modes for the macroblock.
14. A non-transitory computer-readable medium encoded with data and instructions wherein the data and instructions, when executed by a processor of a video encoder, cause the video encoder to perform a content classification method comprising:
calculating a deviation representative of a difference between original and decoded versions of a macroblock of a frame of video, a distribution frequency of the value of a distortion and the location of a minimum point corresponding to a distortion value associated with a minimum rate difference between possible encoding modes for the macroblock;
updating a motion complexity index using a quantization parameter and a number of non-zero coefficients in the encoded macroblock; and
selecting an encoding mode for the macroblock using the motion complexity index to reference mode information maintained in one or more tables by the video encoder, the one or more tables relating quantization parameters and minimum points for the frame, wherein each macroblock of the frame is associated with a bit rate representing bits used to encode the each macroblock, and wherein each minimum point represents a point in the frame at which a rate of change of bit rate is equal to zero.
15. The non-transitory computer-readable medium of claim 14 , wherein the deviation comprises a weighted difference of estimated distortion and measured distortion for a selected quantization parameter value, and wherein the selected mode yields a least cost encoding.
16. The non-transitory computer-readable medium of claim 15 , wherein the deviation comprises a weighted difference of estimated distortion and measured distortion for a selected quantization parameter value and wherein calculating the deviation representative of the difference between original and decoded versions of a macroblock includes determining a tangential relationship between the distortion and a rate difference between the encoding modes.
17. The non-transitory computer-readable medium of claim 14 , wherein the method further comprises modeling cost of deviation for each motion complexity class for each macroblock as a function of minimum point, distortion and quantization parameter.
18. The non-transitory computer-readable medium of claim 14 , wherein the method further comprises looking up a minimum point for a current frame using a weighted quantization parameter value of a previous frame.
19. The non-transitory computer-readable medium of claim 14 , wherein the encoding modes comprise an inter-prediction mode and an intra-prediction mode.
20. The non-transitory computer-readable medium of claim 14 , wherein the encoding modes are defined by the H.264 video standard.
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2010/076564 WO2012027892A1 (en) | 2010-09-02 | 2010-09-02 | Rho-domain metrics |
CNPCT/CN2010/076555 | 2010-09-02 | ||
CNPCT/CN2010/076569 | 2010-09-02 | ||
PCT/CN2010/076567 WO2012027893A1 (en) | 2010-09-02 | 2010-09-02 | Systems and methods for video content analysis |
CNPCT/CN2010/076567 | 2010-09-02 | ||
PCT/CN2010/076555 WO2012027891A1 (en) | 2010-09-02 | 2010-09-02 | Video analytics for security systems and methods |
CNPCT/CN2010/076564 | 2010-09-02 | ||
PCT/CN2010/076569 WO2012027894A1 (en) | 2010-09-02 | 2010-09-02 | Video classification systems and methods |
Publications (1)
Publication Number | Publication Date |
---|---|
US20120057633A1 true US20120057633A1 (en) | 2012-03-08 |
Family
ID=45770713
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/225,269 Expired - Fee Related US8824554B2 (en) | 2010-09-02 | 2011-09-02 | Systems and methods for video content analysis |
US13/225,202 Abandoned US20120057633A1 (en) | 2010-09-02 | 2011-09-02 | Video Classification Systems and Methods |
US13/225,238 Abandoned US20120057640A1 (en) | 2010-09-02 | 2011-09-02 | Video Analytics for Security Systems and Methods |
US13/225,222 Abandoned US20120057629A1 (en) | 2010-09-02 | 2011-09-02 | Rho-domain Metrics |
US14/472,313 Expired - Fee Related US9609348B2 (en) | 2010-09-02 | 2014-08-28 | Systems and methods for video content analysis |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/225,269 Expired - Fee Related US8824554B2 (en) | 2010-09-02 | 2011-09-02 | Systems and methods for video content analysis |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/225,238 Abandoned US20120057640A1 (en) | 2010-09-02 | 2011-09-02 | Video Analytics for Security Systems and Methods |
US13/225,222 Abandoned US20120057629A1 (en) | 2010-09-02 | 2011-09-02 | Rho-domain Metrics |
US14/472,313 Expired - Fee Related US9609348B2 (en) | 2010-09-02 | 2014-08-28 | Systems and methods for video content analysis |
Country Status (1)
Country | Link |
---|---|
US (5) | US8824554B2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8850182B1 (en) * | 2012-09-28 | 2014-09-30 | Shoretel, Inc. | Data capture for secure protocols |
CN111901597A (en) * | 2020-08-05 | 2020-11-06 | 杭州当虹科技股份有限公司 | CU (CU) level QP (quantization parameter) allocation algorithm based on video complexity |
Families Citing this family (61)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8121361B2 (en) | 2006-05-19 | 2012-02-21 | The Queen's Medical Center | Motion tracking system for real time adaptive imaging and spectroscopy |
US8824554B2 (en) * | 2010-09-02 | 2014-09-02 | Intersil Americas LLC | Systems and methods for video content analysis |
EP2747641A4 (en) | 2011-08-26 | 2015-04-01 | Kineticor Inc | Methods, systems, and devices for intra-scan motion correction |
US9213781B1 (en) | 2012-09-19 | 2015-12-15 | Placemeter LLC | System and method for processing image data |
US20140096014A1 (en) * | 2012-09-29 | 2014-04-03 | Oracle International Corporation | Method for enabling dynamic client user interfaces on multiple platforms from a common server application via metadata |
US9717461B2 (en) | 2013-01-24 | 2017-08-01 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US10327708B2 (en) | 2013-01-24 | 2019-06-25 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US9305365B2 (en) | 2013-01-24 | 2016-04-05 | Kineticor, Inc. | Systems, devices, and methods for tracking moving targets |
EP2950714A4 (en) | 2013-02-01 | 2017-08-16 | Kineticor, Inc. | Motion tracking system for real time adaptive motion compensation in biomedical imaging |
US9177245B2 (en) | 2013-02-08 | 2015-11-03 | Qualcomm Technologies Inc. | Spiking network apparatus and method with bimodal spike-timing dependent plasticity |
CN105074791B (en) | 2013-02-08 | 2018-01-09 | 罗伯特·博世有限公司 | Adding user-selected markers to a video stream |
US20140328406A1 (en) * | 2013-05-01 | 2014-11-06 | Raymond John Westwater | Method and Apparatus to Perform Optimal Visually-Weighed Quantization of Time-Varying Visual Sequences in Transform Space |
KR101480348B1 (en) * | 2013-05-31 | 2015-01-09 | 삼성에스디에스 주식회사 | People Counting Apparatus and Method |
US10437658B2 (en) | 2013-06-06 | 2019-10-08 | Zebra Technologies Corporation | Method, apparatus, and computer program product for collecting and displaying sporting event data based on real time data for proximity and movement of objects |
US10609762B2 (en) | 2013-06-06 | 2020-03-31 | Zebra Technologies Corporation | Method, apparatus, and computer program product improving backhaul of sensor and other data to real time location system network |
US9699278B2 (en) | 2013-06-06 | 2017-07-04 | Zih Corp. | Modular location tag for a real time location system network |
US11423464B2 (en) | 2013-06-06 | 2022-08-23 | Zebra Technologies Corporation | Method, apparatus, and computer program product for enhancement of fan experience based on location data |
US9180357B2 (en) | 2013-06-06 | 2015-11-10 | Zih Corp. | Multiple antenna interference rejection in ultra-wideband real time locating systems |
US9715005B2 (en) | 2013-06-06 | 2017-07-25 | Zih Corp. | Method, apparatus, and computer program product improving real time location systems with multiple location technologies |
US9517417B2 (en) | 2013-06-06 | 2016-12-13 | Zih Corp. | Method, apparatus, and computer program product for performance analytics determining participant statistical data and game status data |
US20150085111A1 (en) * | 2013-09-25 | 2015-03-26 | Symbol Technologies, Inc. | Identification using video analytics together with inertial sensor data |
JP2015136057A (en) * | 2014-01-17 | 2015-07-27 | ソニー株式会社 | Communication device, communication data generation method, and communication data processing method |
CN106572810A (en) | 2014-03-24 | 2017-04-19 | 凯内蒂科尔股份有限公司 | Systems, methods, and devices for removing prospective motion correction from medical imaging scans |
US9589363B2 (en) * | 2014-03-25 | 2017-03-07 | Intel Corporation | Object tracking in encoded video streams |
US10169661B2 (en) * | 2014-03-28 | 2019-01-01 | International Business Machines Corporation | Filtering methods for visual object detection |
US9939253B2 (en) | 2014-05-22 | 2018-04-10 | Brain Corporation | Apparatus and methods for distance estimation using multiple image sensors |
US9713982B2 (en) | 2014-05-22 | 2017-07-25 | Brain Corporation | Apparatus and methods for robotic operation using video imagery |
US10194163B2 (en) * | 2014-05-22 | 2019-01-29 | Brain Corporation | Apparatus and methods for real time estimation of differential motion in live video |
EP3149909A4 (en) | 2014-05-30 | 2018-03-07 | Placemeter Inc. | System and method for activity monitoring using video data |
US9661455B2 (en) | 2014-06-05 | 2017-05-23 | Zih Corp. | Method, apparatus, and computer program product for real time location system referencing in physically and radio frequency challenged environments |
GB2542298B (en) | 2014-06-05 | 2021-01-20 | Zebra Tech Corp | Method for iterative target location in a multiple receiver target location system |
US20150375083A1 (en) | 2014-06-05 | 2015-12-31 | Zih Corp. | Method, Apparatus, And Computer Program Product For Enhancement Of Event Visualizations Based On Location Data |
GB2541617B (en) | 2014-06-05 | 2021-07-07 | Zebra Tech Corp | Systems, apparatus and methods for variable rate ultra-wideband communications |
US9668164B2 (en) | 2014-06-05 | 2017-05-30 | Zih Corp. | Receiver processor for bandwidth management of a multiple receiver real-time location system (RTLS) |
US9626616B2 (en) | 2014-06-05 | 2017-04-18 | Zih Corp. | Low-profile real-time location system tag |
GB2541834B (en) | 2014-06-05 | 2020-12-23 | Zebra Tech Corp | Receiver processor for adaptive windowing and high-resolution TOA determination in a multiple receiver target location system |
WO2015186043A1 (en) | 2014-06-06 | 2015-12-10 | Zih Corp. | Method, apparatus, and computer program product improving real time location systems with multiple location technologies |
US9759803B2 (en) | 2014-06-06 | 2017-09-12 | Zih Corp. | Method, apparatus, and computer program product for employing a spatial association model in a real time location system |
US9848112B2 (en) | 2014-07-01 | 2017-12-19 | Brain Corporation | Optical detection apparatus and methods |
US10057593B2 (en) | 2014-07-08 | 2018-08-21 | Brain Corporation | Apparatus and methods for distance estimation using stereo imagery |
CN106714681A (en) | 2014-07-23 | 2017-05-24 | 凯内蒂科尔股份有限公司 | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US9870617B2 (en) | 2014-09-19 | 2018-01-16 | Brain Corporation | Apparatus and methods for saliency detection based on color occurrence analysis |
CN104539890A (en) * | 2014-12-18 | 2015-04-22 | 苏州阔地网络科技有限公司 | Target tracking method and system |
US10091504B2 (en) | 2015-01-08 | 2018-10-02 | Microsoft Technology Licensing, Llc | Variations of rho-domain rate control |
US10043146B2 (en) * | 2015-02-12 | 2018-08-07 | Wipro Limited | Method and device for estimating efficiency of an employee of an organization |
US10037504B2 (en) * | 2015-02-12 | 2018-07-31 | Wipro Limited | Methods for determining manufacturing waste to optimize productivity and devices thereof |
US10298942B1 (en) * | 2015-04-06 | 2019-05-21 | Zpeg, Inc. | Method and apparatus to process video sequences in transform space |
US11334751B2 (en) | 2015-04-21 | 2022-05-17 | Placemeter Inc. | Systems and methods for processing video data for activity monitoring |
US10043078B2 (en) * | 2015-04-21 | 2018-08-07 | Placemeter LLC | Virtual turnstile system and method |
US9712828B2 (en) * | 2015-05-27 | 2017-07-18 | Indian Statistical Institute | Foreground motion detection in compressed video data |
US11138442B2 (en) | 2015-06-01 | 2021-10-05 | Placemeter, Inc. | Robust, adaptive and efficient object detection, classification and tracking |
US10197664B2 (en) | 2015-07-20 | 2019-02-05 | Brain Corporation | Apparatus and methods for detection of objects using broadband signals |
US9943247B2 (en) | 2015-07-28 | 2018-04-17 | The University Of Hawai'i | Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan |
US10716515B2 (en) | 2015-11-23 | 2020-07-21 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US10075640B2 (en) * | 2015-12-31 | 2018-09-11 | Sony Corporation | Motion compensation for image sensor with a block based analog-to-digital converter |
CN105809136A (en) | 2016-03-14 | 2016-07-27 | 中磊电子(苏州)有限公司 | Image data processing method and image data processing system |
EP3720130B1 (en) | 2017-04-21 | 2022-09-28 | Zenimax Media Inc. | System and method for rendering and pre-encoded load estimation based encoder hinting |
US10694205B2 (en) * | 2017-12-18 | 2020-06-23 | Google Llc | Entropy coding of motion vectors using categories of transform blocks |
TWI720830B (en) * | 2019-06-27 | 2021-03-01 | 多方科技股份有限公司 | Image processing device and method thereof |
US11875495B2 (en) * | 2020-08-10 | 2024-01-16 | Tencent America LLC | Methods of video quality assessment using parametric and pixel level models |
US11425412B1 (en) * | 2020-11-10 | 2022-08-23 | Amazon Technologies, Inc. | Motion cues for video encoding |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4837632A (en) * | 1985-01-16 | 1989-06-06 | Mitsubishi Denki Kabushiki Kaisha | Video encoding apparatus including movement compensation |
US20030202594A1 (en) * | 2002-03-15 | 2003-10-30 | Nokia Corporation | Method for coding motion in a video sequence |
US6795504B1 (en) * | 2000-06-21 | 2004-09-21 | Microsoft Corporation | Memory efficient 3-D wavelet transform for video coding without boundary effects |
US20060114989A1 (en) * | 2004-11-29 | 2006-06-01 | Prasanjit Panda | Rate control techniques for video encoding using parametric equations |
US20080298464A1 (en) * | 2003-09-03 | 2008-12-04 | Thompson Licensing S.A. | Process and Arrangement for Encoding Video Pictures |
Family Cites Families (76)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5128754A (en) * | 1990-03-30 | 1992-07-07 | New York Institute Of Technology | Apparatus and method for encoding and decoding video |
US6400996B1 (en) * | 1999-02-01 | 2002-06-04 | Steven M. Hoffberg | Adaptive pattern recognition based control system and method |
GB9510093D0 (en) * | 1995-05-18 | 1995-07-12 | Philips Electronics Uk Ltd | Interactive image manipulation |
US5854856A (en) * | 1995-07-19 | 1998-12-29 | Carnegie Mellon University | Content based video compression system |
JPH10164581A (en) * | 1996-12-03 | 1998-06-19 | Sony Corp | Method and device for coding image signal and signal-recording medium |
US6782132B1 (en) * | 1998-08-12 | 2004-08-24 | Pixonics, Inc. | Video coding and reconstruction apparatus and methods |
US20050203927A1 (en) * | 2000-07-24 | 2005-09-15 | Vivcom, Inc. | Fast metadata generation and delivery |
US7868912B2 (en) | 2000-10-24 | 2011-01-11 | Objectvideo, Inc. | Video surveillance system employing video primitives |
US6987866B2 (en) * | 2001-06-05 | 2006-01-17 | Micron Technology, Inc. | Multi-modal motion estimation for video sequences |
US6662564B2 (en) * | 2001-09-27 | 2003-12-16 | Siemens Westinghouse Power Corporation | Catalytic combustor cooling tube vibration dampening device |
US20030159152A1 (en) * | 2001-10-23 | 2003-08-21 | Shu Lin | Fast motion trick mode using dummy bidirectional predictive pictures |
JP4099973B2 (en) | 2001-10-30 | 2008-06-11 | 松下電器産業株式会社 | Video data transmission method, video data reception method, and video surveillance system |
CN101448162B (en) * | 2001-12-17 | 2013-01-02 | 微软公司 | Method for processing video image |
US20030163477A1 (en) | 2002-02-25 | 2003-08-28 | Visharam Mohammed Zubair | Method and apparatus for supporting advanced coding formats in media files |
GB0227570D0 (en) * | 2002-11-26 | 2002-12-31 | British Telecomm | Method and system for estimating global motion in video sequences |
GB0227565D0 (en) * | 2002-11-26 | 2002-12-31 | British Telecomm | Method and system for generating panoramic images from video sequences |
GB0227566D0 (en) * | 2002-11-26 | 2002-12-31 | British Telecomm | Method and system for estimating global motion in video sequences |
US7474355B2 (en) * | 2003-08-06 | 2009-01-06 | Zoran Corporation | Chroma upsampling method and apparatus therefor |
US20050047504A1 (en) * | 2003-09-03 | 2005-03-03 | Sung Chih-Ta Star | Data stream encoding method and apparatus for digital video compression |
US7317839B2 (en) * | 2003-09-07 | 2008-01-08 | Microsoft Corporation | Chroma motion vector derivation for interlaced forward-predicted fields |
US7672370B1 (en) * | 2004-03-16 | 2010-03-02 | 3Vr Security, Inc. | Deep frame analysis of multiple video streams in a pipeline architecture |
US20060062478A1 (en) * | 2004-08-16 | 2006-03-23 | Grandeye, Ltd., | Region-sensitive compression of digital video |
US20060056511A1 (en) * | 2004-08-27 | 2006-03-16 | University Of Victoria Innovation And Development Corporation | Flexible polygon motion estimating method and system |
US8243820B2 (en) * | 2004-10-06 | 2012-08-14 | Microsoft Corporation | Decoding variable coded resolution video with native range/resolution post-processing operation |
US8948266B2 (en) * | 2004-10-12 | 2015-02-03 | Qualcomm Incorporated | Adaptive intra-refresh for digital video encoding |
CN101112101A (en) | 2004-11-29 | 2008-01-23 | 高通股份有限公司 | Rate control techniques for video encoding using parametric equations |
WO2006110890A2 (en) | 2005-04-08 | 2006-10-19 | Sarnoff Corporation | Macro-block based mixed resolution video compression system |
US20060232673A1 (en) * | 2005-04-19 | 2006-10-19 | Objectvideo, Inc. | Video-based human verification system and method |
US7801330B2 (en) * | 2005-06-24 | 2010-09-21 | Objectvideo, Inc. | Target detection and tracking from video streams |
US9113147B2 (en) * | 2005-09-27 | 2015-08-18 | Qualcomm Incorporated | Scalability techniques based on content information |
CN100456834C (en) * | 2005-10-17 | 2009-01-28 | 华为技术有限公司 | Method for monitoring service quality of H.264 multimedia communication |
US8130828B2 (en) * | 2006-04-07 | 2012-03-06 | Microsoft Corporation | Adjusting quantization to preserve non-zero AC coefficients |
CN100551072C (en) | 2006-06-05 | 2009-10-14 | 华为技术有限公司 | Quantization matrix selection method and apparatus in encoding, and corresponding decoding method and system |
US20070291118A1 (en) * | 2006-06-16 | 2007-12-20 | Shu Chiao-Fe | Intelligent surveillance system and method for integrated event based surveillance |
JP4363421B2 (en) | 2006-06-30 | 2009-11-11 | ソニー株式会社 | Monitoring system, monitoring system server and monitoring method |
KR100773761B1 (en) * | 2006-09-14 | 2007-11-09 | 한국전자통신연구원 | Apparatus and method for moving picture encoding |
US20080074496A1 (en) * | 2006-09-22 | 2008-03-27 | Object Video, Inc. | Video analytics for banking business process monitoring |
WO2008046243A1 (en) | 2006-10-16 | 2008-04-24 | Thomson Licensing | Method and device for encoding a data stream, method and device for decoding a data stream, video indexing system and image retrieval system |
US20100194868A1 (en) * | 2006-12-15 | 2010-08-05 | Daniel Peled | System, apparatus and method for flexible modular programming for video processors |
US20080184245A1 (en) * | 2007-01-30 | 2008-07-31 | March Networks Corporation | Method and system for task-based video analytics processing |
CN100508610C (en) | 2007-02-02 | 2009-07-01 | 清华大学 | Method for quickly estimating rate and distortion in H.264/AVC video coding |
US7595815B2 (en) * | 2007-05-08 | 2009-09-29 | Kd Secure, Llc | Apparatus, methods, and systems for intelligent security and safety |
CN101325689A (en) | 2007-06-16 | 2008-12-17 | 翰华信息科技(厦门)有限公司 | System and method for monitoring mobile phone remote video |
US10116904B2 (en) * | 2007-07-13 | 2018-10-30 | Honeywell International Inc. | Features in video analytics |
CN101090498B (en) | 2007-07-19 | 2010-06-02 | 华为技术有限公司 | Device and method for motion detection of image |
US20090031381A1 (en) * | 2007-07-24 | 2009-01-29 | Honeywell International, Inc. | Proxy video server for video surveillance |
US9734464B2 (en) * | 2007-09-11 | 2017-08-15 | International Business Machines Corporation | Automatically generating labor standards from video data |
US8624733B2 (en) * | 2007-11-05 | 2014-01-07 | Francis John Cusack, JR. | Device for electronic access control with integrated surveillance |
KR101623890B1 (en) * | 2007-12-20 | 2016-06-07 | 에이티아이 테크놀로지스 유엘씨 | Adjusting video processing in a system having a video source device and a video sink device |
CN101179729A (en) | 2007-12-20 | 2008-05-14 | 清华大学 | H.264 macroblock mode selection method based on inter-frame mode statistical classification |
WO2009094591A2 (en) * | 2008-01-24 | 2009-07-30 | Micropower Appliance | Video delivery systems using wireless cameras |
US9584710B2 (en) * | 2008-02-28 | 2017-02-28 | Avigilon Analytics Corporation | Intelligent high resolution video system |
US8872940B2 (en) * | 2008-03-03 | 2014-10-28 | Videoiq, Inc. | Content aware storage of video data |
GB2491987B (en) * | 2008-03-03 | 2013-03-27 | Videoiq Inc | Method of searching data for objects identified by object detection |
US8128503B1 (en) * | 2008-05-29 | 2012-03-06 | Livestream LLC | Systems, methods and computer software for live video/audio broadcasting |
US8897359B2 (en) * | 2008-06-03 | 2014-11-25 | Microsoft Corporation | Adaptive quantization for enhancement layer video coding |
US8325228B2 (en) * | 2008-07-25 | 2012-12-04 | International Business Machines Corporation | Performing real-time analytics using a network processing solution able to directly ingest IP camera video streams |
CN101389029B (en) | 2008-10-21 | 2012-01-11 | 北京中星微电子有限公司 | Method and apparatus for video image encoding and retrieval |
CN101389023B (en) | 2008-10-21 | 2011-10-12 | 镇江唐桥微电子有限公司 | Adaptive motion estimation method |
US8301792B2 (en) * | 2008-10-28 | 2012-10-30 | Panzura, Inc | Network-attached media plug-in |
JP2010128727A (en) * | 2008-11-27 | 2010-06-10 | Hitachi Kokusai Electric Inc | Image processor |
KR101173560B1 (en) | 2008-12-15 | 2012-08-13 | 한국전자통신연구원 | Fast mode decision apparatus and method |
CN101448145A (en) | 2008-12-26 | 2009-06-03 | 北京中星微电子有限公司 | IP camera, video monitor system and signal processing method of IP camera |
US20100215104A1 (en) * | 2009-02-26 | 2010-08-26 | Akira Osamoto | Method and System for Motion Estimation |
US8675736B2 (en) * | 2009-05-14 | 2014-03-18 | Qualcomm Incorporated | Motion vector processing |
CA2776909A1 (en) * | 2009-10-07 | 2011-04-14 | Telewatch Inc. | Video analytics method and system |
US8780978B2 (en) * | 2009-11-04 | 2014-07-15 | Qualcomm Incorporated | Controlling video encoding using audio information |
US9203883B2 (en) * | 2009-12-08 | 2015-12-01 | Citrix Systems, Inc. | Systems and methods for a client-side remote presentation of a multimedia stream |
US8306314B2 (en) * | 2009-12-28 | 2012-11-06 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for determining poses of objects |
CN101778260B (en) | 2009-12-29 | 2012-01-04 | 公安部第三研究所 | Method and system for monitoring and managing videos on the basis of structured description |
US8503539B2 (en) * | 2010-02-26 | 2013-08-06 | Bao Tran | High definition personal computer (PC) cam |
US20110221895A1 (en) * | 2010-03-10 | 2011-09-15 | Vinay Sharma | Detection of Movement of a Stationary Video Camera |
US9143739B2 (en) * | 2010-05-07 | 2015-09-22 | Iwatchlife, Inc. | Video analytics with burst-like transmission of video data |
US8824554B2 (en) * | 2010-09-02 | 2014-09-02 | Intersil Americas LLC | Systems and methods for video content analysis |
US8890936B2 (en) * | 2010-10-12 | 2014-11-18 | Texas Instruments Incorporated | Utilizing depth information to create 3D tripwires in video |
KR101398319B1 (en) * | 2011-04-15 | 2014-05-22 | 스카이파이어 랩스, 인크. | Real-time video detector |
- 2011
  - 2011-09-02 US US13/225,269 patent/US8824554B2/en not_active Expired - Fee Related
  - 2011-09-02 US US13/225,202 patent/US20120057633A1/en not_active Abandoned
  - 2011-09-02 US US13/225,238 patent/US20120057640A1/en not_active Abandoned
  - 2011-09-02 US US13/225,222 patent/US20120057629A1/en not_active Abandoned
- 2014
  - 2014-08-28 US US14/472,313 patent/US9609348B2/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
US20140369417A1 (en) | 2014-12-18 |
US20120057629A1 (en) | 2012-03-08 |
US20120057634A1 (en) | 2012-03-08 |
US8824554B2 (en) | 2014-09-02 |
US20120057640A1 (en) | 2012-03-08 |
US9609348B2 (en) | 2017-03-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20120057633A1 (en) | Video Classification Systems and Methods | |
US9215466B2 (en) | Joint frame rate and resolution adaptation | |
US10212456B2 (en) | Deblocking filter for high dynamic range (HDR) video | |
US20180091812A1 (en) | Video compression system providing selection of deblocking filters parameters based on bit-depth of video data | |
US8571106B2 (en) | Digital video compression acceleration based on motion vectors produced by cameras | |
GB2580173A (en) | A filter | |
US10715818B2 (en) | Techniques for hardware video encoding | |
US10623744B2 (en) | Scene based rate control for video compression and video streaming | |
US20190132594A1 (en) | Noise Level Control in Video Coding | |
EP3545677A1 (en) | Methods and apparatuses for encoding and decoding video based on perceptual metric classification | |
CN110636312A (en) | Video encoding and decoding method and device and storage medium | |
US11902517B2 (en) | Method and system for adaptive cross-component filtering | |
WO2012027892A1 (en) | Rho-domain metrics | |
CN110545433A (en) | Video encoding and decoding method and device and storage medium | |
Milani | Fast H.264/AVC FRExt intra coding using belief propagation | |
WO2012027894A1 (en) | Video classification systems and methods | |
CN110677721A (en) | Video encoding and decoding method and device and storage medium | |
CN110582022A (en) | Video encoding and decoding method and device and storage medium | |
US20150341659A1 (en) | Use of pipelined hierarchical motion estimator in video coding | |
US20230412807A1 (en) | Bit allocation for neural network feature channel compression | |
Barannik et al. | The Principles of Developing a Differential Video Controlling Scheme Based on the Use of Intelligent Agents | |
US20210360229A1 (en) | Online and offline selection of extended long term reference picture retention | |
US20240064298A1 (en) | Loop filtering, video encoding, and video decoding methods and apparatus, storage medium, and electronic device | |
US20230370622A1 (en) | Learned video compression and connectors for multiple machine tasks | |
Sharma et al. | Parameter optimization for HEVC/H.265 encoder using multi-objective optimization technique | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | AS | Assignment | Owner name: INTERSIL AMERICAS INC., CALIFORNIA; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHI, FANG;WANG, BIAO;SIGNING DATES FROM 20110909 TO 20110913;REEL/FRAME:027002/0692 |
 | AS | Assignment | Owner name: INTERSIL AMERICAS LLC, CALIFORNIA; Free format text: CHANGE OF NAME;ASSIGNOR:INTERSIL AMERICAS INC.;REEL/FRAME:033119/0484; Effective date: 20111223 |
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |