US20130083852A1 - Two-dimensional motion compensation filter operation and processing

Two-dimensional motion compensation filter operation and processing

Info

Publication number
US20130083852A1
US20130083852A1 (application US13/333,529)
Authority
US
United States
Prior art keywords
frame
fractional
dimension
pel
pixels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/333,529
Inventor
Ba-Zhong Shen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Broadcom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Broadcom Corp
Priority to US13/333,529
Assigned to BROADCOM CORPORATION: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SHEN, BA-ZHONG
Publication of US20130083852A1
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT: PATENT SECURITY AGREEMENT. Assignors: BROADCOM CORPORATION
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BROADCOM CORPORATION
Assigned to BROADCOM CORPORATION: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS. Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/164: using adaptive coding, characterised by the element, parameter or criterion affecting or controlling the adaptive coding; feedback from the receiver or from the transmission channel
    • H04N19/103: using adaptive coding, characterised by the element, parameter or selection affected or controlled by the adaptive coding; selection of coding mode or of prediction mode
    • H04N19/30: using hierarchical techniques, e.g. scalability
    • H04N19/40: using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N19/86: using pre-processing or post-processing specially adapted for video compression, involving reduction of coding artifacts, e.g. of blockiness
    • H04N19/895: using pre-processing or post-processing specially adapted for video compression, involving methods or arrangements for detection of transmission errors at the decoder, in combination with error concealment

Definitions

  • the output of the transform and quantization undergoes inverse quantization and inverse transform.
  • One or both of intra-prediction and inter-prediction may be performed in accordance with video encoding.
  • motion compensation and/or motion estimation may be performed in accordance with such video encoding.
  • the output from the de-blocking filter is provided to one or more other in-loop filters (e.g., implemented in accordance with adaptive loop filter (ALF), sample adaptive offset (SAO) filter, and/or any other filter type) implemented to process the output from the inverse transform block.
  • such an ALF is applied to the decoded picture before it is stored in a picture buffer (sometimes alternatively referred to as a DPB, digital picture buffer).
  • such an ALF is implemented to reduce coding noise of the decoded picture, and its filtering may be selectively applied, for luminance and chrominance respectively, on a slice-by-slice basis, whether at the slice level or at the block level.
  • two-dimensional (2-D) finite impulse response (FIR) filtering may be used in application of such an ALF.
  • the coefficients of the filters may be designed slice by slice at the encoder, and such information is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]).
  • One embodiment is operative to generate the coefficients in accordance with Wiener filtering design.
  • whether the filtering is performed may be decided on a block-by-block basis at the encoder, and such a decision is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]) based on a quadtree structure, where the block size is decided according to rate-distortion optimization.
  • the implementation of such 2-D filtering may introduce a degree of complexity in both encoding and decoding. For example, using 2-D filtering in an implementation of an ALF may increase complexity within an encoder implemented within the transmitter communication device as well as within a decoder implemented within a receiver communication device.
  • an ALF can provide any of a number of improvements in accordance with such video processing, including an improvement in the objective quality as measured by the peak signal-to-noise ratio (PSNR), which comes from performing random quantization noise removal.
  • an improvement in the subjective quality of a subsequently encoded video signal may be achieved from illumination compensation, which may be introduced in accordance with performing offset processing and scaling processing (e.g., applying a gain) as part of ALF processing.
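  • As a rough, hypothetical sketch of how such coefficients might be designed slice by slice in accordance with the Wiener filtering design mentioned above, the following solves a least-squares problem fitting a 2-D FIR filter that maps the decoded picture toward the original; the window size, function name, and use of NumPy are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def design_wiener_filter(decoded, original, radius=2):
    """Least-squares (Wiener-style) design of a 2-D FIR filter.

    decoded:  reconstructed (noisy) picture plane, 2-D ndarray
    original: co-located source plane of the same shape
    radius:   half-width of the (2*radius+1)^2 support (assumed)
    """
    k = 2 * radius + 1
    h, w = decoded.shape
    rows, targets = [], []
    # One regression row per interior pixel: the k*k neighborhood of the
    # decoded picture should predict the co-located original pixel.
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = decoded[y - radius:y + radius + 1, x - radius:x + radius + 1]
            rows.append(patch.ravel())
            targets.append(original[y, x])
    A = np.asarray(rows)
    b = np.asarray(targets)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(k, k)  # coefficients that would be signaled to the decoder

rng = np.random.default_rng(0)
original = rng.normal(128.0, 20.0, size=(16, 16))
decoded = original + rng.normal(0.0, 2.0, size=original.shape)  # coding-noise stand-in
print(design_wiener_filter(decoded, original).shape)  # (5, 5)
```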
  • FIG. 4 is a diagram illustrating an embodiment 400 of intra-prediction processing.
  • intra-prediction operates on a current block of video data (e.g., often times being square in shape and including generally N×N pixels).
  • Previously coded pixels located above and to the left of the current block are employed in accordance with such intra-prediction.
  • an intra-prediction direction may be viewed as corresponding to a vector extending from a current pixel to a reference pixel located above or to the left of the current pixel.
  • intra-prediction as applied to coding in accordance with H.264/AVC are specified within the corresponding standard (e.g., International Telecommunication Union, ITU-T, TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU, H.264 (March 2010), SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services—Coding of moving video, Advanced video coding for generic audiovisual services, Recommendation ITU-T H.264, also alternatively referred to as International Telecomm ISO/IEC 14496-10—MPEG-4 Part 10, AVC (Advanced Video Coding), H.264/MPEG-4 Part 10 or AVC (Advanced Video Coding), ITU H.264/MPEG4-AVC, or equivalent) that is incorporated by reference above.
  • the residual, which is the difference between the current pixel and the reference or prediction pixel, is that which gets encoded.
  • intra-prediction operates using pixels within a common frame (or picture). It is of course noted that a given pixel may have different respective components associated therewith, and there may be different respective sets of samples for each respective component.
  • a basic unit may be employed for the prediction partition mode, namely, the prediction unit, or PU.
  • the PU is defined only for the last depth CU, and its respective size is limited to that of the CU.
  • FIG. 6 is a diagram illustrating an embodiment 600 of a video decoding architecture.
  • a decoder such as an entropy decoder (e.g., which may be implemented in accordance with CABAC, CAVLC, etc.) processes the input bitstream in accordance with performing the complementary process of encoding as performed within a video encoder architecture.
  • the input bitstream may be viewed as being, as closely as possible and perfectly in an ideal case, the compressed output bitstream generated by a video encoder architecture.
  • the entropy decoder processes the input bitstream and extracts the appropriate coefficients, such as the DCT coefficients (e.g., such as representing chroma, luma, etc. information) and provides such coefficients to an inverse quantization and inverse transform block.
  • the inverse quantization and inverse transform block may be implemented to perform an inverse DCT (IDCT) operation.
  • a de-blocking filter is implemented to generate the respective frames and/or pictures corresponding to an output video signal. These frames and/or pictures may be provided into a picture buffer, or a digital picture buffer (DPB), for use in performing other operations including motion compensation.
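  • As a concrete sketch of the inverse quantization and inverse transform (IDCT) step, the following round-trips an 8×8 block through SciPy's DCT routines; the block size and quantization step size are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # one 8x8 pixel block

coeffs = dctn(block, norm='ortho')         # forward 2-D DCT (encoder side)
step = 16.0                                # illustrative quantization step size
levels = np.round(coeffs / step)           # quantized levels carried in the bitstream

recon_coeffs = levels * step               # inverse quantization (decoder side)
recon = idctn(recon_coeffs, norm='ortho')  # inverse 2-D DCT (IDCT)

print(np.max(np.abs(recon - block)))       # error bounded by the quantization step
```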
  • the fractional-pixels a, b, and c (e.g., per the sample positions depicted in FIG. 10) are determined as follows:

    $a_{0,0} = (A_{-3,0}\ A_{-2,0}\ A_{-1,0}\ A_{0,0}\ A_{1,0}\ A_{2,0}\ A_{3,0}\ A_{4,0}) * H(1/4)$
    $b_{0,0} = (A_{-3,0}\ A_{-2,0}\ A_{-1,0}\ A_{0,0}\ A_{1,0}\ A_{2,0}\ A_{3,0}\ A_{4,0}) * H(1/2)$
    $c_{0,0} = (A_{-3,0}\ A_{-2,0}\ A_{-1,0}\ A_{0,0}\ A_{1,0}\ A_{2,0}\ A_{3,0}\ A_{4,0}) * H(3/4)$

  • the pixels d, h, and n are determined as follows:

    $d_{0,0} = (A_{0,-3}\ A_{0,-2}\ A_{0,-1}\ A_{0,0}\ A_{0,1}\ A_{0,2}\ A_{0,3}\ A_{0,4}) * H(1/4)^{T}$
    $h_{0,0} = (A_{0,-3}\ A_{0,-2}\ A_{0,-1}\ A_{0,0}\ A_{0,1}\ A_{0,2}\ A_{0,3}\ A_{0,4}) * H(1/2)^{T}$
    $n_{0,0} = (A_{0,-3}\ A_{0,-2}\ A_{0,-1}\ A_{0,0}\ A_{0,1}\ A_{0,2}\ A_{0,3}\ A_{0,4}) * H(3/4)^{T}$

  • for the fractional-pixels e, f, g, i, j, k, p, q, and r, intermediate values are first determined as follows:

    $d_{i,0} = (A_{i,-3}\ A_{i,-2}\ A_{i,-1}\ A_{i,0}\ A_{i,1}\ A_{i,2}\ A_{i,3}\ A_{i,4}) * H(1/4)^{T}$
    $h_{i,0} = (A_{i,-3}\ A_{i,-2}\ A_{i,-1}\ A_{i,0}\ A_{i,1}\ A_{i,2}\ A_{i,3}\ A_{i,4}) * H(1/2)^{T}$
    $n_{i,0} = (A_{i,-3}\ A_{i,-2}\ A_{i,-1}\ A_{i,0}\ A_{i,1}\ A_{i,2}\ A_{i,3}\ A_{i,4}) * H(3/4)^{T}$

  • the processing operates to compute these intermediate values over the left 3 columns of full-pel samples and the right 4 columns of full-pel samples (i.e., $i = -3, \ldots, 4$); the fractional-pixels e, f, g, i, j, k, p, q, and r are then obtained by filtering the intermediate values horizontally.
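  • A minimal sketch of this separable, two-stage computation: the 8-tap half-pel coefficients below are the draft HEVC luma filter, used purely as an example for H(1/2) (the quarter-pel filters, integer arithmetic, and clipping of an actual codec are omitted, and the helper names are assumptions).

```python
import numpy as np

# Illustrative 8-tap half-pel interpolation filter H(1/2).
H_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0

def half_pel_h(A, y, x):
    """b-like position: filter a row of 8 integer samples around (y, x)."""
    return A[y, x - 3:x + 5] @ H_HALF

def half_pel_v(A, y, x):
    """h-like position: filter a column of 8 integer samples around (y, x)."""
    return A[y - 3:y + 5, x] @ H_HALF

def half_pel_center(A, y, x):
    """j-like (half, half) position: first compute the vertical
    intermediates h_{i,0} over columns i = -3..4, then filter them
    horizontally -- the sequential dependency noted in the text."""
    intermediates = np.array([half_pel_v(A, y, x + i) for i in range(-3, 5)])
    return intermediates @ H_HALF

rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(16, 16)).astype(float)   # integer reference samples
print(half_pel_h(A, 8, 8), half_pel_v(A, 8, 8), half_pel_center(A, 8, 8))
```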
  • At least one drawback may be understood, in that the computation associated with fractional-pixels e, f, g, i, j, k, p, q, and r cannot be carried out at the same time as that of sub-pixels a, b, c, d, h, and n.
  • the fact that these operations cannot be performed simultaneously, in parallel, etc. prohibits an implementation that exploits parallelism.
  • moreover, the fractional-pixels are always up-sampled (estimated) in one direction at a time, either horizontal or vertical, yet the picture or frame is actually a two-dimensional object. That is to say, by performing such sampling only with respect to a single dimension, the operations associated therewith are inherently limited and imperfect.
  • FIG. 10 is a diagram illustrating an embodiment of integer samples (blocks with upper-case letters) and fractional sample positions (blocks with lower-case letters).
  • interpolation is then performed with respect to the fractional sample positions interveningly situated between actual pixel values within the reference frame or picture.
  • this diagram particularly shows three fractional sample positions intervening between respective actual pixel values in both the vertical and horizontal directions.
  • such interpolation may be employed for calculating the actual pixel values of a current frame or picture. For example, based upon such interpolation as may be performed to identify the appropriate value associated with an actual location within a reference frame or picture (e.g., again, which may not correspond exactly to an actual pixel value within the reference frame or picture and for which interpolation may be performed), these calculated filter coefficients may be employed in conjunction with a residual (e.g., as described above with respect to various video processing operations) to generate the actual pixel values of a current frame or picture.
  • such calculation of these respective fractional sample positions intervening between actual pixel values may be performed simultaneously within more than one dimension. For example, such calculations may be performed simultaneously in both the vertical and horizontal directions. For example, considering 1/4 sub-pixel sampling with respect to the four respective pixels A0,0, A1,0, A0,1, and A1,1, the corresponding fractional sample positions a0,0, b0,0, c0,0, d0,0, e0,0, f0,0, g0,0, h0,0, i0,0, j0,0, k0,0, n0,0, p0,0, q0,0, and r0,0, as well as a0,1, b0,1, c0,1, and d1,0, h1,0, n1,0, may all be calculated simultaneously and/or in parallel with one another.
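  • A sketch of how such a simultaneous, two-dimensional computation might look: each fractional position gets its own 2-D kernel (here the outer product of two 1-D filters, a simplifying assumption consistent with separable filtering), so every position depends only on the integer samples and can be evaluated independently and in parallel. The filter coefficients are again illustrative stand-ins.

```python
import numpy as np

H_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0    # illustrative
H_QUARTER = np.array([-1, 4, -10, 58, 17, -5, 1, 0]) / 64.0   # illustrative

FILTERS = {0.25: H_QUARTER, 0.5: H_HALF}

def frac_pel_2d(A, y, x, frac_y, frac_x):
    """Evaluate one fractional position with a single 2-D kernel.

    The kernel is the outer product of the vertical and horizontal 1-D
    filters, so no intermediate rows or columns are required; each
    (frac_y, frac_x) position is independent of every other one.
    """
    kernel = np.outer(FILTERS[frac_y], FILTERS[frac_x])   # 8x8 2-D filter
    window = A[y - 3:y + 5, x - 3:x + 5]                  # 8x8 integer support
    return float(np.sum(window * kernel))

rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(16, 16)).astype(float)

# e.g., the (1/2, 1/2) center position and a (1/4, 1/4) position; all such
# positions could be dispatched to parallel hardware or threads at once.
print(frac_pel_2d(A, 8, 8, 0.5, 0.5), frac_pel_2d(A, 8, 8, 0.25, 0.25))
```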
  • FIG. 11 illustrates an embodiment 1100 of two-dimensional (2-D) processing with respect to pixels of a frame or picture.
  • 2-D fractional-pel interpolation may be performed as follows:
  • consider x(t,s), a continuous two-dimensional signal, together with its respective sampled sequences; a sketch of this setup appears below.
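  • A sketch of that setup under the standard sampling model (the symbols here, beyond x(t,s) itself, are assumptions for illustration):

```latex
% Sampled sequence of the continuous picture x(t,s), with sampling
% periods T_t and T_s in the two dimensions:
x_d[m,n] = x(m T_t,\; n T_s)

% 2-D fractional-pel interpolation at offsets (p, q), 0 <= p, q < 1,
% estimates the picture between samples in both dimensions at once:
\hat{x}\big((m+p) T_t,\; (n+q) T_s\big)
    = \sum_{k} \sum_{l} H_v(p)[k]\, H_h(q)[l]\; x_d[m-k,\, n-l]

% i.e., one 2-D kernel H_v(p) H_h(q)^T applied in a single pass, rather
% than two sequential one-dimensional passes.
```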
  • processing in operation is effectuated such that more than one respective dimension is employed simultaneously and/or in parallel. That is to say, with respect to an input bitstream, at least with respect to a two-dimensional image, processing of the input bitstream is made with respect to a first dimension (e.g., vertical) and a second dimension (e.g., horizontal) simultaneously or in parallel thereby generating the coefficient values. Moreover, as may also be seen, at least one fractional-pel value in the first dimension and at least one fractional-pel value in the second dimension are employed.
  • in certain embodiments, at least one fractional-pel value corresponding to a given fractional-pel distance in the vertical dimension is employed, and at least one fractional-pel value corresponding to that same fractional-pel distance in the horizontal dimension is employed (i.e., the fractional-pel distance is employed commonly within more than one dimension).
  • alternatively, different respective fractional-pel distances may be employed with respect to the different dimensions: a first fractional-pel distance may be employed in a first dimension (e.g., vertical), and a second fractional-pel distance may be employed in the second dimension (e.g., horizontal).
  • for example, a signal may be sampled at fractional-pel periods $(T_s/U,\ T_t/U,\ T_u/U,\ T_v/U)$, where U is an integer, and so on; different respective integers $U_1$, $U_2$, $U_3$, etc. may be employed along different dimensions.
  • any two respective dimensions may employ the very same fractional-pel distance, such that either one of $U_2$ and $U_3$ may be the same as $U_1$.
  • such operations may be extended towards signals having any of a number of multiple dimensions.
  • such simultaneous and/or in parallel processing may be made with respect to each of the respective three dimensions.
  • such motion compensation processing may be applied towards the respective image or video signals generated from at least two respective cameras.
  • each respective camera may be operative for generating a two-dimensional image or video signal
  • motion compensation processing as described herein with respect to two-dimensional signals may be applied respectively to the respective image or video signals generated from each of these two respective cameras.
  • information related to more than one respective dimension may be employed simultaneously and/or in parallel in accordance with generating coefficient values.
  • coefficient values associated with a video signal are generated with respect to each of two or more dimensions simultaneously and/or in parallel.
  • the generation of these respective coefficient values is based on at least one respective fractional-pel value in each of the respective dimensions (e.g., a first at least one fractional-pel value in a first dimension, a second at least one fractional-pel value in a second dimension, etc. such that different respective fractional-pel values are employed in different respective dimensions).
  • the same fractional-pel distance may be employed within each of the respective dimensions in certain embodiments.
  • the method 1301 operates by processing an input plurality of pixels, with respect to a first dimension and a second dimension simultaneously and/or in parallel thereby generating a plurality of coefficient values, as shown in a block 1311 .
  • the operations associated with the block 1311 may be performed such that each of the plurality of coefficient values is based on a respective at least one fractional-pel value in the first dimension and a respective at least one fractional-pel value in the second dimension.
  • the same fractional-pel value may be used with respect to both dimensions or different respective fractional-pel values may be employed respectively within different respective dimensions.
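  • A sketch of how the method of block 1311 might be realized in software, dispatching per-position work to parallel workers; the thread pool, function names, and reuse of the outer-product kernel above are all illustrative assumptions, not the patent's architecture.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

H_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1]) / 64.0  # illustrative

def coeff_at(A, y, x, f_v=H_HALF, f_h=H_HALF):
    """One coefficient value from a single two-dimensional pass; f_v and
    f_h may encode the same or different fractional-pel distances."""
    window = A[y - 3:y + 5, x - 3:x + 5]
    return float(np.sum(window * np.outer(f_v, f_h)))

def process_pixels(A, positions):
    """Block 1311: process input pixels with respect to both dimensions
    simultaneously, generating coefficient values in parallel."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(coeff_at, A, y, x) for (y, x) in positions]
        return [f.result() for f in futures]

rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(32, 32)).astype(float)
print(process_pixels(A, [(8, 8), (8, 9), (9, 8)]))
```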
  • such a device may be viewed generally as including an input for receiving an input bitstream or a signal corresponding to the input bitstream from a source device or another middling device via one or more communication networks, as well as including an output for outputting an output bitstream or a signal corresponding to an output bitstream or an output video signal to at least one destination device via the one or more communication networks or one or more additional communication networks.
  • processing module, module, processing circuit, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry.
  • the present invention may have also been described, at least in part, in terms of one or more embodiments.
  • An embodiment of the present invention is used herein to illustrate the present invention, an aspect thereof, a feature thereof, a concept thereof, and/or an example thereof.
  • a physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process that embodies the present invention may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein.
  • the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
  • transistors, such as field effect transistors (FETs), may be implemented using any type of transistor structure including, but not limited to, metal oxide semiconductor field effect transistors (MOSFETs), N-well transistors, P-well transistors, enhancement mode, depletion mode, and zero voltage threshold (VT) transistors.
  • signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential.
  • if a signal path is shown as a single-ended path, it also represents a differential signal path.
  • if a signal path is shown as a differential path, it also represents a single-ended signal path.
  • the term "module" is used in the description of the various embodiments of the present invention.
  • a module includes a processing module, a functional block, hardware, and/or software stored on memory for performing one or more functions as may be described herein. Note that, if the module is implemented via hardware, the hardware may operate independently and/or in conjunction with software and/or firmware.
  • a module may contain one or more sub-modules, each of which may be one or more modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Two-dimensional motion compensation filter operation and processing. A video bitstream or signal corresponding thereto undergoes motion compensation operations simultaneously or in parallel with respect to at least two respective dimensions (e.g., at least horizontal and vertical) in accordance with generating coefficient values employed for generating a decoded and/or output video signal. The simultaneous and in parallel operations made with respect to more than one dimension associated with the video bitstream or signal may employ a two-dimensional discrete cosine transform (2-D DCT) implemented to operate on more than one dimension simultaneously. Same or different respective fractional-pel distances may be employed with respect to multiple respective dimensions (e.g., common/same fractional-pel distance for all of the multiple respective dimensions, or different respective fractional-pel distances with respect to each of the multiple respective dimensions [such as a first fractional-pel distance for a first dimension, a second fractional-pel distance for a second dimension, etc.]).

Description

    CROSS REFERENCE TO RELATED PATENTS/PATENT APPLICATIONS
    Provisional Priority Claims
  • The present U.S. Utility patent application claims priority pursuant to 35 U.S.C. §119(e) to the following U.S. Provisional patent application which is hereby incorporated herein by reference in its entirety and made part of the present U.S. Utility patent application for all purposes:
    • 1. U.S. Provisional Patent Application Ser. No. 61/541,938, entitled “Coding, communications, and signaling of video content within communication systems,” (Attorney Docket No. BP23215), filed Sep. 30, 2011, pending.
    Incorporation by Reference
  • The following standards/draft standards are hereby incorporated herein by reference in their entirety and are made part of the present U.S. Utility patent application for all purposes:
    • 1. “WD4: Working Draft 4 of High-Efficiency Video Coding, Joint Collaborative Team on Video Coding (JCT-VC),” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 6th Meeting: Torino, IT, 14-22 Jul. 2011, Document: JCTVC-F803 d4, 230 pages.
    • 2. International Telecommunication Union, ITU-T, TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU, H.264 (March 2010), SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services—Coding of moving video, Advanced video coding for generic audiovisual services, Recommendation ITU-T H.264, also alternatively referred to as International Telecomm ISO/IEC 14496-10—MPEG-4 Part 10, AVC (Advanced Video Coding), H.264/MPEG-4 Part 10 or AVC (Advanced Video Coding), ITU H.264/MPEG4-AVC, or equivalent.
    BACKGROUND OF THE INVENTION
  • 1. Technical Field of the Invention
  • The invention relates generally to digital video processing; and, more particularly, it relates to performing motion compensation filter processing in accordance with video processing in accordance with such digital video processing.
  • 2. Description of Related Art
  • Communication systems that operate to communicate digital media (e.g., images, video, data, etc.) have been under continual development for many years. With respect to such communication systems employing some form of video data, a number of digital images are output or displayed at some frame rate (e.g., frames per second) to effectuate a video signal suitable for output and consumption. Within many such communication systems operating using video data, there can be a trade-off between throughput (e.g., number of image frames that may be transmitted from a first location to a second location) and video and/or image quality of the signal eventually to be output or displayed. The present art does not adequately or acceptably provide a means by which video data may be transmitted from a first location to a second location in accordance with providing an adequate or acceptable video and/or image quality, ensuring a relatively low amount of overhead associated with the communications, relatively low complexity of the communication devices at respective ends of communication links, etc.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
  • FIG. 1 illustrates an embodiment of communication system.
  • FIG. 2A illustrates an embodiment of a computer.
  • FIG. 2B illustrates an embodiment of a laptop computer.
  • FIG. 2C illustrates an embodiment of a high definition (HD) television.
  • FIG. 2D illustrates an embodiment of a standard definition (SD) television.
  • FIG. 2E illustrates an embodiment of a handheld media unit.
  • FIG. 2F illustrates an embodiment of a set top box (STB).
  • FIG. 2G illustrates an embodiment of a digital video disc (DVD) player.
  • FIG. 2H illustrates an embodiment of a generic digital image and/or video processing device.
  • FIG. 3 is a diagram illustrating an embodiment of video encoding architecture.
  • FIG. 4 is a diagram illustrating an embodiment of intra-prediction processing.
  • FIG. 5 is a diagram illustrating an embodiment of inter-prediction processing.
  • FIG. 6 is a diagram illustrating an embodiment of a video decoding architecture.
  • FIG. 7, FIG. 8, and FIG. 9 illustrate various embodiments, respectively, of discrete cosine transform (DCT) processing with respect to pixels of a frame or picture.
  • FIG. 10 is a diagram illustrating an embodiment of integer samples (blocks with upper-case letters) and fractional sample positions (blocks with lower-case letters).
  • FIG. 11 illustrates an embodiment of two-dimensional processing with respect to pixels of a frame or picture.
  • FIG. 12 illustrates an embodiment showing simultaneous or in parallel generation of coefficients in multiple dimensions in accordance with video processing operations.
  • FIG. 13A and FIG. 13B illustrate various embodiments of methods for performing various video processing operations.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Within many devices that use digital media such as digital video, respective images thereof, being digital in nature, are represented using pixels. Within certain communication systems, digital media can be transmitted from a first location to a second location at which such media can be output or displayed. The goal of digital communications systems, including those that operate to communicate digital video, is to transmit digital data from one location, or subsystem, to another either error free or with an acceptably low error rate. As shown in FIG. 1, data may be transmitted over a variety of communications channels in a wide variety of communication systems: magnetic media, wired, wireless, fiber, copper, and/or other types of media as well.
  • FIG. 1 illustrates an embodiment of communication system 100.
  • Referring to FIG. 1, this embodiment of a communication system 100 is a communication channel 199 that communicatively couples a communication device 110 (including a transmitter 112 having an encoder 114 and including a receiver 116 having a decoder 118) situated at one end of the communication channel 199 to another communication device 120 (including a transmitter 126 having an encoder 128 and including a receiver 122 having a decoder 124) at the other end of the communication channel 199. In some embodiments, either of the communication devices 110 and 120 may only include a transmitter or a receiver. There are several different types of media by which the communication channel 199 may be implemented (e.g., a satellite communication channel 130 using satellite dishes 132 and 134, a wireless communication channel 140 using towers 142 and 144 and/or local antennae 152 and 154, a wired communication channel 150, and/or a fiber-optic communication channel 160 using electrical to optical (E/O) interface 162 and optical to electrical (O/E) interface 164). In addition, more than one type of media may be implemented and interfaced together thereby forming the communication channel 199.
  • It is noted that such communication devices 110 and/or 120 may be stationary or mobile without departing from the scope and spirit of the invention. For example, either one or both of the communication devices 110 and 120 may be implemented in a fixed location or may be a mobile communication device with capability to associate with and/or communicate with more than one network access point (e.g., different respective access points (APs) in the context of a mobile communication system including one or more wireless local area networks (WLANs), different respective satellites in the context of a mobile communication system including one or more satellites, or generally, different respective network access points in the context of a mobile communication system including one or more network access points by which communications may be effectuated with communication devices 110 and/or 120).
  • To reduce transmission errors that may undesirably be incurred within a communication system, error correction and channel coding schemes are often employed. Generally, these error correction and channel coding schemes involve the use of an encoder at the transmitter end of the communication channel 199 and a decoder at the receiver end of the communication channel 199.
  • Any of various types of ECC codes described can be employed within any such desired communication system (e.g., including those variations described with respect to FIG. 1), any information storage device (e.g., hard disk drives (HDDs), network information storage devices and/or servers, etc.) or any application in which information encoding and/or decoding is desired.
  • Generally speaking, when considering a communication system in which video data is communicated from one location, or subsystem, to another, video data encoding may generally be viewed as being performed at a transmitting end of the communication channel 199, and video data decoding may generally be viewed as being performed at a receiving end of the communication channel 199.
  • Also, while the embodiment of this diagram shows bi-directional communication being capable between the communication devices 110 and 120, it is of course noted that, in some embodiments, the communication device 110 may include only video data encoding capability, and the communication device 120 may include only video data decoding capability, or vice versa (e.g., in a uni-directional communication embodiment such as in accordance with a video broadcast embodiment).
  • Digital image and/or video processing of digital images and/or media (including the respective images within a digital video signal) may be performed by any of the various devices depicted below in FIG. 2A-2H to allow a user to view such digital images and/or video. These various devices do not include an exhaustive list of devices in which the image and/or video processing described herein may be effectuated, and it is noted that any generic digital image and/or video processing device may be implemented to perform the processing described herein without departing from the scope and spirit of the invention.
  • FIG. 2A illustrates an embodiment of a computer 201. The computer 201 can be a desktop computer, or an enterprise storage device such as a server of a host computer that is attached to a storage array such as a redundant array of independent disks (RAID) array, storage router, edge router, storage switch and/or storage director. A user is able to view still digital images and/or video (e.g., a sequence of digital images) using the computer 201. Oftentimes, various image and/or video viewing programs and/or media player programs are included on a computer 201 to allow a user to view such images (including video).
  • FIG. 2B illustrates an embodiment of a laptop computer 202. Such a laptop computer 202 may be found and used in any of a wide variety of contexts. In recent years, with the ever-increasing processing capability and functionality found within laptop computers, they are being employed in many instances where previously higher-end and more capable desktop computers would be used. As with the computer 201, the laptop computer 202 may include various image viewing programs and/or media player programs to allow a user to view such images (including video).
  • FIG. 2C illustrates an embodiment of a high definition (HD) television 203. Many HD televisions 203 include an integrated tuner to allow the receipt, processing, and decoding of media content (e.g., television broadcast signals) thereon. Alternatively, sometimes an HD television 203 receives media content from another source such as a digital video disc (DVD) player or a set top box (STB) that receives, processes, and decodes a cable and/or satellite television broadcast signal. Regardless of the particular implementation, the HD television 203 may be implemented to perform image and/or video processing as described herein. Generally speaking, an HD television 203 has capability to display HD media content and oftentimes is implemented having a 16:9 widescreen aspect ratio.
  • FIG. 2D illustrates an embodiment of a standard definition (SD) television 204. Of course, an SD television 204 is somewhat analogous to an HD television 203, with at least one difference being that the SD television 204 does not include capability to display HD media content, and an SD television 204 oftentimes is implemented having a 4:3 full screen aspect ratio. Nonetheless, even an SD television 204 may be implemented to perform image and/or video processing as described herein.
  • FIG. 2E illustrates an embodiment of a handheld media unit 205. A handheld media unit 205 may operate to provide general storage or storage of image/video content information such as joint photographic experts group (JPEG) files, tagged image file format (TIFF), bitmap, motion picture experts group (MPEG) files, Windows Media (WMA/WMV) files, other types of video content such as MPEG4 files, etc. for playback to a user, and/or any other type of information that may be stored in a digital format. Historically, such handheld media units were primarily employed for storage and playback of audio media; however, such a handheld media unit 205 may be employed for storage and playback of virtually any media (e.g., audio media, video media, photographic media, etc.). Moreover, such a handheld media unit 205 may also include other functionality such as integrated communication circuitry for wired and wireless communications. Such a handheld media unit 205 may be implemented to perform image and/or video processing as described herein.
  • FIG. 2F illustrates an embodiment of a set top box (STB) 206. As mentioned above, sometimes an STB 206 may be implemented to receive, process, and decode a cable and/or satellite television broadcast signal to be provided to any appropriate display capable device such as SD television 204 and/or HD television 203. Such an STB 206 may operate independently or cooperatively with such a display capable device to perform image and/or video processing as described herein.
  • FIG. 2G illustrates an embodiment of a digital video disc (DVD) player 207. Such a DVD player may be a Blu-Ray DVD player, an HD capable DVD player, an SD capable DVD player, an up-sampling capable DVD player (e.g., from SD to HD, etc.) without departing from the scope and spirit of the invention. The DVD player may provide a signal to any appropriate display capable device such as SD television 204 and/or HD television 203. The DVD player 207 may be implemented to perform image and/or video processing as described herein.
  • FIG. 2H illustrates an embodiment of a generic digital image and/or video processing device 208. Again, as mentioned above, these various devices described above do not include an exhaustive list of devices in which the image and/or video processing described herein may be effectuated, and it is noted that any generic digital image and/or video processing device 208 may be implemented to perform the image and/or video processing described herein without departing from the scope and spirit of the invention.
  • FIG. 3 is a diagram illustrating an embodiment 300 of video encoding architecture.
  • Referring to embodiment 300 of FIG. 3, with respect to this diagram depicting an alternative embodiment of a video encoder, such a video encoder carries out prediction, transform, and encoding processes to produce a compressed output bit stream. Such a video encoder may operate in accordance with and be compliant with one or more video encoding protocols, standards, and/or recommended practices such as ISO/IEC 14496-10—MPEG-4 Part 10, AVC (Advanced Video Coding), alternatively referred to as H.264/MPEG-4 Part 10 or AVC (Advanced Video Coding), ITU H.264/MPEG4-AVC.
  • It is noted that a corresponding video decoder, such as located within a device at another end of a communication channel, is operative to perform the complementary processes of decoding, inverse transform, and reconstruction to produce a respective decoded video sequence that is (ideally) representative of the input video signal.
  • As may be seen with respect to this diagram and as may be understood with respect to various embodiments, alternative arrangements and architectures may be employed for effectuating video encoding. Generally speaking, an encoder processes an input video signal (e.g., typically composed in units of macro-blocks, oftentimes being square in shape and including N×N pixels therein). The video encoding determines a prediction of the current macro-block based on previously coded data. That previously coded data may come from the current frame (or picture) itself (e.g., such as in accordance with intra-prediction) or from one or more other frames (or pictures) that have already been coded (e.g., such as in accordance with inter-prediction). The video encoder subtracts the prediction from the current macro-block to form a residual.
  • Generally speaking, intra-prediction is operative to employ block sizes of one or more particular sizes (e.g., 16×16, 8×8, or 4×4) to predict a current macro-block from surrounding, previously coded pixels within the same frame (or picture). Generally speaking, inter-prediction is operative to employ a range of block sizes (e.g., 16×16 down to 4×4) to predict pixels in the current frame (or picture) from regions that are selected from within one or more previously coded frames (or pictures).
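  • As a rough illustration of the intra-prediction principle described above, the following sketch predicts a block from its previously coded neighbors using the simple DC mode (the mean of the pixels above and to the left). The function name, the 4×4 default block size, and the mid-gray fallback are illustrative assumptions, not details taken from this disclosure.

```python
import numpy as np

def intra_dc_predict(frame, top, left, n=4):
    """Predict an n x n block as the mean of the previously coded pixels
    immediately above and to the left of it (DC intra mode)."""
    above = frame[top - 1, left:left + n] if top > 0 else np.array([])
    beside = frame[top:top + n, left - 1] if left > 0 else np.array([])
    neighbors = np.concatenate([above, beside])
    # Fall back to mid-gray when no coded neighbors exist (illustrative choice).
    dc = int(round(neighbors.mean())) if neighbors.size else 128
    return np.full((n, n), dc, dtype=frame.dtype)
```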
  • With respect to the transform and quantization operations, a block of residual samples may undergo transformation using a particular transform (e.g., 4×4 or 8×8). One possible embodiment of such a transform operates in accordance with discrete cosine transform (DCT). The transform operation outputs a group of coefficients such that each respective coefficient corresponds to a respective weighting value of one or more basis functions associated with a transform. After undergoing transformation, a block of transform coefficients is quantized (e.g., each respective coefficient may be divided by an integer value, with any associated remainder being discarded; inverse quantization later multiplies by that integer value). The quantization process is generally inherently lossy, and it can reduce the precision of the transform coefficients according to a quantization parameter (QP). Typically, many of the coefficients associated with a given macro-block are zero, and only some nonzero coefficients remain. Generally, a relatively high QP setting results in a greater proportion of zero-valued coefficients and smaller magnitudes of non-zero coefficients, yielding relatively high compression (e.g., a relatively lower coded bit rate) at the expense of relatively poor decoded image quality; a relatively low QP setting allows more nonzero coefficients of larger magnitude to remain after quantization, yielding relatively lower compression (e.g., a relatively higher coded bit rate) with relatively better decoded image quality.
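  • To make the QP trade-off concrete, below is a minimal sketch of uniform scalar quantization and inverse quantization of a block of transform coefficients. The exponential step-size mapping (roughly doubling every 6 QP steps, as in H.264-style codecs) is an assumption for illustration; real codecs use integer scaling tables rather than floating point.

```python
import numpy as np

def quantize(coeffs, qp):
    # Divide by a QP-derived step and discard the remainder (the lossy part).
    # A larger QP means a larger step, hence more zero coefficients and
    # smaller surviving magnitudes.
    step = 2.0 ** (qp / 6.0)   # illustrative mapping, not a normative table
    return np.round(coeffs / step).astype(np.int32)

def dequantize(levels, qp):
    # Multiply back by the same step; the discarded precision is not restored.
    step = 2.0 ** (qp / 6.0)
    return levels * step
```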
  • The video encoding process produces a number of values that are encoded to form the compressed bit stream. Examples of such values include the quantized transform coefficients, information to be employed by a decoder to re-create the appropriate prediction, information regarding the structure of the compressed data and compression tools employed during encoding, information regarding a complete video sequence, etc. Such values and/or parameters (e.g., syntax elements) may undergo encoding within an entropy encoder operating in accordance with CABAC, CAVLC, or some other entropy coding scheme, to produce an output bit stream that may be stored, transmitted (e.g., after undergoing appropriate processing to generate a continuous time signal that comports with a communication channel), etc.
  • In an embodiment operating using a feedback path, the output of the transform and quantization undergoes inverse quantization and inverse transform. One or both of intra-prediction and inter-prediction may be performed in accordance with video encoding. Also, motion compensation and/or motion estimation may be performed in accordance with such video encoding.
  • The signal path output from the inverse quantization and inverse transform (e.g., IDCT) block, which is provided to the intra-prediction block, is also provided to a de-blocking filter. In certain optional embodiments, the output from the de-blocking filter is provided to one or more other in-loop filters (e.g., implemented in accordance with adaptive loop filter (ALF), sample adaptive offset (SAO) filter, and/or any other filter type) implemented to process the output from the inverse transform block. For example, such an ALF is applied to the decoded picture before it is stored in a picture buffer (again, sometimes alternatively referred to as a DPB, digital picture buffer). Such an ALF is implemented to reduce coding noise of the decoded picture, and its filtering may be selectively applied, on a slice-by-slice basis, respectively for luminance and chrominance, whether such an ALF is applied at the slice level or at the block level. Two-dimensional (2-D) finite impulse response (FIR) filtering may be used in application of such an ALF. The coefficients of the filters may be designed slice by slice at the encoder, and such information is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]).
  • One embodiment is operative to generate the coefficients in accordance with Wiener filtering design. In addition, whether the filtering is performed may be decided on a block-by-block basis at the encoder, and such a decision is then signaled to the decoder (e.g., signaled from a transmitter communication device including a video encoder [alternatively referred to as encoder] to a receiver communication device including a video decoder [alternatively referred to as decoder]) based on a quadtree structure, where the block size is decided according to rate-distortion optimization. It is noted that the use of such 2-D filtering may introduce a degree of complexity in accordance with both encoding and decoding. For example, by using 2-D filtering in accordance with an implementation of an ALF, there may be some increased complexity within an encoder implemented within the transmitter communication device as well as within a decoder implemented within a receiver communication device.
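  • As a rough sketch of the 2-D FIR filtering described above, the function below applies a small 2-D kernel to a decoded picture. The edge padding and the kernel itself are illustrative assumptions; an actual ALF would apply coefficients designed at the encoder (e.g., by Wiener filtering) and signaled to the decoder, possibly enabled per block by the quadtree decision just described.

```python
import numpy as np

def fir_2d(picture, kernel):
    """Apply a 2-D FIR filter (ALF-style) to a decoded picture.

    picture: 2-D array of pixel values; kernel: 2-D array of taps with odd
    dimensions so it can be centered on each pixel (correlation form).
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(picture.astype(np.float64), ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(picture, dtype=np.float64)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + picture.shape[0],
                                           dx:dx + picture.shape[1]]
    return out
```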
  • As mentioned with respect to other embodiments, the use of an ALF can provide any of a number of improvements in accordance with such video processing, including an improvement in the objective quality measured by the peak signal to noise ratio (PSNR) that comes from performing random quantization noise removal. In addition, an improvement in the subjective quality of a subsequently encoded video signal may be achieved from illumination compensation, which may be introduced in accordance with performing offset processing and scaling processing (e.g., in accordance with applying a gain) in accordance with ALF processing.
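  • For reference, the PSNR figure of merit mentioned above may be computed as follows for 8-bit pictures; this is the standard definition rather than anything specific to this disclosure.

```python
import numpy as np

def psnr(reference, decoded, max_val=255.0):
    # Peak signal to noise ratio in dB between a reference picture and its
    # decoded reconstruction; higher is better, infinite for a perfect match.
    diff = reference.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)
```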
  • With respect to any video encoder architecture implemented to generate an output bitstream, it is noted that such architectures may be implemented within any of a variety of communication devices. The output bitstream may undergo additional processing including error correction code (ECC), forward error correction (FEC), etc. thereby generating a modified output bitstream having additional redundancy included therein. Also, as may be understood with respect to such a digital signal, it may undergo any appropriate processing in accordance with generating a continuous time signal suitable for or appropriate for transmission via a communication channel. That is to say, such a video encoder architecture may be implemented within a communication device operative to perform transmission of one or more signals via one or more communication channels. Additional processing may be made on an output bitstream generated by such a video encoder architecture thereby generating a continuous time signal that may be launched into a communication channel.
  • FIG. 4 is a diagram illustrating an embodiment 400 of intra-prediction processing. As can be seen with respect to this diagram, a current block of video data (e.g., oftentimes being square in shape and including generally N×N pixels) undergoes processing to estimate the respective pixels therein. Previously coded pixels located above and to the left of the current block are employed in accordance with such intra-prediction. From certain perspectives, an intra-prediction direction may be viewed as corresponding to a vector extending from a current pixel to a reference pixel located above or to the left of the current pixel. Details of intra-prediction as applied to coding in accordance with H.264/AVC are specified within the corresponding standard (e.g., International Telecommunication Union, ITU-T, TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU, H.264 (March 2010), SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services—Coding of moving video, Advanced video coding for generic audiovisual services, Recommendation ITU-T H.264, also alternatively referred to as ISO/IEC 14496-10—MPEG-4 Part 10, AVC (Advanced Video Coding), H.264/MPEG-4 Part 10, or ITU H.264/MPEG4-AVC, or equivalent) that is incorporated by reference above.
  • The residual, which is the difference between the current pixel and the reference or prediction pixel, is that which gets encoded. As can be seen with respect to this diagram, intra-prediction operates using pixels within a common frame (or picture). It is of course noted that a given pixel may have different respective components associated therewith, and there may be different respective sets of samples for each respective component.
  • FIG. 5 is a diagram illustrating an embodiment 500 of inter-prediction processing. In contradistinction to intra-prediction, inter-prediction is operative to identify a motion vector (e.g., an inter-prediction direction) based on a current set of pixels within a current frame (or picture) and one or more sets of reference or prediction pixels located within one or more other frames (or pictures) within a frame (or picture) sequence. As can be seen, the motion vector extends from the current frame (or picture) to another frame (or picture) within the frame (or picture) sequence. Inter-prediction may utilize sub-pixel interpolation, such that a prediction pixel value corresponds to a function of a plurality of pixels in a reference frame or picture.
  • A residual may be calculated in accordance with inter-prediction processing, though such a residual is different from the residual calculated in accordance with intra-prediction processing. In accordance with inter-prediction processing, the residual at each pixel again corresponds to the difference between a current pixel and a predicted pixel value. However, in accordance with inter-prediction processing, the current pixel and the reference or prediction pixel are not located within the same frame (or picture). While this diagram shows inter-prediction as being employed with respect to one or more previous frames or pictures, it is also noted that alternative embodiments may operate using references corresponding to frames before and/or after a current frame. For example, in accordance with appropriate buffering and/or memory management, a number of frames may be stored. When operating on a given frame, references may be generated from other frames that precede and/or follow that given frame.
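  • The following sketch illustrates forming an inter-prediction residual for one block using an integer-pel motion vector into a buffered reference frame; a fractional motion vector would additionally require the sub-pixel interpolation described below. Block size and names are illustrative.

```python
import numpy as np

def inter_residual(current, reference, top, left, mv, n=16):
    """Residual for an n x n block: current pixels minus the motion-
    compensated prediction fetched from the reference frame.

    mv: (dy, dx) integer-pel motion vector into the reference frame;
    the caller is assumed to keep the displaced block within bounds.
    """
    dy, dx = mv
    pred = reference[top + dy:top + dy + n, left + dx:left + dx + n]
    cur = current[top:top + n, left:left + n]
    return cur.astype(np.int32) - pred.astype(np.int32)
```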
  • Coupled with the CU, a basic unit may be employed for the prediction partition mode, namely, the prediction unit, or PU. It is also noted that the PU is defined only for the last depth CU, and its respective size is limited to that of the CU.
  • FIG. 6 is a diagram illustrating an embodiment 600 of a video decoding architecture.
  • Generally speaking, such video decoding architectures operate on an input bitstream. It is of course noted that such an input bitstream may be generated from a signal that is received by a communication device from a communication channel. Various operations may be performed on a continuous time signal received from the communication channel, including digital sampling, demodulation, scaling, filtering, etc. such as may be appropriate in accordance with generating the input bitstream. Moreover, certain embodiments, in which one or more types of error correction code (ECC), forward error correction (FEC), etc. may be implemented, may perform appropriate decoding in accordance with such ECC, FEC, etc. thereby generating the input bitstream. That is to say, in certain embodiments in which additional redundancy may have been added in accordance with generating a corresponding output bitstream (e.g., such as may be launched from a transmitter communication device or from the transmitter portion of a transceiver communication device), appropriate processing may be performed in accordance with generating the input bitstream. Overall, such a video decoding architecture is implemented to process the input bitstream thereby generating an output video signal corresponding to the original input video signal, as closely as possible and perfectly in an ideal case, for use in being output to one or more video display capable devices.
  • Referring to the embodiment 600 of FIG. 6, generally speaking, a decoder such as an entropy decoder (e.g., which may be implemented in accordance with CABAC, CAVLC, etc.) processes the input bitstream in accordance with performing the complementary process of encoding as performed within a video encoder architecture. The input bitstream may be viewed as being, as closely as possible and perfectly in an ideal case, the compressed output bitstream generated by a video encoder architecture. Of course, in a real-life application, it is possible that some errors may have been incurred in a signal transmitted via one or more communication links. The entropy decoder processes the input bitstream and extracts the appropriate coefficients, such as the DCT coefficients (e.g., such as representing chroma, luma, etc. information) and provides such coefficients to an inverse quantization and inverse transform block. In the event that a DCT transform is employed, the inverse quantization and inverse transform block may be implemented to perform an inverse DCT (IDCT) operation. Subsequently, a de-blocking filter is implemented to generate the respective frames and/or pictures corresponding to an output video signal. These frames and/or pictures may be provided into a picture buffer, or a digital picture buffer (DPB), for use in performing other operations including motion compensation. Generally speaking, such motion compensation operations may be viewed as corresponding to inter-prediction associated with video encoding. Also, intra-prediction may also be performed on the signal output from the inverse quantization and inverse transform block. Analogously as with respect to video encoding, such a video decoder architecture may be implemented to perform mode selection among intra-prediction, inter-prediction, or neither, in accordance with decoding an input bitstream thereby generating an output video signal.
  • In certain optional embodiments, corresponding to the one or more in-loop filters (e.g., implemented in accordance with adaptive loop filter (ALF), sample adaptive offset (SAO) filter, and/or any other filter type) that may be implemented in accordance with video encoding as employed to generate an output bitstream, one or more in-loop filters may likewise be implemented within a video decoder architecture. In one embodiment, an appropriate implementation of one or more such in-loop filters is after the de-blocking filter.
  • FIG. 7, FIG. 8, and FIG. 9 illustrate various embodiments 700, 800, and 900, respectively, of discrete cosine transform (DCT) processing with respect to pixels of a frame or picture.
  • Referring to the embodiment 700 of FIG. 7, 1/4-pel 8-tap DCT as proposed in accordance with the HEVC Working Draft is pictorially illustrated. Such an implementation employs 3 filters, at positions 1/4, 1/2, and 3/4, as follows:
  • H(1/4) = (-1, 4, -10, 57, 19, -7, 3, -1)/2^B = (h_1, h_2, h_3, h_4, h_5, h_6, h_7, h_8)

  • H(1/2) = (-1, 4, -11, 40, 40, -11, 4, -1)/2^B (symmetric)

  • H(3/4) = (h_8, h_7, h_6, h_5, h_4, h_3, h_2, h_1)
  • The pixels a, b, and c are determined as follows:
  • a: Use H(1/4) with horizontal 8 full pixel samples (4 left and 4 right)
  • b: Use H(1/2) with horizontal 8 full pixel samples (4 left and 4 right)
  • c: Use H(3/4) with horizontal 8 full pixel samples (4 left and 4 right)
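  • Interpreting the above, a minimal sketch of the horizontal quarter-pel computation might look as follows. The right shift by B = 6 reflects the 1/2^B normalization of the quoted taps (each filter's taps sum to 64 = 2^6); the rounding offset a real implementation would add before the shift is omitted for clarity.

```python
import numpy as np

# Taps of the working-draft filters quoted above, before 1/2**B normalization;
# H(3/4) is H(1/4) reversed.
H14 = np.array([-1, 4, -10, 57, 19, -7, 3, -1])
H12 = np.array([-1, 4, -11, 40, 40, -11, 4, -1])
H34 = H14[::-1]

def horizontal_quarter_pels(row, x, shift_b=6):
    """Compute the a, b, c quarter-pel samples to the right of full pixel
    row[x], from the 8 full-pel samples row[x-3] .. row[x+4]."""
    window = row[x - 3:x + 5].astype(np.int64)
    a = int(window @ H14) >> shift_b
    b = int(window @ H12) >> shift_b
    c = int(window @ H34) >> shift_b
    return a, b, c
```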
  • Referring to the embodiment 800 of FIG. 8, the pixels a, b, and c are determined as follows:

  • a = (A_{-3,0}, A_{-2,0}, A_{-1,0}, A_{0,0}, A_{1,0}, A_{2,0}, A_{3,0}, A_{4,0}) · H(1/4)^T

  • b = (A_{-3,0}, A_{-2,0}, A_{-1,0}, A_{0,0}, A_{1,0}, A_{2,0}, A_{3,0}, A_{4,0}) · H(1/2)^T

  • c = (A_{-3,0}, A_{-2,0}, A_{-1,0}, A_{0,0}, A_{1,0}, A_{2,0}, A_{3,0}, A_{4,0}) · H(3/4)^T
  • a: Use H(1/4) with horizontal 12 full pixel samples (6 left and 6 right)
  • b: Use H(1/2) with horizontal 12 full pixel samples (6 left and 6 right)
  • c: Use H(3/4) with horizontal 12 full pixel samples (6 left and 6 right)
  • The pixels d, h, and n are determined as follows:

  • d = (A_{0,-3}, A_{0,-2}, A_{0,-1}, A_{0,0}, A_{0,1}, A_{0,2}, A_{0,3}, A_{0,4}) · H(1/4)^T

  • h = (A_{0,-3}, A_{0,-2}, A_{0,-1}, A_{0,0}, A_{0,1}, A_{0,2}, A_{0,3}, A_{0,4}) · H(1/2)^T

  • n = (A_{0,-3}, A_{0,-2}, A_{0,-1}, A_{0,0}, A_{0,1}, A_{0,2}, A_{0,3}, A_{0,4}) · H(3/4)^T
  • d: Use H(1/4) with vertical 8 full pixel samples (4 above and 4 below)
  • h: Use H(1/2) with vertical 8 full pixel samples (4 above and 4 below)
  • n: Use H(3/4) with vertical 8 full pixel samples (4 above and 4 below)
  • Referring to the embodiment 900 of FIG. 9, the intermediate values are determined as follows:

  • d_{i,0} = (A_{i,-3}, A_{i,-2}, A_{i,-1}, A_{i,0}, A_{i,1}, A_{i,2}, A_{i,3}, A_{i,4}) · H(1/4)^T

  • h_{i,0} = (A_{i,-3}, A_{i,-2}, A_{i,-1}, A_{i,0}, A_{i,1}, A_{i,2}, A_{i,3}, A_{i,4}) · H(1/2)^T

  • n_{i,0} = (A_{i,-3}, A_{i,-2}, A_{i,-1}, A_{i,0}, A_{i,1}, A_{i,2}, A_{i,3}, A_{i,4}) · H(3/4)^T
  • i=−3, −2, −1, 1, 2, 3, 4
  • The processing operates to compute intermediate values in the left 3 columns of full pixel samples and the right 4 columns of full pixel samples. The same 3 filters are used, each applied 7 times (21 filter computations in total). This is operative to obtain the values (d_i, h_i, n_i), i = -3, -2, -1, 1, 2, 3, 4, which are then combined as follows:

  • e = (d_{-3,0}, d_{-2,0}, d_{-1,0}, d_{0,0}, d_{1,0}, d_{2,0}, d_{3,0}, d_{4,0}) · H(1/4)^T

  • f = (d_{-3,0}, d_{-2,0}, d_{-1,0}, d_{0,0}, d_{1,0}, d_{2,0}, d_{3,0}, d_{4,0}) · H(1/2)^T

  • g = (d_{-3,0}, d_{-2,0}, d_{-1,0}, d_{0,0}, d_{1,0}, d_{2,0}, d_{3,0}, d_{4,0}) · H(3/4)^T

  • i = (h_{-3,0}, h_{-2,0}, h_{-1,0}, h_{0,0}, h_{1,0}, h_{2,0}, h_{3,0}, h_{4,0}) · H(1/4)^T

  • j = (h_{-3,0}, h_{-2,0}, h_{-1,0}, h_{0,0}, h_{1,0}, h_{2,0}, h_{3,0}, h_{4,0}) · H(1/2)^T

  • k = (h_{-3,0}, h_{-2,0}, h_{-1,0}, h_{0,0}, h_{1,0}, h_{2,0}, h_{3,0}, h_{4,0}) · H(3/4)^T

  • p = (n_{-3,0}, n_{-2,0}, n_{-1,0}, n_{0,0}, n_{1,0}, n_{2,0}, n_{3,0}, n_{4,0}) · H(1/4)^T

  • q = (n_{-3,0}, n_{-2,0}, n_{-1,0}, n_{0,0}, n_{1,0}, n_{2,0}, n_{3,0}, n_{4,0}) · H(1/2)^T

  • r = (n_{-3,0}, n_{-2,0}, n_{-1,0}, n_{0,0}, n_{1,0}, n_{2,0}, n_{3,0}, n_{4,0}) · H(3/4)^T
  • Use filters H(1/4), H(1/2), and H(3/4) with the 8 intermediate values d_i, i = -3, ..., 4, to get e, f, and g
  • Use filters H(1/4), H(1/2), and H(3/4) with the 8 intermediate values h_i, i = -3, ..., 4, to get i, j, and k
  • Use filters H(1/4), H(1/2), and H(3/4) with the 8 intermediate values n_i, i = -3, ..., 4, to get p, q, and r (a code sketch of this two-stage procedure follows this list)
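  • A compact sketch of this two-stage procedure appears below: the vertical filters first produce the intermediate columns d_i, h_i, n_i, which are then filtered horizontally to yield e through r. The dictionary-based access to full pixels, the deferred normalization (a single shift by 2B after both stages), and the omitted rounding are illustrative choices.

```python
import numpy as np

H14 = np.array([-1, 4, -10, 57, 19, -7, 3, -1])   # H(1/4), un-normalized taps
H12 = np.array([-1, 4, -11, 40, 40, -11, 4, -1])  # H(1/2)
H34 = H14[::-1]                                   # H(3/4) = H(1/4) reversed

def two_stage_interpolate(A, shift_b=6):
    """Compute the nine 2-D fractional samples e..r around full pixel A[(0, 0)].

    A: mapping from (i, j) to full-pel values, with i (horizontal) and j
    (vertical) each ranging over -3..4. Intermediate values keep full
    precision; one shift by 2*shift_b normalizes both filter stages.
    """
    vertical = {"d": H14, "h": H12, "n": H34}
    inter = {name: np.array([sum(A[(i, j)] * filt[j + 3] for j in range(-3, 5))
                             for i in range(-3, 5)], dtype=np.int64)
             for name, filt in vertical.items()}
    horizontal = (("e", "d", H14), ("f", "d", H12), ("g", "d", H34),
                  ("i", "h", H14), ("j", "h", H12), ("k", "h", H34),
                  ("p", "n", H14), ("q", "n", H12), ("r", "n", H34))
    return {out: int(inter[row] @ filt) >> (2 * shift_b)
            for out, row, filt in horizontal}
```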
  • At least one drawback may be understood in that the computation associated with fractional-pixels e, f, g, i, j, k, p, q, r cannot be carried out at the same time as that of sub-pixels a, b, c, d, h, n. As may be understood, the fact that these operations cannot be performed simultaneously, in parallel, etc. prohibits a respective implementation in accordance with parallelism. Also, the fractional-pixels are always up-sampled (estimated) in one direction, either horizontal or vertical. However, as may be understood with respect to an actual picture or frame, the picture or frame is actually a 2 dimensional object. That is to say, by performing such sampling only with respect to a single dimension, the operations associated therewith are inherently limited and imperfect.
  • FIG. 10 is a diagram illustrating an embodiment of integer samples (blocks with upper-case letters) and fractional sample positions (blocks with lower-case letters).
  • With respect to this diagram, the uppercase or capital letters represent actual pixel values. The lowercase letters represent fractional sample positions interveningly situated between actual pixel values. With respect to calculating the actual pixel values for a current picture or frame, motion compensation operations, such as may be performed in accordance with a video decoding architecture, operate by using a motion vector which relates a current pixel within a current frame or picture to a particular location within another frame or picture (e.g., which may be viewed as a reference frame or picture that is situated either before or after the current frame). The motion vector identifies a particular location within the reference frame or picture, and this particular location may sometimes not correspond exactly with a given pixel location within that reference frame or picture. In such situations, interpolation is then performed with respect to the fractional sample positions interveningly situated between actual pixel values within the reference frame or picture. For example, this diagram particularly shows three fractional sample positions intervening between respective actual pixel values in both the vertical and horizontal directions. When interpolation needs to be performed (e.g., such as when a motion vector points to a location within a reference frame or picture that is not exactly an actual pixel value within the reference frame or picture), then calculation of the respective fractional sample positions between the actual pixel values is performed.
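  • To make the relationship between a motion vector and this fractional sample grid concrete, the sketch below splits motion vector components stored in quarter-pel units into an integer-pel offset and a fractional position. The quarter-pel (two fractional bits) representation is a common convention in H.264/HEVC-style codecs, assumed here purely for illustration.

```python
def split_quarter_pel_mv(mv_x, mv_y):
    """Split motion vector components stored in quarter-pel units into an
    integer-pel offset and a fractional position (0, 1/4, 1/2, or 3/4)."""
    int_x, frac_x = mv_x >> 2, mv_x & 3   # floor shift handles negative mvs
    int_y, frac_y = mv_y >> 2, mv_y & 3
    return (int_x, int_y), (frac_x / 4.0, frac_y / 4.0)

# Example: (mv_x, mv_y) = (9, -2) gives integer offset (2, -1) and fractional
# position (0.25, 0.5); interpolation is needed whenever either fractional
# part is nonzero.
```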
  • In accordance with such video processing, such interpolation may be employed for calculating the actual pixel values of a current frame or picture. For example, based upon such interpolation as may be performed to identify the appropriate value associated with an actual location within a reference frame or picture (again, a location which may not correspond exactly to an actual pixel value within the reference frame or picture, and for which interpolation may be performed), these calculated filter coefficients may be employed in conjunction with a residual (e.g., as described above with respect to various video processing operations) to generate the actual pixel values of a current frame or picture.
  • In accordance with various aspects, and their equivalents, of the invention, such calculation of these respective fractional sample positions intervening between actual pixel values may be performed simultaneously within more than one dimension. For example, such calculations may be performed simultaneously in both the vertical and horizontal directions. For example, considering 1/4 sub-pixel sampling with respect to the four respective pixels A0,0, A1,0, A0,1, and A1,1, the corresponding fractional sample positions a0,0, b0,0, c0,0, d0,0, e0,0, f0,0, g0,0, h0,0, i0,0, j0,0, k0,0, n0,0, p0,0, q0,0, and r0,0, and also a0,1, b0,1, c0,1, and also d1,0, h1,0, n1,0, may all be calculated simultaneously and/or in parallel with one another. As may be understood, such calculations that are performed simultaneously and/or in parallel with one another are also performed in both the vertical and horizontal directions. Again, the calculation of these multiple respective filter coefficients in multiple respective directions is performed simultaneously and/or in parallel.
  • FIG. 11 illustrates an embodiment 1100 of two-dimensional (2-D) processing with respect to pixels of a frame or picture. Generally speaking, instead of performing DCT-IF operations in only the horizontal or the vertical direction at a given time (e.g., performing operations in one direction first, followed by performing operations in the other direction), a diagonal approach may be performed which provides a significant savings in terms of the number of steps to be performed.
  • As can be seen, two-dimensional DCT processing is performed with respect to this diagram. The use of 2-D motion compensation filters may be employed for fractional-pel accuracy.
  • 2-D fractional-pel interpolation may be performed as follows:
  • Consider the signal x(t, s) as a continuous two-dimensional signal with sampled sequences as follows:

  • x_d(n, m) = x(nT_s + d_s, mT_t + d_t), n = 0, ..., N-1, m = 0, ..., M-1

  • where -T_s < d_s < T_s and -T_t < d_t < T_t

  • Full pixel: x_0(n, m) = x(nT_s, mT_t), n = 0, ..., N-1, m = 0, ..., M-1
  • As such, there are 21 possible d:
  • (0, Tt/4), (0, Tt/2), (0, 3Tt/4)
  • (Ts/4, 0), (Ts/4, Tt/4), (Ts/4, Tt/2), (Ts/4, 3Tt/4), (Ts/4, Tt)
  • (Ts/2, 0), (Ts/2, Tt/4), (Ts/2, Tt/2), (Ts/2, 3Tt/4), (Ts/2, Tt)
  • (3Ts/4, 0), (3Ts/4, Tt/4), (3Ts/4, Tt/2), (3Ts/4, 3Tt/4), (3Ts/4, Tt)
  • (Ts, Tt/4), (Ts, Tt/2), (Ts, 3Tt/4)
  • Basically, we can have 21 filters f_d(u, v) such that

  • $x_d(n, m) = \sum_{u=A}^{B} \sum_{v=C}^{D} f_d(u, v)\, x_0(n+u, m+v)$

  • where A < B and C < D are integers
  • One possible embodiment uses the 2-D DCT to obtain the filter values:

  • 2-D DCT: $X_d(u, v) = \frac{2\alpha(u)\alpha(v)}{N} \sum_{n=0}^{N-1} \sum_{m=0}^{N-1} x_d(n, m) \cos\frac{(2n+1)u\pi}{2N} \cos\frac{(2m+1)v\pi}{2N}$

  • where $\alpha(j) = 1/\sqrt{2}$ if j = 0 or j = N, and 1 otherwise

  • 2-D IDCT: $x_d(n, m) = \frac{2}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} \alpha(u)\alpha(v) X_d(u, v) \cos\frac{(2n+1)u\pi}{2N} \cos\frac{(2m+1)v\pi}{2N}$
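  • A direct (and deliberately naive, O(N^4)) transcription of this transform pair follows, assuming the square case M = N and noting that, since the indices run from 0 to N-1, only the α(0) = 1/√2 case of the normalization arises. It is meant only to mirror the formulas; a practical implementation would use a fast factorization.

```python
import numpy as np

def alpha(j):
    return 1.0 / np.sqrt(2.0) if j == 0 else 1.0

def dct2(x):
    """2-D DCT exactly as written above (square N x N case)."""
    N = x.shape[0]
    X = np.zeros((N, N))
    for u in range(N):
        for v in range(N):
            s = sum(x[n, m]
                    * np.cos((2 * n + 1) * u * np.pi / (2 * N))
                    * np.cos((2 * m + 1) * v * np.pi / (2 * N))
                    for n in range(N) for m in range(N))
            X[u, v] = 2.0 * alpha(u) * alpha(v) / N * s
    return X

def idct2(X):
    """2-D IDCT; composing idct2(dct2(x)) recovers x up to float error."""
    N = X.shape[0]
    x = np.zeros((N, N))
    for n in range(N):
        for m in range(N):
            s = sum(alpha(u) * alpha(v) * X[u, v]
                    * np.cos((2 * n + 1) * u * np.pi / (2 * N))
                    * np.cos((2 * m + 1) * v * np.pi / (2 * N))
                    for u in range(N) for v in range(N))
            x[n, m] = 2.0 / N * s
    return x
```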
  • Proposition 1
  • If a continuous signal x(t, s) is bandlimited (i.e., its Fourier transform X(Ω) is limited to the baseband (-(π/T), (π/T)), so that X(Ω) = 0 when |Ω| ≥ π/T), then:
  • $X_d(u, v) = \frac{2\alpha(u)\alpha(v)}{N} \sum_{n=0}^{N-1} \sum_{m=0}^{M-1} x_0(n, m) \cos\frac{(2(n + d_s/T_s)+1)u\pi}{2N} \cos\frac{(2(m + d_t/T_t)+1)v\pi}{2N}$

  • 2-D IDCT: $x_d(n, m) = \frac{2}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{M-1} \alpha(u)\alpha(v) X_d(u, v) \cos\frac{(2n+1)u\pi}{2N} \cos\frac{(2m+1)v\pi}{2N}$

  • Substituting the first expression into the second:

  • $x_d(n, m) = \frac{2}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{M-1} \alpha(u)\alpha(v) \left\{ \frac{2\alpha(u)\alpha(v)}{N} \sum_{p=0}^{N-1} \sum_{q=0}^{M-1} x_0(p, q) \cos\frac{(2(p + d_s/T_s)+1)u\pi}{2N} \cos\frac{(2(q + d_t/T_t)+1)v\pi}{2N} \right\} \cos\frac{(2n+1)u\pi}{2N} \cos\frac{(2m+1)v\pi}{2N}$
  • The filter coefficients are therefore:

  • $f_d(p, q) = \frac{4}{N^2} \sum_{u=0}^{N-1} \sum_{v=0}^{M-1} [\alpha(u)\alpha(v)]^2 \cos\frac{(2(p + d_s/T_s)+1)u\pi}{2N} \cos\frac{(2(q + d_t/T_t)+1)v\pi}{2N} \cos\frac{(2n+1)u\pi}{2N} \cos\frac{(2m+1)v\pi}{2N}$
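  • The closed form above can be evaluated directly, as in the sketch below; here ds and dt stand for the normalized shifts d_s/T_s and d_t/T_t (e.g., 0.25 for a quarter-pel shift), (n, m) is the output sample position, and M = N is assumed. As a sanity check, with ds = dt = 0 the DCT orthogonality relations collapse f_d to the identity filter (a Kronecker delta at (p, q) = (n, m)).

```python
import numpy as np

def alpha(j):
    return 1.0 / np.sqrt(2.0) if j == 0 else 1.0

def filter_coeffs(N, ds, dt, n=0, m=0):
    """Evaluate f_d(p, q) over an N x N window of full-pel samples; the
    interpolated sample is then sum over (p, q) of f[p, q] * x0[p, q]."""
    f = np.zeros((N, N))
    for p in range(N):
        for q in range(N):
            f[p, q] = (4.0 / N ** 2) * sum(
                (alpha(u) * alpha(v)) ** 2
                * np.cos((2 * (p + ds) + 1) * u * np.pi / (2 * N))
                * np.cos((2 * (q + dt) + 1) * v * np.pi / (2 * N))
                * np.cos((2 * n + 1) * u * np.pi / (2 * N))
                * np.cos((2 * m + 1) * v * np.pi / (2 * N))
                for u in range(N) for v in range(N))
    return f
```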
  • As can be seen with respect to FIG. 11, as well as with respect to the one possible embodiment of generating the coefficients (e.g., filter coefficients) associated with an input bitstream, such processing and operation is effectuated such that more than one respective dimension is employed simultaneously and/or in parallel. That is to say, with respect to an input bitstream, at least with respect to a two-dimensional image, processing of the input bitstream is made with respect to a first dimension (e.g., vertical) and a second dimension (e.g., horizontal) simultaneously or in parallel thereby generating the coefficient values. Moreover, as may also be seen, at least one fractional-pel value in the first dimension and at least one fractional-pel value in the second dimension are employed. Looking at the particular example in which the fractional-pel distance corresponds to a one fourth shift in the vertical and horizontal directions (e.g., d=(Ts/4, Tt/4)), it may be seen that at least one fractional-pel value [corresponding to that fractional-pel distance] in the vertical dimension is employed and at least one fractional-pel value [corresponding to that same fractional-pel distance] in the horizontal dimension is employed. It is noted that the use of such a fractional-pel distance corresponding to a one fourth shift is exemplary, and generally speaking, any desired fractional-pel distance may be employed without departing from the scope and spirit of the invention (e.g., a fractional-pel distance generally of 1/N, where N is an integer, such that d=(Ts/N, Tt/N)).
  • Moreover, it is noted that while such exemplary embodiments are described with respect to a given fractional-pel distance employed commonly within more than one dimension, it is noted that different respective fractional-pel distances may be employed with respect to the different dimensions. For example, a first fractional-pel distance may be employed in a first dimension (e.g., vertical), a second fractional-pel distance may be employed in the second dimension (e.g., horizontal), and so on (e.g., d=(Ts/N1, Tt/N2), where N1 and N2 are different respective integers).
  • Also, while certain of the embodiments and/or diagrams included herein are directed towards image or video signals corresponding to two dimensions, it is noted that such operations and/or processing may be generally extended to image or video signals corresponding to more than two dimensions (e.g., three-dimensional image or video signals). For example, such operations and/or processing may be generally extended to more than two dimensions in accordance with generating such a function, ƒd(p, q, r), for a three-dimensional signal, or such a function, ƒd(p, q, r, s), for a four-dimensional signal, and so on. For example, in an embodiment in which the same fractional-pel distance would be employed in each respective dimension of a three-dimensional signal, such a fractional-pel distance may be shown as d=(Ts/U, Tt/U, Tu/U), where U is an integer. Analogously, in an embodiment in which the same fractional-pel distance would be employed in each respective dimension of a four-dimensional signal, such a fractional-pel distance may be shown as d=(Ts/U, Tt/U, Tu/U, Tv/U), where U is an integer, and so on.
  • Alternatively, if different respective fractional-pel distances would be employed in each respective dimension of a three-dimensional signal, such a fractional-pel distance may be shown as d=(Ts/U1, Tt/U2, Tu/U3), where each of U1, U2, and U3 are integers. Of course, it is noted that any two respective dimensions may employ the very same fractional-pel distance, such that either one of U2 and U3 may be the same as U1. Analogously, if different respective fractional-pel distances would be employed in each respective dimension of a four-dimensional signal, such a fractional-pel distance may be shown as d=(Ts/U1, Tt/U2, Tu/U3, Tv/U4), where each of U1, U2, U3, and U4 are integers, and so on.
  • As may be understood, with respect to the generation of coefficient values in accordance with more than one dimension effectuated simultaneously and/or in parallel, such operations may be extended towards signals having any of a number of multiple dimensions. For example, with respect to generation of signals in accordance with three dimensions, as described generally above, such simultaneous and/or in parallel processing may be made with respect to each of the respective three dimensions. Alternatively, with respect to three-dimensional image or video signaling corresponding to at least two respective cameras operating simultaneously from different respective viewpoints, such motion compensation processing may be applied towards the respective image or video signals generated from those at least two respective cameras. That is to say, while each respective camera may be operative for generating a two-dimensional image or video signal, such motion compensation processing as described herein with respect to two-dimensional signals may be applied respectively to the respective image or video signals generated from those at least two respective cameras. In any event, with respect to any given image or video signal, information related to more than one respective dimension may be employed simultaneously and/or in parallel in accordance with generating coefficient values.
  • Also, with respect to FIG. 11, as well as with respect to other embodiments and/or diagrams herein, it is noted that the manner by which the function, ƒd(p, q) (or a higher order dimension function), has been derived with respect to this exemplary embodiment is but one manner by which such a function may be generated for use in deriving filter coefficients for use in accordance with motion compensation processing. Other such processes, such as those not being effectuated in accordance with discrete cosine transform (DCT) and its respective inverse, may alternatively be employed in other embodiments without departing from the scope and spirit of the invention.
  • At a minimum, with respect to this exemplary embodiment performed with respect to more than one dimension, it can be seen that such filter coefficients may be generated with respect to each of at least a first dimension and a second dimension simultaneously and/or in parallel (e.g., simultaneously and/or in parallel with respect to each of a vertical dimension and a horizontal dimension as with respect to a two-dimensional image or video signal).
  • FIG. 12 illustrates an embodiment 1200 showing simultaneous or in parallel generation of coefficients in multiple dimensions in accordance with video processing operations. This diagram pictorially illustrates an input bitstream undergoing simultaneous or in parallel video processing operations in at least two respective dimensions. While various embodiments are directed to performing video processing in accordance with video decoding operations (e.g., such as in accordance with motion compensation operations and/or processing in a video decoding architecture), it is noted that certain operations may also be employed in accordance with performing video encoding operations.
  • As may be understood in accordance with various aspects, and their equivalents, of the invention, coefficient values associated with a video signal are generated with respect to each of two or more dimensions simultaneously and/or in parallel. The generation of these respective coefficient values is based on at least one respective fractional-pel value in each of the respective dimensions (e.g., a first at least one fractional-pel value in a first dimension, a second at least one fractional-pel value in a second dimension, etc., such that different respective fractional-pel values are employed in different respective dimensions). Moreover, the same fractional-pel distance may be employed within each of the respective dimensions in certain embodiments. Alternatively, different respective fractional-pel distances may be employed within each of the respective dimensions in other embodiments (e.g., a first fractional-pel distance may be employed in a first dimension (e.g., vertical), a second fractional-pel distance may be employed in the second dimension (e.g., horizontal), and so on).
  • The coefficient values generated in accordance with performing such video processing (e.g., including motion compensation processing) may be used in accordance with any of a number of additional video encoding processes, video decoding processes, and/or operational steps by any of a number of additional circuitries, modules, functional blocks, etc.
  • FIG. 13A and FIG. 13B illustrate various embodiments of methods 1300 and 1301, respectively, for performing various video processing operations.
  • Referring to the method 1300 of FIG. 13A, the method 1300 begins by receiving an input bitstream via an input, as shown in a block 1310. The method 1300 continues by performing motion compensation (e.g., such as may be performed using a motion compensation module, such as in accordance with a video decoding architecture) including performing fractional pixel position interpolation of a first plurality of pixels of a first frame with respect to a first dimension and a second dimension simultaneously or in parallel thereby generating a plurality of coefficient values for use in generating a second plurality of pixels of a second frame, as shown in a block 1320. The method 1300 then operates by employing the first frame and the second frame for generating an output video signal, as shown in a block 1330.
  • As may be understood in accordance with such motion compensation operations associated with video processing, a novel approach is presented herein by which such respective coefficient values may be calculated simultaneously and/or in parallel with one another in accordance with more than one respective dimension. That is to say, in certain embodiments and from certain perspectives, the calculation of such coefficient values may be viewed as being made in both the vertical and horizontal dimensions simultaneously.
  • Referring to method 1301 of FIG. 13B, the method 1301 operates by processing an input plurality of pixels with respect to a first dimension and a second dimension simultaneously and/or in parallel thereby generating a plurality of coefficient values, as shown in a block 1311. The operations associated with the block 1311 may be performed such that each of the plurality of coefficient values is based on a respective at least one fractional-pel value in the first dimension and a respective at least one fractional-pel value in the second dimension. For example, the same fractional-pel value may be used with respect to both dimensions or different respective fractional-pel values may be employed respectively within different respective dimensions.
  • It is noted that such video processing operations may be performed within any of a variety of different types of devices including destination devices, middling devices, and/or generally any communication device that is operative within one or more of a satellite communication system, a wireless communication system, a wired communication system, and a fiber-optic communication system. With respect to the destination device, such a device may be viewed generally as including an input for receiving an input bitstream or a signal corresponding to the input bitstream from a source device or a middling device via one or more communication networks. With respect to the middling device, such a device may be viewed generally as including an input for receiving an input bitstream or a signal corresponding to the input bitstream from a source device or another middling device via one or more communication networks, as well as including an output for outputting an output bitstream or a signal corresponding to an output bitstream or an output video signal to at least one destination device via the one or more communication networks or one or more additional communication networks.
  • It is also noted that the various operations and functions as described with respect to various methods herein may be performed within a communication device, such as using a baseband processing module and/or a processing module implemented therein and/or other component(s) therein.
  • As may be used herein, the terms “substantially” and “approximately” provide an industry-accepted tolerance for the corresponding term and/or relativity between items. Such an industry-accepted tolerance ranges from less than one percent to fifty percent and corresponds to, but is not limited to, component values, integrated circuit process variations, temperature variations, rise and fall times, and/or thermal noise. Such relativity between items ranges from a difference of a few percent to magnitude differences. As may also be used herein, the term(s) “operably coupled to”, “coupled to”, and/or “coupling” includes direct coupling between items and/or indirect coupling between items via an intervening item (e.g., an item includes, but is not limited to, a component, an element, a circuit, and/or a module) where, for indirect coupling, the intervening item does not modify the information of a signal but may adjust its current level, voltage level, and/or power level. As may further be used herein, inferred coupling (i.e., where one element is coupled to another element by inference) includes direct and indirect coupling between two items in the same manner as “coupled to”. As may even further be used herein, the term “operable to” or “operably coupled to” indicates that an item includes one or more of power connections, input(s), output(s), etc., to perform, when activated, one or more of its corresponding functions and may further include inferred coupling to one or more other items. As may still further be used herein, the term “associated with”, includes direct and/or indirect coupling of separate items and/or one item being embedded within another item. As may be used herein, the term “compares favorably”, indicates that a comparison between two or more items, signals, etc., provides a desired relationship. For example, when the desired relationship is that signal 1 has a greater magnitude than signal 2, a favorable comparison may be achieved when the magnitude of signal 1 is greater than that of signal 2 or when the magnitude of signal 2 is less than that of signal 1.
  • As may also be used herein, the terms “processing module”, “processing circuit”, and/or “processing unit” may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on hard coding of the circuitry and/or operational instructions. The processing module, module, processing circuit, and/or processing unit may be, or further include, memory and/or an integrated memory element, which may be a single memory device, a plurality of memory devices, and/or embedded circuitry of another processing module, module, processing circuit, and/or processing unit. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, cache memory, and/or any device that stores digital information. Note that if the processing module, module, processing circuit, and/or processing unit includes more than one processing device, the processing devices may be centrally located (e.g., directly coupled together via a wired and/or wireless bus structure) or may be distributedly located (e.g., cloud computing via indirect coupling via a local area network and/or a wide area network). Further note that if the processing module, module, processing circuit, and/or processing unit implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory and/or memory element storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Still further note that, the memory element may store, and the processing module, module, processing circuit, and/or processing unit executes, hard coded and/or operational instructions corresponding to at least some of the steps and/or functions illustrated in one or more of the Figures. Such a memory device or memory element can be included in an article of manufacture.
  • The present invention has been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claimed invention. Further, the boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention. One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.
  • The present invention may have also been described, at least in part, in terms of one or more embodiments. An embodiment of the present invention is used herein to illustrate the present invention, an aspect thereof, a feature thereof, a concept thereof, and/or an example thereof. A physical embodiment of an apparatus, an article of manufacture, a machine, and/or of a process that embodies the present invention may include one or more of the aspects, features, concepts, examples, etc. described with reference to one or more of the embodiments discussed herein. Further, from figure to figure, the embodiments may incorporate the same or similarly named functions, steps, modules, etc. that may use the same or different reference numbers and, as such, the functions, steps, modules, etc. may be the same or similar functions, steps, modules, etc. or different ones.
  • While such circuitries in the above described figure(s) may include transistors, such as field effect transistors (FETs), as one of ordinary skill in the art will appreciate, such transistors may be implemented using any type of transistor structure including, but not limited to, bipolar, metal oxide semiconductor field effect transistors (MOSFET), N-well transistors, P-well transistors, enhancement mode, depletion mode, and zero voltage threshold (VT) transistors.
  • Unless specifically stated to the contrary, signals to, from, and/or between elements in a figure of any of the figures presented herein may be analog or digital, continuous time or discrete time, and single-ended or differential. For instance, if a signal path is shown as a single-ended path, it also represents a differential signal path. Similarly, if a signal path is shown as a differential path, it also represents a single-ended signal path. While one or more particular architectures are described herein, other architectures can likewise be implemented that use one or more data buses not expressly shown, direct connectivity between elements, and/or indirect coupling between other elements as recognized by one of average skill in the art.
  • The term “module” is used in the description of the various embodiments of the present invention. A module includes a processing module, a functional block, hardware, and/or software stored on memory for performing one or more functions as may be described herein. Note that, if the module is implemented via hardware, the hardware may operate independently and/or in conjunction with software and/or firmware. As used herein, a module may contain one or more sub-modules, each of which may be one or more modules.
  • While particular combinations of various functions and features of the present invention have been expressly described herein, other combinations of these features and functions are likewise possible. The present invention is not limited by the particular examples disclosed herein and expressly incorporates these other combinations.

Claims (25)

What is claimed is:
1. An apparatus, comprising:
a motion compensation module for performing fractional pixel position interpolation, by employing two-dimensional discrete cosine transform (2-D DCT) processing, of a first plurality of pixels of a first frame with respect to a first dimension and a second dimension simultaneously or in parallel thereby generating a plurality of coefficient values for use in generating a second plurality of pixels of a second frame; and wherein:
each of the plurality of coefficient values being based on a respective at least one fractional-pel value in the first dimension and a respective at least one fractional-pel value in the second dimension;
the at least one fractional-pel value in the first dimension based on a first fractional-pel distance;
the at least one fractional-pel value in the second dimension based on a second fractional-pel distance; and
the second frame being situated prior to or after the first frame in a sequence of frames.
2. The apparatus of claim 1, wherein:
the first fractional-pel distance being the second fractional-pel distance.
3. The apparatus of claim 1, wherein:
the apparatus being a video decoder for generating the second plurality of pixels of the second frame using at least one of the plurality of coefficient values, at least one motion vector relating at least one of the second plurality of pixels of the second frame and at least one of the plurality of coefficient values, and at least one residual value corresponding to at least one of the second plurality of pixels of the second frame.
4. The apparatus of claim 1, wherein:
the apparatus being a destination device including an input for receiving an input bitstream corresponding to the first frame and the second frame or a signal corresponding to the input bitstream from a source device or a middling device via at least one communication network.
5. The apparatus of claim 1, wherein:
the apparatus being a communication device operative within at least one of a satellite communication system, a wireless communication system, a wired communication system, a fiber-optic communication system, and a mobile communication system.
6. An apparatus, comprising:
a motion compensation module for performing fractional pixel position interpolation of a first plurality of pixels of a first frame with respect to a first dimension and a second dimension simultaneously or in parallel thereby generating a plurality of coefficient values for use in generating a second plurality of pixels of a second frame.
7. The apparatus of claim 6, wherein:
the apparatus being a video decoder for generating the second plurality of pixels of the second frame using at least one of the plurality of coefficient values, at least one motion vector relating at least one of the second plurality of pixels of the second frame and at least one of the plurality of coefficient values, and at least one residual value corresponding to at least one of the second plurality of pixels of the second frame.
8. The apparatus of claim 6, wherein:
the first frame being a reference frame in a sequence of frames; and
the second frame being a current frame in the sequence of frames.
9. The apparatus of claim 6, wherein:
the second frame being situated prior to or after the first frame in a sequence of frames.
10. The apparatus of claim 6, wherein:
a motion compensation module for employing two-dimensional discrete cosine transform (2-D DCT) processing of the first plurality of pixels of the first frame with respect to the first dimension and the second dimension simultaneously or in parallel thereby generating the plurality of coefficient values.
11. The apparatus of claim 6, wherein:
each of the plurality of coefficient values being based on a respective at least one fractional-pel value in the first dimension and a respective at least one fractional-pel value in the second dimension; and
each of the at least one fractional-pel value in the first dimension and the at least one fractional-pel value in the second dimension based on a common fractional-pel distance.
12. The apparatus of claim 6, wherein:
each of the plurality of coefficient values being based on a respective at least one fractional-pel value in the first dimension and a respective at least one fractional-pel value in the second dimension;
the at least one fractional-pel value in the first dimension based on a first fractional-pel distance; and
the at least one fractional-pel value in the second dimension based on a second fractional-pel distance.
13. The apparatus of claim 6, wherein:
the apparatus being a destination device including an input for receiving an input bitstream corresponding to the first frame and the second frame or a signal corresponding to the input bitstream from a source device or a middling device via at least one communication network.
14. The apparatus of claim 6, wherein:
the apparatus being a middling device including an input for receiving an input bitstream corresponding to the first frame and the second frame or a signal corresponding to the input bitstream from a source device or at least one additional middling device via at least one communication network; and
the apparatus including an output for outputting an output bitstream, a signal corresponding to an output bitstream, or an output video signal to at least one destination device via the at least one communication network or at least one additional communication network.
15. The apparatus of claim 6, wherein:
the apparatus being a communication device operative within at least one of a satellite communication system, a wireless communication system, a wired communication system, a fiber-optic communication system, and a mobile communication system.
16. A method for operating a video processing device, the method comprising:
via an input, receiving an input bitstream;
operating a motion compensation module for performing fractional pixel position interpolation of a first plurality of pixels of a first frame with respect to a first dimension and a second dimension simultaneously or in parallel thereby generating a plurality of coefficient values for use in generating a second plurality of pixels of a second frame; and
employing the first frame and the second frame for generating an output video signal.
17. The method of claim 16, wherein:
the video processing device being a video decoder for generating the second plurality of pixels of the second frame using at least one of the plurality of coefficient values, at least one motion vector relating at least one of the second plurality of pixels of the second frame and at least one of the plurality of coefficient values, and at least one residual value corresponding to at least one of the second plurality of pixels of the second frame.
18. The method of claim 16, wherein:
the first frame being a reference frame in a sequence of frames; and
the second frame being a current frame in the sequence of frames.
19. The method of claim 16, wherein:
the second frame being situated prior to or after the first frame in a sequence of frames.
20. The method of claim 16, further comprising:
operating the motion compensation module for employing two-dimensional discrete cosine transform (2-D DCT) processing of the first plurality of pixels of the first frame with respect to the first dimension and the second dimension simultaneously or in parallel thereby generating the plurality of coefficient values.
21. The method of claim 16, wherein:
each of the plurality of coefficient values being based on a respective at least one fractional-pel value in the first dimension and a respective at least one fractional-pel value in the second dimension; and
each of the at least one fractional-pel value in the first dimension and the at least one fractional-pel value in the second dimension based on a common fractional-pel distance.
22. The method of claim 16, wherein:
each of the plurality of coefficient values is based on a respective at least one fractional-pel value in the first dimension and a respective at least one fractional-pel value in the second dimension;
the at least one fractional-pel value in the first dimension is based on a first fractional-pel distance; and
the at least one fractional-pel value in the second dimension is based on a second fractional-pel distance.
23. The method of claim 16, wherein:
the video processing device is a destination device including the input for receiving an input bitstream corresponding to the first frame and the second frame, or a signal corresponding to the input bitstream, from a source device or a middling device via at least one communication network.
24. The method of claim 16, wherein:
the video processing device is a middling device including the input for receiving an input bitstream corresponding to the first frame and the second frame, or a signal corresponding to the input bitstream, from a source device or at least one additional middling device via at least one communication network; and
the video processing device includes an output for outputting the output video signal to at least one destination device via the at least one communication network or at least one additional communication network.
25. The method of claim 16, wherein:
the video processing device is operative within at least one of a satellite communication system, a wireless communication system, a wired communication system, a fiber-optic communication system, and a mobile communication system.
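
The two-dimensional interpolation recited in claim 16 can be made concrete. The following is a minimal C sketch, not taken from the patent text: the tap values are the familiar 6-bit HEVC-style half-pel luma taps (an assumption, since the claims do not fix a filter), and every name is illustrative. The horizontal and vertical one-dimensional tap sets are combined into a single 2-D coefficient matrix (their outer product), so each pixel of the second frame is produced by one two-dimensional filtering pass over the pixels of the first frame, with both dimensions handled simultaneously rather than by sequential row and column passes.

```c
#include <stdio.h>

#define TAPS 8

/* Illustrative taps: the 6-bit HEVC-style luma half-pel filter (an
 * assumption for this example; the claims do not specify tap values). */
static const int half_pel_taps[TAPS] = { -1, 4, -11, 40, 40, -11, 4, -1 };

/* Combine the horizontal and vertical 1-D tap sets into one joint 2-D
 * coefficient matrix (their outer product), so both dimensions are
 * covered by a single table generated together. */
static void build_2d_coeffs(const int h_taps[TAPS], const int v_taps[TAPS],
                            int coeff[TAPS][TAPS])
{
    for (int y = 0; y < TAPS; y++)
        for (int x = 0; x < TAPS; x++)
            coeff[y][x] = v_taps[y] * h_taps[x];
}

/* One interpolated sample of the second frame from first-frame pixels:
 * a single 2-D weighted sum instead of sequential row/column passes. */
static int interp_sample(const unsigned char *ref, int stride,
                         int x0, int y0, int coeff[TAPS][TAPS])
{
    long acc = 0;
    for (int y = 0; y < TAPS; y++)
        for (int x = 0; x < TAPS; x++)
            acc += (long)coeff[y][x] * ref[(y0 + y) * stride + (x0 + x)];
    /* Two 6-bit tap sets leave 12 fractional bits to round away. */
    acc = (acc + (1L << 11)) >> 12;
    if (acc < 0) acc = 0;
    if (acc > 255) acc = 255;
    return (int)acc;
}

int main(void)
{
    unsigned char ref[16 * 16];
    for (int i = 0; i < 16 * 16; i++) ref[i] = 100;  /* flat reference patch */

    int coeff[TAPS][TAPS];
    /* Same fractional-pel distance in both dimensions (claim 21). */
    build_2d_coeffs(half_pel_taps, half_pel_taps, coeff);

    /* On a flat area the half-pel sample reproduces the flat value. */
    printf("%d\n", interp_sample(ref, 16, 0, 0, coeff));  /* prints 100 */
    return 0;
}
```

Passing the same tap set twice, as in main above, corresponds to the common fractional-pel distance of claim 21; passing two different tap sets to build_2d_coeffs yields the distinct first and second fractional-pel distances of claim 22.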
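
Claim 20 recites 2-D DCT processing as the route to the coefficient values. One plausible reading, sketched below purely as an assumption (the construction shown is the well-known DCT-based interpolation filter, DCT-IF, of HEVC-era proposals, and is not stated in this patent): forward-DCT the integer-pel sample basis, evaluate the inverse DCT at the fractional-pel position in each dimension, and take the outer product of the two resulting weight vectors as the 2-D coefficient table.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define TAPS 8

/* 1-D DCT-based interpolation filter (DCT-IF) weights: forward-DCT the
 * TAPS integer-pel basis positions, then evaluate the inverse DCT at
 * the fractional position t = (TAPS/2 - 1) + frac, with 0 <= frac < 1. */
static void dctif_weights(double frac, double w[TAPS])
{
    double t = (TAPS / 2 - 1) + frac;
    for (int m = 0; m < TAPS; m++) {
        double acc = 0.0;
        for (int k = 0; k < TAPS; k++) {
            double norm = (k == 0) ? 0.5 : 1.0;  /* DCT-II k = 0 scaling */
            acc += norm * cos(M_PI * (2 * m + 1) * k / (2.0 * TAPS))
                        * cos(M_PI * (2 * t + 1) * k / (2.0 * TAPS));
        }
        w[m] = 2.0 * acc / TAPS;  /* weights sum to 1 for any frac */
    }
}

int main(void)
{
    /* Distinct fractional-pel distances per dimension, as in claim 22:
     * quarter-pel horizontally, half-pel vertically. */
    double wh[TAPS], wv[TAPS];
    dctif_weights(0.25, wh);
    dctif_weights(0.50, wv);

    /* The 2-D coefficient for reference pixel (y, x) is wv[y] * wh[x];
     * the whole table covers both dimensions at once. */
    for (int y = 0; y < TAPS; y++) {
        for (int x = 0; x < TAPS; x++)
            printf("%8.4f", wv[y] * wh[x]);
        printf("\n");
    }
    return 0;
}
```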
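
Finally, the decoder relationship of claim 17, expressed as a fragment meant to be appended to the first sketch (it reuses TAPS and interp_sample from there; all names remain assumptions): the motion vector relates a pixel of the second frame to a coefficient-weighted patch of the first frame, and the residual value is added to the resulting prediction.

```c
/* Hedged sketch of the claim-17 reconstruction step: the motion vector
 * (mvx, mvy) picks the prediction position in the first (reference)
 * frame, the 2-D coefficient table does the fractional-pel weighting,
 * and the decoded residual is added back. */
static int reconstruct_pixel(const unsigned char *ref, int stride,
                             int px, int py,    /* position in second frame */
                             int mvx, int mvy,  /* motion vector, quarter-pel units */
                             int coeff[TAPS][TAPS],
                             int residual)      /* decoded residual value */
{
    /* Integer-pel anchor of the filter support in the reference frame. */
    int x0 = px + (mvx >> 2) - (TAPS / 2 - 1);
    int y0 = py + (mvy >> 2) - (TAPS / 2 - 1);

    int rec = interp_sample(ref, stride, x0, y0, coeff) + residual;
    if (rec < 0) rec = 0;        /* clip to 8-bit pixel range */
    if (rec > 255) rec = 255;
    return rec;
}
```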
US13/333,529 2011-09-30 2011-12-21 Two-dimensional motion compensation filter operation and processing Abandoned US20130083852A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/333,529 US20130083852A1 (en) 2011-09-30 2011-12-21 Two-dimensional motion compensation filter operation and processing

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161541938P 2011-09-30 2011-09-30
US13/333,529 US20130083852A1 (en) 2011-09-30 2011-12-21 Two-dimensional motion compensation filter operation and processing

Publications (1)

Publication Number Publication Date
US20130083852A1 (en) 2013-04-04

Family

ID=47992566

Family Applications (6)

Application Number Title Priority Date Filing Date
US13/333,424 Active 2035-08-07 US9693064B2 (en) 2011-09-30 2011-12-21 Video coding infrastructure using adaptive prediction complexity reduction
US13/333,529 Abandoned US20130083852A1 (en) 2011-09-30 2011-12-21 Two-dimensional motion compensation filter operation and processing
US13/333,256 Active 2037-06-29 US10165285B2 (en) 2011-09-30 2011-12-21 Video coding tree sub-block splitting
US13/333,135 Active 2034-03-17 US9161060B2 (en) 2011-09-30 2011-12-21 Multi-mode error concealment, recovery and resilience coding
US13/333,332 Abandoned US20130083840A1 (en) 2011-09-30 2011-12-21 Advance encode processing based on raw video data
US14/879,451 Active US9906797B2 (en) 2011-09-30 2015-10-09 Multi-mode error concealment, recovery and resilience coding

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/333,424 Active 2035-08-07 US9693064B2 (en) 2011-09-30 2011-12-21 Video coding infrastructure using adaptive prediction complexity reduction

Family Applications After (4)

Application Number Title Priority Date Filing Date
US13/333,256 Active 2037-06-29 US10165285B2 (en) 2011-09-30 2011-12-21 Video coding tree sub-block splitting
US13/333,135 Active 2034-03-17 US9161060B2 (en) 2011-09-30 2011-12-21 Multi-mode error concealment, recovery and resilience coding
US13/333,332 Abandoned US20130083840A1 (en) 2011-09-30 2011-12-21 Advance encode processing based on raw video data
US14/879,451 Active US9906797B2 (en) 2011-09-30 2015-10-09 Multi-mode error concealment, recovery and resilience coding

Country Status (1)

Country Link
US (6) US9693064B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150256819A1 (en) * 2012-10-12 2015-09-10 National Institute Of Information And Communications Technology Method, program and apparatus for reducing data size of a plurality of images containing mutually similar information

Families Citing this family (15)

Publication number Priority date Publication date Assignee Title
EP2785051A4 (en) * 2011-11-25 2015-11-25 Hitachi Maxell Image transmission device, image transmission method, image reception device, and image reception method
US10244246B2 (en) 2012-02-02 2019-03-26 Texas Instruments Incorporated Sub-pictures for pixel rate balancing on multi-core platforms
US9213556B2 (en) 2012-07-30 2015-12-15 Vmware, Inc. Application directed user interface remoting using video encoding techniques
US9277237B2 (en) 2012-07-30 2016-03-01 Vmware, Inc. User interface remoting through video encoding techniques
US9674538B2 (en) 2013-04-08 2017-06-06 Blackberry Limited Methods for reconstructing an encoded video at a bit-depth lower than at which it was encoded
EP2984842A1 (en) * 2013-04-08 2016-02-17 BlackBerry Limited Methods for reconstructing an encoded video at a bit-depth lower than at which it was encoded
US10075735B2 (en) * 2013-07-14 2018-09-11 Sharp Kabushiki Kaisha Video parameter set signaling
WO2015053673A1 (en) * 2013-10-11 2015-04-16 Telefonaktiebolaget L M Ericsson (Publ) Method and arrangement for video transcoding using mode or motion or in-loop filter information
JP6528765B2 (en) * 2014-03-28 2019-06-12 ソニー株式会社 Image decoding apparatus and method
JP2016092837A (en) * 2014-10-30 2016-05-23 株式会社東芝 Video compression apparatus, video reproduction apparatus and video distribution system
CN116781903A (en) * 2016-06-24 2023-09-19 世宗大学校产学协力团 Video signal decoding and encoding method, and bit stream transmission method
WO2018037853A1 (en) * 2016-08-26 2018-03-01 シャープ株式会社 Image decoding apparatus and image coding apparatus
US10609418B2 (en) 2017-04-18 2020-03-31 Qualcomm Incorporated System and method for intelligent data/frame compression in a system on a chip
US10484685B2 (en) * 2017-04-18 2019-11-19 Qualcomm Incorporated System and method for intelligent data/frame compression in a system on a chip
CA3148299C (en) 2019-07-26 2024-06-04 Beijing Bytedance Network Technology Co., Ltd. Determination of picture partition mode based on block size

Citations (23)

Publication number Priority date Publication date Assignee Title
US5896176A (en) * 1995-10-27 1999-04-20 Texas Instruments Incorporated Content-based video compression
US6026183A (en) * 1995-10-27 2000-02-15 Texas Instruments Incorporated Content-based video compression
US6037985A (en) * 1996-10-31 2000-03-14 Texas Instruments Incorporated Video compression
US6256348B1 (en) * 1996-08-30 2001-07-03 Texas Instruments Incorporated Reduced memory MPEG video decoder circuits and methods
US6266374B1 (en) * 1994-10-28 2001-07-24 Lg Electronics Inc. Low level digital video decoder for HDTV having backward compatibility
US6614847B1 (en) * 1996-10-25 2003-09-02 Texas Instruments Incorporated Content-based video compression
US20030202594A1 (en) * 2002-03-15 2003-10-30 Nokia Corporation Method for coding motion in a video sequence
US6668019B1 (en) * 1996-12-03 2003-12-23 Stmicroelectronics, Inc. Reducing the memory required for decompression by storing compressed information using DCT based techniques
US20040028143A1 (en) * 2002-03-27 2004-02-12 Schoenblum Joel W. Hybrid rate control in a digital stream transcoder
US20040062313A1 (en) * 2002-03-27 2004-04-01 Schoenblum Joel W. Digital stream transcoder with a hybrid-rate controller
US20050063475A1 (en) * 2003-09-19 2005-03-24 Vasudev Bhaskaran Adaptive video prefilter
US20070237219A1 (en) * 2002-03-27 2007-10-11 Scientific-Atlanta, Inc. Digital Stream Transcoder
US7406123B2 (en) * 2003-07-10 2008-07-29 Mitsubishi Electric Research Laboratories, Inc. Visual complexity measure for playing videos adaptively
US20090257493A1 (en) * 2008-04-10 2009-10-15 Qualcomm Incorporated Interpolation filter support for sub-pixel resolution in video coding
US20090257494A1 (en) * 2008-04-10 2009-10-15 Qualcomm Incorporated Symmetry for interpolation filtering of sub-pixel positions in video coding
US20100195713A1 (en) * 2007-06-19 2010-08-05 Coulombe Stephane Buffer Based Rate Control in Video Coding
US20100309979A1 (en) * 2009-06-05 2010-12-09 Schoenblum Joel W Motion estimation for noisy frames based on block matching of filtered blocks
US20110052087A1 (en) * 2009-08-27 2011-03-03 Debargha Mukherjee Method and system for coding images
US7983493B2 (en) * 2004-10-05 2011-07-19 Vectormax Corporation Adaptive overlapped block matching for accurate motion compensation
US20110194610A1 (en) * 2010-02-10 2011-08-11 Telefonaktiebolaget L M Ericsson (Publ) Motion-Vector Estimation
US20110261887A1 (en) * 2008-06-06 2011-10-27 Wei Siong Lee Methods and devices for estimating motion in a plurality of frames
US20120163460A1 (en) * 2010-12-23 2012-06-28 Qualcomm Incorporated Sub-pixel interpolation for video coding
US20120250769A1 (en) * 2009-11-06 2012-10-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Hybrid video coding

Family Cites Families (26)

Publication number Priority date Publication date Assignee Title
AU732452B2 (en) * 1997-04-01 2001-04-26 Sony Corporation Image encoder, image encoding method, image decoder, image decoding method, and distribution media
US6633611B2 (en) * 1997-04-24 2003-10-14 Mitsubishi Denki Kabushiki Kaisha Method and apparatus for region-based moving image encoding and decoding
US20020144209A1 (en) * 2001-02-20 2002-10-03 Cute Ltd. System for enhanced error correction in trellis decoding
US6859500B2 (en) * 2001-03-20 2005-02-22 Telefonaktiebolaget Lm Ericsson Run-length coding of non-coded macroblocks
US7266150B2 (en) * 2001-07-11 2007-09-04 Dolby Laboratories, Inc. Interpolation of video compression frames
KR20030095995A (en) * 2002-06-14 2003-12-24 마츠시타 덴끼 산교 가부시키가이샤 Method for transporting media, transmitter and receiver therefor
TWI262725B (en) * 2005-06-30 2006-09-21 Cheertek Inc Video decoding apparatus and digital audio and video display system capable of controlling presentation of subtitles and method thereof
US20070058723A1 (en) * 2005-09-14 2007-03-15 Chandramouly Ashwin A Adaptively adjusted slice width selection
US8295349B2 (en) * 2006-05-23 2012-10-23 Flextronics Ap, Llc Methods and apparatuses for video compression intra prediction mode determination
US8467448B2 (en) * 2006-11-15 2013-06-18 Motorola Mobility Llc Apparatus and method for fast intra/inter macro-block mode decision for video encoding
CN100566427C (en) * 2007-07-31 2009-12-02 北京大学 The choosing method and the device that are used for the intraframe predictive coding optimal mode of video coding
US9113194B2 (en) * 2007-12-19 2015-08-18 Arris Technology, Inc. Method and system for interleaving video and data for transmission over a network at a selected bit rate
US20090274213A1 (en) * 2008-04-30 2009-11-05 Omnivision Technologies, Inc. Apparatus and method for computationally efficient intra prediction in a video coder
EP2182732A1 (en) * 2008-10-28 2010-05-05 Panasonic Corporation Switching between scans in image coding
KR101504887B1 (en) * 2009-10-23 2015-03-24 삼성전자 주식회사 Method and apparatus for video decoding by individual parsing or decoding in data unit level, and method and apparatus for video encoding for individual parsing or decoding in data unit level
JP2011109172A (en) * 2009-11-12 2011-06-02 Hitachi Kokusai Electric Inc Video encoder and data processing method for the same
US8498334B1 (en) * 2010-02-03 2013-07-30 Imagination Technologies Limited Method and system for staggered parallelized video decoding
US8542737B2 (en) * 2010-03-21 2013-09-24 Human Monitoring Ltd. Intra video image compression and decompression
CN106231337B (en) * 2010-04-13 2020-06-19 Ge视频压缩有限责任公司 Decoder, decoding method, encoder, and encoding method
US20120039383A1 (en) * 2010-08-12 2012-02-16 Mediatek Inc. Coding unit synchronous adaptive loop filter flags
US9008175B2 (en) * 2010-10-01 2015-04-14 Qualcomm Incorporated Intra smoothing filter for video coding
US9807424B2 (en) * 2011-01-10 2017-10-31 Qualcomm Incorporated Adaptive selection of region size for identification of samples in a transition zone for overlapped block motion compensation
US9215473B2 (en) * 2011-01-26 2015-12-15 Qualcomm Incorporated Sub-slices in video coding
US8995523B2 (en) * 2011-06-03 2015-03-31 Qualcomm Incorporated Memory efficient context modeling
US20130044811A1 (en) * 2011-08-18 2013-02-21 Hyung Joon Kim Content-Based Adaptive Control of Intra-Prediction Modes in Video Encoding
US8693551B2 (en) * 2011-11-16 2014-04-08 Vanguard Software Solutions, Inc. Optimal angular intra prediction for block-based video coding

Patent Citations (24)

Publication number Priority date Publication date Assignee Title
US6266374B1 (en) * 1994-10-28 2001-07-24 Lg Electronics Inc. Low level digital video decoder for HDTV having backward compatibility
US5896176A (en) * 1995-10-27 1999-04-20 Texas Instruments Incorporated Content-based video compression
US6026183A (en) * 1995-10-27 2000-02-15 Texas Instruments Incorporated Content-based video compression
US6256348B1 (en) * 1996-08-30 2001-07-03 Texas Instruments Incorporated Reduced memory MPEG video decoder circuits and methods
US6614847B1 (en) * 1996-10-25 2003-09-02 Texas Instruments Incorporated Content-based video compression
US6037985A (en) * 1996-10-31 2000-03-14 Texas Instruments Incorporated Video compression
US6668019B1 (en) * 1996-12-03 2003-12-23 Stmicroelectronics, Inc. Reducing the memory required for decompression by storing compressed information using DCT based techniques
US20030202594A1 (en) * 2002-03-15 2003-10-30 Nokia Corporation Method for coding motion in a video sequence
US20070237219A1 (en) * 2002-03-27 2007-10-11 Scientific-Atlanta, Inc. Digital Stream Transcoder
US20070127566A1 (en) * 2002-03-27 2007-06-07 Schoenblum Joel W Digital stream transcoder with a hybrid-rate controller
US20040028143A1 (en) * 2002-03-27 2004-02-12 Schoenblum Joel W. Hybrid rate control in a digital stream transcoder
US20040062313A1 (en) * 2002-03-27 2004-04-01 Schoenblum Joel W. Digital stream transcoder with a hybrid-rate controller
US7406123B2 (en) * 2003-07-10 2008-07-29 Mitsubishi Electric Research Laboratories, Inc. Visual complexity measure for playing videos adaptively
US20050063475A1 (en) * 2003-09-19 2005-03-24 Vasudev Bhaskaran Adaptive video prefilter
US7983493B2 (en) * 2004-10-05 2011-07-19 Vectormax Corporation Adaptive overlapped block matching for accurate motion compensation
US20100195713A1 (en) * 2007-06-19 2010-08-05 Coulombe Stephane Buffer Based Rate Control in Video Coding
US20090257494A1 (en) * 2008-04-10 2009-10-15 Qualcomm Incorporated Symmetry for interpolation filtering of sub-pixel positions in video coding
US20090257493A1 (en) * 2008-04-10 2009-10-15 Qualcomm Incorporated Interpolation filter support for sub-pixel resolution in video coding
US20110261887A1 (en) * 2008-06-06 2011-10-27 Wei Siong Lee Methods and devices for estimating motion in a plurality of frames
US20100309979A1 (en) * 2009-06-05 2010-12-09 Schoenblum Joel W Motion estimation for noisy frames based on block matching of filtered blocks
US20110052087A1 (en) * 2009-08-27 2011-03-03 Debargha Mukherjee Method and system for coding images
US20120250769A1 (en) * 2009-11-06 2012-10-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Hybrid video coding
US20110194610A1 (en) * 2010-02-10 2011-08-11 Telefonaktiebolaget L M Ericsson (Publ) Motion-Vector Estimation
US20120163460A1 (en) * 2010-12-23 2012-06-28 Qualcomm Incorporated Sub-pixel interpolation for video coding

Also Published As

Publication number Publication date
US20130083837A1 (en) 2013-04-04
US20130083839A1 (en) 2013-04-04
US9161060B2 (en) 2015-10-13
US20160037171A1 (en) 2016-02-04
US20130083841A1 (en) 2013-04-04
US10165285B2 (en) 2018-12-25
US9906797B2 (en) 2018-02-27
US9693064B2 (en) 2017-06-27
US20130083840A1 (en) 2013-04-04

Similar Documents

Publication Publication Date Title
US20130083852A1 (en) Two-dimensional motion compensation filter operation and processing
US11800086B2 (en) Sample adaptive offset (SAO) in accordance with video coding
US9432700B2 (en) Adaptive loop filtering in accordance with video coding
US9332283B2 (en) Signaling of prediction size unit in accordance with video coding
US9380320B2 (en) Frequency domain sample adaptive offset (SAO)
US9438904B2 (en) Reduced look-up table for LM mode calculation
US20130343447A1 (en) Adaptive loop filter (ALF) padding in accordance with video coding
US11019351B2 (en) Video coding with trade-off between frame rate and chroma fidelity
US9231616B2 (en) Unified binarization for CABAC/CAVLC entropy coding
US9456212B2 (en) Video coding sub-block sizing based on infrastructure capabilities and current conditions
CN111801941B (en) Method and apparatus for image filtering using adaptive multiplier coefficients
US9071848B2 (en) Sub-band video coding architecture for packet based transmission
US20130235926A1 (en) Memory efficient video parameter processing

Legal Events

Date Code Title Description
AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SHEN, BA-ZHONG;REEL/FRAME:027428/0806

Effective date: 20111215

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION