WO1999038316A2 - Method and apparatus for advanced television signal encoding and decoding - Google Patents

Method and apparatus for advanced television signal encoding and decoding Download PDF

Info

Publication number
WO1999038316A2
Authority
WO
WIPO (PCT)
Prior art keywords
stream
video
region
encoding
image
Prior art date
Application number
PCT/US1999/001410
Other languages
French (fr)
Other versions
WO1999038316A3 (en)
Inventor
Yendo Hu
Original Assignee
Tiernan Communications, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tiernan Communications, Inc. filed Critical Tiernan Communications, Inc.
Priority to EP99903316A priority Critical patent/EP1051839A2/en
Priority to AU23370/99A priority patent/AU2337099A/en
Priority to JP2000529078A priority patent/JP2002502159A/en
Priority to CA002318272A priority patent/CA2318272A1/en
Publication of WO1999038316A2 publication Critical patent/WO1999038316A2/en
Publication of WO1999038316A3 publication Critical patent/WO1999038316A3/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/43Hardware specially adapted for motion estimation or compensation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the Federal Communications Commission (FCC) has adopted major elements of the Advanced Television Systems Committee (ATSC) Digital Television standard for use by terrestrial broadcasters.
  • ATSC Digital Television (DTV) standard addresses five key components of a model system for delivering multimedia information to users.
  • a block diagram of such a model system as defined by the International Telecommunications Union, Radio Communication Sector (ITU-R), Task Group 11/3 is shown in FIG. 1 and includes video, audio, transport, RF/transmission and receiver components.
  • the video subsystem 100 compresses raw video into a digital video data elementary stream in accordance with the MPEG-2 standard defined by the Moving Picture Experts Group in ISO/IEC IS 13818-2 International Standard (1994), MPEG-2 Video.
  • the audio subsystem 102 compresses raw audio into a digital audio data elementary stream in accordance with the Digital Audio Compression 3 (DAC-3) standard defined by the Audio Specialist Group within ITU.
  • the service multiplex and transport component 104 multiplexes the video data elementary stream, the audio data elementary stream, ancillary and control data elementary streams into a single bit stream using the transport stream syntax defined by ISO/IEC IS 13818-1 International Standard (1994), MPEG-2 Systems.
  • the RF/transmission component 106 includes a channel coder and a modulator.
  • the channel coder introduces additional information into the transport stream to allow the receiver 108 to reconstruct partially corrupted bit streams.
  • the modulator encodes the digital data into RF signals using vestigial sideband transmission.
  • the MPEG-2 standard applies five compression techniques to achieve a high compression ratio: discrete cosine transform (DCT) , difference encoding, quantization, entropy encoding and motion compensation.
  • a DCT is applied to blocks of 8 x 8 pixels to provide 64 coefficients that represent spatial frequencies. For blocks without much detail, the high frequency coefficients have small values that can be set to zero.
  • Video frames are encoded into intra frames (I frames) which do not rely on information from other frames to reconstruct the current frame, and inter frames, P and B, which rely on information from other frames to reconstruct the current frame.
  • P frames rely on the previous P or I frame while B frames rely on the previous I or P and the future I or P to construct the current frame.
  • These previous or future I and P frames are referred to as reference frames.
  • the P and B frames include only the differences between the current frame and the adjacent frames. For low motion video sequences, the P and B frames will have very little information content.
  • the MPEG-2 compression algorithm performs motion estimation between adjacent frames to improve the prediction capability between frames.
  • the compression algorithm searches for a motion vector for each group of four blocks, known as a macroblock; the vector provides the distance and direction of motion for the current macroblock.
  • the DCT coefficients of each block are weighted and quantized based on a quantization matrix that matches the response of the human eye.
  • the results are combined with the motion vectors and then encoded using variable length encoding to provide a stream for transmission.
  • each macroblock 124 is completely processed and delivered to an output buffer 126 before processing the next macroblock.
  • the MPEG-2 standard defines algorithmic tools known as profiles and sets of constraints on parameter values (e.g., picture size, bit rate) known as levels.
  • the known MPEG-2 compression engines noted above have been designed to meet the main profile @ main level portion of the standard for conventional broadcast television signals such as NTSC and PAL.
  • the main level is specified as 720 pixels by 480 active lines at 30 frames per second.
  • the DTV signal is specified as 1920 pixels by 1080 active lines at 30 frames per second. This is known as the MPEG-2 high level.
  • the computational demand needed for the DTV signal specified as main profile @ high level is approximately six times that needed for existing standard television signals specified as main profile @ main level.
  • the method and apparatus of the present invention provides an architecture capable of addressing the computational demand required for high-definition video signals, such as a DTV signal compliant with MPEG-2 main profile @ high level, using standard MPEG-2 compression engines operating in the main profile @ main level mode.
  • the invention provides parallel processing using such standard MPEG-2 compression engines in an overlapping arrangement that does not sacrifice compression performance.
  • a video encoder of the present invention comprises plural regional processors for encoding an input stream of video images.
  • Each video image is divided into regions that have overlapping portions, with each processor encoding a particular region of a current video image in the stream according to an encoding process that includes motion compensation such as MPEG-2 main profile @ main level.
  • the regional processors each store a reference frame in a local memory based on a prior video image in the stream for use in the motion compensation of the encoding process.
  • a reference frame processor coupled to the plural local memories updates each reference frame with information from reference frames stored in adjacent local memories.
  • the encoded video images are made up of macroblocks and each regional processor includes means for removing certain macroblocks from the encoded video images that correspond to the overlap portions and concatenating the resulting encoded video images with that of other regional processors to provide an output video stream.
  • the regional processors each include an image selection unit for selecting a particular image region from each of the video images.
  • a compression engine compresses the selected image region to provide a compressed image region stream of macroblocks.
  • a macroblock remover removes certain macroblocks from the compressed image region stream that correspond to the overlapping portions.
  • a stream concatenation unit concatenates the compressed image region stream with such streams from each regional processor to provide an output video stream.
  • While the preferred embodiment includes multiple regional processors for processing the overlapping regions, the present invention encompasses single processor embodiments in which each region is processed successively.
  • a video decoder includes a demultiplexer, multiple regional decoders, a reference frame memory and a multiplexer.
  • the demultiplexer demultiplexes a compressed stream of video images to plural region streams. Each video image is divided into contiguous regions, each region stream being associated with a particular region.
  • the regional decoders each decode a particular region stream according to a decoding process that includes motion compensation such as MPEG-2 main profile at main level.
  • the reference frame memory stores reference frames associated with each regional decoder.
  • the regional decoders retrieve reference frames of adjacent regions for use in the motion compensation process.
  • the multiplexer multiplexes the decoded region streams to a decoded output stream.
  • FIG. 1 is a block diagram of a model advanced television system.
  • FIG. 2 is a block diagram illustrating a slice-based compression approach for MPEG-2 main profile at main level.
  • FIG. 3 is a block diagram illustrating a macroblock-based compression approach for MPEG-2 main profile at main level.
  • FIG. 4 is a diagram illustrating a first processor arrangement in accordance with the present invention.
  • FIG. 5 is a diagram illustrating a preferred processor arrangement in accordance with the present invention.
  • FIG. 6 is a block diagram of a video encoding subsystem of the present invention.
  • FIG. 7 is a schematic block diagram of a video compression engine of the video subsystem of FIG. 6.
  • FIG. 8 is a block diagram illustrating a synchronization configuration for the compression engine of FIG. 7.
  • FIG. 9 is a diagram illustrating local image selection from a global image for the engine of FIG. 7.
  • FIG. 10 is a diagram illustrating raw and active regions of the global image of FIG. 9.
  • FIG. 11 is a diagram illustrating an active region within a raw region of the image of FIG. 10.
  • FIG. 12 is a block diagram of a token passing arrangement for a 1080i video processing configuration.
  • FIG. 13 is a block diagram of a token passing arrangement for a 720p video processing configuration.
  • FIG. 14 is a block diagram illustrating allocation of reference images in reference buffers of a local memory of the system of FIG. 7.
  • FIG. 15 is a diagram illustrating the reference image updating arrangement of the present invention.
  • FIG. 16 is a diagram illustrating regions of a reference image in accordance with the invention.
  • FIG. 17 is a block diagram of a reference image manager of the system of FIG. 7.
  • FIG. 18 is a block diagram of a local manager of the reference image manager of FIG. 17.
  • FIG. 19 is a diagram illustrating the decoding arrangement of the present invention.
  • FIG. 20 is a block diagram of a decoder system of the present invention.
  • FIG. 21 is a diagram illustrating motion compensation in the decoder system of FIG. 20.
  • FIG. 22 is a block diagram of the reference frame store of the decoder system of FIG. 20.
  • the present invention employs a parallel processing arrangement that takes advantage of known MPEG-2 main profile at main level (mp/ml) compression engines to provide a highly efficient compression engine for encoding high definition television signals such as the DTV signal that is compliant with MPEG-2 main profile at high level.
  • a first approach to using MPEG-2 compression engines in a parallel arrangement is shown in FIG. 4.
  • a total of nine MPEG-2 mp/ml compression engines are configured to process contiguous regions encompassing an ATSC DTV video image 142 (1920 pixels by 1080 lines).
  • Each MPEG-2 mp/ml engine is capable of processing a region 144 equivalent to an NTSC video image (720 pixels by 480 lines).
  • engines 3, 6, 7, 8 and 9 encode regions smaller than NTSC images.
  • the compression provided by this first approach is less than optimal.
  • the motion compensation performed within each engine is naturally constrained to not search beyond its NTSC image format boundaries. As a result, macroblocks along the boundaries between assigned engine areas may not necessarily benefit from motion compensation.
  • the preferred approach of the present invention shown in FIG. 5 provides a parallel arrangement of MPEG-2 compression engines in which the engines are configured to process overlapping regions 146, 148, 150, 152 of an ATSC DTV video image 142.
  • motion compensation performed by a particular engine for its particular region is extended into adjacent regions.
  • motion compensation uses a reference image (I or P frame) for predicting the current frame in the frame encoding process.
  • the preferred approach extends motion compensation into adjacent regions by updating the reference images at the end of the frame encoding process with information from reference frames of adjacent engines.
  • each engine stores at most two reference frames in memory. If at the end of a frame encoding process either of the two reference frames has been updated, then that reference frame is further updated to reflect the frame encoding results from adjacent engines.
  • the video encoder 100A includes a digital video receiver 160 and a compression engine subsystem 162.
  • the digital video receiver 160 receives uncompressed video from external sources in either of two different digital input formats: Panasonic 720p (progressive scan) parallel format and 1080i serial format.
  • the 1080i serial format provides uncompressed 1080 line interlaced (1080i) video at a rate of 1.484 Gbps following the SMPTE292M standard.
  • the digital receiver 160 converts the input signals into a common internal format referred to as TCI422-40 format in which 20 bits carry two Y components with 10 bit resolution, and 20 bits carry the chrominance components with 10 bit resolution.
  • the preferred embodiment of the video compression engine subsystem 162 shown in FIG. 7 includes a video input connector 200, a system manager 202, a bit allocation processor 204, several regional processors 206 and a PES header generator 208.
  • Each regional processor 206 includes a local image selection unit 210, an MPEG-2 compression unit 212, a macroblock remover and stream concatenation unit 214/216, and a local memory 218.
  • the compression subsystem 162 also includes one or more reference image managers (RIMs) 220. In the arrangement of FIG. 7, there are four RIMs 220. The RIM 220 is described further herein.
  • the MPEG-2 compression unit 212 is preferably an IBM model MM30 single package, three chip unit, though any standard MPEG-2 compression engine capable of main profile @ main level operation can be used.
  • the video input connector 200 terminates a system control bus 222 and a video data bus 224 referred to as the TCI422-40 bus.
  • the control bus 222 carries control data from a system control processor (not shown) to the system manager 202.
  • the TCI422-40 bus 224 carries video data from the digital receiver 160 (FIG. 6) .
  • the system manager 202 initializes the regional processors 206, interacts with the outside world, monitors video processing status, performs processor synchronization, and updates Frame PTS.
  • An AM29k chip manufactured by Advanced Micro Devices is used to implement this function.
  • the system manager 202 holds all execution FPGA files in an internal FLASH memory. Upon startup, the system manager initializes all processors and FPGAs with preassigned files from FLASH memory. The system manager 202 configures the following parameters for MPEG-2 compression units 212:
  • the following table gives the frame size of each MPEG-2 compression unit for 720p encoding.
  • the system manager monitors the video compression process. It polls the health status registers in the local image selection unit, the MPEG-2 unit, the macroblock remover unit and the stream concatenate unit of each regional processor 206 at a rate of once per second.
  • the system manager 202 synchronizes the frame encoding process over the nine regional processors 206.
  • the tasks required by the system manager to synchronize the parallel processors are described.
  • a scalable MPEG-2 architecture requires each regional processor 206 to finish the current frame encoding process before starting the next frame. This requirement exists because of the need to update the reference images across the adjacent parallel processors.
  • Each MPEG-2 engine uses internal reference images to compress the current image. The internal reference images are derived from the results of the compression process for the previous frames.
  • sections of the reference image are updated using reference images from adjacent processors. The following drives the need for synchronization:
  • the reference images are updated after each encoding process.
  • Each MPEG-2 compression unit must update the internal reference image using information from the reference image in the adjacent processors before it can properly encode the next image.
  • each MPEG-2 compression unit generates a current image compression complete (CICC) signal 250 after each encoding process.
  • the system manager 202 triggers the reference image manager 220 to update the internal reference images of each MPEG-2 compression unit using a common reference image update (RIU) signal 252.
  • Each reference image manager activates a reference image update complete (RIUC) signal 254 when complete.
  • the system manager triggers all local image selection units 210 to start loading the next frame into the compression units 212 through a common start image loading (SIL) signal 256.
  • the delay between the time when the RIU is activated and when the RIUC is activated may be as short as one cycle.
  • the system manager must respond promptly when all RIUC signals are activated.
  • the system manager updates the PTS in the PES header generator.
  • the system manager receives an interrupt every time regional processor #1 receives a new picture header. It then computes a new PTS value from the latched STC value at processor #1's video input port and the frame type from processor #1's compressed output port.
  • the bit allocation processor 204 is responsible for ensuring that the cumulative compressed bit rate from all of the regional processors meets the target compressed video bit rate defined externally.
  • the bit allocation processor dynamically changes the compression quality setting of each MPEG-2 engine to maintain optimal video quality.
  • the local image selection unit (LISU) 210 extracts a local image from the uncompressed input data on the TCI422-40 data bus 224. It outputs the local image in a format that complies with the input format specified by the MPEG-2 unit 212.
  • the LISU supports the following programmable registers:
  • local image location registers: These registers specify the location of a local field image within a global field image 300 (FIG. 9).
  • the registers specify points within the field image, not the reconstructed progressive image.
  • the 720p video has only one field image per frame, whereas the 1080i video has two field images per frame.
  • registers specify the corner locations of a local image 302 within a global image 300 as shown in FIG. 9.
  • the four registers are defined below:
  • Hstop register: Pixel index of the first non-active pixel after the local image.
  • Vstart register: Line index of the first active line in local image. First line in global image will have an index value of 1.
  • Vstop register: Line index of the first non-active line after the local image.
  • the macroblock remover and bit concatenation (MRBC) units 214/216 are responsible for converting the MPEG-2 main profile @ main level bit streams received from the MPEG-2 units 212 to ATSC compliant bit streams. Each MRBC unit performs two tasks: macroblock removal and bit stream concatenation by scanning and editing the bit streams from the MPEG-2 unit 212 and by communicating with other MRBC units.
  • the scalable MPEG-2 architecture (FIG. 7) employs nine MPEG-2 compression units 212 for 1080i video format encoding and 8 MPEG-2 compression units for 720p video format encoding.
  • Each MPEG-2 compression unit is responsible for compressing a specific region of the target image 300 called an active-region 310.
  • the target picture 300 is covered by the active regions 310 without overlapping.
  • Figure 10 shows raw-regions 310B and active-regions 310 for 1080i.
  • Each MPEG-2 compression unit actually compresses a larger region (raw-region) of the target picture 300 than its active-region.
  • An active-region 310 is a sub-region of the corresponding raw-region 310B. Therefore the target picture is covered also by the raw-regions, but with overlapping between adjacent raw-regions. Every raw-region 310B, active-region 310 or overlapped region 310A is ensured to have a size that is a multiple of 16 (or 32) so that the active-region can be obtained by removing some macroblocks from the raw-region.
  • the macroblock remover 214 removes the macroblocks which are in the overlap region 310A but not in the active-region 310.
  • the size of active regions is derived from the following:
  • raw_height: the height of the raw-region 310B.
  • raw_width: the width of the raw-region 310B.
  • left_alignment: the mark where the active-region 310 macroblocks 320 start horizontally; macroblocks to the left of this mark in the raw-region need to be removed.
  • right_alignment: the mark where the active-region macroblocks end horizontally; macroblocks to the right of this mark in the raw-region need to be removed.
  • top_alignment: the mark where the active-region macroblocks start vertically; macroblocks above this mark in the raw-region need to be removed.
  • head_mb_skip: the number of macroblocks skipped from left_alignment to the first non-skipped macroblock in the active-region.
  • tail_mb_skip: the number of macroblocks skipped from the last non-skipped macroblock in the active-region to right_alignment.
  • the values of the configuration vectors for each MRBC unit for 1080i encoding are given in a configuration table.
  • the MRBC unit 214/216 scans and edits the coded bit streams for slices on a row basis. Vertically, macroblocks in the area between the top of the raw-region and top_alignment, and between bottom_alignment and the bottom of the raw-region, should be removed. For each row in the raw-region, macroblocks in the area between the left start of the raw-region and left_alignment, and between right_alignment and the right end of the raw-region, should be removed. The resulting bit stream is called an mr-processed row. Since each MPEG-2 unit uses a single slice for each row, an mr-processed row is also called an mr-processed slice in this context.
  • a local variable quant_trace is used to record the value of quantiser_scale_code; it is initially set to the quantiser_scale_code in the slice header and is updated every time a quantiser_scale_code is encountered in the following macroblocks until left_alignment.
  • the macroblock remover scans the coded bit streams and performs the following procedures:
  • Updates quant_trace until left_alignment. A check is made that the macroblock_quant flag is set in the first non-skipped macroblock in the active-region. If not, the macroblock_quant flag is set, the value of quantiser_scale_code is set to the value of quant_trace, and the macroblock header is rebuilt accordingly (specific to MRBC units on the 2nd and 3rd columns for 1080i encoding). Forms the mr-processed slice by preserving only macroblocks in the active-region in the process of scanning.
  • Applies tail_mb_skip if required (specific to 1080i).
  • Bit streams from each local MPEG-2 compression unit 212 have to be concatenated to form uniform ATSC compliant DTV bit streams.
  • Every MRBC unit has to put its local mr-processed slice into the output buffer 208 (FIG. 7) at the right time.
  • the behavior of the MRBC units 214/216 has to be synchronized.
  • a token mechanism is used to synchronize MRBC units.
  • the communication model is as shown in FIG. 12.
  • For the 720p configuration, processors #2, #5, #8 are removed as shown in FIG. 13. An extra row is added to the bottom.
  • a token is an indication that the MRBC unit holding the token can send its bit stream to the output buffer along output bus 228.
  • When an MRBC unit receives a completion signal from another MRBC unit, it has the token.
  • the MRBC unit #1 is responsible for initiating new tokens.
  • the MRBC unit #1 has a time-out variable. When the time-out is reached, a fault will be generated and the system manager will reset. Tokens are sent through a designated line 270 between the MRBC units. Only one active token is allowed at any given time.
  • each DTV slice is obtained by concatenating three local slices in the three MRBC units of the same row. Since each macroblock header contains information about the number of skipped macroblocks between the previous non-skipped macroblock and the current macroblock, this information needs to be updated when the local mr-processed slice is integrated into a DTV slice. To be more specific, the first non-skipped macroblock in the second and the third local processed slices should have its header updated.
  • Proper header information is inserted into DTV bit streams by the MRBC units.
  • the header information is obtained by scanning the bit streams from the output buffer of the local MPEG-2 unit 212.
  • MRBC unit #4 and #7 are responsible for only inserting slice header information.
  • the macroblock skipping information, tail_mb_skip, from the last MRBC unit is received and combined with the local head_mb_skip.
  • the total macroblock skipping information is then inserted into the macroblock header of the first non-skipped macroblock in the mr-processed slice and the slice bit stream is then put into the DTV output buffer. Then the local tail_mb_skip is sent to the next MRBC unit via the dedicated 8-pin data bus 228.
  • MRBC units (#1, #4, #7) in the first column only send tail_mb_skip information; MRBC units (#2, #5, #8) in the second column both receive and send tail_mb_skip information; MRBC units (#3, #6, #9) in the third column only receive tail_mb_skip information.
  • Upon receiving a token signal, the MRBC unit updates the mr-processed slice and outputs it to the DTV output buffer, then turns the token over to the next MRBC unit by activating the token line.
  • the next step for sending the token for 1080i video format encoding is determined by the following rules:
  • MRBC unit #9 sends a token to MRBC unit #1 after the last mr-processed slice.
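The token traversal can be pictured with a short sketch. The Python fragment below is illustrative only: the grouping of MRBC units into three bands follows the FIG. 12 arrangement described above, but the number of slice rows per band and the exact traversal order are assumptions consistent with the stated rules, not values taken from the patent.

```python
# Illustrative token traversal for the 1080i configuration (FIG. 12).
BANDS = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]  # MRBC units sharing the same slice rows

def token_order(slice_rows_per_band: int):
    """Yield the MRBC unit holding the token, one yield per local slice.

    A DTV slice is the concatenation of three local mr-processed slices,
    so the token crosses a band left to right for every slice row; after
    the last mr-processed slice, unit #9 passes the token back to unit #1.
    """
    for band in BANDS:
        for _ in range(slice_rows_per_band):
            for unit in band:
                yield unit

order = list(token_order(4))       # 4 rows per band: illustrative value
assert order[0] == 1 and order[-1] == 9   # #9 then returns the token to #1
```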
  • Each RIM 220 transfers information from the local memory 218 within one regional processor 206 to the local memory of adjacent processors.
  • the reference images within each MPEG-2 unit are updated by the compression engine during the frame encoding process .
  • the reference buffer need only be updated by the RIM 220 if the reference buffer is modified by the frame compression process.
  • a reference image 400A is updated by the RIM 220 using information from the reference images 400 of adjacent processors. FIG. 15 shows the regions 400B within the reference images 400 of the adjacent processors used by the RIM to update regions 400C of the reference image 400A in the center processor. For those side and corner processors that do not have adjacent processors on all sides, the regions bordering those empty neighbors in the reference image will not require updating.
  • the RIM 220 keeps track of the relationship between the frame type and the reference image update based on the guidelines noted previously.
  • the RIM identifies which one of the two reference buffers A and B, if any, was updated by the MPEG-2 units at the end of each frame encoding process.
  • the RIM 220 determines when the MPEG-2 units have completed encoding of the current frame by monitoring the Henc_inc signal from the IBM encoder, the output time of the picture information block, and the vertical synch signal at the video input port.
  • the RIM computes the begin address within the local memory for the chroma and luma reference images using information from the picture information block extracted from the MPEG-2 compression unit, the Henc_int signal, and the update status of reference buffer A and B.
  • the begin address is defined by the compression configuration.
  • the RIM 220 updates all modified reference images at the end of each encoding process. The RIM updates each reference image according to the table below and as shown in FIG. 16:
  • BRl specifies the first pixel just after region
  • ARl specifies the first pixel just after region
  • BRt specifies the line just after region Rtn
  • ARt specifies the line just after region Atn
  • ARb specifies the line just after region Aln and Am
  • BRb specifies the line just after region Rln and Rrn;
  • BRr specifies the first pixel just after region
  • ARr specifies the first pixel just after region
  • RIM processors 220 perform the functions required in the preferred embodiment of FIG. 7.
  • the components within each RIM processor are shown in the block diagram of FIG. 17.
  • a single RIM processor 220 manages the local memory for four video processors designated here as Ptl, Ptr, Pbl, and Pbr.
  • the RIM has access to the 64 bit data, 9 bit address, and associated DRAM control signals.
  • 12 local managers 220A handle the different border regions 400B (FIG. 15) around each reference image 400.
  • the diagram of FIG. 18 shows the components within each local manager 220A.
  • the local manager holds four buffers 220B, 220C, two to hold the border image for reference image A, and two to hold the border image for reference image B.
  • the MPEG-2 unit reads and writes into the local memory through the local manager when manipulating data within the AR region.
  • the local manager will update one of the AR/BR buffers 220B, 220C.
  • buffer A holds data that mirrors the AR region within the local memory of the MPEG-2 unit.
  • a controller 232 within the local manager will re-map AR/BR buffer A into the BR region of the adjacent MPEG-2 unit.
  • Buffer B, which was mapped as the BR region for the adjacent MPEG-2 unit, is re-mapped back into the AR region of the center processor. It is the responsibility of the controller to re-map the buffers every time the reference image is updated through a frame encoding process.
  • the PES header generator 208 inserts the PES header in the video elementary stream.
  • the PES header generator extracts this information directly from the compressed stream.
  • the generator extracts this information from the picture header within the compressed bit stream.
  • the PES generator computes the PTS value from the following information: picture type pt, gop structure gops, and the input video STC timestamp, STCvi.
  • the PES header generator latches the STCvi value using the vertical synchronous signal going into the MPEG-2 unit of regional processor #1.
  • the computational demand required to decode an ATSC compliant video stream is also significant.
  • the common sequential decoding architecture used for standard MPEG-2 mp/ml decoding will not meet the demand.
  • the scalable architecture of the present invention uses existing MPEG-2 decoding engines to decode ATSC DTV video streams.
  • An embodiment of a decoder system comprises four parallel regional decoders.
  • the system requires that each decoder be capable of decoding a video frame size that is 2.5 times the NTSC format. This requirement is not unreasonable, since the known decoding algorithm is not demanding.
  • each decoder decodes a local region 500A, 500B, 500C, 500D of 1920 pixels by 270 lines within an ATSC frame 500 that is 1920 pixels by 1080 lines.
  • FIG. 20 shows the components of a decoder system 520 that includes a compressed stream demultiplexer 522, four parallel regional decoders 524A, 524B, 524C, 524D, reference frame stores 526A, 526B, 526C and a multiplexer 528.
  • the compressed stream demultiplexer 522 demultiplexes a compressed video stream 521 on the basis of slice header information to provide region streams 523A, 523B, 523C, 523D.
  • the regional decoder 524A, 524B, 524C, 524D decodes a compressed bit stream 523A, 523B, 523C, 523D that is fully compliant with MPEG-2 except in two respects: the stream defines a frame that is 1920 pixels by 270 lines, and the motion vectors may extend beyond the vertical dimension by a maximum of 270 lines.
  • an existing MPEG-2 decoder is modified so as to fulfill the noted exceptions spelled out for the regional decoder 524A, 524B, 524C, 524D.
  • the regional decoder must also address the decoding dependencies between adjacent regions. There exists one dependency that is critical to the decoding process: motion vector compensation.
  • the motion vector compensation procedure utilizes pixel information from a reference image to create the pixels within the current macroblock.
  • the reference image is an image created from previous I or P frames.
  • the procedure reaches into regions beyond the local region to create the pixels within the current block.
  • the maximum depth the procedure will reach is governed by the maximum length defined by the motion vectors.
  • Each regional decoder makes the reference image available for other decoders in order for the other decoders to correctly carry out the motion vector compensation procedure.
  • the reference image is shared between adjacent regions. The assumption is that the maximum motion vector will not exceed the height of each region, which is 270 lines. This is a reasonable assumption, since no realistic compressed video sequences will generate motion vectors greater than 270 lines.
  • the reference images are shared through the reference frame store as shown in FIG. 22.
  • the regional decoders simultaneously write into two memory locations 530, 532, 534, 536: one (530, 536) for future access by the current decoder, and one (532, 534) for future access by the adjacent decoders (a sketch of this dual-write store follows these definitions).
  • the embodiment thereby resolves simultaneous reading by two decoders performing the motion vector compensation routine.
  • the multiplexer 528 multiplexes the uncompressed frame regions back into a full frame.
  • the multiplexer constructs an 8 bit digital data stream following the SMPTE 274M standard.
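To make the dual-write arrangement of FIG. 22 concrete, the sketch below models the reference frame store in Python. The class name, buffer layout and line-based access are assumptions introduced for illustration; the patent specifies only that each regional decoder writes two copies of its reference region and that motion vectors stay within the 270-line sharing assumption.

```python
import numpy as np

REGION_H, WIDTH = 270, 1920  # each regional decoder handles a 1920 x 270 band

class ReferenceFrameStore:
    """Sketch of the shared reference store of FIG. 22 (names hypothetical)."""

    def __init__(self, num_decoders: int = 4):
        self.own = [np.zeros((REGION_H, WIDTH), np.uint8) for _ in range(num_decoders)]
        self.shared = [np.zeros((REGION_H, WIDTH), np.uint8) for _ in range(num_decoders)]

    def write(self, decoder_id: int, region: np.ndarray) -> None:
        # simultaneous dual write: one copy for the local decoder,
        # one copy readable by the adjacent decoders
        self.own[decoder_id][:] = region
        self.shared[decoder_id][:] = region

    def read_line(self, decoder_id: int, line: int) -> np.ndarray:
        """Fetch a reference line; out-of-band lines come from a neighbor."""
        if 0 <= line < REGION_H:
            return self.own[decoder_id][line]
        if line < 0 and decoder_id > 0:           # reach up into the band above
            return self.shared[decoder_id - 1][line + REGION_H]
        if line >= REGION_H and decoder_id < len(self.own) - 1:
            return self.shared[decoder_id + 1][line - REGION_H]
        raise IndexError("motion vector exceeds the 270-line sharing assumption")
```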

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The specification discloses a method and apparatus for encoding and decoding advanced television signals using standard MPEG-2 compression engines while maintaining the compression efficiency of such compression engines. The architecture provides parallel processing using standard MPEG-2 compression engines in an overlapping arrangement that does not sacrifice compression performance. A video encoder includes plural regional processors for encoding an input stream of video images. Each video image is divided into regions that have overlapping portions, with each processor encoding a particular region of a current video image in the stream. The regional processors each store a reference frame in a local memory based on a prior video image in the stream for use in the motion compensation of the encoding process. A reference frame processor coupled to the plural local memories updates each reference frame with information from reference frames stored in adjacent local memories. The encoded video images are made up of macroblocks and each regional processor includes means for removing certain macroblocks from the encoded video images that correspond to the overlap portions and concatenating the resulting encoded video images with that of other regional processors to provide an output video stream.

Description

METHOD AND APPARATUS FOR ADVANCED TELEVISION SIGNAL
ENCODING AND DECODING
BACKGROUND OF THE INVENTION
The Federal Communications Commission (FCC) has adopted major elements of the Advanced Television Systems Committee (ATSC) Digital Television standard for use by terrestrial broadcasters. The ATSC Digital Television (DTV) standard addresses five key components of a model system for delivering multimedia information to users. A block diagram of such a model system as defined by the International Telecommunications Union, Radio Communication Sector (ITU-R), Task Group 11/3 is shown in FIG. 1 and includes video, audio, transport, RF/transmission and receiver components. The video subsystem 100 compresses raw video into a digital video data elementary stream in accordance with the MPEG-2 standard defined by the Moving Picture Experts Group in ISO/IEC IS 13818-2 International Standard (1994), MPEG-2 Video. The audio subsystem 102 compresses raw audio into a digital audio data elementary stream in accordance with the Digital Audio Compression 3 (DAC-3) standard defined by the Audio Specialist Group within ITU.
The service multiplex and transport component 104 multiplexes the video data elementary stream, the audio data elementary stream, ancillary and control data elementary streams into a single bit stream using the transport stream syntax defined by ISO/IEC IS 13818-1 International Standard (1994), MPEG-2 Systems. The RF/transmission component 106 includes a channel coder and a modulator. The channel coder introduces additional information into the transport stream to allow the receiver 108 to reconstruct partially corrupted bit streams. The modulator encodes the digital data into RF signals using vestigial sideband transmission.
The MPEG-2 standard applies five compression techniques to achieve a high compression ratio: discrete cosine transform (DCT), difference encoding, quantization, entropy encoding and motion compensation. A DCT is applied to blocks of 8 x 8 pixels to provide 64 coefficients that represent spatial frequencies. For blocks without much detail, the high frequency coefficients have small values that can be set to zero. Video frames are encoded into intra frames (I frames) which do not rely on information from other frames to reconstruct the current frame, and inter frames, P and B, which rely on information from other frames to reconstruct the current frame. P frames rely on the previous P or I frame while B frames rely on the previous I or P and the future I or P to construct the current frame. These previous or future I and P frames are referred to as reference frames. The P and B frames include only the differences between the current frame and the adjacent frames. For low motion video sequences, the P and B frames will have very little information content.
The MPEG-2 compression algorithm performs motion estimation between adjacent frames to improve the prediction capability between frames. The compression algorithm searches for a motion vector for each group of four blocks, known as a macroblock; the vector provides the distance and direction of motion for the current macroblock.
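The MPEG-2 standard does not mandate how an encoder finds motion vectors; the search strategy is an implementation choice. Purely as an illustration, the Python sketch below performs an exhaustive full search that minimizes the sum of absolute differences (SAD) over a small window; the function name, window size and block size are arbitrary.

```python
import numpy as np

def motion_vector(ref: np.ndarray, cur: np.ndarray, y: int, x: int,
                  search: int = 16, block: int = 16):
    """Exhaustive block-matching over a +/- `search` pixel window.

    Returns the (dy, dx) displacement minimizing the SAD between the
    current macroblock at (y, x) and candidate blocks in the reference.
    """
    target = cur[y:y + block, x:x + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry + block > ref.shape[0] or rx + block > ref.shape[1]:
                continue  # candidate falls outside the reference image
            sad = np.abs(ref[ry:ry + block, rx:rx + block].astype(np.int32) - target).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```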
The DCT coefficients of each block are weighted and quantized based on a quantization matrix that matches the response of the human eye. The results are combined with the motion vectors and then encoded using variable length encoding to provide a stream for transmission.
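The transform-and-quantize step can be illustrated on a single 8 x 8 block using NumPy and SciPy. This is a simplified sketch, not the bitstream-exact MPEG-2 procedure: the quantization matrix below is a stand-in with the general shape of the default intra matrix (coarser steps at higher frequencies), and the scaling rule is approximate.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Stand-in intra quantization matrix: step size grows with spatial frequency.
QUANT = 8 + 2 * (np.arange(8)[:, None] + np.arange(8)[None, :])

def encode_block(block: np.ndarray, quantiser_scale: int = 2) -> np.ndarray:
    """Forward DCT and quantization of one 8x8 pixel block."""
    coeffs = dctn(block.astype(np.float64) - 128, norm="ortho")  # 64 coefficients
    return np.round(coeffs * 16 / (QUANT * quantiser_scale)).astype(np.int16)

def decode_block(q: np.ndarray, quantiser_scale: int = 2) -> np.ndarray:
    coeffs = q.astype(np.float64) * (QUANT * quantiser_scale) / 16
    return np.clip(idctn(coeffs, norm="ortho") + 128, 0, 255).astype(np.uint8)

# For a block without detail, every high-frequency coefficient quantizes
# to zero and only the DC term survives, as described above.
flat = np.full((8, 8), 120, np.uint8)
assert np.count_nonzero(encode_block(flat)) <= 1
```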
The computational demand required to carry out the video compression specified in the MPEG-2 standard is significant. For applications in which real-time compression is necessary, such as live broadcast, the approach taken to achieve such video compression becomes critical. There are two known approaches for implementing MPEG-2 compression: slice-based and macroblock-based. In the slice-based approach shown in FIG. 2, a video frame 120 is divided into several contiguous regions 120A. Each region is assigned to a different processor (P1, P2, P3, P4, P5) for processing. A dedicated central processor 122 manages the overall compression operation. In the macroblock-based approach shown in FIG. 3, each macroblock 124 is completely processed and delivered to an output buffer 126 before processing the next macroblock. The MPEG-2 standard defines algorithmic tools known as profiles and sets of constraints on parameter values (e.g., picture size, bit rate) known as levels. The known MPEG-2 compression engines noted above have been designed to meet the main profile @ main level portion of the standard for conventional broadcast television signals such as NTSC and PAL. The main level is specified as 720 pixels by 480 active lines at 30 frames per second. In contrast, the DTV signal is specified as 1920 pixels by 1080 active lines at 30 frames per second. This is known as the MPEG-2 high level. The computational demand needed for the DTV signal specified as main profile @ high level is approximately six times that needed for existing standard television signals specified as main profile @ main level (1920 x 1080 = 2,073,600 active pixels versus 720 x 480 = 345,600, a factor of six).
SUMMARY OF THE INVENTION
It would be desirable to take advantage of existing MPEG-2 compression engines to encode higher definition video signals while maintaining the compression efficiency of such compression engines.
The method and apparatus of the present invention provides an architecture capable of addressing the computational demand required for high-definition video signals, such as a DTV signal compliant with MPEG-2 main profile @ high level, using standard MPEG-2 compression engines operating in the main profile @ main level mode. The invention provides parallel processing using such standard MPEG-2 compression engines in an overlapping arrangement that does not sacrifice compression performance.
Accordingly, a video encoder of the present invention comprises plural regional processors for encoding an input stream of video images. Each video image is divided into regions that have overlapping portions, with each processor encoding a particular region of a current video image in the stream according to an encoding process that includes motion compensation such as MPEG-2 main profile @ main level. The regional processors each store a reference frame in a local memory based on a prior video image in the stream for use in the motion compensation of the encoding process. A reference frame processor coupled to the plural local memories updates each reference frame with information from reference frames stored in adjacent local memories. The encoded video images are made up of macroblocks and each regional processor includes means for removing certain macroblocks from the encoded video images that correspond to the overlap portions and concatenating the resulting encoded video images with that of other regional processors to provide an output video stream.
In an embodiment, the regional processors each include an image selection unit for selecting a particular image region from each of the video images. A compression engine compresses the selected image region to provide a compressed image region stream of macroblocks. A macroblock remover removes certain macroblocks from the compressed image region stream that correspond to the overlapping portions. A stream concatenation unit concatenates the compressed image region stream with such streams from each regional processor to provide an output video stream.
While the preferred embodiment includes multiple regional processors for processing the overlapping regions, the present invention encompasses single processor embodiments in which each region is processed successively.
According to another aspect of the invention, a video decoder includes a demultiplexer, multiple regional decoders, a reference frame memory and a multiplexer. The demultiplexer demultiplexes a compressed stream of video images to plural region streams. Each video image is divided into contiguous regions, each region stream being associated with a particular region. The regional decoders each decode a particular region stream according to a decoding process that includes motion compensation such as MPEG-2 main profile at main level. The reference frame memory stores reference frames associated with each regional decoder. The regional decoders retrieve reference frames of adjacent regions for use in the motion compensation process. The multiplexer multiplexes the decoded region streams to a decoded output stream.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. FIG. 1 is a block diagram of a model advanced television system.
FIG. 2 is a block diagram illustrating a slice-based compression approach for MPEG-2 main profile at main level. FIG. 3 is a block diagram illustrating a macroblock-based compression approach for MPEG-2 main profile at main level.
FIG. 4 is a diagram illustrating a first processor arrangement in accordance with the present invention.
FIG. 5 is a diagram illustrating a preferred processor arrangement in accordance with the present invention.
FIG. 6 is a block diagram of a video encoding subsystem of the present invention.
FIG. 7 (includes Figs. 7A-7C) is a schematic block diagram of a video compression engine of the video subsystem of FIG. 6.
FIG. 8 is a block diagram illustrating a synchronization configuration for the compression engine of FIG. 7.
FIG. 9 is a diagram illustrating local image selection from a global image for the engine of FIG. 7.
FIG. 10 is a diagram illustrating raw and active regions of the global image of FIG. 9.
FIG. 11 is a diagram illustrating an active region within a raw region of the image of FIG. 10.
FIG. 12 is a block diagram of a token passing arrangement for a 1080i video processing configuration. FIG. 13 is a block diagram of a token passing arrangement for a 720p video processing configuration.
FIG. 14 is a block diagram illustrating allocation of reference images in reference buffers of a local memory of the system of FIG. 7. FIG. 15 is a diagram illustrating the reference image updating arrangement of the present invention.
FIG. 16 is a diagram illustrating regions of a reference image in accordance with the invention. FIG. 17 is a block diagram of a reference image manager of the system of FIG. 7.
FIG. 18 is a block diagram of a local manager of the reference image manager of FIG. 17.
FIG. 19 is a diagram illustrating the decoding arrangement of the present invention.
FIG. 20 is a block diagram of a decoder system of the present invention.
FIG. 21 is a diagram illustrating motion compensation in the decoder system of FIG. 20. FIG. 22 is a block diagram of the reference frame store of the decoder system of FIG. 20.
DETAILED DESCRIPTION OF THE INVENTION
The present invention employs a parallel processing arrangement that takes advantage of known MPEG-2 main profile at main level (mp/ml) compression engines to provide a highly efficient compression engine for encoding high definition television signals such as the DTV signal that is compliant with MPEG-2 main profile at high level. A first approach to using MPEG-2 compression engines in a parallel arrangement is shown in FIG. 4. In this arrangement, a total of nine MPEG-2 mp/ml compression engines are configured to process contiguous regions encompassing an ATSC DTV video image 142 (1920 pixels by 1080 lines). Each MPEG-2 mp/ml engine is capable of processing a region 144 equivalent to an NTSC video image (720 pixels by 480 lines). As shown in FIG. 4, engines 3, 6, 7, 8 and 9 encode regions smaller than NTSC images. The compression provided by this first approach is less than optimal. The motion compensation performed within each engine is naturally constrained to not search beyond its NTSC image format boundaries. As a result, macroblocks along the boundaries between assigned engine areas may not necessarily benefit from motion compensation.
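The geometry of this first arrangement can be checked with a few lines of Python. This is only a sketch of the FIG. 4 partition; the row-major engine numbering is an assumption, though it reproduces the observation above that engines 3, 6, 7, 8 and 9 receive regions smaller than an NTSC image.

```python
# Contiguous 3x3 partition of a 1920x1080 DTV frame into regions no larger
# than the 720x480 NTSC format an mp/ml engine can process.
FRAME_W, FRAME_H = 1920, 1080
MAX_W, MAX_H = 720, 480

def contiguous_regions():
    regions, engine = [], 1
    for top in range(0, FRAME_H, MAX_H):
        for left in range(0, FRAME_W, MAX_W):
            w = min(MAX_W, FRAME_W - left)   # right column is only 480 wide
            h = min(MAX_H, FRAME_H - top)    # bottom row is only 120 tall
            regions.append((engine, left, top, w, h))
            engine += 1
    return regions

for engine, left, top, w, h in contiguous_regions():
    print(f"engine {engine}: {w}x{h} at ({left},{top})")
# Engines 3 and 6 get 480x480 regions; engines 7-9 get 120-line regions.
```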
The preferred approach of the present invention shown in FIG. 5 provides a parallel arrangement of MPEG-2 compression engines in which the engines are configured to process overlapping regions 146, 148, 150, 152 of an ATSC DTV video image 142. With the preferred approach, motion compensation performed by a particular engine for its particular region is extended into adjacent regions. As noted in the background, motion compensation uses a reference image (I or P frame) for predicting the current frame in the frame encoding process. The preferred approach extends motion compensation into adjacent regions by updating the reference images at the end of the frame encoding process with information from reference frames of adjacent engines.
As described further herein, each engine stores at most two reference frames in memory. If at the end of a frame encoding process either of the two reference frames has been updated, then that reference frame is further updated to reflect the frame encoding results from adjacent engines.
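A rough sketch of this border update follows. Everything concrete in it is an assumption introduced for illustration: the overlap depth, the grid layout, and the source offsets within the neighbors' reference images. The patent defines the actual regions via the boundary markers of FIG. 16.

```python
import numpy as np

OVERLAP = 32  # border depth exchanged between adjacent raw-regions (assumed)

def update_reference_borders(refs: dict, grid_w: int, grid_h: int) -> None:
    """Sketch of the reference image update of FIG. 15.

    After each frame is encoded, the borders of every engine's reference
    image are overwritten with interior pixels from the neighboring
    engines' reference images, so the next frame's motion search
    effectively extends into adjacent regions. `refs[(col, row)]` maps a
    grid position to that engine's reference image array.
    """
    src = {k: v.copy() for k, v in refs.items()}  # read pre-update images
    for (c, r), img in refs.items():
        if c > 0:                                  # left border from left neighbor
            img[:, :OVERLAP] = src[(c - 1, r)][:, -2 * OVERLAP:-OVERLAP]
        if c < grid_w - 1:                         # right border from right neighbor
            img[:, -OVERLAP:] = src[(c + 1, r)][:, OVERLAP:2 * OVERLAP]
        if r > 0:                                  # top border from upper neighbor
            img[:OVERLAP, :] = src[(c, r - 1)][-2 * OVERLAP:-OVERLAP, :]
        if r < grid_h - 1:                         # bottom border from lower neighbor
            img[-OVERLAP:, :] = src[(c, r + 1)][OVERLAP:2 * OVERLAP, :]
```

Side and corner engines simply skip the updates for missing neighbors, matching the behavior described for the edge processors.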
A preferred embodiment of a video encoder 100A of the present invention is shown in FIG. 6. The video encoder 100A includes a digital video receiver 160 and a compression engine subsystem 162. The digital video receiver 160 receives uncompressed video from external sources in either of two different digital input formats: Panasonic 720p (progressive scan) parallel format and 1080i serial format. The 1080i serial format provides uncompressed 1080 line interlaced (1080i) video at a rate of 1.484 Gbps following the SMPTE292M standard. The digital receiver 160 converts the input signals into a common internal format referred to as TCI422-40 format in which 20 bits carry two Y components with 10 bit resolution, and 20 bits carry the chrominance components with 10 bit resolution.
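The 40-bit TCI422-40 word can be sketched as follows. The patent specifies only the bit budget (20 bits for two 10-bit Y samples, 20 bits for the chrominance pair of a 4:2:2 sample group); the field ordering within the word in this Python sketch is an assumption.

```python
def pack_tci422_40(y0: int, y1: int, cb: int, cr: int) -> int:
    """Pack one TCI422-40 word: two 10-bit luma samples plus the 10-bit
    Cb/Cr pair. Field ordering is assumed, not taken from the patent."""
    for v in (y0, y1, cb, cr):
        assert 0 <= v < 1 << 10, "samples are 10-bit"
    return (y0 << 30) | (y1 << 20) | (cb << 10) | cr

def unpack_tci422_40(word: int):
    return ((word >> 30) & 0x3FF, (word >> 20) & 0x3FF,
            (word >> 10) & 0x3FF, word & 0x3FF)

assert unpack_tci422_40(pack_tci422_40(512, 513, 300, 700)) == (512, 513, 300, 700)
```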
The preferred embodiment of the video compression engine subsystem 162 shown in FIG. 7 includes a video input connector 200, a system manager 202, a bit allocation processor 204, several regional processors 206 and a PES header generator 208. There are nine regional processors 206 shown in the arrangement of FIG. 7, though other arrangements are possible, e.g., an arrangement of 12 regional processors can be implemented to provide a greater range of motion compensation. Each regional processor 206 includes a local image selection unit 210, an MPEG-2 compression unit 212, a macroblock remover and stream concatenation unit 214/216, and a local memory 218. The compression subsystem 162 also includes one or more reference image managers (RIMs) 220. In the arrangement of FIG. 7, there are four RIMs 220. The RIM 220 is described further herein.
The MPEG-2 compression unit 212 is preferably an IBM model MM30 single package, three chip unit, though any standard MPEG-2 compression engine capable of main profile @ main level operation can be used.
The video input connector 200 terminates a system control bus 222 and a video data bus 224 referred to as the TCI422-40 bus. The control bus 222 carries control data from a system control processor (not shown) to the system manager 202. The TCI422-40 bus 224 carries video data from the digital receiver 160 (FIG. 6) .
The system manager 202 initializes the regional processors 206, interacts with the outside world, monitors video processing status, performs processor synchronization, and updates Frame PTS. An AM29k chip manufactured by Advanced Micro Devices is used to implement this function. The system manager 202 holds all execution FPGA files in an internal FLASH memory. Upon startup, the system manager initializes all processors and FPGAs with preassigned files from FLASH memory. The system manager 202 configures the following parameters for MPEG-2 compression units 212:
• The GOP structure
• The frame rate
• Progressive encoding for 720p video, interlaced encoding for 1080i video.
• The encoded frame size
The following table gives the frame size of each MPEG-2 compression unit 212 for 1080i encoding.
[Table: frame size of each MPEG-2 compression unit for 1080i encoding]
The following table gives the frame size of each MPEG-2 compression unit for 720p encoding.
[Table: frame size of each MPEG-2 compression unit for 720p encoding]
The system manager monitors the video compression process. It polls the health status registers in the local image selection unit, the MPEG-2 unit, the macroblock remover unit and the stream concatenate unit of each regional processor 206 at a rate of once per second.
The system manager 202 synchronizes the frame encoding process over the nine regional processors 206. The following presents the motivations behind the need to synchronize. Next, the tasks required by the system manager to synchronize the parallel processors are described.
A scalable MPEG-2 architecture requires each regional processor 206 to finish the current frame encoding process before starting the next frame. This requirement exists because of the need to update the reference images across the adjacent parallel processors. Each MPEG-2 engine uses internal reference images to compress the current image. The internal reference images are derived from the results of the compression process for the previous frames. In the scalable MPEG-2 architecture of the present invention, sections of the reference image are updated using reference images from adjacent processors. The following drives the need for synchronization:
1. The reference images are updated after each encoding process.
2. Each MPEG-2 compression unit must update the internal reference image using information from the reference image in the adjacent processors before it can properly encode the next image.

Referring now to FIG. 8, each MPEG-2 compression unit generates a current image compression complete (CICC) signal 250 after each encoding process. When all CICC signals are detected, the system manager 202 triggers the reference image manager 220 to update the internal reference images of each MPEG-2 compression unit using a common reference image update (RIU) signal 252. The system manager must respond promptly when all CICC signals are active, since any delay will cut into the MPEG-2 engine encoding time.
Each reference image manager activates a reference image update complete (RIUC) signal 254 when complete. When all RIUC signals are detected, the system manager triggers all local image selection units 210 to start loading the next frame into the compression units 212 through a common start image loading (SIL) signal 256. The delay between the time when the RIU is activated and when the RIUC is activated may be as short as one cycle. The system manager must respond promptly when all RIUC signals are activated.
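Rendered as software, the handshake amounts to two wait-then-strobe steps per frame. The following sketch is a simplified software analogue of what is in fact a hardware signaling scheme; the four helper functions are hypothetical stand-ins for the CICC, RIU, RIUC and SIL signal lines.

    /* Hypothetical helpers standing in for the hardware signal lines. */
    extern int  all_cicc_active(void);  /* every unit finished the frame     */
    extern void pulse_riu(void);        /* common reference image update     */
    extern int  all_riuc_active(void);  /* every RIM finished its update     */
    extern void pulse_sil(void);        /* common start image loading strobe */

    /* One frame of the synchronization cycle run by the system manager. */
    static void sync_one_frame(void)
    {
        while (!all_cicc_active())
            ;                /* any delay here cuts into MPEG-2 encode time */
        pulse_riu();         /* let the RIMs update the reference images    */

        while (!all_riuc_active())
            ;                /* RIUC can follow RIU within a single cycle   */
        pulse_sil();         /* start loading the next frame                */
    }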
The system manager updates the PTS in the PES header generator. The system manager receives an interrupt every time regional processor #1 receives a new picture header. It then computes a new PTS value from the latched STC value at processor #1's video input port and the frame type from processor #1's compressed output port.
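The patent does not give the PTS arithmetic, so the following is only a plausible sketch: the latched STC is offset by a fixed encoder delay plus a reordering offset that depends on the frame type. TICKS_PER_FRAME, ENCODER_DELAY_FRAMES and the reordering rule are all assumptions.

    /* Plausible (assumed) PTS computation from latched STC and frame type. */
    #define TICKS_PER_FRAME      3003L  /* 90 kHz ticks at 29.97 fps (assumed) */
    #define ENCODER_DELAY_FRAMES 4L     /* fixed pipeline delay (assumed)      */

    static long compute_pts(long stc_latched, char frame_type, int gop_m)
    {
        /* I and P frames are presented gop_m - 1 frame periods after the
           B frames that precede them in display order (assumed rule). */
        long reorder = (frame_type == 'B') ? 0 : (long)(gop_m - 1);
        return stc_latched
             + ENCODER_DELAY_FRAMES * TICKS_PER_FRAME
             + reorder * TICKS_PER_FRAME;
    }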
The bit allocation processor 204 is responsible for ensuring that the cumulative compressed bit rate from all of the regional processors meets the target compressed video bit rate defined externally. The bit allocation processor dynamically changes the compression quality setting of each MPEG-2 engine to maintain optimal video quality.
The local image selection unit (LISU) 210 extracts a local image from the uncompressed input data on the TCI422-40 data bus 224. It outputs the local image in a format that complies with the input format specified by the MPEG-2 unit 212. The LISU supports the following programmable registers:

1. Input video format register: defines the video format of the data on the TCI422-40 bus. 0 = 1080i, 1 = 720p.
2. Local image location registers: specify the location of a local field image within a global field image 300 (FIG. 9).
The registers specify points within the field image, not the reconstructed progressive image. Keep in mind that 720p video has only one field image per frame, whereas 1080i video has two field images per frame.
Four registers specify the corner locations of a local image 302 within a global image 300 as shown in FIG. 9. The four registers are defined below:
Hstart register: Pixel index of the first active pixel in local image 302. First pixel in global image 300 will have an index value of 1.
Hstop register: Pixel index of the first non-active pixel after the local image.
Vstart register: Line index of the first active line in local image. First line in global image will have an index value of 1.
Vstop register: Line index of the first non-active line after the local image.
The following table gives the values of these registers for the different formats supported by each MPEG-2 unit 212; a sketch of the register programming follows the table.
[Table of LISU register values per format omitted from the source text.]
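Since Hstart/Vstart are 1-based and Hstop/Vstop point one past the region, the register values for any local image follow directly from its position and size. A minimal sketch, assuming the conventions defined above:

    /* LISU corner registers for a local image whose top-left corner sits at
       (x, y) in the global field image, using 1-based pixel/line indices. */
    typedef struct { int hstart, hstop, vstart, vstop; } lisu_regs;

    static lisu_regs lisu_program(int x, int y, int width, int height)
    {
        lisu_regs r;
        r.hstart = x;             /* first active pixel of the local image  */
        r.hstop  = x + width;     /* first non-active pixel after the image */
        r.vstart = y;             /* first active line of the local image   */
        r.vstop  = y + height;    /* first non-active line after the image  */
        return r;
    }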
The macroblock remover and bit concatenation (MRBC) units 214/216 are responsible for converting the MPEG-2 main profile @ main level bit streams received from the MPEG-2 units 212 into ATSC compliant bit streams. Each MRBC unit performs two tasks, macroblock removal and bit stream concatenation, by scanning and editing the bit streams from the MPEG-2 unit 212 and by communicating with other MRBC units.
The scalable MPEG-2 architecture (FIG. 7) employs nine MPEG-2 compression units 212 for 1080i video format encoding and eight MPEG-2 compression units for 720p video format encoding.
Each MPEG-2 compression unit is responsible for compressing a specific region of the target image 300 called an active-region 310. The target picture 300 is covered by the active-regions 310 without overlapping. FIG. 10 shows raw-regions 310B and active-regions 310 for 1080i.
Each MPEG-2 compression unit actually compresses a larger region (raw-region) of the target picture 300 than its active-region. An active-region 310 is a sub-region of the corresponding raw-region 310B; therefore the target picture is also covered by the raw-regions, but with overlap between adjacent raw-regions. Every raw-region 310B, active-region 310 and overlapped region 310A is ensured to have dimensions that are multiples of 16 (or 32) so that the active-region can be obtained by removing macroblocks from the raw-region.
The macroblock remover 214 removes the macroblocks which are in the overlap region 310A but not in the active-region 310.
The sizes of the active regions and overlaps are derived from the following:

Hact1 = 592. Vact1 = Vact3 = 352.
Hact2 = 480. Vact2 = 128.
Hact3 = 608. Vol = 128.
Hol12 = 128. Hol23 = 112.
Referring now to FIG. 11, for each MPEG-2 compression unit 212, the following integer parameters are defined with respect to the macroblock positions:

raw_height: the height of the raw-region 310B.
raw_width: the width of the raw-region 310B.
left_alignment: the mark where the active-region 310 macroblocks 320 start horizontally; macroblocks to the left of this mark in the raw-region must be removed.
right_alignment: the mark where the active-region macroblocks end horizontally; macroblocks to the right of this mark in the raw-region must be removed.
top_alignment: the mark where the active-region macroblocks start vertically; macroblocks above this mark in the raw-region must be removed.
bottom_alignment: the mark where the active-region macroblocks end vertically; macroblocks below this mark in the raw-region must be removed.

Two other parameters are specific to 1080i:

head_mb_skip: the number of macroblocks skipped between left_alignment and the first non-skipped macroblock in the active-region.
tail_mb_skip: the number of macroblocks skipped from the last non-skipped macroblock in the active-region to right_alignment.
For convenience of expression, the following notation is used to denote these parameters: ((raw_width, raw_height), (left_alignment, right_alignment), (top_alignment, bottom_alignment)), called the configuration vector for the MRBC unit. This configuration vector defines the boundaries of the raw-region 310B and the active-region 310 of the current MPEG-2 compression unit, and hence which macroblocks need to be removed and which need to be kept.
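In code, the configuration vector reduces to a small record plus a keep/drop predicate. The patent does not state whether the alignment marks are inclusive or exclusive, so the half-open comparison below is an assumed convention:

    /* Configuration vector for one MRBC unit, in macroblock units. */
    typedef struct {
        int raw_width, raw_height;
        int left_alignment, right_alignment;
        int top_alignment, bottom_alignment;
    } mrbc_config;

    /* Nonzero if the macroblock at (col, row) of the raw-region belongs to
       the active-region; bounds treated as [left, right) and [top, bottom),
       an assumed convention. */
    static int mb_in_active_region(const mrbc_config *c, int col, int row)
    {
        return col >= c->left_alignment && col < c->right_alignment &&
               row >= c->top_alignment  && row < c->bottom_alignment;
    }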
The values of the configuration vectors for each MRBC unit for 1080i are as follows:
#1: ((45, 30), (0, 41), (0, 26))
#2: ((45, 30), (4, 41), (0, 26))
#3: ((45, 30), (3, 45), (0, 26))
#4: ((45, 24), (0, 41), (4, 20))
#5: ((45, 24), (4, 41), (4, 20))
#6: ((45, 24), (3, 45), (4, 20))
#7: ((45, 30), (0, 41), (4, 30))
#8: ((45, 30), (4, 41), (4, 30))
#9: ((45, 30), (3, 45), (4, 30))
The MRBC unit 214/216 scans and edits the coded bit streams for slices on a row basis. Vertically, macroblock rows between the top of the raw-region and top_alignment, and between bottom_alignment and the bottom of the raw-region, are removed. Horizontally, for each remaining row in the raw-region, macroblocks between the left edge of the raw-region and left_alignment, and between right_alignment and the right edge of the raw-region, are removed. The resulting bit stream is called an mr-processed row. Since each MPEG-2 unit uses a single slice for each row, an mr-processed row is also called an mr-processed slice in this context.
For 1080i, besides producing mr-processed rows, some of the macroblock removers must produce values for head_mb_skip and/or tail_mb_skip. Furthermore, a local variable quant_trace records the value of quantiser_scale_code; it is initially set to the quantiser_scale_code in the slice header and is updated every time a quantiser_scale_code is encountered in the following macroblocks, until left_alignment.
Starting from a given slice, the macroblock remover scans the coded bit stream and performs the following procedures:

1. Computes head_mb_skip if required (specific to 1080i).
2. Updates quant_trace until left_alignment. A check is made that the macroblock_quant flag is set in the first non-skipped macroblock in the active-region; if not, the macroblock_quant flag is set, the quantiser_scale_code is set to the value of quant_trace, and the macroblock header is rebuilt accordingly (specific to MRBC units in the second and third columns for 1080i encoding).
3. Forms the mr-processed slice by preserving only the macroblocks in the active-region during the scan.
4. Computes tail_mb_skip if required (specific to 1080i).
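A high-level rendering of that per-slice scan is sketched below. The bitstream parsing and editing helpers are hypothetical stand-ins for the real stream editor, and the head_mb_skip/tail_mb_skip bookkeeping is omitted for brevity.

    /* Hypothetical helpers standing in for the bit stream editor. */
    extern int  next_macroblock(void);             /* column index, -1 at end   */
    extern int  mb_has_quant(int col);             /* macroblock_quant flag set? */
    extern int  mb_quant_code(int col);            /* its quantiser_scale_code  */
    extern void rebuild_mb_header(int col, int q); /* force quant into header   */
    extern void copy_macroblock(int col);          /* append to mr-processed row */

    static void scan_slice(const mrbc_config *c, int slice_quant)
    {
        int quant_trace = slice_quant;  /* starts at the slice header value */
        int first_kept  = 1;
        int col;

        while ((col = next_macroblock()) >= 0) {
            if (col < c->left_alignment) {        /* left overlap: drop it,  */
                if (mb_has_quant(col))            /* but track the quantiser */
                    quant_trace = mb_quant_code(col);
                continue;
            }
            if (col >= c->right_alignment)        /* right overlap: drop it  */
                continue;
            if (first_kept && !mb_has_quant(col)) /* re-inject the quantiser */
                rebuild_mb_header(col, quant_trace);
            first_kept = 0;
            copy_macroblock(col);                 /* keep active-region mb   */
        }
    }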
Bit streams from each local MPEG-2 compression unit 212 have to be concatenated to form uniform ATSC compliant DTV bit streams. Every MRBC unit has to put its local mr-processed slice into the output buffer 208 (FIG. 7) at the right time. In other words, the behavior of the MRBC units 214/216 has to be synchronized. Thus, a token mechanism is used to synchronize MRBC units.
For 1080i video format encoding, the communication model is as shown in FIG. 12. For 720p video format encoding, processors #2, #5 and #8 are removed as shown in FIG. 13, and an extra row is added to the bottom.
A token is an indication that the MRBC unit holding it can send its bit stream to the output buffer along output bus 228. When an MRBC unit receives a completion signal from another MRBC unit, it has the token. MRBC unit #1 is responsible for initiating new tokens and has a time-out variable; when the time-out is reached, a fault is generated and the system manager resets. Tokens are sent through a designated line 270 between the MRBC units. Only one active token is allowed at any given time.
For 1080i video format encoding, each DTV slice is obtained by concatenating three local slices in the three MRBC units of the same row. Since each macroblock header contains information about the number of skipped macroblocks between the previous non-skipped macroblock and the current macroblock, this information needs to be updated when the local mr-processed slice is integrated into a DTV slice. To be more specific, the first non-skipped macroblock in the second and the third local processed slices should have its header updated.
Proper header information is inserted into DTV bit streams by the MRBC units. The header information is obtained by scanning the bit streams from the output buffer of the local MPEG-2 unit 212. For 1080i video format encoding, MRBC units #4 and #7 are responsible only for inserting slice header information.
The macroblock skipping information, tail_mb_skip, from the previous MRBC unit is received and combined with the local head_mb_skip. The total macroblock skipping information is then inserted into the macroblock header of the first non-skipped macroblock in the mr-processed slice, and the slice bit stream is put into the DTV output buffer. The local tail_mb_skip is then sent to the next MRBC unit via the dedicated 8-pin data bus 228. MRBC units #1, #4 and #7 in the first column only send tail_mb_skip information; MRBC units #2, #5 and #8 in the second column both receive and send tail_mb_skip information; MRBC units #3, #6 and #9 in the third column only receive tail_mb_skip information.
Upon receiving a token signal, the MRBC unit updates the mr-processed slice and outputs it to the DTV output buffer, then turns the token over to the next MRBC unit by activating the token line. The next destination of the token for 1080i video format encoding is determined by the following rules:
- If there is a next MRBC unit in the same row, send the token to that unit.
- If the current MRBC unit is at the end of its row, it sends the token to the first MRBC unit of the same row if the slice sent to the output buffer is not the last slice in the active-region; it sends the token to the first MRBC unit of the next row if the slice sent is the last slice in the active-region. One exception: MRBC unit #9 sends the token to MRBC unit #1 after the last mr-processed slice.
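For the 3-by-3 arrangement used in 1080i encoding, these routing rules reduce to a few comparisons. A sketch, with units numbered 1 through 9 in row-major order as in FIG. 12:

    /* Next holder of the token for 1080i encoding. region_done is nonzero
       when the slice just output was the last slice of the active-region. */
    static int next_token_unit(int unit, int region_done)
    {
        if (unit % 3 != 0)                  /* not at the end of its row     */
            return unit + 1;
        if (!region_done)                   /* more slices left in this row  */
            return unit - 2;                /* first unit of the same row    */
        return (unit == 9) ? 1 : unit + 1;  /* next row, wrapping 9 to 1     */
    }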
As noted above, the reference image managers (RIMs) 220 (FIG. 7) manage the updating of reference images used by each of the MPEG-2 compression units 212. Each RIM 220 transfers information from the local memory 218 within one regional processor 206 to the local memory of adjacent processors. The reference images within each MPEG-2 unit are updated by the compression engine during the frame encoding process. There are two reference frames stored in each local memory 218. Only one reference frame is updated during each frame encoding period, and the reference frames are updated only when encoding I or P frames. The following example, illustrated in FIG. 14, shows the order in which the two reference frames are updated.

Consider two reference images stored in reference buffer A and reference buffer B of local memory 218, and a compressed output sequence IBBPBBPBBIBB. The compression process for the first I frame creates a reference image, which is stored in reference buffer A. The compression process for the next two B frames does not create any reference images. The compression process for the next P frame creates a reference image, which is stored in reference buffer B. No new reference images are created until the next P frame, whose reference image is stored in reference buffer A. The previous reference image created when compressing the I frame is then lost.
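The alternation between the two reference buffers can be tracked with a single toggle, as in the following sketch:

    /* Which reference buffer (0 = A, 1 = B) the next I or P frame will
       overwrite; the first I frame lands in buffer A. */
    static int next_ref = 0;

    /* Returns the buffer updated by this frame, or -1 if none was
       (B frames never create reference images). */
    static int ref_buffer_written(char frame_type)
    {
        if (frame_type == 'B')
            return -1;
        {
            int written = next_ref;
            next_ref ^= 1;          /* alternate between A and B */
            return written;
        }
    }

Running this over the sequence IBBPBBPBBIBB reproduces the order described above: the I frame writes buffer A, the first P writes buffer B, and the second P overwrites buffer A.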
The reference buffer need only be updated by the RIM 220 if it was modified by the frame compression process. A reference image 400A is updated by the RIM 220 using information from the reference images 400 of adjacent processors. FIG. 15 shows the regions 400B within the reference images 400 of the adjacent processors that the RIM uses to update regions 400C of the reference image 400A in the center processor. For side and corner processors that lack adjacent processors on some sides, the regions bordering those empty neighbors in the reference image do not require updating.
The RIM 220 keeps track of the relationship between the frame type and the reference image update based on the guidelines noted previously. The RIM identifies which one of the two reference buffers A and B, if any, was updated by the MPEG-2 units at the end of each frame encoding process. The RIM 220 determines when the MPEG-2 units have completed encoding of the current frame by monitoring the Henc_inc signal from the IBM encoder, the output time of the picture information block, and the vertical sync signal at the video input port.
The RIM computes the begin address within the local memory for the chroma and luma reference images using information from the picture information block extracted from the MPEG-2 compression unit, the Henc_inc signal, and the update status of reference buffers A and B. One can assume that the luma and chroma reference begin addresses will not change between frame encoding processes; the begin address is defined by the compression configuration. The RIM 220 updates all modified reference images at the end of each encoding process. The RIM updates each reference image according to the table below and as shown in FIG. 16:
[Table of reference image update regions omitted from the source text.]
BRl specifies the first pixel just after regions Rtn and Rbn;
ARl specifies the first pixel just after regions Atn and Abn;
BRt specifies the line just after region Rtn;
ARt specifies the line just after region Atn;
ARb specifies the line just after regions Aln and Arn;
BRb specifies the line just after regions Rln and Rrn;
BRr specifies the first pixel just after regions Rbn and Rtn;
ARr specifies the first pixel just after regions Abn and Atn.

Four RIM processors 220 perform the functions required in the preferred embodiment of FIG. 7. The components within each RIM processor are shown in the block diagram of FIG. 17. A single RIM processor 220 manages the local memory for four video processors designated here as Ptl, Ptr, Pbl and Pbr. The RIM has access to the 64-bit data, 9-bit address, and associated DRAM control signals. Within the RIM, 12 local managers 220A handle the different border regions 400B (FIG. 15) around each reference image 400. The diagram of FIG. 18 shows the components within each local manager 220A. The local manager holds four buffers 220B, 220C: two to hold the border image for reference image A, and two to hold the border image for reference image B. During each frame encoding process, the MPEG-2 unit reads and writes into the local memory
218 (FIG. 7) when manipulating data within the AR region. Simultaneously, the local manager updates one of the AR/BR buffers 220B, 220C. In this example, buffer A holds data that mirrors the AR region within the local memory of the MPEG-2 unit. At the end of the frame encoding process, a controller 232 within the local manager re-maps AR/BR buffer A into the BR region of the adjacent MPEG-2 unit. Buffer B, which was mapped as the BR region for the adjacent MPEG-2 unit, is re-mapped back into the AR region of the center processor. It is the responsibility of the controller to re-map the buffers every time the reference image is updated through a frame encoding process.
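The remap itself is a pure role swap, which the following sketch captures; the structure and names are assumptions for illustration only.

    /* Hypothetical model of one local manager's AR/BR buffer pair. */
    typedef struct {
        void *buf[2];   /* the two border buffers (220B, 220C)            */
        int   ar;       /* index currently mirroring the local AR region  */
    } local_manager;

    /* Called after each frame encoding that updated the reference image:
       the AR mirror becomes the neighbor's BR region and vice versa. */
    static void remap_after_encode(local_manager *lm)
    {
        lm->ar ^= 1;    /* swap the roles of the two buffers */
    }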
The PES header generator 208 inserts the PES header into the video elementary stream, extracting the needed information directly from the compressed stream. The picture type is extracted from the picture header within the compressed bit stream. The PTS value is computed from the following information: the picture type pt, the GOP structure gops, and the input video STC timestamp STCvi. The PES header generator latches the STCvi value using the vertical sync signal going into the MPEG-2 unit of regional processor #1.
The computational demand required to decode an ATSC compliant video stream is also significant; the common sequential decoding architecture used for standard MPEG-2 main profile @ main level decoding will not meet it. The scalable architecture of the present invention uses existing MPEG-2 decoding engines to decode ATSC DTV video streams.
An embodiment of a decoder system comprises four parallel regional decoders. The system requires that each decoder be capable of decoding a video frame size that is 2.5 times the NTSC format. This requirement is not unreasonable, since the decoding algorithm is not especially demanding. As shown in FIG. 19, each decoder decodes a local region 500A, 500B, 500C, 500D of 1920 pixels by 270 lines within an ATSC frame 500 of 1920 pixels by 1080 lines.
The block diagram of FIG. 20 shows the components of a decoder system 520 that includes a compressed stream demultiplexer 522, four parallel regional decoders 524A, 524B, 524C, 524D, reference frame stores 526A, 526B, 526C and a multiplexer 528.
The compressed stream demultiplexer 522 demultiplexes a compressed video stream 521 on the basis of slice header information to provide region streams 523A, 523B, 523C, 523D. Each regional decoder 524A, 524B, 524C, 524D decodes a compressed bit stream 523A, 523B, 523C, 523D that is fully compliant with MPEG-2 except for two points: the stream defines a frame that is 1920 pixels by 270 lines, and the motion vectors may extend beyond the vertical dimension by a maximum of 270 lines. In the preferred embodiment, an existing MPEG-2 decoder is modified to handle these two exceptions. The regional decoder must also address the decoding dependencies between adjacent regions. One dependency is critical to the decoding process: motion vector compensation. The motion vector compensation procedure uses pixel information from a reference image to create the pixels within the current macroblock. The reference image is created from previous I or P frames. Thus, as shown in FIG. 21, through motion compensation the procedure reaches into regions beyond the local region to create the pixels within the current block. The maximum depth the procedure will reach is governed by the maximum length defined by the motion vectors.
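The demultiplexer's routing decision, for example, can be made from the slice's vertical position alone. A minimal sketch, assuming the slice header yields a 0-based macroblock row and that each region spans 270 lines:

    /* Route one slice to a regional decoder (0..3) by vertical position. */
    static int region_for_slice(int mb_row)
    {
        int top_line = mb_row * 16;       /* top luma line of the slice      */
        int region   = top_line / 270;    /* four 270-line regions per frame */
        return (region > 3) ? 3 : region; /* clamp any rows in padding       */
    }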
Each regional decoder makes its reference image available to the other decoders so that they can correctly carry out the motion vector compensation procedure. The reference image is shared between adjacent regions. The assumption is that the maximum motion vector will not exceed the height of each region, which is 270 lines. This is a reasonable assumption, since no realistic compressed video sequence will generate motion vectors greater than 270 lines.
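In outline, the dual-write arrangement of FIG. 22 might look like the sketch below; memcpy stands in for the decoder's reconstruction write, and the flat store layout is an assumption.

    #include <string.h>

    /* Write one reconstructed luma line into the reference frame stores:
       once for the local decoder and, for lines a neighbor may reach via
       motion compensation, once into the store shared with that neighbor. */
    static void store_line(unsigned char *own_store,
                           unsigned char *shared_store, /* NULL if unshared */
                           const unsigned char *line,
                           int line_no, int width)
    {
        memcpy(own_store + (size_t)line_no * width, line, (size_t)width);
        if (shared_store)
            memcpy(shared_store + (size_t)line_no * width, line, (size_t)width);
    }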
The reference images are shared through the reference frame store as shown in FIG. 22. The regional decoders simultaneously write into two memory locations 530, 532, 534, 536: one (530, 536) for future access by the current decoder, and one (532, 534) for future access by the adjacent decoders. The embodiment resolves simultaneous reads by two decoders performing the motion vector compensation routine. The multiplexer 528 multiplexes the uncompressed frame regions back into a full frame and constructs an 8-bit digital data stream following the SMPTE 274M standard.

EQUIVALENTS
While this invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments of the invention described specifically herein. Such equivalents are intended to be encompassed in the scope of the claims.

Claims

What is claimed is:
1. A video encoder comprising: a processor for encoding an input stream of video images, each video image divided into regions that have overlapping portions, the processor encoding each region of a current video image in the stream according to an encoding process that includes motion compensation and storing a reference frame in a memory based on a prior video image in the stream for use in the motion compensation of the encoding process; and a reference frame processor coupled to the memory for updating each reference frame with information from reference frames of adjacent regions.
2. The video encoder of Claim 1 wherein the processor comprises plural regional processors each assigned to encode a particular region and wherein the memory comprises a local memory for each regional processor.
3. The video encoder of Claim 2 wherein the encoded video images comprise macroblocks and each regional processor further comprises means for removing certain macroblocks from the encoded video images that correspond to the overlapping portions and means for concatenating the resulting encoded video images with that of other regional processors to provide an output video stream.
4. The video encoder of Claim 1 wherein the encoded video images comprise macroblocks and the processor further comprises means for removing certain macroblocks from the encoded video images that correspond to the overlapping portions to provide an output video stream.
5. The video encoder of Claim 1 wherein the encoding process for each region comprises MPEG-2 encoding with main profile at main level.
6. The video encoder of Claim 1 wherein the input stream is an ATSC compliant digital video stream.
7. The video encoder of Claim 1 wherein the input stream is a digital video stream compliant with MPEG-2 main profile at high level.
8. A video encoder comprising: plural regional processors for encoding an input stream of video images, each video image divided into regions that have overlapping portions, each processor encoding a particular region of a current video image in the stream according to an encoding process that includes motion compensation and storing a reference frame in a local memory based on a prior video image in the stream for use in the motion compensation of the encoding process; and a reference frame processor coupled to the plural local memories for updating each reference frame with information from reference frames of adjacent regions.
9. The video encoder of Claim 8 wherein the encoded video images comprise macroblocks and each regional processor further comprises means for removing certain macroblocks from the encoded video images that correspond to the overlapping portions and means for concatenating the resulting encoded video images with that of other regional processors to provide an output video stream.
10. The video encoder of Claim 8 wherein the encoding process for each region comprises MPEG-2 encoding with main profile at main level.
11. The video encoder of Claim 8 wherein the input stream is an ATSC compliant digital video stream.
12. The video encoder of Claim 8 wherein the input stream is a digital video stream compliant with MPEG-2 main profile at high level.
13. A method of video encoding comprising the steps of: providing an input stream of video images, each video image divided into regions that have overlapping portions, for each region: encoding a particular region of a current video image in the stream according to an encoding process that includes motion compensation; and storing a reference frame in a local memory based on a prior video image in the stream for use in the motion compensation of the encoding process; and updating each reference frame with information from reference frames of adjacent regions.
14. The method of Claim 13 wherein the encoded video images include macroblocks and further comprising the steps of removing certain macroblocks from the encoded video images that correspond to the overlapping portions to provide an output video stream.
15. The method of Claim 13 further comprising providing a plurality of regional processors, each assigned to a particular region for performing the encoding and storing steps.
16. A video encoder comprising: plural regional processors for encoding an input stream of video images, each video image divided into regions that have overlapping portions, each regional processor comprising: an image selection unit for selecting a particular image region from each of the video images; a compression engine for compressing the selected image regions to provide a compressed image region stream comprising macroblocks according to an encoding process that includes motion compensation; a local memory for storing a reference frame based on a prior compressed image region for use in the motion compensation of the encoding process; a macroblock remover for removing certain macroblocks from the compressed image region stream that correspond to the overlapping portions; a stream concatenation unit for concatenating the compressed image region stream with such streams from each regional processor to provide an output video stream; and a reference frame processor for updating each reference frame with information from reference frames of adjacent regions.
17. The video encoder of Claim 16 wherein the encoding process comprises MPEG-2 encoding with main profile at main level.
18. The video encoder of Claim 16 wherein the compression engine is an MPEG-2 main profile at main level engine.
19. A method for video encoding comprising the steps of: providing an input stream of video images, each video image divided into regions that have overlapping portions, for each region: selecting a particular image region from each of the video images; compressing the selected image regions to provide a compressed image region stream comprising macroblocks according to an encoding process that includes motion compensation; storing in a local memory a reference frame based on a prior compressed image region for use in the motion compensation of the encoding process; removing certain macroblocks from the compressed image region stream that correspond to the overlapping portions; concatenating the compressed image region stream with other such streams to provide an output video stream; and updating each reference frame with information from reference frames of adjacent regions.
20. The method of Claim 19 further comprising providing a plurality of regional processors, each assigned to a particular region for performing the selecting, compressing, storing, removing and concatenating steps.
21. A video decoder comprising: a demultiplexer for demultiplexing a compressed stream of video images to plural region streams, each video image divided into contiguous regions and each region stream associated with a particular region; plural regional decoders each decoding a particular region stream to a decoded region stream according to a decoding process that includes motion compensation; a reference frame memory for storing reference frames associated with each regional decoder, each regional decoder retrieving reference frames of adjacent regions for use in the motion compensation of the decoding process; and a multiplexer for multiplexing the decoded region streams to a decoded output stream.
22. The decoder of Claim 21 wherein the decoding process for each regional decoder comprises MPEG-2 decoding with main profile at main level.
23. The decoder of Claim 21 wherein the compressed stream is an ATSC compliant digital video stream.
24. The decoder of Claim 21 wherein the compressed stream is a digital video stream compliant with MPEG-2 main profile at high level.
25. A method of video decoding comprising the steps of: providing an input stream of compressed video images, each video image divided into contiguous regions ; demultiplexing the compressed stream to plural region streams, each region stream associated with a particular region; decoding each region stream according to a decoding process that includes motion compensation; storing reference frames associated with each region and retrieving reference frames of adjacent regions for use in the motion compensation of the decoding process; and multiplexing the decoded region streams to a decoded output stream.
26. The method of Claim 25 wherein the step of decoding each region stream comprises MPEG-2 decoding with main profile at main level.
27. The method of Claim 25 wherein the input stream comprises a digital video stream compliant with MPEG-2 main profile at high level.
PCT/US1999/001410 1998-01-26 1999-01-21 Method and apparatus for advanced television signal encoding and decoding WO1999038316A2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP99903316A EP1051839A2 (en) 1998-01-26 1999-01-21 Method and apparatus for advanced television signal encoding and decoding
AU23370/99A AU2337099A (en) 1998-01-26 1999-01-21 Method and apparatus for advanced television signal encoding and decoding
JP2000529078A JP2002502159A (en) 1998-01-26 1999-01-21 Method and apparatus for encoding and decoding high performance television signals
CA002318272A CA2318272A1 (en) 1998-01-26 1999-01-21 Method and apparatus for advanced television signal encoding and decoding

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US7243698P 1998-01-26 1998-01-26
US60/072,436 1998-01-26
US5442798A 1998-04-03 1998-04-03
US09/054,427 1998-04-03

Publications (2)

Publication Number Publication Date
WO1999038316A2 true WO1999038316A2 (en) 1999-07-29
WO1999038316A3 WO1999038316A3 (en) 2000-01-20

Family

ID=26733026

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1999/001410 WO1999038316A2 (en) 1998-01-26 1999-01-21 Method and apparatus for advanced television signal encoding and decoding

Country Status (5)

Country Link
EP (1) EP1051839A2 (en)
JP (1) JP2002502159A (en)
AU (1) AU2337099A (en)
CA (1) CA2318272A1 (en)
WO (1) WO1999038316A2 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4476936B2 (en) 2003-08-01 2010-06-09 ラサ エセア Awning joint arm


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5461679A (en) * 1991-05-24 1995-10-24 Apple Computer, Inc. Method and apparatus for encoding/decoding image data
EP0577310A2 (en) * 1992-06-29 1994-01-05 Canon Kabushiki Kaisha Image processing device
US5701160A (en) * 1994-07-22 1997-12-23 Hitachi, Ltd. Image encoding and decoding apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHALLAPALI K ET AL: "GRAND ALLIANCE MPEG-2-BASED VIDEO DECODER WITH PARALLEL PROCESSING ARCHITECTURE" INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY,US,WILEY AND SONS, NEW YORK, vol. 5, no. 4, 1 January 1994 (1994-01-01), page 263-267 XP000565047 ISSN: 0899-9457 *
MAILHOT J N: "THE GRAND ALLIANCE HDTV VIDEO ENCODER" INTERNATIONAL CONFERENCE ON CONSUMER ELECTRONICS - DIGEST OF TECHNICALPAPERS,US,NEW YORK, IEEE, vol. CONF. 14, 7 June 1995 (1995-06-07), page 300-301 XP000547830 ISBN: 0-7803-2141-3 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2344954B (en) * 1998-09-25 2003-04-09 Nippon Telegraph & Telephone Apparatus and method for encoding moving images and recording medium containing computer-readable encoding program
EP1143737A2 (en) * 2000-03-30 2001-10-10 Sony Corporation Image encoding apparatus and method, video camera, image recording apparatus, and image transmission apparatus
EP1143737A3 (en) * 2000-03-30 2004-06-16 Sony Corporation Image encoding apparatus and method, video camera, image recording apparatus, and image transmission apparatus
WO2002063559A2 (en) * 2001-02-09 2002-08-15 Koninklijke Philips Electronics N.V. Software system for deploying image processing functions on a programmable platform of distributed processor environments
WO2002063559A3 (en) * 2001-02-09 2003-06-05 Koninkl Philips Electronics Nv Software system for deploying image processing functions on a programmable platform of distributed processor environments
US8041134B2 (en) 2003-06-16 2011-10-18 Samsung Electronics Co., Ltd. Apparatus to provide block-based motion compensation and method thereof
WO2006029195A1 (en) * 2004-09-08 2006-03-16 Inlet Technologies, Inc. Slab-based processing engine for motion video
US7881546B2 (en) 2004-09-08 2011-02-01 Inlet Technologies, Inc. Slab-based processing engine for motion video
EP2216994A3 (en) * 2009-02-05 2011-01-05 Sony Corporation System and method for image processing
US8340506B2 (en) 2009-02-05 2012-12-25 Sony Corporation System and method for signal processing

Also Published As

Publication number Publication date
JP2002502159A (en) 2002-01-22
EP1051839A2 (en) 2000-11-15
WO1999038316A3 (en) 2000-01-20
AU2337099A (en) 1999-08-09
CA2318272A1 (en) 1999-07-29


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG UZ VN YU ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW SD SZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
ENP Entry into the national phase

Ref document number: 2318272

Country of ref document: CA

Ref country code: CA

Ref document number: 2318272

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 23370/99

Country of ref document: AU

WWE Wipo information: entry into national phase

Ref document number: 1999903316

Country of ref document: EP

ENP Entry into the national phase

Ref country code: JP

Ref document number: 2000 529078

Kind code of ref document: A

Format of ref document f/p: F

NENP Non-entry into the national phase

Ref country code: KR

WWP Wipo information: published in national office

Ref document number: 1999903316

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWR Wipo information: refused in national office

Ref document number: 1999903316

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1999903316

Country of ref document: EP