US20130039418A1 - System and Method for Video and Audio Encoding on a Single Chip - Google Patents

Info

Publication number
US20130039418A1
US20130039418A1 (U.S. application Ser. No. 13/587,546)
Authority
US
Grant status
Application
Prior art keywords
integrated circuit
single integrated
video data
data
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13587546
Inventor
Amir Morad
Leonid Yavits
Gadi Oxman
Evgeny Spektor
Michael Khrapkovsky
Gregory Chernov
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies General IP Singapore Pte Ltd
Original Assignee
Broadcom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date

Classifications

    • All under Section H (Electricity), H04 (Electric communication technique), H04N (Pictorial communication, e.g. television):
    • H04N 7/52: Systems for transmission of a pulse code modulated video signal with one or more other pulse code modulated signals, e.g. an audio signal or a synchronizing signal
    • H04N 19/42: Coding, decoding, compressing or decompressing digital video signals, characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N 19/423: The above, characterised by memory arrangements
    • H04N 19/436: The above, using parallelised computational arrangements
    • H04N 19/61: Transform coding in combination with predictive coding
    • H04N 21/226: Characteristics of the server or internal components of the server
    • H04N 21/236: Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data; remultiplexing of multiplex streams; insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; assembling of a packetised elementary stream
    • H04N 21/2368: Multiplexing of audio and video streams
    • H04N 21/4341: Demultiplexing of audio and video streams

Abstract

An apparatus is disclosed for performing real time video/audio encoding on a single chip. Within the single chip, a video encoder generates encoded video data from uncompressed video data and an audio encoder generates encoded audio data from uncompressed audio data. A mux processor within the single chip generates an output stream of encoded data from the encoded video data and the encoded audio data.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 10/170,019 filed Jun. 11, 2002, which is a continuation-in-part of U.S. patent application Ser. No. 09/543,904 filed Apr. 6, 2000, which issued as U.S. Pat. No. 6,690,726 on Feb. 10, 2004, which claims the benefit of Israel Application Serial No. 129345 filed Apr. 6, 1999.
  • U.S. patent application Ser. No. 10/170,019 also makes reference to, claims priority to and claims the benefit of U.S. Provisional Patent Application Ser. No. 60/296,766 filed on Jun. 11, 2001 and U.S. Provisional Patent Application Ser. No. 60/296,768 filed on Jun. 11, 2001.
  • All of the above-listed patent applications are incorporated herein by reference in their entirety.
  • BACKGROUND OF THE INVENTION
  • Methods for encoding an audio-visual signal are known in the art. According to the methods, a video signal is digitized, analyzed and encoded in a compressed manner. The methods are implemented in computer systems, either in software, hardware or combined software-hardware forms.
  • Most hardware encoding systems consist of a set of semiconductor circuits arranged on a large circuit board. State of the art encoding systems include a single semiconductor circuit. Such a circuit is typically based on a high-power processor.
  • Reference is now made to FIG. 1, which is a block diagram illustration of a prior art video encoding circuit 10.
  • Encoding circuit 10 includes a video input processor 12, a motion estimation processor 14, a digital signal processor 16 and a bitstream processor 18. Processors 12 through 18 are generally connected in series.
  • Video input processor 12 captures and processes a video signal, and transfers it to motion estimation processor 14. Motion estimation processor 14 analyzes the motion of the video signal, and transfers the video signal and its associated motion analysis to digital signal processor 16. According to the data contained within the associated motion analysis, digital signal processor 16 processes and compresses the video signal, and transfers the compressed data to bitstream processor 18. Bitstream processor 18 formats the compressed data and creates therefrom an encoded video bitstream, which is transferred out of encoding circuit 10.
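The serial data flow described above can be sketched as follows. This is an illustrative Python sketch only, not part of the original disclosure: the function names mirror processors 12 through 18, and the stage bodies are placeholder stand-ins for the real hardware operations.

```python
# Illustrative sketch of the serial prior-art pipeline of FIG. 1.
# Stage bodies are placeholders; only the data flow is depicted.

def video_input(frame):
    # processor 12: capture and pre-process the video signal
    return frame

def motion_estimation(frame):
    # processor 14: analyze motion; pass the frame and its analysis on
    return frame, {"motion_vectors": []}

def dsp_compress(frame, analysis):
    # processor 16: compress the signal according to the motion analysis
    return {"frame": frame, "analysis": analysis}

def bitstream_format(compressed):
    # processor 18: format the compressed data into an encoded bitstream
    return repr(compressed).encode()

def encode(frame):
    frame = video_input(frame)
    frame, analysis = motion_estimation(frame)
    return bitstream_format(dsp_compress(frame, analysis))
```

Each stage consumes the previous stage's output word by word, which is precisely the arrangement whose drawbacks are discussed next.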
  • It will be appreciated by those skilled in the art that such an encoding circuit has several disadvantages. For example, one disadvantage of encoding circuit 10 is that bitstream processor 18 transfers the encoded video bitstream, data word by data word, directly to an element external to encoding circuit 10. Accordingly, each time such a data word is ready, the encoded video data word is individually transferred to the external element. Transfer of the encoded video in such a fashion greatly increases the data traffic volume and creates communication bottlenecks in communication lines such as computer buses. Additionally, circuit 10 requires a dedicated storage/bus that is allocated on a full-time basis, which magnifies these disturbances.
  • Another disadvantage is that encoding circuit 10 is able to perform the encoding of video signals only. Usually, moving picture compression applications include multiframe videos and their associated audio paths. While encoding circuit 10 performs video compression and encoding, the multiplexing of compressed video, audio and user data streams is performed separately. Such an approach increases the data traffic in the compression system and imposes increased storage and processing bandwidth requirements, thereby greatly increasing the overall compression system complexity and cost.
  • Reference is now made to FIG. 2, which is a block diagram of a prior art video input processor 30, as may be typically included in encoding circuit 10. Video input processor 30 includes a video capture unit 32, a video preprocessor 34 and a video storage 36. The elements are generally connected in series.
  • Video capture unit 32 captures an input video signal and transfers it to video preprocessor 34. Video preprocessor 34 processes the video signal, including noise reduction, image enhancement, etc., and transfers the processed signal to the video storage 36. Video storage 36 buffers the video signal and transfers it to a memory unit (not shown) external to video input processor 30.
  • It will be appreciated by those skilled in the art that such a video input processor has several disadvantages. For example, one disadvantage of processor 30 is that it does not perform image resolution scaling. Accordingly, only original resolution pictures can be processed and encoded.
  • Another disadvantage is that processor 30 does not perform statistical analysis of the video signal. Comprehensive statistical analysis requires video feedback from the storage, which allows interframe (picture-to-picture) analysis, whereas processor 30 is operable in a "feed forward" manner only. Accordingly, video input processor 30 cannot detect developments in the video contents, such as scene change, flash, sudden motion, fade in/fade out, etc.
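To illustrate the kind of interframe analysis that a feed-forward processor cannot perform, the following sketch flags a scene change when the mean absolute difference between consecutive frames exceeds a threshold. The function names and the threshold value are hypothetical illustrations, not taken from the patent.

```python
# Illustrative sketch (not from the disclosure): interframe statistical
# analysis requires access to the previously stored frame, which a
# feed-forward-only video input processor lacks.

def mean_abs_diff(prev, curr):
    """Mean absolute pixel difference between two equal-length frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(prev)

def is_scene_change(prev, curr, threshold=30.0):
    """Flag a scene change when frames differ strongly on average."""
    return mean_abs_diff(prev, curr) > threshold
```

The same picture-to-picture comparison underlies flash and fade-in/fade-out detection, with different statistics substituted for the mean difference.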
  • Reference is now made to FIG. 3, which is a block diagram illustration of a prior art video encoding circuit 50, similar to encoding circuit 10 but connected to a plurality of external memory units. As an example, FIG. 3 depicts circuit 50 connected to a pre-encoding memory unit 60, a reference memory unit 62 and a post-encoding memory unit 64. Reference is made in parallel to FIG. 4, a chart depicting the flow of data within circuit 50.
  • Encoding circuit 50 includes a video input processor 52, a motion estimation processor 54, a digital signal processor 56 and a bitstream processor 58. Processors 54 through 58 are generally connected in series.
  • In the present example, video encoding circuit 50 operates under MPEG video/audio compression standards. Hence, for purposes of clarity, reference to a current frame refers to a frame to be encoded. Reference to a reference frame refers to a frame that has already been encoded and reconstructed, preferably by digital signal processor 56, and transferred to and stored in reference memory unit 62. Reference frames are compared to current frames during the motion estimation task, which is generally performed by motion estimation processor 54.
  • Video input processor 52 captures a video signal, which contains a current frame, or a plurality of current frames, and processes and transfers them to external pre-encoding memory unit 60. External pre-encoding memory unit 60 implements an input frame buffer (not shown) which accumulates and re-orders the frames according to the standard required for the MPEG compression scheme.
  • External pre-encoding memory unit 60 transfers the current frames to motion estimation processor 54. External reference memory unit 62 likewise transfers the reference frames to motion estimation processor 54. Motion estimation processor 54 reads and compares both sets of frames, analyzes the motion of the video signal, and transfers the motion analysis to digital signal processor 56.
  • Digital signal processor 56 receives the current frames from the external pre-encoding memory 60, and according to the motion analysis received from motion estimation processor 54, processes and compresses the video signal. Digital signal processor 56 then transfers the compressed data to the bitstream processor 58. Digital signal processor 56 further reconstructs the reference frame and stores it in reference memory 62. Bitstream processor 58 encodes the compressed data and transfers an encoded video bitstream to external post-encoding memory unit 64.
  • It will be appreciated by those skilled in the art that such an encoding circuit has several disadvantages. For example, one disadvantage of encoding circuit 50 is that a plurality of separate memory units are needed to support its operations, thereby greatly increasing the cost and complexity of any encoding system based on circuit 50.
  • Another disadvantage is that encoding circuit 50 has a plurality of separate memory interfaces. This increases the data traffic volume and the number of external connections of encoding circuit 50, thereby greatly increasing its cost and complexity. Yet another disadvantage is that encoding circuit 50 does not implement video and audio multiplexing, which is typically required in compression schemes.
  • Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of such systems with embodiments of the present invention as set forth in the remainder of the present application with reference to the drawings.
  • BRIEF SUMMARY OF THE INVENTION
  • Certain embodiments of the present invention provide an apparatus for performing video and audio encoding. In particular, certain embodiments provide for performing video and audio encoding on a single chip.
  • Apparatus of the present invention provides for performing real time video/audio encoding on a single chip. Within the single chip, a video encoder generates encoded video data from uncompressed video data and an audio encoder generates encoded audio data from uncompressed audio data. A mux processor within the single chip generates an output stream of encoded data from the encoded video data and the encoded audio data.
  • These and other advantages and novel features of the present invention, as well as details of an illustrated embodiment thereof, will be more fully understood from the following description and drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a prior art video encoding circuit.
  • FIG. 2 is a block diagram of a prior art video input processor.
  • FIG. 3 is a block diagram of a prior art video encoding circuit linked to a plurality of external memory units.
  • FIG. 4 is a flow chart of the data flow within the prior art circuit illustrated in FIG. 3.
  • FIG. 5 is a block diagram of a video and audio encoding video/audio/data multiplexing device constructed and operative on a single chip in accordance with an embodiment of the present invention.
  • FIG. 6 is a detailed block diagram of a PCI interface of the device of FIG. 5 in accordance with an embodiment of the present invention.
  • FIG. 7 illustrates a block diagram of an I2C/GPIO interface of the device of FIG. 5 in accordance with an embodiment of the present invention.
  • FIG. 8 is a block diagram and timing diagram illustrating the signals and timing output by a DVB formatter of the device in FIG. 5 in accordance with an embodiment of the present invention.
  • FIG. 9 illustrates how a VBI extractor of the device in FIG. 5 may extract user data from specified lines of a video signal in accordance with an embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • An embodiment of the present invention provides a video/audio encoder on a single chip to generate compressed video and audio multiplexed into a transport stream. One embodiment of the encoder of the present invention supports MPEG standards and AC-3 standards, for example. With a single firmware change, however, the encoder may support any number of other standards as well. Applications for the encoder of the present invention may include personal video recorders, DVD recorders, set top box recorders, PC TV tuners, digital camcorders, video streaming, video conferencing, and game consoles.
  • Reference is now made to FIG. 5, a block diagram of video encoding video/audio/data multiplexing device 100, constructed and operative in accordance with an embodiment of the present invention.
  • An embodiment of the present invention overcomes the disadvantages of the prior art by providing a novel approach to video/audio compression and encoding and, in accordance with this approach, a novel encoding device structure comprising a plurality of processors with a defined, optimized work division scheme.
  • Typically, a sequence of compression commands comprises an instruction or a sequence of instructions for operations such as removal of temporal redundancy, removal of spatial redundancy, removal of entropy redundancy of data, and the like. Device 100 operates according to an optimized compression labor division, segmenting the compression tasks between the different processors and reducing, in comparison to the prior art, the compression time.
  • According to an embodiment of the present invention, device 100 is a parallel digital processor implemented on a single chip and designed for the purposes of real-time video/audio compression and multiplexing, such as for MPEG encoding and the like. For purposes of clarity herein, multiplexing refers to the creation of synchronized streams from a plurality of unsynchronized audio and video streams. Device 100 may be incorporated in digital camcorders, recordable digital video disk (DVD), game machines, desktop multimedia, video broadcast equipment, video authoring systems, video streaming and video conferencing equipment, security and surveillance systems, and the like.
  • According to an embodiment of the present invention, device 100 efficiently performs video compression tasks such as removing temporal redundancy (i.e., motion between frames), spatial redundancy (i.e., redundancy within a frame), and entropy redundancy of data. Device 100 has a plurality of processors, each processor designed to perform a segment of the compression task, hence achieving optimal performance of each such task.
  • The number of processors, the architecture of each processor, and the task list per processor achieve the optimal tradeoff between device implementation cost and efficiency.
  • In an embodiment of the present invention, device 100 incorporates both video encoding and audio encoding on a single chip. Device 100 includes a video input buffer (VIB) 102, a global controller 104, motion estimation processors P4 105 and MEF 106, a digital signal processor (DSP) 108, a memory controller 110, a bitstream processor (BSM) 112, an audio encoder (AUD) 113, a multiplexing processor (MUX) 114, a PCI interface 115, and an I2C/GPIO interface 116.
  • Together, the VIB 102, MEF 106, P4 105, DSP 108, and BSM 112 constitute a video encoder in an embodiment of the present invention.
  • Device 100 may be connectable to an external video interface, an external audio interface, an external memory unit, and an external host interface. Typically, the video interface supplies a digital video signal in CCIR 656 format and the audio interface supplies a digital audio signal in I2S/AC97 format.
  • The host interface typically connects to an external host (not shown) and acts as a user interface between device 100 and the user. The host interface accepts microcodes, commands, data parameters and the like received from a user or a supervising system. The host interface also transfers information from device 100 to the user. The host interface provides access to the compressed data and may be used to transfer uncompressed digitized video and/or audio and/or user data into device 100.
  • The PCI interface 115 connects the single chip device 100 to a PCI bus for use in PC applications. Using the PCI interface 115, the device 100 may directly communicate with the PCI bus without the aid of an intermediate interface (chip) external to the device 100. In an embodiment of the present invention, the heart of the PCI interface 115 includes a powerful programmable DMA engine that may transfer encoded data from the device 100 to host memory without a host processor intervening. FIG. 6 is a block diagram of an embodiment of the PCI interface 115 including a PCI core 120, a PCI application 121, and a host interface controller 122. The PCI core 120 provides the interface between the PCI bus and the PCI application 121. The PCI application 121 interfaces the PCI core 120 to the host interface controller 122 and is responsible for the master/slave protocols and for configuring the PCI memory space. The PCI application 121 also includes the programmable DMA engine for transferring compressed data to host memory. All microcodes and user-defined parameters are uploaded to the single chip device 100 through the host interface controller 122 (off-line, prior to operation).
  • In an embodiment of the present invention, the PCI interface 115 may also support a file mode where an uncompressed file may be brought into the single chip device 100 and encoded. For example, video files stored on a PC may be converted to MPEG-2 using this method. The PCI interface 115 allows the uncompressed file to be transferred quickly to the device 100.
  • In an embodiment of the present invention, device 100 is operable either in a programming mode or an operational mode, and is capable of operating in both modes simultaneously.
  • In the programming mode, an external host transfers, via the host interface, microcodes, commands and data parameters to global controller 104. Global controller 104 transfers the microcodes, commands and data parameters to video input buffer 102, motion estimation processors 105 and 106, digital signal processor 108, memory controller 110, bitstream processor 112, I2C/GPIO interface 116, and multiplexing processor 114.
  • In the operational mode, video input buffer 102 is responsible for acquiring an uncompressed CCIR-656 video signal from an external video source (not shown) and storing it, via the memory controller 110, in an external memory unit in a raster-scan manner. In an alternative embodiment, VIB 102 captures the uncompressed video signal via the PCI interface 115.
  • In an embodiment of the present invention, the memory controller 110 is an SDRAM controller and the external memory unit is an SDRAM memory unit. The SDRAM controller is responsible for communication between the single chip and the external SDRAM memory unit, which is used as a frame buffer and an output buffer for compressed data. The SDRAM controller operations are controlled and scheduled by special instructions issued by the global controller 104.
  • Video input buffer 102 performs statistical analysis of the video signal, thereby detecting 3-2 pulldown sequences and developments in the video contents, such as scene change, sudden motion, fade in/fade out and the like. Video input buffer 102 also performs resolution downscaling, thereby enabling compression not only of original resolution frames, but also of reduced resolution frames (such as SIF, half D1, etc.). Additionally, video input buffer 102 pre-processes the video signal, performing spatial filtering, noise reduction, image enhancement and the like. Furthermore, video input buffer 102 decreases the frame rate by decimating (dropping) frames, thus allowing flexible rate control.
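Two of the VIB operations just described, resolution downscaling and frame-rate decimation, can be sketched as follows. This is an illustrative simplification, not the disclosed implementation: the function names are hypothetical, and real scaling hardware typically uses polyphase filters rather than plain 2x2 averaging.

```python
# Illustrative sketches of downscaling and frame decimation.

def downscale_2x(frame):
    """Halve resolution by averaging each 2x2 pixel block.
    frame is a list of equal-length rows of pixel values."""
    out = []
    for y in range(0, len(frame) - 1, 2):
        row = []
        for x in range(0, len(frame[y]) - 1, 2):
            total = (frame[y][x] + frame[y][x + 1]
                     + frame[y + 1][x] + frame[y + 1][x + 1])
            row.append(total // 4)
        out.append(row)
    return out

def decimate(frames, keep_every=2):
    """Reduce the frame rate by keeping one of every N frames."""
    return frames[::keep_every]
```

Decimating before compression reduces the bit budget at the source, which is what gives the rate control its flexibility.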
  • Video input buffer 102 accumulates the scaled and processed video data and transfers the data in bursts to an external memory unit, via memory controller 110. Memory controller 110 stores the video data in the external memory unit.
  • In an embodiment of the present invention, device 100 operates under MPEG video/audio compression standards. Hence, a data block represents a macroblock, which is a sixteen-by-sixteen matrix of luminance pixels and two, four, or eight eight-by-eight matrices of chrominance pixels, as defined by the MPEG standards. For purposes of clarity herein, reference to a reference frame refers to a frame that has already been encoded, reconstructed and stored in an external memory unit, and which is compared to the current frame during the motion estimation performed by motion estimation processors 105 and 106.
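The partitioning of a luminance frame into 16x16 macroblocks can be sketched as follows. This is an illustrative helper (hypothetical name, frame dimensions assumed to be multiples of 16), not code from the disclosure.

```python
# Illustrative sketch: tile a luminance frame into 16x16 macroblocks.

def macroblocks(frame, size=16):
    """Yield (block_row, block_col, block) for each size x size
    macroblock of frame, a list of equal-length pixel rows."""
    height, width = len(frame), len(frame[0])
    for y in range(0, height, size):
        for x in range(0, width, size):
            block = [row[x:x + size] for row in frame[y:y + size]]
            yield y // size, x // size, block
```

The chrominance planes are tiled analogously into eight-by-eight blocks, with the block count per macroblock depending on the chroma format.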
  • Motion estimation processor 105 (P4) is a level 1 motion estimation engine that is responsible for downscaling current and original reference pictures and for the motion vector search. Motion estimation processor 105 finds motion vectors with a 2-pel accuracy by applying a fully exhaustive search in the range of ±96 pels horizontally and ±64 pels vertically.
  • Motion estimation processor 106 (MEF) is a level 2 motion estimation engine that is responsible for finding the final (half-pel) motion vectors. Additionally, the MEF performs horizontal and vertical interpolation of the chrominance signal. The MEF employs a fully exhaustive search in the range of ±2 pels horizontally and vertically. After the full-pel motion vector is found, the MEF performs a half-pel motion search in the eight possible positions surrounding the optimal full-pel vector.
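The exhaustive block-matching performed by these engines can be sketched with a toy full-pel search minimizing the sum of absolute differences (SAD). This is an illustrative miniature, not the disclosed hardware: the real P4/MEF pipeline searches ±96 x ±64 pels at 2-pel accuracy and then refines to half-pel positions via interpolation, which is omitted here.

```python
# Illustrative sketch: exhaustive full-pel block matching by SAD.

def sad(block, ref, dx, dy):
    """Sum of absolute differences between block and the window of
    ref whose top-left corner is at offset (dx, dy)."""
    total = 0
    for y, row in enumerate(block):
        for x, pix in enumerate(row):
            total += abs(pix - ref[y + dy][x + dx])
    return total

def full_search(block, ref):
    """Test every candidate offset within ref; return the (dx, dy)
    offset with the minimum SAD, i.e. the best full-pel motion vector."""
    height, width = len(block), len(block[0])
    best = None
    for dy in range(len(ref) - height + 1):
        for dx in range(len(ref[0]) - width + 1):
            score = sad(block, ref, dx, dy)
            if best is None or score < best[0]:
                best = (score, dx, dy)
    return best[1], best[2]
```

The winning offset is the estimate of the macroblock's motion; subtracting the matched reference window from the current macroblock is what removes the temporal redundancy described below.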
  • The dual memory controller 110 retrieves a current frame macroblock, and certain parts of the reference frames (referred to herein as the search area), from the external memory unit and loads them into motion estimation processors 105 and 106. The motion estimation processors compare the current frame macroblock with the respective reference search area in accordance with a sequence of compression commands, thereby producing an estimation of the motion of the current frame macroblock. The estimation is used to remove temporal redundancy from the video signal.
  • Motion estimation processors 105 and 106 transfer the resulting motion estimation to global controller 104. Motion estimation processors 105 and 106 also transfer the current frame macroblock and the corresponding reference frames macroblocks to digital signal processor 108.
  • Digital signal processor 108 performs a series of macroblock processing operations intended to remove the spatial redundancy of the video signal, such as discrete cosine transform, macroblock type selection, quantization, rate control and the like. Digital signal processor 108 transfers the compressed data to the bitstream processor 112. Digital signal processor 108 further processes the compressed frame, thus reconstructing the reference frames, and transfers the reconstructed reference frames to the external memory unit via memory controller 110, thereby overwriting some of the existing reference frames.
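The discrete cosine transform and quantization named above can be illustrated with a naive reference implementation. This sketch is not the DSP 108 implementation: hardware uses fast factored DCTs, and real MPEG quantization applies per-coefficient matrices rather than the single uniform step assumed here.

```python
# Illustrative sketch: naive 2-D DCT-II on an 8x8 block, followed by
# uniform quantization (a simplification of MPEG quantizer matrices).
import math

def dct_2d(block):
    """Naive N x N 2-D DCT-II of a square pixel block."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            total = 0.0
            for y in range(n):
                for x in range(n):
                    total += (block[y][x]
                              * math.cos((2 * y + 1) * u * math.pi / (2 * n))
                              * math.cos((2 * x + 1) * v * math.pi / (2 * n)))
            out[u][v] = cu * cv * total
    return out

def quantize(coeffs, step=16):
    """Divide each coefficient by a quantizer step and round."""
    return [[round(c / step) for c in row] for row in coeffs]
```

For a flat block the energy collapses into the single DC coefficient, and quantization zeroes the rest; this concentration of energy is what makes the subsequent entropy coding effective.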
  • Bitstream processor 112 encodes the compressed video data into a standard MPEG format, in accordance with a sequence of encoding commands known in the art. Bitstream processor 112 transfers the compressed video data streams to multiplexing processor 114.
  • Audio encoder 113 is a processor responsible for audio encoding. In an embodiment of the present invention, audio encoder 113 supports MPEG-1 Layer II and Dolby AC-3 encoding and may be reprogrammed to support various additional audio compression schemes. The audio encoder 113 is also responsible for acquiring the uncompressed audio signal (I2S and AC97 standards are supported, for example) and buffering the compressed audio.
  • Multiplexing processor 114 multiplexes the encoded video and the encoded audio and/or user data streams (as received from bitstream processor 112 and audio encoder 113) and generates, according to a sequence of optimized multiplexing commands, MPEG standard format streams such as a packetized elementary stream, program stream, transport stream and the like. Multiplexing processor 114 transfers the multiplexed video/audio/data streams to a compressed data stream output and to memory controller 110. Multiplexing processor 114 can output two multiplexed streams, each containing one video stream and one audio stream, or one multiplexed stream containing all four input streams (two video streams and two audio streams).
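The core of multiplexing as defined earlier, merging independently produced streams into one synchronized stream, can be sketched as a timestamp-ordered merge. This is an illustrative reduction, not the MUX 114 implementation: packets here are bare (timestamp, payload) tuples, whereas real transport-stream assembly adds headers, clock references and stuffing.

```python
# Illustrative sketch: merge timestamped packet streams into one
# stream ordered by presentation timestamp.
import heapq

def multiplex(*streams):
    """Merge packet streams, each a list of (pts, payload) tuples
    sorted by pts, into a single pts-ordered stream."""
    return list(heapq.merge(*streams, key=lambda pkt: pkt[0]))
```

Ordering by timestamp is what lets a downstream demultiplexer present audio and video in sync even though the encoders produced them independently.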
  • Global controller 104 controls and schedules the video input buffer 102, the motion estimation processors 105 and 106, the digital signal processor 108, the memory controller 110, the bitstream processor 112, the I2C/GPIO interface 116, and the multiplexing processor 114. Global controller 104 is a central control unit that synchronizes and controls all of the internal chip units, communicating with them over data-instruction-device buses.
  • In an embodiment of the present invention, the I2C/GPIO interface 116 may be used to program an external video A/D or an external audio A/D through the single chip device 100. Any other device that is compatible with the I2C protocol may also be programmed through the device 100 using the I2C/GPIO interface 116. The I2C/GPIO interface 116 may be configured as any of multiple types of interfaces in order to communicate with other devices on the same board as the single chip device 100. In an embodiment of the present invention, the I2C/GPIO interface 116 is configured (programmed) through the host interface or global controller 104 using microcode. FIG. 7 illustrates a block diagram of the I2C/GPIO interface 116 in accordance with an embodiment of the present invention.
  • An embodiment of the present invention provides a digital video broadcasting (DVB) formatter 117 as part of the mux processor 114. The DVB formatter 117 enables an encoded multiplexed stream to be converted to a standard DVB format and transmitted directly from the device 100 to another chip without going through a host interface or PCI interface. The host processor need not be involved in the transfer of the encoded data when the DVB interface is used. The DVB interface thus provides a powerful yet smaller interface for transferring encoded data to, for example, a CD burner or a decoder chip.
  • FIG. 8 is a block diagram and timing diagram illustrating the signals and timing output by the DVB formatter 117 in accordance with an embodiment of the present invention. FIG. 8 illustrates a typical system for parallel transmission of a transport stream at either a constant or a variable rate. The clock (CLOCK), the 8-bit data (Data), and the PSYNC signal are transmitted in parallel. The PSYNC signal marks the sync byte of the transport header and is transmitted every 188 bytes. The DVALID signal is a constant 1 in the 188-byte mode. All signals are synchronous to the clock, which is set according to the transport bit rate and the width of the data bus.
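The per-byte signalling described above can be modeled in a few lines. This is a behavioral sketch of the 188-byte mode, not the formatter hardware; the function and constant names are illustrative.

```python
TS_PACKET_LEN = 188  # MPEG-2 transport packet length in bytes
SYNC_BYTE = 0x47     # transport-header sync byte

def dvb_parallel_signals(stream):
    """Yield (data, psync, dvalid) per byte, mirroring the FIG. 8 timing:
    PSYNC pulses on each packet's sync byte, DVALID stays at 1."""
    for i, byte in enumerate(stream):
        psync = 1 if i % TS_PACKET_LEN == 0 else 0  # asserted every 188 bytes
        dvalid = 1                                  # constant 1 in 188-byte mode
        yield byte, psync, dvalid
```

Feeding two back-to-back 188-byte packets through this model produces a PSYNC pulse on byte 0 and byte 188, with DVALID high throughout, as in the figure.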
  • An embodiment of the present invention provides a vertical blanking interval (VBI) extractor 103 as part of the VIB 102. In general, analog video data may contain user data such as closed caption information or other user information. For example, a CCIR 656 video signal may typically contain uncompressed video data in a picture interval and user data in a VBI interval. The user data is transmitted during the VBI of the video signal where picture data is not present.
  • The VBI extractor 103 in the VIB 102 extracts the user data from the VBI of the CCIR 656 video stream. The extracted user data is then either sliced using microcode in the mux processor 114 and inserted into the encoded stream, or sliced using microcode in the global controller 104 or BSM 112 and inserted into the unencoded stream. Slicing comprises breaking the user data up into smaller groups. For example, a picture line represented as a large number of bytes may be sliced into groups of a smaller number of bytes.
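The slicing operation described above amounts to chunking a byte sequence. A minimal sketch, assuming a caller-chosen group size (the passage does not specify the grouping the microcode actually uses):

```python
def slice_user_data(line_bytes, group_size):
    """Break extracted VBI user data into smaller groups, as described above.
    group_size is a free parameter for illustration."""
    return [line_bytes[i:i + group_size]
            for i in range(0, len(line_bytes), group_size)]
```

For instance, a 720-byte picture line sliced with a group size of 45 yields 16 equal groups ready for insertion into the stream.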
  • FIG. 9 illustrates how the VBI extractor 103 may extract user data from specified lines of a video signal in accordance with an embodiment of the present invention. Several modes may be supported by the VBI extractor 103 and subsequent slicing, including a generic VBI mode. In the generic VBI mode, the user defines which pels of which video lines (e.g., line 6 through line 21) of each field (top, bottom) are to be extracted and further transmitted in the compressed stream.
  • Several registers are used to control the VBI extractor 103. A first register determines the video lines of the top field to be extracted in generic VBI mode. Each bit of the first register corresponds to a certain video line (see FIG. 9). Through setting the bits of the first register, the user selects the video lines of the top field to be extracted.
  • A second register determines the video lines of the bottom field to be extracted in generic VBI mode. Each bit of the second register corresponds to a certain video line (see FIG. 9). Through setting the bits of the second register, the user selects the video lines of the bottom field to be extracted.
  • A third and fourth register determine the pixel interval within a video line of the top field of each frame to be extracted and transmitted in the compressed stream. The content of the third and fourth registers may range from 0 to 720, and a START value must be less than an END value.
  • A fifth and sixth register determine the pixel interval within a video line of the bottom field of each frame to be extracted and transmitted in the compressed stream. The content of the fifth and sixth registers may range from 0 to 720, and a START value must be less than an END value.
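The register semantics above can be sketched as follows. The bit-0-to-line-6 mapping is an assumption for illustration only; the actual bit-to-line assignment is defined by FIG. 9, and both function names are hypothetical.

```python
def selected_lines(line_register, first_line=6):
    """Decode a line-select register: each set bit enables one video line.
    Assumes bit 0 maps to first_line (illustrative; see FIG. 9)."""
    lines, bit = [], 0
    while line_register >> bit:
        if (line_register >> bit) & 1:
            lines.append(first_line + bit)
        bit += 1
    return lines

def extract_pel_interval(line_pels, start, end):
    """Apply the START/END register pair to one video line.
    Per the text, values range 0..720 and START must be less than END."""
    if not (0 <= start < end <= 720):
        raise ValueError("START must be less than END, both within 0..720")
    return line_pels[start:end]
```

For example, a register value of `0b101` would (under the assumed mapping) select lines 6 and 8 of the field, and a START/END pair of 10/14 would extract pels 10 through 13 of each selected line.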
  • The various elements of device 100 may be combined or separated according to various embodiments of the present invention.
  • Also, the various elements may be implemented as various combinations of programmable and non-programmable hardware elements.
  • In summary, certain embodiments of the present invention afford an approach to perform video and audio encoding on a single chip to generate a stream of encoded video and audio data for use in various applications such as personal video recorders, DVD recorders, and set top box recorders. In other words, the system of the present invention enables a single chip that encodes video and audio (and any other system data desired) and generates therefrom a stream of encoded data.
  • While the invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from its scope. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (21)

  1-9. (canceled)
  10. A single integrated circuit, comprising:
    multiplexer circuitry configured to receive first video data, first audio data, second video data, and second audio data;
    the multiplexer circuitry being configured, in a first mode, to multiplex the first video data, the first audio data, the second video data, and the second audio data into a first multiplexed stream operably coupled via a first output to circuitry external to the single integrated circuit; and
    the multiplexer circuitry being configured, in a second mode, to:
    multiplex the first video data and the first audio data into the first multiplexed stream operably coupled via the first output to circuitry external to the single integrated circuit; and
    multiplex the second video data and the second audio data into a second multiplexed stream operably coupled via a second output to circuitry external to the single integrated circuit.
  11. The single integrated circuit of claim 10, wherein the first video data, the first audio data, the second video data, and the second audio data are each compressed.
  12. The single integrated circuit of claim 11, further comprising encoder circuitry configured to:
    receive first uncompressed video data from a first video source external to the single integrated circuit and compress the first uncompressed video data to produce the first video data;
    receive first uncompressed audio data from a first audio source external to the single integrated circuit and compress the first uncompressed audio data to produce the first audio data;
    receive second uncompressed video data from a second video source external to the single integrated circuit and compress the second uncompressed video data to produce the second video data; and
    receive second uncompressed audio data from a second audio source external to the single integrated circuit and compress the second uncompressed audio data to produce the second audio data.
  13. The single integrated circuit of claim 12, wherein the encoder circuitry further comprises:
    a first memory interface that interfaces directly with first storage external to the single integrated circuit, the first uncompressed video data and the first uncompressed audio data being received via the first memory interface; and
    a second memory interface that interfaces directly with second storage external to the single integrated circuit, the second uncompressed video data and the second uncompressed audio data being received via the second memory interface.
  14. The single integrated circuit of claim 12, further comprising controller circuitry configured to synchronize operation of the multiplexer circuitry and the encoder circuitry.
  15. The single integrated circuit of claim 14, further comprising a bus interface that operably couples the controller circuitry and a processor external to the single integrated circuit.
  16. The single integrated circuit of claim 10, further comprising circuitry configured to:
    receive the first video data from a first video source external to the single integrated circuit;
    receive the first audio data from a first audio source external to the single integrated circuit;
    receive the second video data from a second video source external to the single integrated circuit; and
    receive the second audio data from a second audio source external to the single integrated circuit.
  17. The single integrated circuit of claim 10, wherein each of the first audio data and the second audio data represents at least two audio channels.
  18. The single integrated circuit of claim 10, further comprising a corresponding plurality of search processors for performing motion analysis on each of the first video data and the second video data.
  19. The single integrated circuit of claim 18, wherein the corresponding plurality of search processors is configured to operate in parallel upon a single macroblock, and each search processor operates at a different one of a plurality of resolutions.
  20. A method, comprising:
    receiving, by multiplexer circuitry in a single integrated circuit, first video data, first audio data, second video data, and second audio data;
    responsive to the multiplexer circuitry being in a first mode, multiplexing, by the multiplexer circuitry, the first video data, the first audio data, the second video data, and the second audio data into a first multiplexed stream operably coupled via a first output to circuitry external to the single integrated circuit; and
    responsive to the multiplexer circuitry being in a second mode:
    multiplexing, by the multiplexer circuitry, the first video data and the first audio data into the first multiplexed stream operably coupled via the first output to circuitry external to the single integrated circuit; and
    multiplexing, by the multiplexer circuitry, the second video data and the second audio data into a second multiplexed stream operably coupled via a second output to circuitry external to the single integrated circuit.
  21. The method of claim 20, wherein the first video data, the first audio data, the second video data, and the second audio data are each compressed.
  22. The method of claim 21, further comprising:
    receiving, by encoder circuitry in the single integrated circuit, first uncompressed video data from a first video source external to the single integrated circuit and compressing the first uncompressed video data to produce the first video data;
    receiving, by the encoder circuitry, first uncompressed audio data from a first audio source external to the single integrated circuit and compressing the first uncompressed audio data to produce the first audio data;
    receiving, by the encoder circuitry, second uncompressed video data from a second video source external to the single integrated circuit and compressing the second uncompressed video data to produce the second video data; and
    receiving, by the encoder circuitry, second uncompressed audio data from a second audio source external to the single integrated circuit and compressing the second uncompressed audio data to produce the second audio data.
  23. The method of claim 22, further comprising:
    receiving, by the encoder circuitry, the first uncompressed video data and the first uncompressed audio data via a first memory interface that interfaces directly with first storage external to the single integrated circuit; and
    receiving, by the encoder circuitry, the second uncompressed video data and the second uncompressed audio data via a second memory interface that interfaces directly with second storage external to the single integrated circuit.
  24. The method of claim 22, further comprising synchronizing, by controller circuitry in the single integrated circuit, operation of the multiplexer circuitry and the encoder circuitry.
  25. The method of claim 24, further comprising operably coupling the controller circuitry and a processor external to the single integrated circuit via a bus interface.
  26. The method of claim 20, further comprising:
    receiving, by the single integrated circuit, the first video data from a first video source external to the single integrated circuit;
    receiving, by the single integrated circuit, the first audio data from a first audio source external to the single integrated circuit;
    receiving, by the single integrated circuit, the second video data from a second video source external to the single integrated circuit; and
    receiving, by the single integrated circuit, the second audio data from a second audio source external to the single integrated circuit.
  27. The method of claim 20, further comprising performing a motion analysis on each of the first video data and the second video data using a corresponding plurality of search processors in the single integrated circuit.
  28. The method of claim 27, wherein the corresponding plurality of search processors is configured to operate in parallel upon a single macroblock, and each search processor operates at a different one of a plurality of resolutions.
  29. A single-chip audio/video encoder device, comprising:
    a first encoder configured to generate first compressed video data and first compressed audio data;
    a second encoder configured to generate second compressed video data and second compressed audio data;
    a multiplexer configured to receive the first compressed video data and the first compressed audio data from the first encoder and to receive the second compressed video data and the second compressed audio data from the second encoder;
    the multiplexer being configured, in a first mode, to multiplex the first compressed video data, the first compressed audio data, the second compressed video data, and the second compressed audio data into a first multiplexed stream operably coupled via a first output to circuitry external to the device; and
    the multiplexer being configured, in a second mode, to:
    multiplex the first compressed video data and the first compressed audio data into the first multiplexed stream operably coupled via the first output to circuitry external to the device; and
    multiplex the second compressed video data and the second compressed audio data into a second multiplexed stream operably coupled via a second output to circuitry external to the device.
US13587546 1999-04-06 2012-08-16 System and Method for Video and Audio Encoding on a Single Chip Abandoned US20130039418A1 (en)

Priority Applications (7)

Application Number Priority Date Filing Date Title
IL12934599 1999-04-06
IL129345 1999-04-06
US09543904 US6690726B1 (en) 1999-04-06 2000-04-06 Video encoding and video/audio/data multiplexing device
US29676601 true 2001-06-11 2001-06-11
US29676801 true 2001-06-11 2001-06-11
US10170019 US8270479B2 (en) 1999-04-06 2002-06-11 System and method for video and audio encoding on a single chip
US13587546 US20130039418A1 (en) 1999-04-06 2012-08-16 System and Method for Video and Audio Encoding on a Single Chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13587546 US20130039418A1 (en) 1999-04-06 2012-08-16 System and Method for Video and Audio Encoding on a Single Chip

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10170019 Continuation US8270479B2 (en) 1999-04-06 2002-06-11 System and method for video and audio encoding on a single chip

Publications (1)

Publication Number Publication Date
US20130039418A1 true true US20130039418A1 (en) 2013-02-14

Family

ID=27389752

Family Applications (2)

Application Number Title Priority Date Filing Date
US10170019 Active 2023-06-19 US8270479B2 (en) 1999-04-06 2002-06-11 System and method for video and audio encoding on a single chip
US13587546 Abandoned US20130039418A1 (en) 1999-04-06 2012-08-16 System and Method for Video and Audio Encoding on a Single Chip

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10170019 Active 2023-06-19 US8270479B2 (en) 1999-04-06 2002-06-11 System and method for video and audio encoding on a single chip

Country Status (3)

Country Link
US (2) US8270479B2 (en)
EP (1) EP1430706A4 (en)
WO (1) WO2002102049A3 (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6690726B1 (en) * 1999-04-06 2004-02-10 Broadcom Corporation Video encoding and video/audio/data multiplexing device
US7489362B2 (en) * 2003-03-04 2009-02-10 Broadcom Corporation Television functionality on a chip
EP1555821A1 (en) * 2004-01-13 2005-07-20 Sony International (Europe) GmbH Method for pre-processing digital data, digital to analog and analog to digital conversion system
US20050285975A1 (en) * 2004-06-28 2005-12-29 Broadcom Corporation Software implementing parts of a blanking interval encoder/decoder
US8130841B2 (en) * 2005-12-29 2012-03-06 Harris Corporation Method and apparatus for compression of a video signal
DE102006020650B3 (en) * 2006-05-02 2007-08-23 Thyssenkrupp Presta Ag Steering column for steer-by-wire guidance system of motor vehicle, has energy absorption structure with magneto rheological elastomer, which is deformable during crashfall, where structure is subjected to changeable magnetic field
US7720251B2 (en) 2006-06-23 2010-05-18 Echo 360, Inc. Embedded appliance for multimedia capture
US8345996B2 (en) * 2008-07-07 2013-01-01 Texas Instruments Incorporated Determination of a field referencing pattern
WO2011094346A1 (en) * 2010-01-26 2011-08-04 Hobbs Barry L Integrated concurrent multi-standard encoder, decoder and transcoder
CN102810085A (en) * 2011-06-03 2012-12-05 鸿富锦精密工业(深圳)有限公司 PCI-E expansion system and method
WO2013003698A3 (en) 2011-06-30 2014-05-08 Echo 360, Inc. Methods and apparatus for an embedded appliance
CN102931546A (en) * 2011-08-10 2013-02-13 鸿富锦精密工业(深圳)有限公司 Connector assembly
CN103179435B (en) * 2013-02-27 2016-09-28 北京视博数字电视科技有限公司 A multi-channel video data multiplexing method and apparatus
US9333433B2 (en) * 2014-02-04 2016-05-10 Sony Computer Entertainment America Llc Online video game service with split clients


Family Cites Families (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6124882A (en) * 1992-02-19 2000-09-26 8×8, Inc. Videocommunicating apparatus and method therefor
US6121998A (en) * 1992-02-19 2000-09-19 8×8, Inc. Apparatus and method for videocommunicating having programmable architecture permitting data revisions
US5719630A (en) * 1993-12-10 1998-02-17 Nec Corporation Apparatus for compressive coding in moving picture coding device
US5874997A (en) * 1994-08-29 1999-02-23 Futuretel, Inc. Measuring and regulating synchronization of merged video and audio data
US5663962A (en) * 1994-09-29 1997-09-02 Cselt- Centro Studi E Laboratori Telecomunicazioni S.P.A. Method of multiplexing streams of audio-visual signals coded according to standard MPEG1
US5982459A (en) * 1995-05-31 1999-11-09 8×8, Inc. Integrated multimedia communications processor and codec
US5625693A (en) * 1995-07-07 1997-04-29 Thomson Consumer Electronics, Inc. Apparatus and method for authenticating transmitting applications in an interactive TV system
US5956674A (en) * 1995-12-01 1999-09-21 Digital Theater Systems, Inc. Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels
US5784572A (en) * 1995-12-29 1998-07-21 Lsi Logic Corporation Method and apparatus for compressing video and voice signals according to different standards
US5852473A (en) * 1996-02-20 1998-12-22 Tektronix, Inc. 3-2 pulldown detector
US6018768A (en) * 1996-03-08 2000-01-25 Actv, Inc. Enhanced video programming system and method for incorporating and displaying retrieved integrated internet information segments
US6157674A (en) * 1996-03-21 2000-12-05 Sony Corporation Audio and video data transmitting apparatus, system, and method thereof
US5764803A (en) * 1996-04-03 1998-06-09 Lucent Technologies Inc. Motion-adaptive modelling of scene content for very low bit rate model-assisted coding of video sequences
US5793425A (en) * 1996-09-13 1998-08-11 Philips Electronics North America Corporation Method and apparatus for dynamically controlling encoding parameters of multiple encoders in a multiplexed system
JP3926873B2 (en) * 1996-10-11 2007-06-06 株式会社東芝 Computer system
DE19652362A1 * 1996-12-17 1998-06-18 Thomson Brandt Gmbh Method and apparatus for compensating luminance defects caused by the processing of chrominance signals
WO1999020051A1 (en) * 1997-10-15 1999-04-22 Sony Corporation Video data multiplexer, video data multiplexing control method, method and apparatus for multiplexing encoded stream, and encoding method and apparatus
CN1146205C (en) * 1997-10-17 2004-04-14 皇家菲利浦电子有限公司 Method of encapsulation of data into transport packets of constant size
US6823013B1 (en) * 1998-03-23 2004-11-23 International Business Machines Corporation Multiple encoder architecture for extended search
US6101591A (en) * 1998-03-25 2000-08-08 International Business Machines Corporation Method and system for selectively independently or simultaneously updating multiple system time clocks in an MPEG system
US6259733B1 (en) * 1998-06-16 2001-07-10 General Instrument Corporation Pre-processing of bit rate allocation in a multi-channel video encoder
DE69926689T2 (en) 1998-06-18 2006-06-08 Sony Corp. Apparatus and method for transmitting information, apparatus and method for receiving information, apparatus for providing a computer-readable program and television transmission system
US6347344B1 (en) * 1998-10-14 2002-02-12 Hitachi, Ltd. Integrated multimedia system with local processor, data transfer switch, processing modules, fixed functional unit, data streamer, interface unit and multiplexer, all integrated on multimedia processor
US6665872B1 (en) * 1999-01-06 2003-12-16 Sarnoff Corporation Latency-based statistical multiplexing
US6466258B1 (en) * 1999-02-12 2002-10-15 Lockheed Martin Corporation 911 real time information communication
US6490250B1 (en) * 1999-03-09 2002-12-03 Conexant Systems, Inc. Elementary stream multiplexer
US6690726B1 (en) * 1999-04-06 2004-02-10 Broadcom Corporation Video encoding and video/audio/data multiplexing device
US6795506B1 (en) * 1999-10-05 2004-09-21 Cisco Technology, Inc. Methods and apparatus for efficient scheduling and multiplexing
US7068724B1 (en) * 1999-10-20 2006-06-27 Prime Research Alliance E., Inc. Method and apparatus for inserting digital media advertisements into statistical multiplexed streams
US6493388B1 (en) * 2000-04-19 2002-12-10 General Instrument Corporation Rate control and buffer protection for variable bit rate video programs over a constant rate channel

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5159447A (en) * 1991-05-23 1992-10-27 At&T Bell Laboratories Buffer control for variable bit-rate channel
US5719982A (en) * 1994-12-15 1998-02-17 Sony Corporation Apparatus and method for decoding data
US5610659A (en) * 1995-05-08 1997-03-11 Futuretel, Inc. MPEG encoder that concurrently determines video data encoding format and rate control
US5825430A (en) * 1995-12-20 1998-10-20 Deutsche Thomson Brandt Gmbh Method, encoder and decoder for the transmission of digital signals which are hierarchically structured into a plurality of parts
US5963256A (en) * 1996-01-11 1999-10-05 Sony Corporation Coding according to degree of coding difficulty in conformity with a target bit rate
US5903261A (en) * 1996-06-20 1999-05-11 Data Translation, Inc. Computer based video system
US5959677A (en) * 1996-12-13 1999-09-28 Hitachi, Ltd. Digital data transmission system
US6301558B1 (en) * 1997-01-16 2001-10-09 Sony Corporation Audio signal coding with hierarchical unequal error protection of subbands
US20010040903A1 (en) * 1997-03-28 2001-11-15 Shinji Negishi Multiplexing apparatus and method, transmitting apparatus and method, and recording medium
US6363211B1 (en) * 1997-05-23 2002-03-26 Sony Corporation Data recording apparatus and method, data reproducing apparatus and method, data recording/reproducing apparatus and method, and transmission medium
US20020071655A1 (en) * 1997-09-10 2002-06-13 Keiji Kanota Information recording method and apparatus and information recording medium
US6684026B2 (en) * 1997-09-10 2004-01-27 Sony Corporation Information recording method and apparatus and information recording medium
US6427150B1 (en) * 1998-05-06 2002-07-30 Matsushita Electric Industrial Co., Ltd. System and method for digital data communication
US6359911B1 (en) * 1998-12-04 2002-03-19 Koninklijke Philips Electronics N.V. (Kpenv) MPEG-2 transport demultiplexor architecture with non-time-critical post-processing of packet information
US7830881B2 (en) * 2002-07-16 2010-11-09 Panasonic Corporation Content receiver and content transmitter

Also Published As

Publication number Publication date Type
EP1430706A2 (en) 2004-06-23 application
WO2002102049A3 (en) 2003-04-03 application
US8270479B2 (en) 2012-09-18 grant
WO2002102049A2 (en) 2002-12-19 application
US20030108105A1 (en) 2003-06-12 application
EP1430706A4 (en) 2011-05-18 application


Legal Events

Date Code Title Description
AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MORAD, AMIR;YAVITS, LEONID;OXMAN, GADI;AND OTHERS;SIGNING DATES FROM 20001123 TO 20021027;REEL/FRAME:029211/0592

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119