US20100060641A9 - Video and graphics system with square graphics pixels - Google Patents

Video and graphics system with square graphics pixels

Info

Publication number
US20100060641A9
Authority
US
United States
Prior art keywords
video
graphics
pixels
image
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US11/928,955
Other versions
US7719547B2 (en)
US20080049021A1 (en)
Inventor
Alexander MacInnis
Sheng Zhong
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies International Sales Pte Ltd
Original Assignee
Broadcom Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Broadcom Corp filed Critical Broadcom Corp
Priority to US11/928,955 priority Critical patent/US7719547B2/en
Publication of US20080049021A1 publication Critical patent/US20080049021A1/en
Publication of US20100060641A9 publication Critical patent/US20100060641A9/en
Priority to US12/752,304 priority patent/US20100182318A1/en
Application granted granted Critical
Publication of US7719547B2 publication Critical patent/US7719547B2/en
Assigned to BROADCOM CORPORATION reassignment BROADCOM CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MACINNIS, ALEXANDER G., ZHONG, SHENG
Assigned to BANK OF AMERICA, N.A., AS COLLATERAL AGENT reassignment BANK OF AMERICA, N.A., AS COLLATERAL AGENT PATENT SECURITY AGREEMENT Assignors: BROADCOM CORPORATION
Assigned to AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. reassignment AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BROADCOM CORPORATION
Assigned to BROADCOM CORPORATION reassignment BROADCOM CORPORATION TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS Assignors: BANK OF AMERICA, N.A., AS COLLATERAL AGENT
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED reassignment AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED MERGER (SEE DOCUMENT FOR DETAILS). Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED reassignment AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE OF MERGER TO 9/5/2018 PREVIOUSLY RECORDED AT REEL: 047196 FRAME: 0687. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER. Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Assigned to AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED reassignment AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED CORRECTIVE ASSIGNMENT TO CORRECT THE PROPERTY NUMBERS PREVIOUSLY RECORDED AT REEL: 47630 FRAME: 344. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT. Assignors: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/44: Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445: Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N5/44504: Circuit details of the additional information generator, e.g. details of the character or graphics signal generator, overlay mixing circuits
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36: Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • G09G5/363: Graphics controllers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41: Structure of client; Structure of client peripherals
    • H04N21/426: Internal components of the client; Characteristics thereof
    • H04N21/42653: Internal components of the client; Characteristics thereof for processing graphics
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00: Details of colour television systems
    • H04N9/64: Circuits for processing colour signals
    • H04N9/641: Multi-purpose receivers, e.g. for auxiliary information
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00: Aspects of display data processing
    • G09G2340/02: Handling of images in compressed format, e.g. JPEG, MPEG
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00: Aspects of display data processing
    • G09G2340/04: Changes in size, position or resolution of an image
    • G09G2340/0407: Resolution change, inclusive of the use of different resolutions for different screen areas
    • G09G2340/0421: Horizontal resolution change
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00: Aspects of display data processing
    • G09G2340/12: Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • G09G2340/125: Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G: ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00: Aspects of the architecture of display systems
    • G09G2360/12: Frame memory handling
    • G09G2360/125: Frame memory handling using unified memory architecture [UMA]

Definitions

  • the present invention relates generally to integrated circuits and systems, and more particularly to a system for processing and displaying video and graphics.
  • Video images are often provided as ITU-R 601 compliant images (e.g., NTSC with 704×480 pixels) that have a 13.5 MHz display sample rate. These video images typically have oblong pixels.
  • graphics images are often provided with square pixels with a sample rate of 12.27 MHz. For example, a typical graphics image may have 640×480 pixels.
  • video and graphics are often overlaid on top of each other to provide a blended image. It is desirable to combine the video and graphics in such a way that both retain their individual and correct pixel aspect ratios. It is also desirable to perform this function without modifying the video samples or changing the video sample rate. However, overlaying the video and graphics is often difficult due to their different image sizes and pixel aspect ratios.
  • a video and graphics system has first and second inputs.
  • the first input is used to receive a graphics image.
  • the graphics image contains graphics pixels, which have square pixel aspect ratio.
  • the second input is used to receive a video image containing video pixels, which have non-square pixel aspect ratio.
  • the video image has a larger number of pixels per scan line than the graphics image.
  • the video and graphics system includes a sample rate converter for converting sample rate of the graphics image so that the number of graphics pixels per scan line of the graphics image becomes greater than or equal to the number of video pixels per scan line of the video image.
  • the video and graphics system includes a video compositor for blending the graphics image with the video image.
  • the graphics image is scaled horizontally so that the graphics image can be overlaid on the full width of the video image, and the square pixel aspect ratio of the graphics pixels is maintained.
  • a method of blending a graphics image with a video image is provided.
  • the graphics image containing graphics pixels having square pixel aspect ratio is received.
  • the video image containing video pixels having non-square pixel aspect ratio is also received.
  • the video image has a larger number of pixels per scan line than the graphics image.
  • a sample rate of the graphics image is converted so that the number of graphics pixels per scan line of the graphics image becomes greater than or equal to the number of video pixels per scan line of the video image.
  • the graphics image is blended with the video image.
  • the graphics image is scaled horizontally so that the graphics image can be overlaid on the full width of the video image, and the square pixel aspect ratio of the graphics pixels is maintained.
  • a video and graphics system has first input means for receiving a graphics image containing graphics pixels, which have square pixel aspect ratio.
  • the video and graphics system also has second input means for receiving a video image comprising video pixels, which have non-square pixel aspect ratio.
  • the video image has a larger number of pixels per scan line than the graphics image.
  • the video and graphics system includes means for converting sample rate of the graphics image so that the number of graphics pixels per scan line of the graphics image becomes greater than or equal to the number of video pixels per scan line of the video image.
  • the video and graphics system includes means for blending the graphics image with the video image.
  • the graphics image is scaled horizontally so that the graphics image can be overlaid on the full width of the video image, and the square pixel aspect ratio of the graphics pixels is maintained.
  • FIG. 1 is a block diagram of an integrated circuit chip, which embodies the video and graphics system of the present invention, coupled to the CPU and other devices;
  • FIG. 2 is a block diagram of a video and graphics system in an embodiment according to the present invention.
  • FIG. 3 is a block diagram of a video-graphics display and scale engine in an embodiment according to the present invention.
  • FIG. 4 is a frequency response diagram of an 11-phase, 5-tap filter for luma processing to generate square graphics pixels in an embodiment according to the present invention.
  • One embodiment of the present invention is a video and graphics system with square graphics pixels.
  • the display sample rate of graphics images preferably is converted to match the display sample rate of video images so as to facilitate blending the two, while maintaining a square pixel aspect ratio.
  • Displayed graphics images are scaled along the horizontal axis. For example, a 640×480 graphics image is scaled to match the 704×480 image size of NTSC-compatible video, such that both the video and graphics fill the same area on the display.
  • the display sample rate of the video images may be converted to match the display sample rate of the graphics images.
  • the display sample rate of both the video images and the graphics images may be converted.
  • an integrated circuit 100 includes one embodiment of the video and graphics system according to the present invention. In other embodiments, the system may be implemented using two or more separate integrated circuit chips.
  • the integrated circuit 100 may include inputs 112 for receiving multiple compressed data streams.
  • the compressed data streams may include but are not limited to MPEG-2 Transport streams.
  • the integrated circuit 100 may also include an analog input 116 for receiving analog video signals 114 .
  • the analog video signals may include but are not limited to NTSC, PAL, Y/C (S-video), SECAM, RGB, YPrPb, YCrCb, or other analog video signals in SDTV or HDTV format that include video and/or graphics information.
  • the color components of the graphics and video signals may be in any of a number of formats, including but not limited to, YUV, YCrCb, YPrPb, HLS, and HSV. There may be multiple definitions of each of these terms. Exemplary definitions of some of these signal formats may be found in ITU-R Rec. BT.601 and ITU-R Rec. BT.709.
  • the integrated circuit 100 may also include an output 128 for providing a video output signal 126 , and an output 132 for providing an audio output signal 130 .
  • the video output signal 126 may include digital or analog video signals.
  • the digital video signals may include video signals to be displayed on Digital Visual Interface (DVI)-compliant monitors.
  • the digital video signals may also be provided to an on-chip or off-chip device that may encrypt the output.
  • the integrated circuit 100 may also include a bus 120 for communicating with PCI devices 118 and a bus 124 to interface with I/O devices 122 such as read-only memory (ROM), flash memory, and/or other devices.
  • the integrated circuit may further include a bus 104 for transferring data to and from memory 102 and a bus 108 for connecting to a CPU 106 .
  • Graphics data for display preferably is produced by any suitable graphics library software, such as Direct Draw marketed by Microsoft Corporation, and is read from the CPU 106 into the memory 102 .
  • the memory 102 preferably is a unified memory that is shared by the system, the CPU 106 and other peripheral components.
  • the CPU preferably uses the unified memory for its code and data while the video and graphics system preferably performs all graphics, video, audio and display functions using the same unified memory.
  • FIG. 2 is a block diagram of one embodiment of the video and graphics system implemented in the integrated circuit 100 .
  • the video and graphics system preferably includes a data transport 200 , a video transport 202 , a video RISC 204 , two row RISCs 206 , 208 , an audio decode processor (ADP) 214 , a graphics accelerator 224 , a DMA engine 226 , a memory controller 234 , an analog video decoder (VDEC) with a 10-bit analog-to-digital converter (ADC) 236 , a video-graphics display and scale engine 238 , a set of video DACs 240 , a PCI bridge 242 , an I/O bus bridge with DMA 244 , a CPU interface block 246 , a PCM audio 250 , an audio DAC 252 , and a video encoder (VEC) 254 .
  • the data transport 200 , the video transport 202 , the video RISC 204 , the row RISCs 206 , 208 , and the ADP 214 preferably perform transport and decode functions of the video and graphics system, which may include MPEG-2 Transport and video decoding.
  • the video and graphics system preferably includes multiple transport processors.
  • the video and graphics system may include three transport processors.
  • the compressed data streams, which may include in-band and out-of-band MPEG Transport streams IB 1 (in-band 1), IB 2 (in-band 2) and OOB, preferably are provided to the data transport 200 and the video transport 202.
  • the data transport 200 preferably performs PID and section filtering of the compressed data streams.
  • the data transport preferably provides message data obtained through section filtering to the memory controller 234 for storage in the external memory, e.g., SDRAM.
  • the data transport preferably also performs descrambling of encrypted transport streams.
  • the encrypted transport streams may have been encrypted using, e.g., DES, DVD or other encryption method.
  • the data transport provides the descrambled compressed data streams to the video transport 202 and the audio decode processor (ADP) 214 .
  • the video transport preferably extracts the bit stream for video, which may include MPEG-2 video.
  • the video transport 202 preferably extracts compressed MPEG video data by removing transport stream (TS) headers and packetized elementary stream (PES) headers from the compressed data streams. Then the video transport preferably provides the compressed video data, which may include MPEG video data, for processing in the video RISC 204 .
  • the compressed data streams may also include other types of packetized data streams such as DIRECTV transport streams.
  • DIRECTV is a trademark of DIRECTV, Inc.
  • the video RISC 204 and the row RISCs 206 , 208 make up a digital video decoder, which may be an MPEG-2 video decoder.
  • the digital video decoder preferably decodes the compressed video data and provides it to the memory controller 234 to be stored temporarily in an external memory, e.g., SDRAM.
  • the complex MPEG video decode process preferably is partitioned into multiple concurrently operable decode functions.
  • the digital video decoder preferably decodes multiple rows of the compressed MPEG-2 video data concurrently.
  • the video RISC 204 preferably parses and processes layers of compressed MPEG-2 video data above the SLICE layer, i.e., SEQUENCE, group of pictures (GOP), EXTENSION and PICTURE layers.
  • the two row RISCs 206 , 208 preferably are used for SLICE layer, macroblock layer and block layer decoding and processing. Row decode paths associated with the row RISCs preferably are used for full speed processing of time critical functions at the macroblock and block layers.
  • Processors used in this embodiment are RISC processors. Other types of processors may be used in other embodiments.
  • the digital video decoder may scale frames by half when saving them to frame buffers. Thus, savings to memory size and bandwidth may result when the reference frames are saved for reconstruction of P-frames and B-frames.
  • the frames preferably are not scaled vertically during reconstruction.
  • the frame buffers preferably are implemented in external memory.
  • the ADP 214 preferably performs audio PID parsing to extract audio packets from the compressed data streams.
  • the ADP 214 preferably decodes the audio packets extracted from the compressed data streams.
  • the ADP 214 provides the decoded audio data to the PCM audio 250 for mixing with other audio signals.
  • the register bus bridge 216 preferably provides an interface between the internal CPU-register bus and the memory controller 234 .
  • the system uses 16-bit registers. In other embodiments, the system may use registers having other bit sizes.
  • the graphics accelerator 224 preferably performs graphics operations that may require intensive CPU processing, such as operations on three dimensional graphics images.
  • the graphics accelerator 224 preferably is implemented as a RISC processor optimized for performing real-time 3D and 2D effects on graphics and video surfaces.
  • the graphics accelerator preferably incorporates specialized graphics vector arithmetic functions for maximum performance with video and real-time graphics.
  • the graphics accelerator preferably performs a range of essential graphics and video operations with performance approaching that of hardwired approaches. At the same time, the graphics accelerator may be programmable so that it may meet new and evolving application requirements with firmware downloads in the field.
  • the DMA engine 226 preferably transfers data between the CPU and components of the system without interrupting the CPU. For example, CPU read and write operations as illustrated in CPU R/W block 218 are performed by the DMA engine 226 .
  • the memory controller 234 preferably reads and writes video and graphics data to and from memory by using burst accesses with burst lengths that may be assigned to each task.
  • the memory preferably is any suitable memory such as an SDRAM. All functions within the system preferably share the same memory having a unified memory architecture (UMA), with real-time performance of all of the hard real time functions. CPU accesses of code and data preferably are performed as quickly and efficiently as possible without impairing the video, graphics, and audio functions. Memory preferably is utilized very efficiently by performing burst accesses with burst lengths optimized for each task, and through careful optimization of the memory access patterns for MPEG video decoding.
  • the analog video decoder (VDEC) 236 preferably digitizes and processes analog input video to generate YUV component signals having separated luma and chroma components.
  • the VDEC 236 preferably includes a 10-bit CMOS video analog-to-digital converter (ADC) to digitize analog video directly.
  • the VDEC 236 may also include internal anti-aliasing filters which allow simple connections of normal analog video to the system.
  • the VDEC 236 preferably separates luma and chroma using an adaptive 2 H (3 line) comb filter, adaptive edge enhancement and noise coring.
  • the video-graphics display and scale engine 238 preferably takes graphics information from memory, blends the graphics information, and composites the blended graphics with video.
  • the video-graphics display and scale engine preferably performs display sample rate conversion of the blended graphics so as to facilitate blending of graphics and video, while maintaining square aspect ratio of the graphics pixels.
  • the video-graphics display and scale engine 238 preferably supports capturing of video as illustrated in a capture block 220 and preferably reads graphics from the external memory, e.g., SDRAM, as illustrated in a graphics read block 222 .
  • Decoded MPEG video preferably is provided to the video-graphics display and scale engine as indicated in MPEG display feeder blocks 1 and 2 228 , 230 .
  • the video-graphics display and scale engine preferably also receives a video window 232 .
  • the video-graphics display and scale engine 238 preferably also performs both downscaling and upscaling of MPEG video and analog video as needed.
  • the scale factors may be adjusted continuously from a scale factor of much less than one to a scale factor of four or more. With both analog and MPEG video input, either one may be scaled while the other is displayed full size at the same time. Any portion of the input may be the source for video scaling.
  • the video-graphics display and scale engine preferably downscales before capturing video frames to memory, and upscales after reading from memory.
  • the video-graphics display and scale engine may scale both the HDTV video and the SDTV video.
  • the video-graphics display and scale engine 238 provides HDTV video to be displayed while scaling the HDTV video down to SDTV format, and capturing into memory.
  • the HDTV video may be scaled and captured as an SDTV video either before or after compositing with graphics.
  • the HDTV video may also be scaled and captured as an SDTV video both before and after compositing with graphics.
  • the scaled and captured HDTV video may be recorded, e.g., using a standard video cassette recorder (VCR), while the HDTV video is being displayed on television.
  • the video-graphics display and scale engine 238 preferably provides the component video, e.g., RGB, YPrPb and YCrCb, to the set of video DACs 240 for digital-to-analog conversion.
  • the set of video DACs 240 includes five DACs.
  • the video-graphics display and scale engine 238 preferably provides the composite video, e.g., NTSC, PAL, Y/C video (S-video), to the VEC 254 for conversion into proper signal format.
  • the VEC 254 preferably provides the formatted composite video to the set of video DACs 240 to be converted to analog format.
  • the VEC 254 includes a set of video DACs, and thus the formatted composite video is converted to analog video in the VEC 254 .
  • the set of video DACs 240 preferably provide multiple digitized video outputs.
  • the digitized video outputs may include component video such as RGB and YPrPb, in addition to composite video in various formats such as composite video blanking and sync (CVBS) including NTSC and PAL composite video, and Y/C video (S-video).
  • the set of video DACs 240 includes five video DACs, and thus all of Y/C video, CVBS video and standard definition component video may be displayed simultaneously.
  • a system bridge controller 248 preferably provides a “north bridge” function by providing a bridge for the CPU to interface with multiple peripheral devices.
  • the system bridge controller preferably is comprised of the PCI (Peripheral Component Interconnect) bridge 242 , the I/O bus bridge with DMA 244 and the CPU interface block 246 .
  • the PCM audio 250 preferably receives decoded MPEG or Dolby AC-3 audio from the ADP 214 .
  • the PCM audio 250 preferably also receives I2S audio through an I2S input 262 and digitizes and captures it for mixing with other audio data.
  • the PCM audio 250 preferably supports applications that create and play audio locally within a set top box and allow mixing of the locally created audio with audio from a digital audio source, such as the MPEG audio or Dolby AC-3, and with digitized analog audio.
  • the PCM audio 250 preferably plays audio from an SDRAM in a variety of sample rates and formats. Both the captured analog audio and the local PCM audio may be played and mixed at the same time, even though they may have different sample rates and formats.
  • the PCM audio 250 preferably also provides digital audio output 276 in, e.g., SPDIF serial output format.
  • the audio DAC 252 provides the decoded and digital-to-analog converted MPEG and Dolby AC-3 audio component as an analog audio output 274 of the system.
  • the analog audio output 274 may also include other audio information such as I2S audio.
  • the VEC 254 converts between the HD video color space (YPrPb) and the standard definition YUV color space, and between either of those and RGB before converting to the respective outputs.
  • Video that was originally coded using YPrPb may be displayed in YPrPb for direct HD output, or converted to YUV for SD display via composite, Y/C or direct RGB output. This function preferably is available regardless of the resolution of the video.
  • Video that was originally coded using YUV may be output as composite, Y/C or RGB, or converted to YPrPb for direct HD output.
  • the HD YPrPb component output may support the specified tri-level sync.
  • the RGB output may also support optional sync on green, sync on RGB, or separate H and V sync on 2 Y/CVBS and C outputs, to support various types of standard definition and HD monitors.
  • FIG. 3 is a block diagram of the video-graphics display and scaling engine 238 in one embodiment of the present invention.
  • the video-graphics display and scaling engine includes a display engine 300 , a sample rate converter (SRC) 302 and a video compositor 304 .
  • the video-graphics display and scaling engine may also include other components (not shown) for processing video and graphics. In other embodiments, the SRC may be included in the display engine.
  • the video-graphics display and scaling engine 238 preferably receives video signals 306 and graphics signals 308 , and composites them to provide a video output 314 .
  • the video signals 306 preferably include one or more MPEG display feeds and video windows, and may include either or both an HDTV video and an SDTV video.
  • the graphics signals 308 may include graphics windows having various different formats such as YUV and RGB formats.
  • the display engine 300 preferably blends the graphics windows included in the graphics signals 308 to generate blended graphics 310 .
  • the SRC 302 preferably performs display sample rate conversion of the blended graphics to generate square graphics pixels 312 .
  • the video compositor 304 preferably composites the square graphics pixels 312 together with the video signals 306 .
  • any conventional or non-conventional display engine may be used as the display engine 300 for blending, filtering and scaling graphics.
  • one embodiment of the present invention incorporates the display engine used in one embodiment of the invention described in commonly owned U.S. patent application Ser. No. 09/641,374 filed Aug. 18, 2000 and entitled "Video, Audio and Graphics Decode, Composite and Display System," the contents of which have been incorporated by reference.
  • the display engine 300 preferably provides the blended graphics 310 having an image size of 640×480 pixels and a display sample rate of 12.27 MHz to the SRC 302.
  • the blended graphics have square graphics pixels that are provided to the SRC 302 .
  • the blended graphics 310 preferably are in YUV 4:2:2 format. Therefore, the blended graphics preferably include luma (Y) and chroma (U and V) component signals, and each graphics image in the blended graphics preferably includes 640×480 Y values, 320×480 U values and 320×480 V values.
  • YUV may also be referred to as YCrCb or any other terminology used by those skilled in the art to designate video/graphics format having luma and chroma components.
  • the blended graphics 310 may be in other formats, such as, for example, YUV 4:4:4 format.
  • the SRC preferably converts the sample rate of the blended graphics by an 11/10 ratio to provide 704 pixels in each display scan line.
  • the SRC preferably converts the Y, U and V values to 704×480 Y values, 352×480 U values and 352×480 V values.
  • Other sample rate ratios may be used if either or both the video and the graphics have a different display sample rate.
  • the sample rate of the blended graphics may be converted by a 22/10 ratio to provide 1408 pixels per display scan line.
  • different sample rate conversion ratios may be used if the video includes an HDTV video.
  • the SRC 302 preferably includes a multi-tap filter for the display sample rate conversion of all three of the Y, U and V values.
  • for sample rate conversion by the 11/10 ratio (e.g., down sampling by 10 and up sampling by 11), 11 phases, and therefore 11 coefficients, preferably are used per tap.
  • the multi-tap filter preferably has five taps. Therefore, in this embodiment, 55 coefficients are used to process Y (luma) components.
  • the SRC may include a multi-tap filter having a different number of taps, e.g., eight taps, in which case a corresponding number of coefficients, e.g., 88, may be used.
  • the SRC may include a separate filter for processing each of the Y, U and V component signals.
  • the SRC preferably also includes a memory for storing the filter coefficients.
  • the memory may be a read only memory (ROM) or a random access memory (RAM).
  • the filter coefficients preferably are selected to provide a good balance of sharpness at the cut-off frequency, smoothness, anti-aliasing and minimum ringing. Design and implementation of multi-tap filters are well known to those skilled in the art.
  • the 55 filter coefficients for processing luma components in the 11-phase, 5-tap filter in one embodiment of the present invention are provided in Table 1.
  • Each filter coefficient in Table 1 may be designated with a parameter c[ph][t], where ph is the phase that ranges from 0 to 10, and t is the tap number that ranges from 0 to 4.
  • the value of the coefficient c[0][0] is equal to −52 according to Table 1.
  • the value of the coefficient c[6][2] is equal to 540.
  • the phase ph preferably is selected to be (10×n) mod 11, and thus ph ranges from 0 to 10.
  • the center pixel p of the five pixels provided to the five filter taps preferably is selected to be <(10×n)/11>, where <x> is defined to be the largest integer less than or equal to x.
  • the center pixel p has a luma value of y_i[p].
  • the Y value y_o[300] of the output pixel 300 preferably is calculated from input Y values y_i[270], y_i[271], y_i[272], y_i[273] and y_i[274], where the input pixel 272 is the center pixel.
  • ph equals (10×300) mod 11, which is equal to 8.
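  • As a brief illustrative aside (not part of the patent text), the index arithmetic in the example above can be reproduced with a few lines of Python:

```python
# Check of the 11/10 index mapping for output pixel n = 300 (illustration only).
n = 300
ph = (10 * n) % 11    # phase selection: (10*n) mod 11
p = (10 * n) // 11    # center pixel: <(10*n)/11>, i.e. the floor of 10*n/11
print(ph, p)          # prints "8 272"; the five taps then read input pixels 270..274
```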
  • the five input Y values to be provided to the 5-tap filter, according to Eq. 1, are y_i[636], y_i[637], y_i[638], y_i[639] and y_i[640].
  • images having 640×480 pixels have input Y values ranging from y_i[0] to y_i[639], and y_i[640] does not exist.
  • the right boundary input Y value may be duplicated so that all five taps of the 5-tap filter may be provided with an input Y value.
  • the input Y values of y_i[636], y_i[637], y_i[638], y_i[639] and y_i[639] may be provided, in which the right boundary input Y value of y_i[639] is duplicated and used twice. Similar duplication of the boundary input Y value may be used at the left boundary as well.
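  • The luma path described above may be sketched in Python as follows. This is an illustrative sketch only: the function name is hypothetical, the coefficient table COEF merely stands in for the 55 values of Table 1 (of which only c[0][0] = -52 and c[6][2] = 540 are quoted in the text), and the fixed-point normalization factor is an assumption.

```python
# Hypothetical sketch of the 11-phase, 5-tap luma sample rate converter (640 -> 704).
# COEF[ph][t] is a placeholder for Table 1; real coefficient values must be substituted
# before the output is meaningful.
COEF = [[0] * 5 for _ in range(11)]
COEF[0][0] = -52
COEF[6][2] = 540

def convert_luma_line(y_in, n_out=704, scale=1024):
    """Sample rate convert one 640-sample luma scan line by an 11/10 ratio.

    scale is an assumed fixed-point normalization for the coefficients.
    """
    n_in = len(y_in)
    y_out = []
    for n in range(n_out):
        ph = (10 * n) % 11                # filter phase for output pixel n
        p = (10 * n) // 11                # center input pixel
        acc = 0
        for t in range(5):
            # Duplicate the boundary sample when a tap falls outside the scan line,
            # as described above for the left and right image boundaries.
            idx = min(max(p - 2 + t, 0), n_in - 1)
            acc += COEF[ph][t] * y_in[idx]
        y_out.append(acc // scale)
    return y_out
```

  • The center-tap offset and the division by scale are illustration choices; the patent text fixes only the phase and center-pixel formulas, the tap and phase counts, and the boundary duplication.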
  • FIG. 4 is a diagram illustrating the frequency response of the 5-tap filter in one embodiment of the present invention.
  • the abscissa is in units of a half of the sampling frequency.
  • the range of 0 to 1.0 on the abscissa corresponds to 0 Hz to 13.5 MHz for the case of luma component signals for ITU-R 601 compliant video.
  • the 5-tap filter preferably also performs low pass filtering so as to reduce aliasing.
  • U may have three input values, including u_i[0], u_i[2] and u_i[4], since there is only one value of U for every two values of Y in a YUV 4:2:2 image.
  • Table 2 illustrates a frequency relationship between Y, U and V components of a YUV 4:2:2 image.
  • TABLE 2 - Y, U and V Frequency Relationship for a YUV 4:2:2 Image
    Y:    y_i[0]  y_i[1]  y_i[2]  y_i[3]  y_i[4]  y_i[5]  y_i[6]  y_i[7]  y_i[8]  y_i[9]
    U/V:  u_i[0]  v_i[0]  u_i[2]  v_i[2]  u_i[4]  v_i[4]  u_i[6]  v_i[6]  u_i[8]  v_i[8]
  • each scan line in a YUV 4:4:4 image typically contains an identical number of input U and V values as the number of input Y values.
  • a three-tap filter is used to process input U values and input V values.
  • the coefficients to be applied to the input U and V values may be derived from the coefficients for the input Y values, or they may be generated independently of the coefficients for the input Y values.
  • in Equations 2-4, ph is the phase that ranges from 0 to 10, and therefore there are 33 coefficients for the U and V input values. In another embodiment, the coefficients used for the input U values and the input V values may be different from one another.
  • the phase ph preferably is selected to be (10×m) mod 11.
  • the center pixel q of the input pixels for filtering preferably is defined to be <(10×m)/11>.
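  • Under the same conventions as the luma sketch earlier, the 3-tap chroma path might be sketched as below. The chroma coefficient table CCOEF (11 phases by 3 taps, i.e. 33 values) is a placeholder, and the chroma samples are indexed 0..319 in their own array here, whereas the text labels them by their luma-aligned positions (u_i[0], u_i[2], ...).

```python
# Hypothetical sketch of 3-tap chroma (U or V) sample rate conversion (320 -> 352).
CCOEF = [[0] * 3 for _ in range(11)]      # placeholder for the 33 chroma coefficients

def convert_chroma_line(c_in, n_out=352, scale=1024):
    """Sample rate convert one U or V scan line of a 4:2:2 image by an 11/10 ratio."""
    n_in = len(c_in)                      # 320 chroma samples per line
    c_out = []
    for m in range(n_out):
        ph = (10 * m) % 11                # phase: (10*m) mod 11
        q = (10 * m) // 11                # center chroma sample: <(10*m)/11>
        acc = 0
        for t in range(3):
            idx = min(max(q - 1 + t, 0), n_in - 1)   # duplicate at the boundaries
            acc += CCOEF[ph][t] * c_in[idx]
        c_out.append(acc // scale)
    return c_out
```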
  • a 5-tap filter, similar to the filter for the Y samples, may be used to filter the input U and V values.
  • the same or different coefficients may be applied to the input Y, U and V values.
  • additional values preferably are generated from existing values. For example, two of the three input U values may be duplicated and five values may be provided as u_i[0], u_i[0], u_i[1], u_i[1] and u_i[2], or another similar sequence of U values.
  • the in-between input U values may also be generated using other methods.
  • u_i[0], (u_i[0]+u_i[1])/2, u_i[1], (u_i[1]+u_i[2])/2 and u_i[2] may be used as the five input U values.
  • some input V values may be duplicated or in-between input V values may be generated in order to provide five input V values to the 5-tap filter concurrently with five input Y values.
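  • When the 5-tap luma filter is reused for chroma as just described, the two padding options (duplication and averaging of neighbouring samples) can be written out directly; these helper names are illustrative only.

```python
# Build five chroma inputs from three, per the two options described above.
def pad_chroma_by_duplication(u0, u1, u2):
    return [u0, u0, u1, u1, u2]

def pad_chroma_by_averaging(u0, u1, u2):
    return [u0, (u0 + u1) // 2, u1, (u1 + u2) // 2, u2]
```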
  • images having a size of 320×480 may be expanded to have a size of 704×480.
  • the same 5-tap filter may be used for display sample rate conversion to generate output pixels for this embodiment.
  • the phase ph preferably is selected to be (5×n) mod 11. All other aspects of the filtering algorithm preferably are similar to the foregoing embodiments for display sample rate conversion by the 11/10 ratio.
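  • For the 22/10 conversion just mentioned (e.g., expanding 320 pixels to 704), only the index arithmetic changes relative to the earlier sketches. The center-pixel formula below follows the same pattern as the 11/10 case and is an assumption to that extent.

```python
# Index mapping for sample rate conversion by a 22/10 ratio (e.g., 320 -> 704 pixels).
def phase_and_center_22_10(n):
    """Return (filter phase, center input pixel) for output pixel n."""
    return (5 * n) % 11, (5 * n) // 11
```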
  • one embodiment of the present invention provides a video and graphics system for HDTV and SDTV applications with a capability for displaying a combination of video and graphics where the video has non-square aspect ratio pixels, and the graphics content has square aspect ratio pixels.
  • the present invention may be applied to any system that uses sample rate conversion.
  • the present invention may be used for display sample rate conversion of graphics or other images to generate square aspect ratio pixels for applications in a 1080i-format HDTV with 1920×1080 pixels.
  • Y, Pr and Pb component signals may be processed in a similar manner as the processing of Y, U and V (Y, Cr, Cb) component signals in the foregoing description.
  • the present invention may also be used for a display sample rate conversion to generate display pixels for applications in HDTVs having progressive formats such as 720p and 1080p.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Image Processing (AREA)

Abstract

A video and graphics system provides square graphics pixels to blend images having 640×480 pixels, such as graphics images provided by some set top boxes and intended to be displayed at a 12.27 MHz display sample rate, with images having 704×480 pixels, such as ITU-R 601 compliant NTSC SDTV images, which have oblong pixels and are displayed at a 13.5 MHz display sample rate. A sample rate converter including a multi-phase, multi-tap filter is used to generate square pixels. The multi-phase, multi-tap filter provides a good balance of sharpness, smoothness, anti-aliasing and reduced ringing. The multi-phase, multi-tap filter can also be used to convert images having 320×480 pixels to images having 704×480 pixels. The multi-tap filter can be used for sample rate conversion of graphics or video images for HDTV or SDTV applications.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 09/799,252, filed Mar. 5, 2001.
  • The present application contains subject matter related to the subject matter disclosed in U.S. patent application Ser. No. 09/641,374 entitled “Video, Audio and Graphics Decode, Composite and Display System” filed Aug. 18, 2000, now issued as U.S. Pat. No. 6,853,385 on Feb. 8, 2005, the contents of which are hereby incorporated by reference in full.
  • FIELD OF THE INVENTION
  • The present invention relates generally to integrated circuits and systems, and more particularly to a system for processing and displaying video and graphics.
  • BACKGROUND OF THE INVENTION
  • Video images are often provided as ITU-R 601 compliant images (e.g., NTSC with 704×480 pixels) that have a 13.5 MHz display sample rate. These video images typically have oblong pixels. On the other hand, graphics images are often provided with square pixels with a sample rate of 12.27 MHz. For example, a typical graphics image may have 640×480 pixels. In a video and graphics system, video and graphics are often overlaid on top of each other to provide a blended image. It is desirable to combine the video and graphics in such a way that both retain their individual and correct pixel aspect ratios. It is also desirable to perform this function without modifying the video samples or changing the video sample rate. However, overlaying the video and graphics is often difficult due to their different image sizes and pixel aspect ratios.
  • Prior attempts to implement similar functions have suffered from problems. For example, some previous designs produced visual artifacts in the resulting displayed graphics, such as aliasing, blurring, or ringing. Others, in addition to these problems, have exhibited slow performance due to their additional accesses to shared memory.
  • Therefore, it is desirable to provide a method and apparatus for overlaying video and graphics to generate a blended image without appreciable image quality degradation, in such a way that both retain their individual and correct pixel aspect ratios without modifying the video samples or changing the video sample rate.
  • SUMMARY OF THE INVENTION
  • In one embodiment of the present invention, a video and graphics system is provided. The video and graphics system has first and second inputs. The first input is used to receive a graphics image. The graphics image contains graphics pixels, which have square pixel aspect ratio. The second input is used to receive a video image containing video pixels, which have non-square pixel aspect ratio. The video image has a larger number of pixels per scan line than the graphics image. The video and graphics system includes a sample rate converter for converting sample rate of the graphics image so that the number of graphics pixels per scan line of the graphics image becomes greater than or equal to the number of video pixels per scan line of the video image. Further, the video and graphics system includes a video compositor for blending the graphics image with the video image. The graphics image is scaled horizontally so that the graphics image can be overlaid on the full width of the video image, and the square pixel aspect ratio of the graphics pixels is maintained.
  • In another embodiment of the present invention, a method of blending a graphics image with a video image is provided. The graphics image containing graphics pixels having square pixel aspect ratio is received. The video image containing video pixels having non-square pixel aspect ratio is also received. The video image has a larger number of pixels per scan line than the graphics image. A sample rate of the graphics image is converted so that the number of graphics pixels per scan line of the graphics image becomes greater than or equal to the number of video pixels per scan line of the video image. Then, the graphics image is blended with the video image. The graphics image is scaled horizontally so that the graphics image can be overlaid on the full width of the video image, and the square pixel aspect ratio of the graphics pixels is maintained.
  • In yet another embodiment of the present invention, a video and graphics system is provided. The video and graphics system has first input means for receiving a graphics image containing graphics pixels, which have square pixel aspect ratio. The video and graphics system also has second input means for receiving a video image comprising video pixels, which have non-square pixel aspect ratio. The video image has a larger number of pixels per scan line than the graphics image. The video and graphics system includes means for converting sample rate of the graphics image so that the number of graphics pixels per scan line of the graphics image becomes greater than or equal to the number of video pixels per scan line of the video image. Further, the video and graphics system includes means for blending the graphics image with the video image. The graphics image is scaled horizontally so that the graphics image can be overlaid on the full width of the video image, and the square pixel aspect ratio of the graphics pixels is maintained.
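  • As a rough illustration of the flow summarized above (not part of the patent text), one scan line of graphics that has already been sample rate converted to 704 pixels can be composited over the corresponding video line. A simple per-pixel alpha blend is assumed here, since the summary states only that the two images are blended.

```python
# Illustrative per-scan-line compositing of 704-pixel graphics over 704-pixel video.
def composite_line(video_line, graphics_line, alpha_line):
    """Blend a graphics scan line over a video scan line using assumed 8-bit alpha."""
    assert len(video_line) == len(graphics_line) == len(alpha_line) == 704
    return [
        (a * g + (255 - a) * v) // 255
        for v, g, a in zip(video_line, graphics_line, alpha_line)
    ]
```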
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These and other aspects of the invention may be understood by reference to the following detailed description, taken in conjunction with the accompanying drawings, which are briefly described below.
  • FIG. 1 is a block diagram of an integrated circuit chip, which embodies the video and graphics system of the present invention, coupled to the CPU and other devices;
  • FIG. 2 is a block diagram of a video and graphics system in an embodiment according to the present invention;
  • FIG. 3 is a block diagram of a video-graphics display and scale engine in an embodiment according to the present invention; and
  • FIG. 4 is a frequency response diagram of an 11-phase, 5-tap filter for luma processing to generate square graphics pixels in an embodiment according to the present invention.
  • DETAILED DESCRIPTION
  • One embodiment of the present invention is a video and graphics system with square graphics pixels. In this embodiment, the display sample rate of graphics images preferably is converted to match the display sample rate of video images so as to facilitate blending the two, while maintaining a square pixel aspect ratio. Displayed graphics images are scaled along the horizontal axis. For example, a 640×480 graphics image is scaled to match the 704×480 image size of NTSC-compatible video, such that both the video and graphics fill the same area on the display. In other embodiments, the display sample rate of the video images may be converted to match the display sample rate of the graphics images. In still other embodiments, the display sample rate of both the video images and the graphics images may be converted.
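  • As a quick arithmetic aside (not from the patent text), the 11/10 conversion used later follows directly from the two display sample rates, since 13.5 MHz / 12.27 MHz is approximately 11/10 and 640 × 11/10 = 704.

```python
# Quick check of the scaling arithmetic (illustration only).
print(13.5 / 12.27)     # ~1.1003, i.e. approximately 11/10
print(640 * 11 // 10)   # 704 graphics pixels across the full NTSC active line
```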
  • In FIG. 1, an integrated circuit 100 includes one embodiment of the video and graphics system according to the present invention. In other embodiments, the system may be implemented using two or more separate integrated circuit chips. The integrated circuit 100 may include inputs 112 for receiving multiple compressed data streams. The compressed data streams may include but are not limited to MPEG-2 Transport streams. The integrated circuit 100 may also include an analog input 116 for receiving analog video signals 114. The analog video signals may include but are not limited to NTSC, PAL, Y/C (S-video), SECAM, RGB, YPrPb, YCrCb, or other analog video signals in SDTV or HDTV format that include video and/or graphics information. The color components of the graphics and video signals may be in any of a number of formats, including but not limited to, YUV, YCrCb, YPrPb, HLS, and HSV. There may be multiple definitions of each of these terms. Exemplary definitions of some of these signal formats may be found in ITU-R Rec. BT.601 and ITU-R Rec. BT.709.
  • The integrated circuit 100 may also include an output 128 for providing a video output signal 126, and an output 132 for providing an audio output signal 130. The video output signal 126 may include digital or analog video signals. For example, the digital video signals may include video signals to be displayed on Digital Visual Interface (DVI)-compliant monitors. The digital video signals may also be provided to an on-chip or off-chip device that may encrypt the output.
  • The integrated circuit 100 may also include a bus 120 for communicating with PCI devices 118 and a bus 124 to interface with I/O devices 122 such as read-only memory (ROM), flash memory, and/or other devices. The integrated circuit may further include a bus 104 for transferring data to and from memory 102 and a bus 108 for connecting to a CPU 106. Graphics data for display preferably is produced by any suitable graphics library software, such as Direct Draw marketed by Microsoft Corporation, and is read from the CPU 106 into the memory 102. The memory 102 preferably is a unified memory that is shared by the system, the CPU 106 and other peripheral components. The CPU preferably uses the unified memory for its code and data while the video and graphics system preferably performs all graphics, video, audio and display functions using the same unified memory.
  • FIG. 2 is a block diagram of one embodiment of the video and graphics system implemented in the integrated circuit 100. The video and graphics system preferably includes a data transport 200, a video transport 202, a video RISC 204, two row RISCs 206, 208, an audio decode processor (ADP) 214, a graphics accelerator 224, a DMA engine 226, a memory controller 234, an analog video decoder (VDEC) with a 10-bit analog-to-digital converter (ADC) 236, a video-graphics display and scale engine 238, a set of video DACs 240, a PCI bridge 242, an I/O bus bridge with DMA 244, a CPU interface block 246, a PCM audio 250, an audio DAC 252, and a video encoder (VEC) 254.
  • The data transport 200, the video transport 202, the video RISC 204, the row RISCs 206, 208, and the ADP 214 preferably perform transport and decode functions of the video and graphics system, which may include MPEG-2 Transport and video decoding.
  • The video and graphics system preferably includes multiple transport processors. For example, in one embodiment, the video and graphics system may include three transport processors. The compressed data streams, which may include in-band and out-of-band MPEG Transport streams IB 1 (in-band 1), IB 2 (in-band 2) and OOB, preferably are provided to the data transport 200 and the video transport 202.
  • The data transport 200 preferably performs PID and section filtering of the compressed data streams. The data transport preferably provides message data obtained through section filtering to the memory controller 234 for storage in the external memory, e.g., SDRAM. The data transport preferably also performs descrambling of encrypted transport streams. The encrypted transport streams may have been encrypted using, e.g., DES, DVD or other encryption method. In one embodiment of the present invention, the data transport provides the descrambled compressed data streams to the video transport 202 and the audio decode processor (ADP) 214.
  • The video transport preferably extracts the bit stream for video, which may include MPEG-2 video. The video transport 202 preferably extracts compressed MPEG video data by removing transport stream (TS) headers and packetized elementary stream (PES) headers from the compressed data streams. Then the video transport preferably provides the compressed video data, which may include MPEG video data, for processing in the video RISC 204. The compressed data streams may also include other types of packetized data streams such as DIRECTV transport streams. DIRECTV is a trademark of DIRECTV, Inc.
  • The video RISC 204 and the row RISCs 206, 208 make up a digital video decoder, which may be an MPEG-2 video decoder. The digital video decoder preferably decodes the compressed video data and provides it to the memory controller 234 to be stored temporarily in an external memory, e.g., SDRAM. For the case of MPEG-2 video data, the complex MPEG video decode process preferably is partitioned into multiple concurrently operable decode functions. The digital video decoder preferably decodes multiple rows of the compressed MPEG-2 video data concurrently.
  • The video RISC 204 preferably parses and processes layers of compressed MPEG-2 video data above the SLICE layer, i.e., SEQUENCE, group of pictures (GOP), EXTENSION and PICTURE layers. The two row RISCs 206, 208 preferably are used for SLICE layer, macroblock layer and block layer decoding and processing. Row decode paths associated with the row RISCs preferably are used for full speed processing of time critical functions at the macroblock and block layers. Processors used in this embodiment are RISC processors. Other types of processors may be used in other embodiments.
  • The digital video decoder may scale frames by half when saving them to frame buffers. Thus, savings to memory size and bandwidth may result when the reference frames are saved for reconstruction of P-frames and B-frames. The frames preferably are not scaled vertically during reconstruction. The frame buffers preferably are implemented in external memory.
  • The ADP 214 preferably performs audio PID parsing to extract audio packets from the compressed data streams. The ADP 214 preferably decodes the audio packets extracted from the compressed data streams. The ADP 214 provides the decoded audio data to the PCM audio 250 for mixing with other audio signals.
  • The register bus bridge 216 preferably provides an interface between the internal CPU-register bus and the memory controller 234. In one embodiment, the system uses 16-bit registers. In other embodiments, the system may use registers having other bit sizes.
  • The graphics accelerator 224 preferably performs graphics operations that may require intensive CPU processing, such as operations on three dimensional graphics images. The graphics accelerator 224 preferably is implemented as a RISC processor optimized for performing real-time 3D and 2D effects on graphics and video surfaces. The graphics accelerator preferably incorporates specialized graphics vector arithmetic functions for maximum performance with video and real-time graphics. The graphics accelerator preferably performs a range of essential graphics and video operations with performance approaching that of hardwired approaches. At the same time, the graphics accelerator may be programmable so that it may meet new and evolving application requirements with firmware downloads in the field.
  • The DMA engine 226 preferably transfers data between the CPU and components of the system without interrupting the CPU. For example, CPU read and write operations as illustrated in CPU R/W block 218 are performed by the DMA engine 226.
  • The memory controller 234 preferably reads and writes video and graphics data to and from memory by using burst accesses with burst lengths that may be assigned to each task. The memory preferably is any suitable memory such as an SDRAM. All functions within the system preferably share the same memory having a unified memory architecture (UMA), with real-time performance of all of the hard real time functions. CPU accesses of code and data preferably are performed as quickly and efficiently as possible without impairing the video, graphics, and audio functions. Memory preferably is utilized very efficiently by performing burst accesses with burst lengths optimized for each task, and through careful optimization of the memory access patterns for MPEG video decoding.
  • The analog video decoder (VDEC) 236 preferably digitizes and processes analog input video to generate YUV component signals having separated luma and chroma components. The VDEC 236 preferably includes a 10-bit CMOS video analog-to-digital converter (ADC) to digitize analog video directly. The VDEC 236 may also include internal anti-aliasing filters which allow simple connections of normal analog video to the system. The VDEC 236 preferably separates luma and chroma using an adaptive 2H (3 line) comb filter, adaptive edge enhancement and noise coring.
  • The video-graphics display and scale engine 238 preferably takes graphics information from memory, blends the graphics information, and composites the blended graphics with video. The video-graphics display and scale engine preferably performs display sample rate conversion of the blended graphics so as to facilitate blending of graphics and video, while maintaining square aspect ratio of the graphics pixels.
  • The video-graphics display and scale engine 238 preferably supports capturing of video as illustrated in a capture block 220 and preferably reads graphics from the external memory, e.g., SDRAM, as illustrated in a graphics read block 222. Decoded MPEG video preferably is provided to the video-graphics display and scale engine as indicated in MPEG display feeder blocks 1 and 2 228, 230. The video-graphics display and scale engine preferably also receives a video window 232.
  • The video-graphics display and scale engine 238 preferably also performs both downscaling and upscaling of MPEG video and analog video as needed. The scale factors may be adjusted continuously from a scale factor of much less than one to a scale factor of four or more. With both analog and MPEG video input, either one may be scaled while the other is displayed full size at the same time. Any portion of the input may be the source for video scaling. To conserve memory and bandwidth, the video-graphics display and scale engine preferably downscales before capturing video frames to memory, and upscales after reading from memory. The video-graphics display and scale engine may scale both the HDTV video and the SDTV video.
  • In one embodiment, the video-graphics display and scale engine 238 provides HDTV video to be displayed while scaling the HDTV video down to SDTV format, and capturing into memory. The HDTV video may be scaled and captured as an SDTV video either before or after compositing with graphics. The HDTV video may also be scaled and captured as an SDTV video both before and after compositing with graphics. The scaled and captured HDTV video may be recorded, e.g., using a standard video cassette recorder (VCR), while the HDTV video is being displayed on television.
  • The video-graphics display and scale engine 238 preferably provides the component video, e.g., RGB, YPrPb and YCrCb, to the set of video DACs 240 for digital-to-analog conversion. In one embodiment, the set of video DACs 240 includes five DACs. The video-graphics display and scale engine 238 preferably provides the composite video, e.g., NTSC, PAL, Y/C video (S-video), to the VEC 254 for conversion into proper signal format. The VEC 254 preferably provides the formatted composite video to the set of video DACs 240 to be converted to analog format. In another embodiment, the VEC 254 includes a set of video DACs, and thus the formatted composite video is converted to analog video in the VEC 254.
  • The set of video DACs 240 preferably provides multiple analog video outputs. These outputs may include component video such as RGB and YPrPb, in addition to composite video in various formats such as composite video blanking and sync (CVBS), including NTSC and PAL composite video, and Y/C video (S-video). In one embodiment, the set of video DACs 240 includes five video DACs, and thus all of Y/C video, CVBS video and standard definition component video may be displayed simultaneously.
  • A system bridge controller 248 preferably provides a “north bridge” function, bridging the CPU to multiple peripheral devices. The system bridge controller preferably comprises the PCI (Peripheral Component Interconnect) bridge 242, the I/O bus bridge with DMA 244 and the CPU interface block 246.
  • The PCM audio 250 preferably receives decoded MPEG or Dolby AC-3 audio from the ADP 214. The PCM audio 250 preferably also receives I2S audio through an I2S input 262 and digitizes and captures it for mixing with other audio data. The PCM audio 250 preferably supports applications that create and play audio locally within a set top box and allow mixing of the locally created audio with audio from a digital audio source, such as the MPEG audio or Dolby AC-3, and with digitized analog audio.
  • The PCM audio 250 preferably plays audio from an SDRAM in a variety of sample rates and formats. Both the captured analog audio and the local PCM audio may be played and mixed at the same time, even though they may have different sample rates and formats. The PCM audio 250 preferably also provides digital audio output 276 in, e.g., SPDIF serial output format.
  • The audio DAC 252 provides the decoded and digital-to-analog converted MPEG and Dolby AC-3 audio component as an analog audio output 274 of the system. The analog audio output 274 may also include other audio information such as I2S audio.
  • The VEC 254 converts between the HD video color space (YPrPb) and the standard definition YUV color space, and between either of those and RGB before converting to the respective outputs. For example, video that was originally coded using YPrPb may be displayed in YPrPb for direct HD output, or converted to YUV for SD display via composite, Y/C or direct RGB output. This function preferably is available regardless of the resolution of the video. Video that was originally coded using YUV may be output as composite, Y/C or RGB, or converted to YPrPb for direct HD output.
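    As a concrete illustration of the kind of color space conversion described above, the following Python sketch converts a single YCbCr sample to RGB. The description names the conversions (YPrPb, YUV, RGB) but gives no coefficients, so the full-range BT.601 and BT.709 matrices below are conventional values assumed purely for illustration, and the function name is not taken from this document.

    def ycbcr_to_rgb(y, cb, cr, hd=False):
        """Convert one full-range YCbCr sample to RGB (BT.709 when hd, else BT.601)."""
        cb -= 128.0
        cr -= 128.0
        if hd:   # assumed BT.709 matrix for the HD (YPrPb) path
            r = y + 1.5748 * cr
            g = y - 0.1873 * cb - 0.4681 * cr
            b = y + 1.8556 * cb
        else:    # assumed BT.601 matrix for the SD (YUV) path
            r = y + 1.402 * cr
            g = y - 0.344136 * cb - 0.714136 * cr
            b = y + 1.772 * cb
        clamp = lambda v: max(0.0, min(255.0, v))
        return clamp(r), clamp(g), clamp(b)

    print(ycbcr_to_rgb(128, 128, 128))   # mid grey maps to (128.0, 128.0, 128.0)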
  • The HD YPrPb component output may support the specified tri-level sync. The RGB output may also support optional sync on green, sync on RGB, or separate H and V sync on 2 Y/CVBS and C outputs, to support various types of standard definition and HD monitors.
  • FIG. 3 is a block diagram of the video-graphics display and scaling engine 238 in one embodiment of the present invention. The video-graphics display and scaling engine includes a display engine 300, a sample rate converter (SRC) 302 and a video compositor 304. The video-graphics display and scaling engine may also include other components (not shown) for processing video and graphics. In other embodiments, the SRC may be included in the display engine.
  • The video-graphics display and scaling engine 238 preferably receives video signals 306 and graphics signals 308, and composites them to provide a video output 314. The video signals 306 preferably include one or more MPEG display feeds and video windows, and may include either or both an HDTV video and an SDTV video. The graphics signals 308 may include graphics windows having various different formats such as YUV and RGB formats.
  • The display engine 300 preferably blends the graphics windows included in the graphics signals 308 to generate blended graphics 310. The SRC 302 preferably performs display sample rate conversion of the blended graphics to generate square graphics pixels 312. The video compositor 304 preferably composites the square graphics pixels 312 together with the video signals 306.
  • Any conventional or non-conventional display engine may be used as the display engine 300 for blending, filtering and scaling graphics. For example, one embodiment of the present invention incorporates the display engine used in one embodiment of the invention described in commonly owned U.S. patent application Ser. No. 09/641,374, filed Aug. 18, 2000 and entitled “Video, Audio and Graphics Decode, Composite and Display System,” the contents of which are incorporated by reference.
  • The display engine 300 preferably provides the blended graphics 310, having an image size of 640×480 pixels and a display sample rate of 12.27 MHz, to the SRC 302. The blended graphics have square graphics pixels that are provided to the SRC 302. The blended graphics 310 preferably are in YUV 4:2:2 format. Therefore, the blended graphics preferably include luma (Y) and chroma (U and V) component signals, and each graphics image in the blended graphics preferably includes 640×480 Y values, 320×480 U values and 320×480 V values. YUV may also be referred to as YCrCb or any other terminology used by those skilled in the art to designate a video/graphics format having luma and chroma components. In other embodiments, the blended graphics 310 may be in another format, such as, for example, the YUV 4:4:4 format.
  • In one embodiment, the SRC preferably converts the sample rate of the blended graphics by an 11/10 ratio to provide 704 pixels in each display scan line. In this embodiment, the SRC preferably converts the Y, U and V values to 704×480 Y values, 352×480 U values and 352×480 V values. Other sample rate ratios may be used if either or both the video and the graphics have a different display sample rate. In other embodiments, for example, the sample rate of the blended graphics may be converted by a 22/10 ratio to provide 1408 pixels per display scan line. For another example, different sample rate conversion ratios may be used if the video includes an HDTV video.
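    The pixel counts quoted above can be checked with the small Python sketch below; the helper name src_width is illustrative only and not part of this description.

    def src_width(width, up=11, down=10):
        # Output samples per scan line after an up/down sample rate conversion.
        return (width * up) // down

    print(src_width(640))           # 704 Y samples per line (11/10 ratio)
    print(src_width(320))           # 352 U samples (and 352 V samples) per line
    print(src_width(640, up=22))    # 1408 pixels per line for the 22/10 case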
  • The SRC 302 preferably includes a multi-tap filter for the display sample rate conversion of all three of the Y, U and V values. For the display sample rate conversion with an 11/10 ratio (e.g., down sampling by 10 and up sampling by 11), 11 phases, and therefore 11 coefficients, preferably are used per tap. In one embodiment of the present invention, the multi-tap filter preferably has five taps. Therefore, in this embodiment, 55 coefficients are used to process Y (luma) components. In other embodiments, the SRC may include a multi-tap filter having a different number of taps, e.g., eight taps, and a corresponding number of coefficients, e.g., 88, may be used. In other embodiments, the SRC may include a separate filter for processing each of the Y, U and V component signals. The SRC preferably also includes a memory for storing the filter coefficients. The memory may be a read only memory (ROM) or a random access memory (RAM).
  • The filter coefficients preferably are selected to provide a good balance of sharpness at the cut-off frequency, smoothness, anti-aliasing and minimum ringing. Design and implementation of multi-tap filters are well known to those skilled in the art. The 55 filter coefficients for processing luma components in the 11-phase, 5-tap filter in one embodiment of the present invention are provided in Table 1.
    TABLE 1
    Filter Coefficients to Process Luma Components for an
    11-Phase, 5-Tap Filter
    Tap 0 Tap 1 Tap 2 Tap 3 Tap 4
    Phase 0 −52 273 348 −68 11
    Phase 1 −34 195 415 −79 15
    Phase 2 −17 122 472 −84 19
    Phase 3 −2 58 514 −79 21
    Phase 4 9 5 540 −63 21
    Phase 5 17 −36 550 −36 17
    Phase 6 21 −63 540 5 9
    Phase 7 21 −79 514 58 −2
    Phase 8 19 −84 472 122 −17
    Phase 9 15 −79 415 195 −34
    Phase 10 11 −68 348 273 −52
  • Each filter coefficient in Table 1 may be designated with a parameter c[ph][t], where ph is the phase that ranges from 0 to 10, and t is the tap number that ranges from 0 to 4. For example, the value of the coefficient c[0][0] is equal to −52 according to Table 1. For another example, the value of the coefficient c[6][2] is equal to 540.
  • For example, in one embodiment of the present invention, output pixel n in each row has a luma value of yo[n], where n=0, 1, 2, . . . , 703, and the pixels in each row that are input to the SRC have luma values yi[0] through yi[639]. The phase ph preferably is selected to be (10×n) mod 11, and thus ph ranges from 0 to 10. The center pixel p of the five input pixels provided to the five filter taps preferably is selected to be <(10×n)/11>, where <x> is defined to be the largest integer less than or equal to x. The center pixel p has a luma value of yi[p].
  • In this embodiment, the resulting Y values, i.e., yo[n], for each scan line are generated using the following equation:
    $y_o[n] = \sum_{i=p-2}^{p+2} c[ph][i-(p-2)] \times y_i[i]$  (Eq. 1)
    Thus, for example, the Y value yo[300] of output pixel 300 preferably is calculated from the input Y values yi[270], yi[271], yi[272], yi[273] and yi[274], where input pixel 272 is the center pixel. As another example, for the output pixel 300, ph equals (10×300) mod 11, which is equal to 8.
  • It is not always possible to have two input values to the left and two input values to the right of the center pixel on the same scan line. For example, when the pixel 638 is used as the center pixel, the five input Y values to be provided to the 5-tap filter, according to Eq. 1, are yi[636], yi[637], yi[638], yi[639] and yi[640]. However, images having 640×480 pixels have input Y values ranging from yi[0] to yi[639], and yi[640] does not exist. For such cases, the right boundary input Y value may be duplicated so that all five taps of the 5-tap filter may be provided with an input Y value. For example, in this case, the input Y values of yi[636], yi[637], yi[638], yi[639] and yi[639] may be provided in which the right boundary input Y value of yi[639] is duplicated and used twice. Similar duplication of the boundary input Y value may be used at the left boundary as well.
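    The foregoing 11/10 luma sample rate conversion, including the phase selection, the 5-tap convolution of Eq. 1 with the Table 1 coefficients, and the boundary duplication, can be sketched in Python as follows. This is an illustration of the described algorithm rather than the actual implementation; in particular, the division by 512 is an assumption of the sketch (the five coefficients of every phase in Table 1 sum to 512, so the result is normalized back to the input scale), and the function name is illustrative only.

    C = [  # Table 1: rows are phases 0..10, columns are taps 0..4
        [-52, 273, 348, -68, 11], [-34, 195, 415, -79, 15],
        [-17, 122, 472, -84, 19], [ -2,  58, 514, -79, 21],
        [  9,   5, 540, -63, 21], [ 17, -36, 550, -36, 17],
        [ 21, -63, 540,   5,  9], [ 21, -79, 514,  58, -2],
        [ 19, -84, 472, 122, -17], [ 15, -79, 415, 195, -34],
        [ 11, -68, 348, 273, -52],
    ]

    def luma_src_11_10(yi):
        """Convert one scan line of 640 input luma samples to 704 output samples."""
        n_in = len(yi)
        n_out = (n_in * 11) // 10          # 640 -> 704
        yo = []
        for n in range(n_out):
            ph = (10 * n) % 11             # phase, 0..10
            p = (10 * n) // 11             # center input pixel
            acc = 0
            for t in range(5):             # 5-tap convolution (Eq. 1)
                # Duplicate the boundary sample when a tap falls off the line.
                i = min(max(p - 2 + t, 0), n_in - 1)
                acc += C[ph][t] * yi[i]
            yo.append(acc // 512)          # normalize (each phase sums to 512)
        return yo

    out = luma_src_11_10([128] * 640)      # a flat grey scan line
    print(len(out), out[300])              # prints: 704 128

    For output pixel 300 the loop uses input pixels 270 through 274 with phase 8, matching the worked example above, and the clamping of i reproduces the edge-sample duplication just described.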
  • FIG. 4 is a diagram illustrating the frequency response of the 5-tap filter in one embodiment of the present invention. The abscissa is in units of a half of the sampling frequency. For example, the range of 0 to 1.0 on the abscissa corresponds to 0 Hz to 13.5 MHz for the case of luma component signals for ITU-R 601 compliant video. As can be seen from FIG. 4, the 5-tap filter preferably also performs low-pass filtering so as to reduce aliasing.
  • Since there are half as many U values and half as many V values as there are Y values in a YUV 4:2:2 image, in order to perform sample rate conversion using the same filter for all three values, different coefficients preferably are used for filtering the U values and V values. For example, during the time Y has five input values including yi[0], yi[1], yi[2], yi[3] and yi[4], U may have three input values including ui[0], ui[2] and ui[4], since there is only one value of U for every two values of Y in a YUV 4:2:2 image. Table 2 illustrates the frequency relationship between the Y, U and V components of a YUV 4:2:2 image.
    TABLE 2
    Y, U and V Frequency Relationship for a YUV 4:2:2 Image
    Y yi[0] yi[1] yi[2] yi[3] yi[4] yi[5] yi[6] yi[7] yi[8] yi[9]
    U/V ui[0] vi[0] ui[2] vi[2] ui[4] vi[4] ui[6] vi[6] ui[8] vi[8]

    In other embodiments, there may be more or fewer input U and V values per input Y value. For example, each scan line in a YUV 4:4:4 image typically contains the same number of input U and V values as input Y values.
  • In one embodiment of the present invention, a three-tap filter preferably is used to process the input U values and input V values. The coefficients to be applied to the input U and V values may be derived from the coefficients for the input Y values, or they may be generated independently of the coefficients for the input Y values. For example, coefficients cc[ph][t] for the input U and V values may be derived from the coefficients c[ph][t] for the input Y values in accordance with the following equations:
    cc[ph][0]=c[ph][0]+c[ph][1];  (Eq. 2)
    cc[ph][1]=c[ph][2]; and  (Eq. 3)
    cc[ph][2]=c[ph][3]+c[ph][4].  (Eq. 4)
    In Equations 2-4, ph is the phase, which ranges from 0 to 10; therefore, there are 33 coefficients for the U and V input values. In another embodiment, the coefficients used for the input U values and the input V values may differ from one another.
  • There are twice as many pixels as there are U and V values in each YUV 4:2:2 image. Therefore, a parameter m should be designated for each group of two pixels in the graphics image, where m is defined to be <n/2> and m=0, 1, 2, . . . , 351. The phase ph preferably is selected to be (10×m) mod 11. The center pixel q of the input pixels for filtering preferably is defined to be <(10×m)/11>. The U and V component signals preferably are filtered in accordance with the following equations:
    uo[m]=cc[ph][0]×ui[q−2]+cc[ph][1]×ui[q]+cc[ph][2]×ui[q+2]; and  (Eq. 5)
    vo[m]=cc[ph][0]×vi[q−2]+cc[ph][1]×vi[q]+cc[ph][2]×vi[q+2].  (Eq. 6)
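    The chroma path of Equations 2 through 6 can be sketched in Python as below. For concreteness, the sketch stores the U (or V) samples of one line in a compact array of 320 entries and applies the three taps to adjacent chroma samples, which corresponds to the q−2, q, q+2 spacing on the shared pixel grid of Table 2; that indexing convention, and the normalization by 512 (the folded taps of each phase still sum to 512), are assumptions of this sketch rather than statements from the description.

    C = [  # Table 1 luma coefficients, phases 0..10 by taps 0..4
        [-52, 273, 348, -68, 11], [-34, 195, 415, -79, 15],
        [-17, 122, 472, -84, 19], [ -2,  58, 514, -79, 21],
        [  9,   5, 540, -63, 21], [ 17, -36, 550, -36, 17],
        [ 21, -63, 540,   5,  9], [ 21, -79, 514,  58, -2],
        [ 19, -84, 472, 122, -17], [ 15, -79, 415, 195, -34],
        [ 11, -68, 348, 273, -52],
    ]

    # Eqs. 2-4: fold the outer luma taps into a 3-tap chroma kernel per phase.
    CC = [[c[0] + c[1], c[2], c[3] + c[4]] for c in C]

    def chroma_src_11_10(ci):
        """Filter one line of 320 U (or V) samples to 352 outputs (Eqs. 5-6)."""
        n_in = len(ci)
        out = []
        for m in range((n_in * 11) // 10):        # 320 -> 352 output samples
            ph = (10 * m) % 11                    # phase, 0..10
            q = (10 * m) // 11                    # center input chroma sample
            acc = 0
            for t, dq in enumerate((-1, 0, 1)):   # three adjacent chroma samples
                i = min(max(q + dq, 0), n_in - 1) # duplicate edge samples
                acc += CC[ph][t] * ci[i]
            out.append(acc // 512)                # normalize as for luma
        return out

    print(len(chroma_src_11_10([128] * 320)))     # prints: 352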
  • In other embodiments, a 5-tap filter, similar to the filter for the Y samples, may be used to filter the input U and V values. In these embodiments, the same or different coefficients may be applied to the input Y, U and V values. In order to use the 5-tap filter to process three input U values, additional values preferably are generated from existing values. For example, two of the three input U values may be duplicated and five values may be provided as ui[0], ui[0], ui[1], ui[1] and ui[2] or other similar sequence of U values. The in-between input U values may also be generated using other methods. For example, ui[0], (ui[0]+ui[1])/2, ui[1], (ui[1]+ui[2])/2, ui[2] may be used as the five input U values. Similarly, some input V values may be duplicated or in-between input V values may be generated in order to provide five input V values to the 5-tap filter concurrently with five input Y values.
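    The two ways of supplying five chroma inputs mentioned above, duplication and simple in-between averaging, may be sketched as follows; the helper names are illustrative only.

    def expand_by_duplication(u0, u1, u2):
        # Reuse each chroma sample twice: ui[0], ui[0], ui[1], ui[1], ui[2].
        return [u0, u0, u1, u1, u2]

    def expand_by_averaging(u0, u1, u2):
        # Insert averaged in-between values: ui[0], (ui[0]+ui[1])/2, ui[1], ...
        return [u0, (u0 + u1) / 2, u1, (u1 + u2) / 2, u2]

    print(expand_by_duplication(10, 20, 30))   # [10, 10, 20, 20, 30]
    print(expand_by_averaging(10, 20, 30))     # [10, 15.0, 20, 25.0, 30]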
  • In one embodiment of the present invention, images having a size of 320×480 may be expanded to have a size of 704×480. The same 5-tap filter may be used for display sample rate conversion to generate output pixels for this embodiment. In this embodiment, since the images are to be scaled up by an 11/5 ratio (e.g., down sampling by 5 and up sampling by 11), the phase ph preferably is selected to be (5×n) mod 11. All other aspects of the filtering algorithm preferably are similar to the foregoing embodiments for display sample rate conversion by an 11/10 ratio.
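    For illustration only, the phase and center-pixel selection for the 11/5 case keeps the same form as the 11/10 case, with the down-sampling factor changed from 10 to 5; the short loop below simply prints the first few values under that assumption.

    for n in range(8):                 # first few output pixels of a scan line
        ph = (5 * n) % 11              # phase index into the same 11-phase table
        p = (5 * n) // 11              # center input pixel for the 5-tap window
        print(n, ph, p)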
  • As described above, one embodiment of the present invention provides a video and graphics system for HDTV and SDTV applications with a capability for displaying a combination of video and graphics where the video has non-square aspect ratio pixels, and the graphics content has square aspect ratio pixels.
  • Although this invention has been described in certain specific embodiments, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that this invention may be practiced otherwise than as specifically described. Thus, the present embodiments of the invention should be considered in all respects as illustrative and not restrictive, the scope of the invention to be determined by the appended claims and their equivalents.
  • For example, the present invention may be applied to any system that uses sample rate conversion. For example, the present invention may be used for display sample rate conversion of graphics or other images to generate square aspect ratio pixels for applications in a 1080i-format HDTV with 1920×1080 pixels. For this application, the Y, Pr and Pb component signals may be processed in a manner similar to the processing of the Y, U and V (Y, Cr, Cb) component signals in the foregoing description. The present invention may also be used for a display sample rate conversion to generate display pixels for applications in HDTVs having progressive formats such as 720p and 1080p.

Claims (1)

1. A video and graphics system comprising:
a first input for receiving a graphics image comprising graphics pixels, the graphics pixels having square pixel aspect ratio;
a second input for receiving a video image comprising video pixels, the video pixels having non-square pixel aspect ratio, the video image having a larger number of pixels per scan line than the graphics image;
a sample rate converter for converting sample rate of the graphics image so that the number of graphics pixels per scan line of the graphics image becomes greater than or equal to the number of video pixels per scan line of the video image; and
a video compositor for blending the graphics image with the video image,
wherein the graphics image is scaled horizontally so that the graphics image can be overlaid on the full width of the video image, and the square pixel aspect ratio of the graphics pixels is maintained.
US11/928,955 2001-03-05 2007-10-30 Video and graphics system with square graphics pixels Expired - Fee Related US7719547B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/928,955 US7719547B2 (en) 2001-03-05 2007-10-30 Video and graphics system with square graphics pixels
US12/752,304 US20100182318A1 (en) 2001-03-05 2010-04-01 Video and graphics system with square graphics pixels

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US09/799,252 US7057627B2 (en) 2001-03-05 2001-03-05 Video and graphics system with square graphics pixels
US11/357,735 US7336287B2 (en) 2001-03-05 2006-02-17 Video and graphics system with square graphics pixels
US11/928,955 US7719547B2 (en) 2001-03-05 2007-10-30 Video and graphics system with square graphics pixels

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US09/799,252 Continuation US7057627B2 (en) 2001-03-05 2001-03-05 Video and graphics system with square graphics pixels
US11/357,735 Continuation US7336287B2 (en) 2001-03-05 2006-02-17 Video and graphics system with square graphics pixels

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/752,304 Continuation US20100182318A1 (en) 2001-03-05 2010-04-01 Video and graphics system with square graphics pixels

Publications (3)

Publication Number Publication Date
US20080049021A1 US20080049021A1 (en) 2008-02-28
US20100060641A9 true US20100060641A9 (en) 2010-03-11
US7719547B2 US7719547B2 (en) 2010-05-18

Family

ID=25175432

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/799,252 Expired - Fee Related US7057627B2 (en) 2001-03-05 2001-03-05 Video and graphics system with square graphics pixels
US11/357,735 Expired - Lifetime US7336287B2 (en) 2001-03-05 2006-02-17 Video and graphics system with square graphics pixels
US11/928,955 Expired - Fee Related US7719547B2 (en) 2001-03-05 2007-10-30 Video and graphics system with square graphics pixels
US12/752,304 Abandoned US20100182318A1 (en) 2001-03-05 2010-04-01 Video and graphics system with square graphics pixels

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US09/799,252 Expired - Fee Related US7057627B2 (en) 2001-03-05 2001-03-05 Video and graphics system with square graphics pixels
US11/357,735 Expired - Lifetime US7336287B2 (en) 2001-03-05 2006-02-17 Video and graphics system with square graphics pixels

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/752,304 Abandoned US20100182318A1 (en) 2001-03-05 2010-04-01 Video and graphics system with square graphics pixels

Country Status (3)

Country Link
US (4) US7057627B2 (en)
EP (1) EP1415481A4 (en)
WO (1) WO2002071762A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7057627B2 (en) * 2001-03-05 2006-06-06 Broadcom Corporation Video and graphics system with square graphics pixels
US7254277B2 (en) * 2002-12-30 2007-08-07 Texas Instruments Incorporated Image processing with minimization of ringing artifacts and noise
US7489362B2 (en) * 2003-03-04 2009-02-10 Broadcom Corporation Television functionality on a chip
US7679629B2 (en) * 2003-08-15 2010-03-16 Broadcom Corporation Methods and systems for constraining a video signal
JP2005057324A (en) * 2003-08-01 2005-03-03 Pioneer Electronic Corp Picture display device
US7233268B1 (en) * 2006-06-03 2007-06-19 Rdw, Inc. Multi-stage sample rate converter
DE102006053261B4 (en) * 2006-11-11 2015-04-16 Visus Technology Transfer Gmbh System for the reproduction of medical images
US9716854B2 (en) * 2008-04-09 2017-07-25 Imagine Communications Corp. Video multiviewer system with distributed scaling and related methods
US20130128120A1 (en) * 2011-04-06 2013-05-23 Rupen Chanda Graphics Pipeline Power Consumption Reduction
GB201414204D0 (en) * 2014-08-11 2014-09-24 Advanced Risc Mach Ltd Data processing systems
US9319080B1 (en) * 2015-01-12 2016-04-19 The United States Of America As Represented By The Secretary Of The Air Force Detection-enhanced adjustable bandwidth circuit

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4462024A (en) * 1981-03-24 1984-07-24 Rca Corporation Memory scanning address generator
US5574572A (en) * 1994-09-07 1996-11-12 Harris Corporation Video scaling method and device
US5695401A (en) * 1991-12-20 1997-12-09 Gordon Wilson Player interactive live action athletic contest
US5912710A (en) * 1996-12-18 1999-06-15 Kabushiki Kaisha Toshiba System and method for controlling a display of graphics data pixels on a video monitor having a different display aspect ratio than the pixel aspect ratio
US5923385A (en) * 1996-10-11 1999-07-13 C-Cube Microsystems Inc. Processing system with single-buffered display capture
US6144362A (en) * 1996-09-27 2000-11-07 Sony Corporation Image displaying and controlling apparatus and method
US6208354B1 (en) * 1998-11-03 2001-03-27 Ati International Srl Method and apparatus for displaying multiple graphics images in a mixed video graphics display

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4357624A (en) * 1979-05-15 1982-11-02 Combined Logic Company Interactive video production system
GB2231228B (en) * 1989-04-27 1993-09-22 Sony Corp Video signal to photographic film conversion
US5119082A (en) * 1989-09-29 1992-06-02 International Business Machines Corporation Color television window expansion and overscan correction for high-resolution raster graphics displays
JPH06153069A (en) * 1992-10-30 1994-05-31 Sony Corp Converter, duplicating device, reproduction device and display device of image
US5928313A (en) * 1997-05-05 1999-07-27 Apple Computer, Inc. Method and apparatus for sample rate conversion
US6483951B1 (en) * 1998-06-26 2002-11-19 Lsi Logic Corporation Digital video filter sequence for bandwidth constrained systems
WO2000028518A2 (en) * 1998-11-09 2000-05-18 Broadcom Corporation Graphics display system
US7365757B1 (en) * 1998-12-17 2008-04-29 Ati International Srl Method and apparatus for independent video and graphics scaling in a video graphics system
US6417891B1 (en) * 1999-04-16 2002-07-09 Avid Technology, Inc. Color modification on a digital nonlinear editing system
US6956617B2 (en) * 2000-11-17 2005-10-18 Texas Instruments Incorporated Image scaling and sample rate conversion by interpolation with non-linear positioning vector
US7057627B2 (en) * 2001-03-05 2006-06-06 Broadcom Corporation Video and graphics system with square graphics pixels

Also Published As

Publication number Publication date
US20100182318A1 (en) 2010-07-22
US7336287B2 (en) 2008-02-26
US20020158893A1 (en) 2002-10-31
EP1415481A4 (en) 2006-11-29
US20060209086A1 (en) 2006-09-21
EP1415481A1 (en) 2004-05-06
WO2002071762A1 (en) 2002-09-12
US7057627B2 (en) 2006-06-06
US7719547B2 (en) 2010-05-18
US20080049021A1 (en) 2008-02-28

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MACINNIS, ALEXANDER G.;ZHONG, SHENG;SIGNING DATES FROM 20010402 TO 20010524;REEL/FRAME:033819/0751

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD., SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)

Year of fee payment: 8

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047196/0687

Effective date: 20180509

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE EFFECTIVE DATE OF MERGER TO 9/5/2018 PREVIOUSLY RECORDED AT REEL: 047196 FRAME: 0687. ASSIGNOR(S) HEREBY CONFIRMS THE MERGER;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:047630/0344

Effective date: 20180905

AS Assignment

Owner name: AVAGO TECHNOLOGIES INTERNATIONAL SALES PTE. LIMITED

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE PROPERTY NUMBERS PREVIOUSLY RECORDED AT REEL: 47630 FRAME: 344. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNOR:AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD.;REEL/FRAME:048883/0267

Effective date: 20180905

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220518