MXPA99005597A - Memory architecture for a multiple format video signal processor

Memory architecture for a multiple format video signal processor

Info

Publication number
MXPA99005597A
Authority
MX
Mexico
Prior art keywords
data
format
mpeg2
visual display
block
Application number
MXPA/A/1999/005597A
Other languages
Spanish (es)
Inventor
Alan Canfield Barth
Wayne Patton Steven
Christopher Todd
Original Assignee
Kranawetter Greg Alan
Schultz Mark Alan
Thomson Consumer Electronics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Kranawetter Greg Alan, Schultz Mark Alan, Thomson Consumer Electronics Inc
Publication of MXPA99005597A

Abstract

A video decoder (10) transcodes video data from various input formats to a predetermined output format. Input data may be standard definition (NTSC or PAL) data or MPEG2 compressed data. Standard definition data are rearranged into block format to be compatible with the decoder's (10) single display processor (40). The display processor selectively processes and conveys either MPEG2 format data or non-MPEG2 format data to a display device. A block based frame memory (20) stores MPEG2 and non-MPEG2 pixel block data, as well as standard definition data in raster line format during processing.

Description

MEMORY ARCHITECTURE FOR A MULTIPLE FORMAT VIDEO SIGNAL PROCESSOR

Field of the Invention

This invention relates to the processing of video signals for visual display.
Background Information

Compressed video signal transmission systems, for example systems using the MPEG2 compression format (Moving Picture Experts Group, "Coding of Moving Pictures and Associated Audio", ISO/IEC JTC1/SC29/WG11 N0702 (revised), May 10, 1994), are currently transmitting HDTV (High Definition Television) digital signals from a number of test sites. Commercial program broadcasts are scheduled to begin soon, when the first HDTV devices enter the market. HDTV signals are not compatible with current television receivers, such as those that process standard NTSC signals in the United States. Accordingly, there will be a transition period during which SD (standard definition) television signals continue to be transmitted in accordance with the NTSC or PAL television standards, to prevent SD devices from becoming immediately obsolete. Also, for a period of time some programming will not be available in MPEG2 format, due to the changing logistics that broadcasters will face.
Video data is transmitted in different formats (for example, 4:3 and 16:9 display aspect ratios; 4:4:4, 4:2:2, and 4:2:0 data sampling formats; interlaced and non-interlaced scanning), and with different spatial resolutions (for example, 352, 480, 544, 640, 720 ... 1920 pixels per line, and 240, 480, 720, 1080 active lines per frame). In general, it is impractical, for both aesthetic and cost reasons, to equip video signal receivers with the ability to display decompressed signals in their pre-broadcast format. Rather, post-decompression processing circuitry is preferably included to transcode the various formats of a decompressed video signal into a desired visual display format.
Many transcoding, or spatio-temporal conversion, systems are known to those skilled in the art of video signal processing. In general, each addresses a specific type of conversion, such as interlaced-to-non-interlaced conversion, or sample, line, or field rate conversion. Because video decompression systems already incorporate an appreciable amount of circuitry, it is desirable that such circuitry also be used to process uncompressed, or standard definition, video signals. The post-processing circuits included in the receiver must transcode an SD video signal without significantly increasing the amount of transcoding circuitry. This is difficult, because digital MPEG2 format television signals arrive at an MPEG2 compatible visual display processor in a decoded pixel block format. SD television signals generally arrive at the visual display processor as lines of multiplexed analog YCRCB visual display pixels (a raster scan) in 4:2:2 format, in either the NTSC or the PAL standard. Also, SD signals are of lower resolution than many of the high definition (HD) visual displays associated with MPEG2 HD signals. Upward conversion that correctly compensates for motion occurring in an image is a complex process, because the image is presented temporally as interlaced field data. Significant memory is required to build an image frame suitable for visual display.
Summary of the Invention

According to the present invention, a digital video signal processing system receives both MPEG2 compatible and non-MPEG2 compatible data. A visual display processor, which includes a block-to-line converter for processing MPEG2 data in block format and non-MPEG2 format data that has been converted from line to block format, receives the digital video data. A common memory stores the MPEG2 format data and the non-MPEG2 format data during processing by the system.
Brief Description of the Drawings

Figure 1 is a block diagram of one embodiment of the present invention. Figure 2A is a block diagram of an MPEG2 SD/HDTV decoder and visual display processing circuit employing the present invention. Figure 2B is a block diagram showing an embodiment of an MPEG2 decompressor as used in Figure 2A. Figure 2C is a block diagram of the visual display processor of Figure 2A. Figure 3 illustrates an example of the line-to-block conversion.
Figure 4 and Figures 5A and 5B through 8A and 8B illustrate different conversions of signal formats implemented by the decoder circuit. Figure 9 is a flow chart of the signal path through a receiver, including a decoder according to the present invention.
Description of the Preferred Embodiment

Figure 1 illustrates the basic elements of the preferred embodiment of the invention. The compressed data (CD) input and the MPEG2 input provide compressed MPEG2 data to the MPEG2 decoder 16. The MPEG2 data can be any type of data compressed and transmitted within the MPEG2 standard guidelines. This includes, for example, high definition data and standard definition data. The decoded MPEG2 data is provided to the block memory 20, and from there to the visual display processor 40. Non-MPEG2 standard definition data, for example video data in CCIR601 format, is received by the SD interface 22, which accepts the line data and converts it into block data. The block memory 20 receives the standard definition (SD) data in block format from the SD interface 22, and provides it as needed to the same visual display processor 40. The visual display processor 40 receives the block data, by means of the memory 20, from both sources, and provides the block-to-line conversion and the aspect ratio and format conversion for a desired visual display device. The bus structure between the elements 16, 20, 22, and 40 can be a common bus, as shown, or separate buses connecting each of the elements 16, 22, and 40 with the element 20. Figure 2A shows a block diagram of a portion of a compressed video signal decoder, including visual display processing circuits for transforming signals that are presented in different formats into a preferred format or formats. All of the illustrated circuits, except possibly the external memory and the system control, may or may not be included in a single integrated circuit, depending on the requirements of a particular system. The apparatus of Figure 2A can be included in, for example, an Advanced Television Receiver
(ATV), which includes tuner/IF circuits, de-interleaving circuits, error correction circuits, and inverse transport circuits, to provide, for example, a digital video signal compressed in MPEG2 format. The apparatus of Figure 2A assumes that the television receiver will provide, for example, decoded NTSC, PAL, or SECAM signals (all referred to as SD) in a digital format, such as CCIR601. In addition, the apparatus of Figure 2A receives and decodes compressed video signals from other sources, which can transmit at constant or variable rates, either continuously or in bursts. Other data formats can be input to the decoder 10 by adding a converter to provide the signal in an acceptable format. These data formats can be, for example, those known in the computer industry: RGB, VGA, SVGA, etc. The decoder 10 includes an input interface 12, which couples external compressed video data other than the SD video data to the decoder. For example, the input interface 12 is coupled to the overall system controller 14, to the primary MPEG2 decompressor 16, and to a memory interface 18. The input interface 12 couples the external data and the control signals to different parts of the decoder 10 through an RBUS, which is 21 bits wide in this example. The compressed video data is retrieved from the packets in MPEG2 format, and is buffered in the external memory 20 before decompression. Non-MPEG2 standard definition digital video is applied directly from an external source to the SD interface 22 by means of an 8-bit bus. The SD data is received in a digital raster line format, i.e., line by line. The SD interface 22 operates in conjunction with the LMC 24 (local memory controller), the SD data being passed to the external memory 20 as pixel block data compatible with the input requirements of the visual display processor 40. Since the SD data are representations of pixels in line format, the pixel data is simply rearranged into blocks of pixels as it is written into the memory 20. The conversion of SD data to pixel blocks conveniently allows both the SD data and the decompressed MPEG2 data to be processed by the same visual display processor. The SD interface 22 is simpler and less expensive than bypassing the visual display processor 40 or providing a second compatible visual display processor. Bypassing would require reprogramming and reconfiguring many of the elements in the visual display processor 40 to handle the received SD data, because pixel block data is not processed in the same way as raster line data. The SD interface 22 is an uncomplicated element that handles certain tasks. These tasks include receiving and counting the number of pixels per line, ensuring that the correct amount of information is always written to the external memory 20, and not producing data during the blanking periods. In addition, the LMC 24 requires only a simple algorithm to direct the reorganization of the data received by the SD interface 22. Figure 3 illustrates an example of reorganizing data from line form to block form. In general, the data received by the SD interface 22 is in digital form. However, a converter (not shown) can easily be added at or before the input of the SD interface 22 to convert the data to digital form when necessary. Rows A through L represent pixel data sampled in a 4:2:2 format and arranged in raster line format. The data rows continue according to the received data format.
The SD interface 22 reorganizes the data by separating the luminance values from the chrominance U and V values. The luminance data are grouped into 8x8 blocks, and the chrominance U and V data are grouped into 4x4 blocks. The chrominance data blocks include the odd data positions in the U block, and the even positions in the V block. Also, during the reorganization a conversion from the 4:2:2 sampling format to a 4:2:0 format occurs; however, the sampling format conversion will depend on the input data requirements of the visual display device. The reorganized data is stored as blocks in the external memory 20. The compressed data, which may appear only once, may be received at a variable rate, or may be received in bursts, is received by the decoder 10 over a priority CD (compressed data) interface 32. When data is present at the CD interface 32, the decoder 10 prioritizes the interface activity to ensure proper reception. The CD interface 32 accepts compressed video data in an MPEG2 compatible format. The CD interface 32 includes a buffer with an 8-bit input and a 128-bit output, which transfers the data to the external memory 20 before decompression. The external memory 20 is connected externally to the decoder 10, and can be as large as 128 Mbits for high definition television signals. The connection is a 64-bit bus coupled through a multiplexer/demultiplexer 26. Unit 26 translates data between a 128-bit internal memory data bus (MEM BUS) and the 64-bit memory bus. The LMC 24 controls the reading/writing of the external memory 20 at the request of the different interfaces and processing circuits. The LMC 24 is programmed to store video data in the memory 20 in a block format, wherein a block comprises an MPEG2-structured block of 8x8 pixel data. The decoder 10 uses the external frame memory 20, owing to its storage capacity, as a reception and timing buffer for the compressed video data. A large storage space is needed to buffer the input data before decompression, and placing this buffer in an integrated circuit inconveniently occupies significant physical space. Likewise, the pixel block information used for frame reconstruction is buffered there. The overhead information is retrieved by the start code detector 34, which obtains the information necessary for decompression. The compressed input video data is retrieved from the external memory 20 for the initial decompression, and is applied via the MEM BUS to the MPEG2 decompressor 16. Other forms of decompression can be used without affecting the spirit of the present invention. MPEG2 decompression of predicted frames requires that previously decompressed "anchor" frames be stored in memory and recovered when needed, in order to decompress and reconstruct an image. The apparatus of Figure 2A preferably incorporates secondary compression of the decompressed MPEG2 video data before the full frames are stored in the memory 20, thereby significantly reducing the amount of external memory required in the receiver. The secondary compression is referred to hereinafter as recompression. The first compression, and the subsequent decompression, concern the data formatted in MPEG2 format for transmission in a transport stream. Figure 2B is an example of an MPEG2 decompressor; the decompressor 16 of Figure 2A is expanded to show the generic elements needed in an MPEG2 decompressor.
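Before turning to Figure 2B, the line-to-block reorganization performed by the SD interface 22 and illustrated in Figure 3 can be sketched in software. The sketch below is a simplified, hypothetical model only: it assumes the 4:2:2 input lines are multiplexed in Cb, Y, Cr, Y order and approximates the 4:2:2 to 4:2:0 conversion by dropping alternate chroma lines; the actual interface may multiplex and filter the samples differently.

import numpy as np

def lines_to_blocks(cbycry_lines):
    # Rearrange multiplexed 4:2:2 raster lines (Cb Y Cr Y ...) into 8x8 luma
    # blocks and 4x4 chroma blocks, dropping alternate chroma lines as a
    # crude 4:2:2 -> 4:2:0 decimation (illustrative only).
    lines = np.asarray(cbycry_lines)            # shape: (n_lines, 2 * width)
    y = lines[:, 1::2]                          # luma occupies the odd positions
    cb = lines[:, 0::4]                         # Cb samples
    cr = lines[:, 2::4]                         # Cr samples
    cb, cr = cb[0::2], cr[0::2]                 # keep every other chroma line (4:2:0)

    def to_blocks(plane, n):
        h, w = plane.shape
        return (plane.reshape(h // n, n, w // n, n)
                     .swapaxes(1, 2)            # group into rows of n x n blocks
                     .reshape(-1, n, n))

    return to_blocks(y, 8), to_blocks(cb, 4), to_blocks(cr, 4)

# 16 raster lines of 32 luma samples each -> 8 luma, 8 Cb, and 8 Cr blocks
y_blocks, cb_blocks, cr_blocks = lines_to_blocks(np.zeros((16, 64), dtype=np.uint8))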
The encoded, compressed MPEG2 data is received on the RBUS by the VLD (variable length decoder) 100. The VLD 100 passes the decoded data to the inverse quantizer 102, which passes the dequantized data to the inverse discrete transform processor 104, which produces decompressed MPEG2 block-based data. This data is combined with the data from the motion processor 108 in the combiner 106, and is passed to the recompressor 28. The recompression performed by the recompressor 28 is distinct from the MPEG2 compression performed in an MPEG2 encoder, and can be practiced in many ways. For example, the recompression may include differential pulse code modulation on a block basis, with subsequent fixed-length, variable-length, or run-length coding. Alternatively, it may incorporate Huffman coding on a block basis. The compression can be lossless or lossy. Recompression is performed in Figure 2A by a compressor 28 coupled between the MPEG2 decompressor 16 and the MEM BUS. Accordingly, the decoded and decompressed MPEG2 video data is applied to the compressor 28 for recompression, followed by storage in the external memory 20. When the recompressed video data is recovered to reconstruct MPEG2 predicted frames in a motion processing network, it is first applied to a decompressor 30, which operates inversely to the compressor 28. The recovered data, after passing through the decompressor 30, can be used by the MPEG2 decoder 10 to reconstruct the predicted frames in the course of motion compensation processing. Both the recompressed HD video frames and the SD video frames are recovered from the external memory 20 and applied to the visual display processor 40 by means of the MEM BUS, to be processed, before visual display or storage, into component signals with a desired display aspect ratio and resolution, as shown in Figure 2C. The data retrieved from the external memory 20 is applied to the visual display processor 40 through the FIFOs 42, 44, 46, 48, and 50, which perform two functions. The first is to time-buffer the data. The second is to convert 16-byte-wide (128-bit) data from the MEM BUS into 1-byte-wide data (MPEG2 data to the decompressor 52) or 4-byte-wide data (SD data to the LMU 54). The designated byte widths are examples. The visual display processor 40 is shown in Figure 2C. In the visual display processor 40, the recompressed MPEG2 video data is first applied to the decompressor 52, which is similar to the decompressor 30. The decompressor 52 provides decompressed luminance (Y) and chrominance (C) component video signals on a block-by-block basis. The decompressed MPEG2 component signals from the decompressor 52 are applied to the respective luminance and chrominance block-to-line converters 56 and 58. The block-to-line converters apply the component Y and C signals, on a line-by-line basis, to a luma vertical format converter (LUMA VFC 60) and a chroma vertical format converter (CHROMA VFC 62), respectively. Both the luma and chroma format converters 60, 62 include circuits for vertical format conversion and horizontal sample rate conversion. The vertical and horizontal converters are separated by FIFOs to handle the timing transitions between the converters. The sample rate converters are programmable according to the parameters of a particular system, and can increase or decrease the number of lines per image and/or increase the number of pixels per line.
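Returning to the recompression performed by the compressor 28: the text above leaves the method open (block-based DPCM with fixed-, variable-, or run-length coding, or Huffman coding; lossless or lossy). Purely as an illustration of one such choice, the sketch below applies raster-order DPCM within an 8x8 block and clips the prediction errors to a fixed word length; the 6-bit difference width and the scan order are assumptions, not values taken from this patent.

import numpy as np

def recompress_block(block, diff_bits=6):
    # Block-based DPCM: predict each pixel from the previous one in raster
    # order, then clip the differences to a fixed word length (lossy).
    flat = block.astype(np.int16).ravel()
    lo, hi = -(1 << (diff_bits - 1)), (1 << (diff_bits - 1)) - 1
    coded, prev = [int(flat[0])], int(flat[0])     # first pixel is sent verbatim
    for pix in flat[1:]:
        d = int(np.clip(int(pix) - prev, lo, hi))  # quantized prediction error
        coded.append(d)
        prev += d                                  # track the decoder's reconstruction
    return coded

def decompress_block(coded, size=8):
    # Inverse of recompress_block: accumulate the prediction errors.
    out = np.cumsum(np.asarray(coded, dtype=np.int16))
    return out.reshape(size, size).astype(np.uint8)

block = np.arange(64, dtype=np.uint8).reshape(8, 8)
assert np.array_equal(decompress_block(recompress_block(block)), block)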
The component luma and chroma data from the sample rate converters are coupled to an on-screen display unit (OSD 64), which is selectively conditioned, as known, to superimpose text and/or graphics on the component video signals. Either the system controller 14 or the input data stream can provide OSD data, which is stored in the external memory 20, but not on a block basis. The decoder 10 conveniently includes circuits for de-interlacing the SD image formats, providing a progressive scan output of 480 active lines. These circuits are located in the LMU 54. The SD image format has 480 active interlaced scan lines. To provide the appearance of a higher vertical resolution for visual display on a high-resolution monitor, the output is increased to 480 progressive lines. The LMU 54 (motion adaptive linear upconverter) performs the line conversion required by the output visual display device, which is needed because an image frame arrives as interlaced fields. The SD signal is stored in, and subsequently recovered from, the external memory 20, because the LMU 54 requires the SD signal from adjacent fields concurrently to calculate the motion in the image and generate a progressive scan output at the same or a higher resolution. This is not motion compensation as known in the MPEG2 format. For each field, the associated lines pass through the LMU 54, which estimates the interstitial lines between the field lines based on the amount of motion in the image. The motion in the image is estimated from the differences between corresponding pixel values in the previous and subsequent fields. If the motion values are essentially zero, then the average of the line from the previous and subsequent fields is used as the estimated line. If there is a high degree of motion for the pixel being estimated, then the pixel value is estimated from the average of the line above and the line below the interstitial line in the current field. If only a small degree of motion exists, then the interstitial line is estimated from a combination of the line in the previous field and the averaged lines of the current field. The more motion present, the more the average of the lines above and below the current line in the current field is used, relative to the interlaced scan line from the adjacent fields. Instead of relying on the memory 20 to provide the adjacent lines for the line averages, the internal memory of the luma block-to-line converter 60 is conveniently used to provide the video signal from the adjacent lines to the LMU 54 concurrently. However, only the previous or the next line is available from the line memories in the converter 60. In addition, the LMU 54 can sharpen the frames with filters and line and/or field delays, based on the motion present within the frame. The LMU 54 requires memory to process the SD data, because an image frame is presented as two interlaced fields that must be processed temporally to correctly reconstruct the motion information of the original image. Processing cannot be done until adjacent lines are available from both fields. An image field for SD data is approximately 240 active lines. Instead of providing additional internal memory for this function, as was done previously, the data being processed can be stored in, and retrieved from, the memory 20.
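The motion-adaptive estimation of interstitial lines performed by the LMU 54, as described above, can be summarized per pixel by the following simplified model. The thresholds and the linear blend are illustrative assumptions; the patent states only that still areas use the inter-field average, strongly moving areas use the intra-field average, and intermediate motion uses a combination of the previous field line and the current-field average.

def interstitial_pixel(above, below, prev_field, next_field,
                       lo_thresh=4, hi_thresh=32):
    # Estimate one pixel of an interstitial line for progressive-scan output.
    # above, below           : pixels from the lines above/below in the current field
    # prev_field, next_field : co-sited pixels from the previous and next fields
    spatial = (above + below) / 2.0               # intra-field (current field) average
    temporal = (prev_field + next_field) / 2.0    # inter-field average
    motion = abs(next_field - prev_field)         # crude motion measure

    if motion <= lo_thresh:                       # essentially still: use the fields
        return temporal
    if motion >= hi_thresh:                       # strong motion: stay within the field
        return spatial
    # intermediate motion: blend the previous-field sample with the spatial average,
    # weighting the spatial estimate more heavily as motion increases
    w = (motion - lo_thresh) / float(hi_thresh - lo_thresh)
    return w * spatial + (1.0 - w) * prev_field

print(interstitial_pixel(100, 110, 104, 106))     # still area  -> 105.0 (temporal)
print(interstitial_pixel(100, 120, 60, 160))      # moving area -> 110.0 (spatial)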
A sufficient portion of the memory 20 is available because it is not fully used, as it would be for HD data processing (described above). By directing the data from the LMU 54 to the memory 20, instead of providing local memories in the visual display processor integrated circuit, the size and cost of the integrated circuit are reduced. Because of the existing memory buses, READ DATA BUS and WRITE DATA BUS, and the firmware associated with the LMC 24, the transfer to the memory 20 is fast and efficient. The data can be applied to and from the MEM BUS through FIFO memories internal to the processing elements (not shown, to simplify the drawing). The elements of Figure 2A have input and/or output FIFOs that allow the decoder 10 to operate in a seamless manner. By buffering a segment of data in the FIFOs, each element can access the MEM BUS as it becomes available, while maintaining a continuous data flow inside the processing element. The visual display processor has two separate clocks that control separate sections, the decompression clock domain 66 and the visual display clock domain 68, as seen in Figure 2C. The decompression clock domain 66 contains all the functions that must be synchronously interfaced with the block-to-line conversion RAMs 56, 58, and runs at clock speeds of 40 to 81 MHz to achieve the desired bandwidth. The visual display clock domain 68 contains functions that run synchronously with the final output, at clock speeds of 27 to 81 MHz. The two clocks can operate at the same speed or at different speeds, depending on the application. The video data passing between the two clock domains passes through the FIFOs 71, 73 (one each for luma and chroma), with the read requests for the FIFOs coming from the controller of the horizontal sample rate converter. Each FIFO includes control logic that responds to acknowledge signals and to read and write requests from the visual display processor 40 and the LMC 24. There is also control logic to track the amount of data in the respective FIFO, and to control the asynchronous interface between the "bus" end of the FIFO, which uses the same clock as the data bus, and the "visual display" end of the FIFO, which uses the decompression clock. Since the visual display section contains the control logic, the number of circuits that actually operate from the "bus" clock is desirably minimized. The primarily decompressed (but secondarily recompressed) MPEG2 data is accessed from the external memory 20 on a block-by-block basis, and is applied by means of the FIFOs 46 and 48 to the secondary luma and chroma decompressors, which provide decompressed luma and chroma pixel block values. The blocks of decompressed luma and chroma pixel values are applied to the respective block-to-line converters 56 and 58, which comprise local RAM memories. Entire rows of 8x8 blocks (luma) or 4x4 blocks (chroma) are written into the respective local memories. The memories are read line by line, or multiple lines in parallel, depending on the instantaneous function of the converter circuits connected to the memory output. As the data is read, new data is written into that location, to minimize the amount of local memory required. Example sizes for the local memories of the block-to-line converters 56 and 58 are 16 bytes wide by 960 words deep, and 16 bytes wide by 720 words deep.
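The FIFOs 71, 73 that bridge the bus and visual display clock domains are described above only functionally. A greatly simplified software model of that bookkeeping (accepting write requests on one side, read requests on the other, and tracking occupancy so that neither side overruns the other) might look like the sketch below; the depth and the method names are invented for illustration, since the patent does not specify them.

from collections import deque

class ClockDomainFifo:
    # Toy model of a FIFO bridging the memory-bus and display clock domains.
    # Real hardware would synchronize the occupancy count across clocks; only
    # the request/acknowledge bookkeeping is modeled here.
    def __init__(self, depth=32):
        self.depth = depth
        self.words = deque()

    def write_request(self, word):
        # Bus-side write: acknowledged only if space is available.
        if len(self.words) >= self.depth:
            return False                # not acknowledged; the writer must retry
        self.words.append(word)
        return True

    def read_request(self):
        # Display-side read: returns (acknowledge, word).
        if not self.words:
            return False, None          # empty; the display side must wait
        return True, self.words.popleft()

    def occupancy(self):
        return len(self.words)

fifo = ClockDomainFifo(depth=4)
acks = [fifo.write_request(w) for w in range(5)]   # the fifth write is refused
print(acks, fifo.read_request(), fifo.occupancy())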
The local memories include input multiplexers and output multiplexers to configure the incoming 16-byte-wide data to be stored in the local memory, and to properly configure the 16-byte-wide data read from memory for use by the respective vertical sample rate converter. The horizontal and vertical sample rate converters, when processing the uncompressed MPEG2 video to be shown on a 16:9 high definition visual display, perform the line conversions listed in Tables I and II, respectively. The horizontal converter must be able to support a maximum output pixel rate of 81 MHz.
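The block-to-line conversion carried out by the converters 56 and 58, in which a row of blocks is written into local memory and read back a raster line at a time, can also be shown as a small sketch. It assumes 8x8 luma blocks and ignores the 16-byte word packing and the read-then-overwrite memory reuse described above.

import numpy as np

def block_row_to_lines(block_row):
    # Convert one row of n x n pixel blocks into n full raster lines.
    # block_row has shape (num_blocks, n, n); output has shape (n, num_blocks * n).
    blocks = np.asarray(block_row)
    # line k of the output is row k of every block, concatenated left to right
    return np.concatenate(list(blocks), axis=1)

# two 8x8 blocks whose values encode (block, row, column) for easy checking
row = np.array([[[100 * b + 10 * r + c for c in range(8)]
                 for r in range(8)] for b in range(2)])
lines = block_row_to_lines(row)
assert lines.shape == (8, 16)
assert lines[3, 10] == 132        # block 1, row 3, column 2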
TABLE I: HORIZONTAL CONVERSIONS

TABLE II: VERTICAL CONVERSIONS

Tables I and II describe the conversions of the luma signal. Similar conversions are performed on the chroma signals. With respect to chroma, the compressed signal is in a 4:2:0 format, and the above chroma conversions include an additional conversion from 4:2:0 to 4:2:2. Normally, this chroma processing is included with whatever other vertical processing is required. For the vertical chroma conversion, a two-tap polyphase filter is generally used for the combined resampling and conversion from 4:2:0 to 4:2:2. In Figures 4 through 8, it may appear that the X's and the O's are either not aligned or overlap in an incorrect way. Although the figures approximate the placement, the general relationship of X to O is correct. The apparent misalignment or overlap is correct, and occurs because of the non-integer ratio of the conversion. Figure 4 illustrates pictorially the vertical/temporal relationship of the input and output chrominance lines when only the conversion from 4:2:0 to 4:2:2 is required (i.e., receiving 480 progressive and displaying 480 interlaced, or receiving 1080 progressive and displaying 1080 interlaced). Figure 4 represents a portion of the lines in a field. The circles represent the original pixels in the 4:2:0 format. The "X" marks represent pixels of the converted 4:2:2 signal. The interpolated lines are calculated in each field from the lines of the respective field. Figure 4 shows a field-based visual display. In this case, the even chroma lines (starting with line 0) are used to generate the first, upper field, and the odd chroma lines are used to generate the second, lower field. Figures 5A and 6A illustrate the luma conversion options in a manner similar to that described with respect to Figure 2A. Figure 5A illustrates the vertical and temporal relationship of the input and output luma lines when the 720 progressive format is converted to an interlaced 1080 format. Figure 6A illustrates the vertical and temporal relationship of the input and output luma lines when converting the 720 progressive format to an interlaced 480 format. Figures 5B and 6B illustrate the corresponding chroma conversion options for the luma conversions described above. Figure 5B shows the vertical and temporal relationship of the input and output chroma lines when the 720 progressive format is converted to an interlaced 1080 format. Figure 6B shows the vertical and temporal relationship of the input and output chroma lines when the 720 progressive format is converted to an interlaced 480 format. Temporal processing is not included in these example conversions; the luma and chroma processing occurs only in the vertical direction. Also, the input chroma information is frame-based, and only frame-based conversion from 4:2:0 to 4:2:2 need be considered. Figures 7A and 7B are different. Figure 7A shows the vertical and temporal relationship of the input and output luminance lines when the interlaced 1080 format is converted to an interlaced 480 format. Figure 7B shows the vertical and temporal relationship of the input and output chrominance lines when the interlaced 1080 format is converted to an interlaced 480 format. Figures 8A and 8B illustrate pictorially the vertical conversions of luminance and chrominance, respectively, of the SD video signal, performed by the LMU 54. Recall that both vertical and temporal processing are included in these conversions, instead of only vertical processing.
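As an example of the two-tap polyphase vertical filter mentioned above for the combined resampling and 4:2:0 to 4:2:2 chroma conversion, the sketch below doubles the number of chroma lines for one component using two alternating phases; the 3/4 and 1/4 tap weights are conventional illustrative values and are not taken from this patent.

import numpy as np

def chroma_420_to_422_vertical(chroma):
    # Double the vertical chroma resolution with a two-tap, two-phase filter.
    # chroma: (n_lines, width) array of 4:2:0 chroma lines for one component.
    c = np.asarray(chroma, dtype=np.float64)
    up = np.roll(c, 1, axis=0); up[0] = c[0]       # line above (edge repeated)
    dn = np.roll(c, -1, axis=0); dn[-1] = c[-1]    # line below (edge repeated)
    out = np.empty((2 * c.shape[0], c.shape[1]))
    out[0::2] = 0.75 * c + 0.25 * up               # phase 0: sited nearer this line
    out[1::2] = 0.75 * c + 0.25 * dn               # phase 1: sited nearer the next line
    return out

column = np.array([[0.0], [80.0], [160.0]])        # a single pixel column, 3 lines
print(chroma_420_to_422_vertical(column).ravel())  # -> [0. 20. 60. 100. 140. 160.]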
Normally, only the operation of the de-interlacing algorithm is required for interlaced 720x480 image sizes (i.e., CCIR601 resolution). These images can originate from the MPEG2 decoding process, or as an input from the SD input port. Figure 9 is a flow chart of the signal path through a receiver that includes a decoder in accordance with the principles of the present invention. The input signal is received by the receiver in block 120. The input signal is formatted either as an MPEG2 compatible signal or as a non-MPEG2 compatible signal, as described above. The signal format is identified in block 122, and the identified signal is directed to the appropriate processing path. If the signal format is MPEG2 compatible, the signal is decoded in block 124, as described above, and block data compatible with the visual display processor is produced and stored in the memory 20. If the signal is not MPEG2 compatible, the signal is processed and stored in the memory 20 in block 126, as described above. This data is also block data compatible with the visual display processor 40 of Figure 1. The block data compatible with the visual display processor is passed from the memory 20 to the visual display processor 40. Block 128 produces formatted data that is compatible with a particular visual display device, or other storage device. Data that requires a higher resolution is transferred between the visual display processor 40 and the memory 20 during this processing. Finally, the data compatible with the visual display is sent to the visual display device (or to the storage medium) in block 130. The common architecture disclosed above is useful for storing field and frame image information in the memory 20 during other standard definition data processing, as well as when the memory 20 is not otherwise being used. For example, standard definition data is often filtered by a comb filter, which can require sufficient memory to store a field or an image frame. This memory is generally separate from the memory used for other functions. By using the common structure described above, the frame memory 20 can be used instead, thus saving design and implementation costs. The on-screen display can also use the memory 20 in a similar manner, to eliminate the need for a separate memory.
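The flow of Figure 9 can also be restated as a short sketch: identify the input format, route the signal through the appropriate front end, and let both paths meet in the common block-based memory before display formatting. The class and function names below are placeholders invented for illustration and do not correspond to signals or modules named in this patent.

class FrameMemory:
    # Stand-in for the common block-based frame memory (20).
    def __init__(self):
        self.blocks = []

    def store(self, blocks):
        self.blocks.extend(blocks)

def mpeg2_decode(signal):          # placeholder for the MPEG2 decode path (block 124)
    return signal["payload"]

def sd_lines_to_blocks(signal):    # placeholder for the SD line-to-block path (block 126)
    return signal["payload"]

def process_input(signal, frame_memory):
    # Route one input through the common-memory architecture of Figure 9.
    if signal["format"] == "MPEG2":              # block 122: identify the format
        blocks = mpeg2_decode(signal)            # block 124: decode to pixel blocks
    else:
        blocks = sd_lines_to_blocks(signal)      # block 126: SD raster lines -> blocks
    frame_memory.store(blocks)                   # both paths share the memory 20
    return frame_memory.blocks                   # handed on to display formatting (128)

memory = FrameMemory()
process_input({"format": "MPEG2", "payload": ["hd_block_0", "hd_block_1"]}, memory)
process_input({"format": "SD", "payload": ["sd_block_0"]}, memory)
print(memory.blocks)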

Claims (14)

1. A digital processor having a common architecture for the processing of video signals in multiple formats, which comprises: an input network (2, 4, 6) for receiving high definition formatted video data and standard definition formatted video data; a decoder (16), coupled to the input network, for producing decoded and decompressed high definition data; a converter (22), coupled to the input network, for converting the standard definition formatted data to a format compatible with the high definition formatted data; a common memory (20) for storing the high definition and standard definition formatted data during processing by the processor; and a visual display processor (40) for processing the high definition formatted data and the converted standard definition formatted data for visual display.
2. The processor of claim 1, wherein: the decoded and decompressed high definition data is in a block format; the high definition data is MPEG2 compatible; the visual display processor accepts the video data in a block format; and the converter converts the standard definition data to a block format.
3. The processor of claim 1, wherein: the input network receives compressed data in bursts.
4. The processor of claim 1, wherein: the memory receives block-based data, and field and frame data in a raster format.
5. The processor of claim 2, further comprising: a recompressor (28) for recompressing the decoded and decompressed MPEG2 data before it is stored in the memory.
6. The processor of claim 1, wherein: the visual display processor can be programmed to provide an output video data format compatible with a visual display device coupled to the visual display processor.
7. The processor of claim 1, wherein: the input network includes an input (2) for receiving data in MPEG2 format and an input (4) for receiving data that is not in MPEG2 format; the visual display processor (40) includes a block-to-line converter (56) for processing MPEG2 data in block format and non-MPEG2 format data converted from line to block format; and the common memory (20) stores the data in MPEG2 format and the data that is not in MPEG2 format during processing by said system.
8. The system of claim 7, wherein: the memory receives block-based data, and field and frame data in a raster format.
9. The processor of claim 1, comprising: a first processing path, associated with the input network, comprising an input (2) for receiving information in MPEG2 format, an MPEG2 decoder (16) corresponding to said decoder, and the visual display processor (40) having an output for conveying the image information to a visual display device; a second processing path, associated with the input network, comprising an input (4) for receiving information in standard definition format, a line-to-block converter (22), and the visual display processor; a switching element for selectively conveying the visual display information to the visual display output by means of the first and second processing paths; and wherein: the common memory (20) is block-based, and is coupled to the first and second processing paths; the visual display processor includes an element for changing the resolution of the information in standard definition format; and the switching element stores the information in standard definition format in the memory during processing.
10. A system according to claim 9, wherein: the memory receives block-based data, and field and frame data in a raster format.
11. A method for processing video signals in multiple formats, which comprises the steps of: receiving a signal comprising the data to be processed (120); identifying the received signal as either a signal in MPEG2 format or a signal in standard definition format (122); decoding the input signal in MPEG2 format to produce data in block format as it is received (124); processing the input signal in standard definition format to produce data in block format as it is received (126); conditioning the conveyed data to a format suitable for visual display (128); storing the data in MPEG2 format and the data in standard definition format in a common memory during processing; and conveying the conditioned visual display data to a visual display device (130).
12. A method according to claim 11, wherein: the processing step converts data in line format to data in block format.
13. A method according to claim 11, wherein: the conditioning step converts data in block format to data in line format.
14. A method according to claim 11, wherein: the memory receives block-based data, and the field and frame data in a raster format.
MXPA/A/1999/005597A 1996-12-18 1999-06-16 Memory architecture for a multiple format video signal processor MXPA99005597A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP96402785.8 1996-12-18

Publications (1)

Publication Number Publication Date
MXPA99005597A true MXPA99005597A (en) 2000-04-24


Similar Documents

Publication Publication Date Title
EP0947092B1 (en) Memory architecture for a multiple format video signal processor
US6900845B1 (en) Memory architecture for a multiple format video signal processor
US8170099B2 (en) Unified system for progressive and interlaced video transmission
US5889562A (en) Memory requirement reduction in a SQTV processor by ADPCM compression
MXPA99005597A (en) Memory architecture for a multiple format video signal processor
MXPA99005601A (en) A multiple format video signal processor
MXPA99005590A (en) Parallel compressors for recompression of interleaved pixel data within an mpeg decoder
MXPA99005591A (en) Parallel decompressors for recompressed pixel data within an mpeg decoder