MXPA96003750A - Apparatus for processing YUV video signals and mixed color palette - Google Patents
Apparatus for processing YUV video signals and mixed color palette - Info
- Publication number
- MXPA96003750A (also indexed as MXPA/A/1996/003750A and MX9603750A)
- Authority
- MX
- Mexico
- Prior art keywords
- pixel
- data
- pixel data
- moving image
- image
- Prior art date
Abstract
The present invention relates to an apparatus for processing mixed video and graphics signals to be displayed on a standard interlaced television receiver, which comprises: a buffer and processing zone element that responds to receipt of first and second horizontal lines of moving-image pixel data from first and second fields, respectively, of each moving image to be selectively displayed in separate preselected areas of the visual display of the standard interlaced television receiver, to generate therefrom first, second, and third output signals including moving-image pixel data in sequence for three adjacent horizontal lines of an image to be displayed on the television receiver, wherein the pixel data for the higher-priority moving images are produced for each pixel position of the first, second, and third output signals at any instant of time when the moving images overlap an area of the visual display of the television receiver, and the first and third output signals include moving-image pixel data for the lines of one of the first and second fields, and the second output signal includes moving-image pixel data for the lines of the other of the first and second fields, and includes chrominance data when the moving image data relates to a true color format; and a convolution element to receive the first, second, and third output signals from the buffer and processing zone element, and to generate therefrom an output signal which provides a weighted average for a central pixel of a predetermined matrix of pixel data to be transmitted to the television receiver when a moving image is included in a pixel area of the visual display of the television receiver, wherein the central pixel is part of the second output signal from the buffer and processing zone element, and to transmit a live video signal to the visual display of the television receiver when a moving image in a pixel area of the visual display of the television receiver is not included.
Description
APPARATUS FOR PROCESSING YUV VIDEO SIGNALS AND MIXED COLOR PALETTE
Reference to Related Applications This invention relates to the following applications, all of which are assigned to the assignee of the present invention, have common inventors, and are filed concurrently herewith: United States Patent Application Serial Number (GID872), entitled "Method and Apparatus for Performing Two Dimensional Video Convolving"; United States Patent Application Serial Number (GID907), entitled "Video Magnification Apparatus"; and United States Patent Application Serial Number (GID908), entitled "Apparatus Using Memory Control Tables Related To Video Graphics Processing For TV Receivers".
Field of the Invention The present invention relates to an apparatus for processing YUV video signals and mixed color palette graphics, and for displaying these video graphics signals by themselves, or superimposing these mixed video graphics signals on live television signals received from a remote source when desired.
BACKGROUND OF THE INVENTION Some commercially available computers, particularly personal computers, provide circuits that allow a composite video signal (e.g., a National Television Standards Committee signal) to be merged with computer-generated video graphics display signals, typically red, green, and blue (RGB). More specifically, modern video graphics equipment has the ability to produce backgrounds, characters, symbols, and other pictorial representations and configurations in the sizes, shapes, and colors selected by the operator. U.S. Patent No. 4,737,772 (Nishi et al.), issued April 12, 1988, discloses a video display controller comprising a Video Display Processor (VDP), a Central Processing Unit (CPU), a memory, and a Video Random Access Memory (VRAM). The memory stores both the programs that are to be executed by the central processing unit and different kinds of image data. The video random access memory stores the image data that the video display processor can change and then transfer outward to be displayed on a Cathode Ray Tube (CRT) display screen. In the video display processor, a timing signal generator generates timing signals to correctly scan the elements of the image to be displayed, which are used by horizontal and vertical counters and the cathode ray tube display to synchronize data processing in an Image Data Processing Circuit (IDPC), and to correctly display this processed data on the display screen of the cathode ray tube. A video digitizer samples an externally supplied analog video signal, and converts the levels or amplitudes of the analog video signal into digital data consisting of 2 or 4 bits each. The amplitude data digitized by the video digitizer represent a still image, and are supplied to the image data processing circuit.
The image data processing circuit selectively stores the output data of the video digitizer, as well as the color codes supplied from the central processing unit, in the video random access memory through an interface circuit. Each color code from the central processing unit represents the color of a respective display element (e.g., pixel) that constitutes a still image on the screen. In operation, in response to a display command from the central processing unit, the image data processing circuit reads the point data from the video random access memory in synchronization with the scanning position sequence of the cathode ray tube display, and outputs the point data to a color palette circuit. Concurrently, the image data processing circuit calculates and reads the data necessary to display an animation image from the video random access memory, and supplies color codes to the color palette circuit. Where an animation image and a still image are located in the same display position of the cathode ray tube display screen, the animation image is preferentially displayed. The color palette circuit converts each color code into three color data for red, green, and blue, each consisting of three bits. A Digital to Analog Converter (DAC) converts the color data from the color palette circuit into R, G, and B (red, green, blue) signals that are provided to the cathode ray tube display. United States Patent
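The palette stage just described can be modeled in a few lines. This is a hypothetical Python sketch, not the patent's circuitry; the palette contents and the names `make_palette`, `code_to_rgb`, and `dac` are illustrative assumptions — only the bit widths (4-bit color codes, 3-bit R/G/B components) come from the text.

```python
def make_palette():
    """Build an arbitrary 16-entry palette; each entry is an (r, g, b)
    tuple of 3-bit components, as in the Nishi et al. description."""
    return [((i * 5) % 8, (i * 3) % 8, i % 8) for i in range(16)]

def code_to_rgb(palette, color_code):
    """Convert a 4-bit color code into 3-bit R, G, B via the palette."""
    r, g, b = palette[color_code & 0x0F]
    return r, g, b

def dac(component_3bit):
    """Model the DAC: map a 3-bit component (0-7) to a 0.0-1.0 level."""
    return component_3bit / 7.0
```

The palette indirection is the point: 4 bits per pixel in memory select from 9-bit (3+3+3) colors at the output.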
Number 5,355,175 (Okada et al.), issued October 11, 1994, discloses a video mixing apparatus that mixes a graphics video image and a reproduction video image in a plurality of mixing proportions in the plane of an image. Fade data are generated in sequence, indicating the mixing ratio of at least one line of the reproduction video signal and the graphics video signal according to a predetermined order. The fade data are held in a holding element, and are removed from the holding element in synchronism with a horizontal synchronization signal. The levels of the reproduction video signal and the graphics video signal are adjusted individually according to the fade data produced from the holding element, and the adjusted signals are added together. The reproduction video signal and the graphics video signal are mixed in the mixing ratio established for each line on the plane of an image to generate a video output signal from the apparatus. United States Patent Number 4,420,770 (Rahman), issued December 13, 1983, discloses a video background generation system for generating rectangular video patterns that have operator-selected video attributes. The system comprises a horizontal bit memory and a vertical bit memory, each being a 16-entity memory for storing information for 16 background entities. The memory for each background entity defines opposite corners of that entity's background area on the screen. As shown in Figure 2 of the patent, a first entity defines a first rectangular area, and a second, higher-priority entity defines a second, partially overlapping rectangular area. An attribute look-up table stores information for each entity relating to the color video output (red, green, blue) for that entity. While the lines of an image are being scanned, the first entity is produced in its defined area, and the second entity is produced in its defined area.
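The per-line fade mixing that Okada et al. describe amounts to a weighted sum of the two signals: each signal's level is scaled by its share of the mixing ratio, and the scaled signals are added. A minimal sketch, assuming a normalized fade value in [0.0, 1.0] (the patent summary does not specify the numeric representation of the fade data):

```python
def mix_line(video_line, graphics_line, fade):
    """Blend one scan line: `fade` is the graphics proportion for this
    line; the two signals are scaled individually and then summed."""
    assert len(video_line) == len(graphics_line)
    return [fade * g + (1.0 - fade) * v
            for v, g in zip(video_line, graphics_line)]
```

Because the fade value is latched per line, a vertical wipe or dissolve is just a sequence of lines with gradually changing `fade`.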
However, the second entity has a higher priority, which results in the overlapping region of the two entities being presented with the stored attributes of the second entity. U.S. Patent Number 4,580,165 (Patton et al.), issued April 1, 1986, describes an overlay video graphics system that synchronizes a color graphics module both horizontally and vertically with an externally received video signal. More particularly, the system comprises a color graphics module and a video mixing module that are coupled together to provide a graphics image superimposed on a video image. The color graphics module comprises a timing element to produce both horizontal and vertical color graphics synchronization pulses. The video mixer module comprises a video decoder and a synchronization processor element. The video decoder separates an externally received video signal into its red, green, and blue (RGB) components, and generates therefrom an external video composite synchronization signal. The synchronization processor element comprises horizontal and vertical synchronization processor elements that synchronize the color graphics signals both horizontally and vertically with the composite video synchronization signals. The horizontal synchronization processor element filters the external video composite synchronization signal, and generates a processed horizontal synchronization pulse that can be adjusted in phase. The vertical synchronization processor element uses the external video composite synchronization signal to generate external video vertical synchronization signals to vertically synchronize the video image and the color graphics image. A multiplexer combines the video image with the color graphics image to provide a composite image output signal. Nowadays, a need for interactive video graphics is emerging that makes it possible for a whole new class of services to be delivered to the home through a cable television network.
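The priority rule for Rahman's overlapping background entities — the higher-priority entity's attributes win in the overlap region — can be modeled by painting entities in ascending priority order, so that later (higher-priority) entities overwrite earlier ones. This is a hypothetical sketch; the entity tuple layout is an assumption for illustration only:

```python
def composite(entities, width, height):
    """Paint rectangular entities in priority order. Each entity is
    (priority, (x0, y0, x1, y1), color); x1/y1 are exclusive bounds.
    In overlapping regions the highest priority painted last wins."""
    screen = [[None] * width for _ in range(height)]
    for priority, (x0, y0, x1, y1), color in sorted(entities):
        for y in range(y0, y1):
            for x in range(x0, x1):
                screen[y][x] = color
    return screen
```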
These new services will improve the viewer experience for many traditional television programs, while making additional services available to others. However, NTSC and Phase Alternating Line (PAL) television receivers, unlike computer monitors, have a very low video bandwidth, and employ an interlaced display rather than a progressive scan. These limitations place severe restrictions on the generation of a high-resolution synthetic video signal free of artifacts. Traditionally, consumer products, such as video games, avoid these problems by generating low-resolution, non-interlaced video signals. This approach results in images that are of low quality, have a "blocky" appearance, are limited in color selection, and take on a cartoon-like appearance. The generation of synthetic video that approximates broadcast quality requires that the generated synthetic signals emulate those of a video camera scanning a scene, together with the subsequent analog processing applied to those video camera signals. Accordingly, it is desirable to provide a relatively inexpensive configuration that allows good synthetic video graphics to be superimposed on top of live television programming to be viewed on a standard NTSC or PAL interlaced television receiver.
SUMMARY OF THE INVENTION The present invention relates to an apparatus for processing YUV video signals and mixed color palette graphics, and for displaying those video graphics signals by themselves, or displaying these mixed video signals over live television signals received from a remote source when desired, on an interlaced television receiver. Viewed from one aspect, the present invention relates to an apparatus for processing mixed graphics and video signals to be displayed on a standard interlaced television receiver. The apparatus comprises a buffer and processing zone element, and a convolution element. The buffer and processing zone element responds to receipt of first and second horizontal lines of moving-image pixel data from first and second fields, respectively, of each moving image, to be selectively displayed in separate predetermined areas of the visual display of the standard interlaced television receiver. The buffer and processing zone element generates, from the moving-image pixel data of the first and second fields of each moving image, first, second, and third output signals that include moving-image pixel data in sequence for three adjacent horizontal lines of an image to be displayed on the television receiver. In the generation of the first, second, and third output signals, the pixel data for the higher-priority moving images are produced for each pixel position of the first, second, and third output signals at any instant of time when moving images overlap in an area of the visual display of the television receiver. Still further, the first and third output signals include moving-image pixel data for the lines of one of the first and second fields, and the second output signal includes moving-image pixel data for the lines of the other of the first and second fields, and includes chrominance data when the moving image data relates to the true color format.
The convolution element receives the first, second, and third output signals from the buffer and processing zone element, and generates therefrom an output signal that provides a weighted average for a central pixel of a predetermined matrix of pixel data. This output signal is transmitted to the television receiver when a moving image is included in a pixel area of the visual display of the television receiver, and the central pixel is part of the second output signal from the buffer and processing zone element. Still further, the convolution element transmits a live video signal to the visual display of the television receiver when a moving image is not included in a pixel area of the visual display of the television receiver. Viewed from another aspect, the present invention relates to an apparatus for processing mixed graphics and video signals to be displayed on a standard interlaced television receiver. The apparatus comprises a digital memory element, a buffer and processing zone element, and a convolution element. The digital memory element stores, and subsequently transmits, an output signal comprising moving image data for each moving image to be selectively displayed in selected predetermined areas of a visual display of the standard interlaced television receiver. The moving image data comprises either a true color video format or a color palette data format for first and second fields of a moving image that is to be displayed on the television receiver, and each moving image has a predetermined priority when there is more than one moving image. The buffer and processing zone element responds to the output signal from the digital memory element to generate therefrom first, second, and third output signals. The three output signals include the moving-image pixel data in sequence for three adjacent horizontal lines of an image to be displayed on the television receiver.
Additionally, the first and third output signals include moving-image pixel data for the lines of one of the first and second fields, and the second output signal includes the moving-image pixel data for the lines of the other of the first and second fields, and includes chrominance data when the moving image data relates to the true color format. The convolution element receives the first, second, and third output signals from the buffer and processing zone element, and generates therefrom an output signal that provides a weighted average for a central pixel of a predetermined matrix of pixel data to be transmitted to the television receiver. This output signal is transmitted to the television receiver when a moving image is included in a pixel area of the visual display of the television receiver, and the central pixel is part of the second output signal from the buffer and processing zone element. Still further, the convolution element transmits a live video signal to the visual display of the television receiver when a moving image is not included in a pixel area of the visual display of the television receiver. Viewed from still another aspect, the present invention relates to an apparatus for processing mixed graphics and video signals to be displayed on a standard interlaced television receiver. The apparatus comprises a digital memory element, a memory controller, a buffer and processing zone element, and a convolution element. The digital memory element stores, and subsequently transmits, an output signal comprising moving image data for each moving image to be selectively displayed in selected predetermined areas of a visual display of the standard interlaced television receiver.
The moving image data comprises either a true color video format or a color palette data format for first and second fields of a moving image that is to be displayed on the television receiver, and each moving image has a predetermined priority when there is more than one moving image. The memory controller causes the moving image data of the first and second fields to be written into predetermined locations of the digital memory element, and subsequently to be selectively read from the digital memory element as a memory output signal that includes a horizontal line of moving image data of the first field, and an adjacent line of moving image data of the second field. The buffer and processing zone element responds to the memory output signal from the memory controller to generate therefrom first, second, and third output signals that include moving-image pixel data in sequence for three adjacent horizontal lines of an image to be displayed on the television receiver. Still further, the pixel data for the higher-priority moving images are produced for each pixel position of the first, second, and third output signals at any instant of time when moving images overlap an area of the visual display of the television receiver. Additionally, the first and third output signals include pixel data of the moving image for the lines of one of the first and second fields, and the second output signal includes the pixel data of the moving image for the lines of the other of the first and second fields, and includes the chrominance data when the moving image data relates to the true color format. The convolution element receives the first, second, and third output signals from the buffer and processing zone element, and generates therefrom an output signal that provides a weighted average for a central pixel of a predetermined matrix of pixel data to be transmitted to the television receiver.
This output signal is transmitted to the television receiver when a moving image is included in a pixel area of the visual display of the television receiver, and the central pixel is part of the second output signal from the buffer and processing zone element. Still further, the convolution element transmits a live video signal to the visual display of the television receiver when a moving image is not included in a pixel area of the visual display of the television receiver. The invention will be better understood from the following more detailed description taken together with the accompanying drawings.
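The convolution element's behavior can be sketched as a 3x3 weighted average with a live-video bypass. The kernel weights below are an assumption for illustration (the summary does not give them; a center-heavy kernel normalized to 1 is typical for interlace flicker reduction); only the 3x3 pixel matrix, the central-pixel output, and the motion/no-motion selection come from the text:

```python
# Assumed flicker-reduction kernel; weights sum to 16 so the result
# stays in the input range after the >>4 normalization.
KERNEL = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def convolve_pixel(window3x3):
    """Weighted average for the central pixel of a 3x3 pixel-data matrix
    drawn from three adjacent horizontal lines (LINE 0, LINE 1, LINE 2)."""
    total = sum(KERNEL[r][c] * window3x3[r][c]
                for r in range(3) for c in range(3))
    return total // 16

def output_pixel(window3x3, live_pixel, motion_present):
    """Emit the convolved pixel where a moving image occupies this area,
    otherwise pass the live video signal straight through."""
    return convolve_pixel(window3x3) if motion_present else live_pixel
```

A uniform window passes through unchanged, which is the sanity check for any normalized kernel.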
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 is a block diagram of a subscriber cable box unit in accordance with the present invention. Figure 2 is a block diagram of a first portion of a Video Processing Circuit located in the subscriber cable box unit of Figure 1, in accordance with the present invention. Figure 3 is a block diagram of a second portion of the Video Processing Circuit located in the subscriber cable box unit of Figure 1, in accordance with the present invention. Figures 4, 5, and 6 illustrate the operation of a Pixel Assembly Intermediate Memory Area that is part of the first portion of the Video Processing Circuit of Figure 2, in accordance with the present invention. Figure 7 is a block diagram of an exemplary Multiplexer/Fader that is part of the second portion of the Video Processing Circuit shown in Figure 3. Figure 8 is a block diagram of an example configuration of a convolver, which is part of the second portion of the Video Processing Circuit shown in Figure 3.
Detailed Description It should be understood that the corresponding elements that perform the same function in each of the figures, have received the same designation number.
Referring now to Figure 1, there is shown a block diagram of a subscriber cable box unit 10, which can be found on a subscriber's premises and provides interactive video processing in accordance with the present invention. The subscriber cable box unit 10 comprises a first module (MODULE 1) 12 (shown inside a first dotted-line rectangle), and a second module (MODULE 2) 14 (shown inside a second dotted-line rectangle). The first module 12 is a conventional configuration comprising a Radio Frequency (RF) to Baseband Converter 20 and a Converter Control System 22, each of which is known in the art. The RF to Baseband Converter 20 receives multiplexed radio frequency television channel signals in the standard NTSC or PAL format, which propagate on a cable 27 from a central office of a remote cable company (not shown), and selectively converts these multiplexed radio frequency television channel signals from their multiplexed channel frequencies to baseband frequencies. The RF to Baseband Converter 20 transmits a baseband video output signal resulting from the conversion process on a bus 24 to the second module 14. The Converter Control System 22 is typically controlled by the user (subscriber) through either an infrared remote control device or a keypad on the cable box, as is well known in the art. The Converter Control System 22 functions to receive/transmit authorization and access control signals by means of the cable 27 to or from the central office of the remote cable company, to activate scrambling or descrambling of the baseband video, and to produce On-Screen Display (OSD) messages. The Converter Control System 22 produces control signals by means of a bus 29 to the RF to Baseband Converter 20, to select the desired channel programming, and different control and decoding data signals (e.g.
downstream and upstream control data output signals, infrared receive and transmit signals, and decoded T1 Quadrature Phase Shift Keying data signals), via conductors 31 and 33 to the second module 14. The second module 14 comprises a Serial Interface Processor (SIP) 30, Input/Output (I/O) devices 32, a Read Only Memory (ROM) 34, a Random Access Memory (RAM) 35, a Central Processing Unit (CPU) 36, a Graphics Memory 38, and a Video and Memory Control integrated circuit (VIDEO AND MEM. CONT.) 40 (shown inside a dotted-line rectangle). The Serial Interface Processor 30, the input/output devices 32, the read only memory 34, the random access memory 35, the central processing unit 36, and a Memory Controller and Motion Picture State Machine 42 of the Video and Memory Control integrated circuit 40, are interconnected by a data bus 48. The central processing unit 36 may comprise any suitable processing unit and, in accordance with the present invention, is a relatively inexpensive 386-type central processing unit. The read only memory 34 may comprise any suitable memory, such as, for example, an erasable programmable read only memory (EPROM), for initialization purposes and for programming the central processing unit 36. The random access memory 35 may comprise any suitable memory, such as two 256K by 16-bit dynamic random access memories connected in series to provide a 512K by 16-bit random access memory configuration, to be used as working memory for the central processing unit 36. The Graphics Memory 38 may comprise any suitable memory, such as, for example, a 32-bit-wide random access memory area, or preferably two 256K by 16-bit dynamic random access memories configured in parallel for use with a 32-bit-wide bus 39. The Graphics Memory 38 is used to store moving image data relating to graphics and video images.
The use of a 32-bit-wide bus 39 allows the use of fast page mode memory addressing by both the Memory Controller and Motion Picture State Machine 42 and a block memory engine (not shown) that is part of the Video and Memory Controller 40. Through significant use of block-mode memory addressing, an average data transfer rate of approximately one transfer per 52 nanoseconds can be achieved, which corresponds to processing approximately 77 million bytes of data per second. The Serial Interface Processor 30 operates to handle data communications between the first module 12 and the second module 14. More particularly, the Serial Interface Processor 30 handles all data transfer signals between the second module 14 and the Converter Control System 22 of the first module 12. These data transfer signals can have formats such as, for example, a T1-like data stream at 1.5 Mbits/second, which carries the bulk of the communication transfers, and raw data from an infrared receiver (not shown) in the Converter Control System 22. The Serial Interface Processor 30 may also include a full-duplex synchronous serial port (not shown) for future expansion. These data transfer signal formats are used to communicate between the Converter Control System 22 in the first module 12 and the central processing unit 36 in the second module 14, to activate desired actions in the second module 14. The Video and Memory Control integrated circuit 40 comprises the Memory Controller and Motion Picture State Machine 42, the Composite to YUV circuit 44, and the Video Processing circuit (PROC.) 46. The Memory Controller and Motion Picture State Machine 42 is coupled to the Graphics Memory 38 via the data bus 39, and to the Video Processing circuit 46 via a data bus 45. The Composite to YUV circuit 44 receives the baseband composite video signal from the bus 24, and produces the resulting YUV video signals to the Memory Controller and Motion Picture State Machine 42 on a bus 43.
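The quoted throughput can be checked with a one-line calculation: a 32-bit (4-byte) transfer every 52 nanoseconds works out to roughly 77 million bytes per second, matching the figure in the text.

```python
# Sanity check of the quoted figure: one 32-bit (4-byte) transfer
# every ~52 ns on the 32-bit-wide bus 39.
transfer_time_s = 52e-9        # average time per transfer, from the text
bytes_per_transfer = 4         # 32-bit bus width
rate = bytes_per_transfer / transfer_time_s   # bytes per second
# rate is about 7.7e7, i.e. the ~77 million bytes/second stated above
```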
The Video Processing circuit 46 receives the video signals from the Memory Controller and Motion Picture State Machine 42 on the data bus 45, and produces standard NTSC or PAL video signals on a bus 47 to a remote television receiver (not shown), or to another processing circuit (not shown). It should be understood that the present invention lies within the area of the Video and Memory Control integrated circuit 40 and the Graphics Memory 38. The elements of the first module 12 and the second module 14 were introduced and discussed hereinabove for a better understanding of the manner in which the present invention fits into the interactive subscriber cable box unit 10. Referring now to Figures 2 and 3, there are shown block diagrams of the first and second portions, respectively, of the Video Processing Circuit 46 (shown within a dotted-line area), which is located in the second module 14 of the subscriber cable box unit 10 of Figure 1, in accordance with the present invention. As shown in Figure 2, the Graphics Memory 38, which forms an element of the second module 14 of Figure 1, is coupled by means of the data bus 39 to the Memory Controller and Motion Picture State Machine 42 that is part of the Video and Memory Controller 40 (shown within a dotted-line area) of the second module 14 of Figure 1. A first portion of the Video Processing Circuit 46 comprises a Data Tube 50 and a Pixel Assembly Intermediate Memory Area 52 (shown inside a dotted-line rectangle). The Data Tube 50 receives the data on the bus 45, which was obtained by the Memory Controller and Motion Picture State Machine 42 from the Graphics Memory 38 for a particular moving image, to be transmitted to the Pixel Assembly Intermediate Memory Area 52.
More particularly, the Data Tube 50 receives data for a moving image from the Graphics Memory 38 by means of the Memory Controller and Motion Picture State Machine 42, and provides separate outputs for the luminance data (Y data) and the chrominance data (C data) to be transmitted to the Pixel Assembly Intermediate Memory Area 52. The Pixel Assembly Intermediate Memory Area 52 comprises first, second, and third double-line buffer zones 53, 54, and 55, respectively, and a line 0 Y/G buffer zone 58. The first double-line buffer zone 53 is used to store line 1a Y/G (luminance) data and line 1b Y/G data for the first and second lines of the first field of a moving image, received by means of a bus 49 from the Data Tube 50. The line luminance data comprises 10 bits (bits 9-0) of data and control for each pixel of a line. The second double-line buffer zone 54 is used to store the line 1a C (chrominance) data and the line 1b C data of the first and second lines of the first field of the moving image, received by means of a bus 51 from the Data Tube 50. The line chrominance data comprises 8 bits (bits 7-0) of data for each pixel of a line. The third double-line buffer zone 55 is used to store the line 2a Y/G (luminance) data and the line 2b Y/G data of the first and second lines of a second field of a moving image, received via the bus 49 from the Data Tube 50. The line luminance data comprises 10 bits (bits 9-0) of data and control for each pixel of a line. It should be understood that the lines 1a and 2a of the first and third double-line buffer zones 53 and 55 store first and second horizontal lines of pixel data, respectively, wherein the first and second horizontal lines are adjacent lines within separate fields of the moving image in an interlaced display format.
In a similar manner, the lines 1b and 2b of the first and third double-line buffer zones 53 and 55 store the third and fourth horizontal lines of pixel data, respectively, where the third and fourth horizontal lines are adjacent lines within separate fields of the moving image in an interlaced visual display format. In other words, the first and third double-line buffer zones 53 and 55 store in sequence the luminance data and the control for, for example, the pixels of a pair of odd and even lines, respectively, of the respective first and second fields, or vice versa, of the moving image during a scan of an interlaced visual display format. The second double-line buffer zone 54 stores the chrominance data for the lines stored in the first double-line buffer zone 53. A chrominance double-line buffer zone (not shown), similar to the double-line buffer zone 54, could be provided for the double-line buffer zone 55, but it is omitted for reasons of economy, and because it is not important to the convolver, which will be explained later herein. The output data from the first double-line buffer zone 53 comprises 10 bits of luminance data and the control for each pixel of the lines stored therein, which are produced in parallel for each pixel on a bus designated as LINE 1 to the circuit of Figure 3. The output data from the second double-line buffer zone 54 comprises eight bits of chrominance data for each pixel of the lines stored therein, which are produced in parallel for each pixel on a bus designated as LINE 1c to the circuit of Figure 3. The output data from the third double-line buffer zone 55 comprises 10 bits of luminance data and the control for each pixel of the lines stored therein, which are produced in parallel for each pixel on a bus designated as LINE 2 to the circuit of Figure 3, and to the Y/G Intermediate Memory Area of Line 0, 58. 
The Y/G Intermediate Memory Zone of line 0, 58, functions to delay the line data produced by the third double-line buffer zone 55 by one horizontal line period, to provide a delayed line output comprising ten bits of luminance data and the control for each pixel of the line stored therein, which is produced in parallel on a bus designated as LINE 0 to the circuit of Figure 3. It should be understood that, at a sampling rate of 13.5 MHz for the standard NTSC television visual display, there are 858 pixels per line of the image, of which approximately 704 pixels are actually displayed, and that there are 525 horizontal lines of pixels in the two fields of an image, of which approximately 440-500 lines are normally seen, depending on the television receiver used. Turning now to Figures 4, 5, and 6, there is shown an example operation sequence for the first and third double-line buffer zones 53 and 55, respectively, and for the Y/G Intermediate Memory Area of Line 0, 58, of the Pixel Assembly Intermediate Memory Area 52 of Figure 2 in accordance with the present invention. It should be understood that, in normal operation of the double-line buffer zones 53 and 55, a horizontal line of pixel data from a first field is inserted into one half of the first double-line buffer zone 53, concurrently with a horizontal line of pixel data from a second field being inserted into one half of the third double-line buffer zone 55. Concurrently with the introduction of the horizontal lines of pixel data into the first halves of the first and third double-line buffer zones 53 and 55, the horizontal lines of pixel data previously stored in the other halves of the first and third double-line buffer zones 53 and 55 are read out on the outputs of LINES 1 and 2, respectively. In other words, for the first double-line buffer zone 53, a first horizontal line of pixel data from the first field is inserted, for example, into the Y/G portion of line 1a of the first 
double-line buffer zone 53, and during a following horizontal line period, a second horizontal line of pixel data from the first field of a frame is inserted into the Y/G portion of line 1b of the first double-line buffer zone 53, while the first horizontal line of pixel data from the Y/G portion of line 1a is read out on the output of LINE 1. During a following horizontal line period, a third horizontal line of pixel data from the first field is put into the Y/G portion of line 1a of the first double-line buffer zone 53, while the second horizontal line of pixel data from the Y/G portion of line 1b is read out on the output of LINE 1. In a concurrent manner, the first, second, and third horizontal lines of pixel data from a second field of the frame are similarly inserted into the third double-line buffer zone 55, and are read out therefrom on the output of LINE 2. Figure 4 shows the end point of an initialization stage of the Pixel Assembly Intermediate Memory Area 52, after the subscriber cable box unit 10 of Figure 1 has first been powered on. In a more particular way, when the unit is turned on, the pixel data for a horizontal line 0 of a first field, and the pixel data for a horizontal line 1 of a second field, are put into the Y/G portion of line 1a of the first double-line buffer zone 53, and into the Y/G portion of line 2a of the third double-line buffer zone 55, respectively, during a first horizontal line period. 
During a second horizontal line period, the pixel data for a horizontal line 2 of the first field, and the pixel data for a horizontal line 3 of the second field, are entered into the Y/G portion of line 1b of the first double-line buffer zone 53, and into the Y/G portion of line 2b of the third double-line buffer zone 55, respectively, while the pixel data for the horizontal lines 0 and 1 are read from the Y/G portion of line 1a of the first double-line buffer zone 53, and from the Y/G portion of line 2a of the third double-line buffer zone 55, respectively, over the respective outputs of LINES 1 and 2. Concurrently therewith, the pixel data for the horizontal line 1 from the Y/G portion of line 2a of the third double-line buffer zone 55 are put into the Y/G buffer zone of line 0, 58. Since the Y/G buffer zone of line 0, 58, functions to delay the horizontal line data stored therein by a period of one horizontal line, and the buffer zone 58 had no data stored therein when initialized, its output on LINE 0 does not yet include valid data. Figure 5 continues the loading and unloading process after the initialization steps shown in Figure
4. More particularly, the pixel data for a horizontal line 4 of the first field, and the pixel data for a horizontal line 5 of the second field, are put into the Y/G portion of line 1a of the first double-line buffer zone 53, and into the Y/G portion of line 2a of the third double-line buffer zone 55, respectively, during a third horizontal line period. Concurrently therewith, the pixel data for the horizontal lines 2 and 3 are read from the Y/G portion of line 1b of the first double-line buffer zone 53, and from the Y/G portion of line 2b of the third double-line buffer zone 55, respectively, on the respective outputs of LINES 1 and 2. Concurrently therewith, the pixel data for the horizontal line 3 from the Y/G portion of line 2b of the third double-line buffer zone 55 are put into the Y/G buffer zone of line 0, 58, while the pixel data previously stored for the horizontal line 1 are produced on the output of LINE 0. Accordingly, the buffer zones 58, 53, and 55 are producing pixel data for the horizontal lines 1, 2, and 3, respectively, of a moving image on the respective outputs of LINES 0, 1, and 2 during the third horizontal line period, where the pixel data for the horizontal lines 1 and 3 are part of the second field, and the pixel data for the horizontal line 2 are part of the first field of a moving image that was stored in the Graphics Memory 38 (shown in Figures 1 and 2). Figure 6 continues the loading and unloading process from the step shown in Figure 5. More particularly, the pixel data for a horizontal line 6 of the first field, and the pixel data for a horizontal line 7 of the second field of a moving image, are put into the Y/G portion of line 1b of the first double-line buffer zone 53, and into the Y/G portion of line 2b of the third double-line buffer zone 55, respectively, during a fourth horizontal line period. 
Concurrently therewith, the pixel data for the horizontal lines 4 and 5 of the moving image are read from the Y/G portion of line 1a of the first double-line buffer zone 53, and from the Y/G portion of line 2a of the third double-line buffer zone 55, respectively, on the respective outputs of LINES 1 and 2. Concurrently therewith, the pixel data for the horizontal line 5 from the Y/G portion of line 2a of the third double-line buffer zone 55 are put into the Y/G Intermediate Memory Area of Line 0, 58, while the pixel data previously stored for the horizontal line 3 are produced on the output of LINE 0. Accordingly, the Intermediate Memory Zones 58, 53, and 55 are producing pixel data for the horizontal lines 3, 4, and 5, respectively, on the outputs of LINES 0, 1, and 2 during the fourth horizontal line period, where the data for the horizontal lines 3 and 5 are part of the second field of the moving image, while the data for the horizontal line 4 are obtained from the first field of the moving image that was stored in the Graphics Memory 38 (shown in Figures 1 and 2). From Figures 5 and 6, it can be seen that after the initialization (Figure 4), the pixel data on the output of LINE 1 represent the data for the horizontal lines in sequence (for example, the even-numbered horizontal lines 0-254 of a standard NTSC image) of a first field of the two fields of a frame for an NTSC interlaced visual display. After the even-numbered horizontal lines, for example, of the first field have been produced in sequence on the output of LINE 1 during sequential horizontal line periods, the output continues with the horizontal lines numbered in sequence (for example, the odd-numbered horizontal lines 1-255 of a standard NTSC image) of the second field of the frame, in the manner required for the scanning of an interlaced visual display. 
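The line sequencing shown in Figures 4 to 6 can be sketched as a simple software model. The function below is illustrative only (it is not part of the patent); it assumes each field's horizontal lines are supplied as a list, and models the one-line delay of the Y/G buffer zone of line 0, 58:

```python
def three_line_windows(field1_lines, field2_lines):
    """Yield (LINE 0, LINE 1, LINE 2) triples as produced by the
    pixel-assembly buffer area: LINE 1 carries a line of the first
    field, LINE 2 the adjacent line of the second field, and LINE 0
    is the previous LINE 2 delayed by one horizontal line period."""
    line0 = None  # the delay buffer holds no valid data at power-up
    for l1, l2 in zip(field1_lines, field2_lines):
        if line0 is not None:
            yield (line0, l1, l2)  # valid once the delay buffer is filled
        line0 = l2                 # this LINE 2 becomes next period's LINE 0
```

Running this model with lines 0, 2, 4, 6 of a first field and lines 1, 3, 5, 7 of a second field reproduces the windows of Figures 5 and 6: lines (1, 2, 3) in the third line period and (3, 4, 5) in the fourth.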
Although not shown in Figures 4 to 6, it should be understood that the chrominance data are produced on the LINE 1c output from the second double-line buffer zone 54 shown in Figure 2, concurrently with the associated luminance pixel data for each horizontal line being produced on the output of LINE 1. Turning now to Figure 3, a block diagram of a second portion of the Video Processing Circuit 46 of the subscriber cable box unit 10 of Figure 1 is shown in accordance with the present invention. The second portion of the Video Processing Circuit 46 comprises a Color Palette circuit 60, a Demultiplexer from YC to YUV 62, a Multiplexer/Fader (MUX./FADER) 64, a 3:1 Multiplexer and Control (MUX 3:1 and CONT.) 66, and a Convolver 68. The 10-bit pixel data (bits 9:0) that propagate on each of LINES 0, 1, and 2 from the output of the Pixel Assembly Intermediate Memory Area 52 of Figure 2, for the corresponding pixels in three adjacent horizontal lines of a moving image, are received on separate inputs of each of the Color Palette circuit 60, the Demultiplexer from YC to YUV 62, and the 3:1 Multiplexer and Control 66. More particularly, bits 7-0 of the 10-bit/pixel parallel output from the Pixel Assembly Intermediate Memory Area 52 for each of the outputs of LINES 0, 1, and 2 are received at the inputs of the Color Palette circuit 60 and the Demultiplexer from YC to YUV 62, while bits 9 and 8 of the 10-bit/pixel parallel output from the Pixel Assembly Intermediate Memory Area 52 for each of the outputs of LINES 0, 1, and 2 are received at the inputs of the 3:1 Multiplexer and Control 66. In addition, the Demultiplexer from YC to YUV 62 receives bits 7-0 of chrominance data produced in parallel on the output of LINE 1c from the Pixel Assembly Intermediate Memory Area 52, since the chrominance data is only used when the pixel data of the moving image is related to a True Color moving image signal. 
In a more particular way, where the data of the moving image is encoded as a color palette signal, the code itself defines the color, and the chrominance data required with a true-color video signal is not needed. The Color Palette circuit 60 operates to detect when the 8 bits (bits 7:0) of pixel data received in parallel on each of the outputs of LINES 0, 1, and 2 represent separate codes for particular colors of a color palette, and to convert these color palette codes into an output signal on the bus 61, which represents a 24-bit multiplexed YUV color palette signal for the three 8-bit pixel data received for those three lines. The Color Palette circuit 60 is a well-known device, and any suitable circuit can be used for it. The Demultiplexer from YC to YUV 62 detects when the 8 bits (bits 7:0) of data received in parallel for the pixels of each of the outputs of LINES 0, 1, and 2 from the Pixel Assembly Intermediate Memory Area 52 represent true-color data (e.g., a moving image obtained directly from a television image), and also uses the 8-bit chrominance data obtained by means of the LINE 1c output from the Pixel Assembly Intermediate Memory Area 52, to generate a 24-bit True Color YUV output signal for the pixels of the three lines, to be transmitted over the bus 63. The Multiplexer/Fader (MUX./FADER) 64 receives, at its separate inputs, each of the 24-bit YUV color palette data signals propagating over the bus 61 from the Color Palette circuit 60, the 24-bit True Color YUV data signals propagating over the bus 63 from the Demultiplexer from YC to YUV 62, and the 24-bit YUV live video signals over a bus 59. 
The Multiplexer/Fader 64 responds to control signals on a conductor 67 from the 3:1 Multiplexer and Control 66 to produce one of the three input signals (24-bit YUV color palette, 24-bit true-color YUV, or 24-bit YUV live video) received in the Multiplexer/Fader 64 during each pixel period, as a mixed YUV output signal on a bus 65. More specifically, the 3:1 Multiplexer and Control 66 determines, from the bits 9 and 8 received on the outputs of LINES 0, 1, and 2 from the Pixel Assembly Intermediate Memory Area 52, whether the pixel data from the Pixel Assembly Intermediate Memory Area 52 on the outputs of LINES 0, 1, and 2 represent color palette data, true-color data, or data (invalid data) for a pixel that is not part of a moving image to be overlaid on a live video signal, in which case the live video signal must be used for that pixel instead of the color palette or true-color data received from the Pixel Assembly Intermediate Memory Area 52. As a result of this control information obtained from bits 9 and 8 of the outputs of LINES 0, 1, and 2 from the Pixel Assembly Intermediate Memory Area 52, the 3:1 Multiplexer and Control 66 sends control signals over the conductor 67 to the Multiplexer/Fader 64 to select the correct input data for each pixel of an image to be displayed on a remote NTSC or PAL television receiver (not shown). The Convolver 68 uses sequential series of three pixel data values received in the signal from the Multiplexer/Fader 64 on the bus 65, to provide an 8-bit weighted output signal for the pixel data of a central pixel in a 3-by-3 matrix of corresponding pixels in three adjacent lines of a television image, or to provide the signal from the Multiplexer/Fader 64 on the bus 65 as a YUV output signal on the bus 47, depending on the control signals from the 3:1 Multiplexer and Control 66 over a conductor 69. 
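The per-pixel source selection made from bits 9 and 8 can be sketched as follows. This is only an illustrative model: the patent does not specify the actual two-bit encodings, so the `PALETTE` and `TRUE_COLOR` values below are assumptions, not the hardware's codes.

```python
# Assumed (hypothetical) encodings for control bits 9:8 of a pixel word.
PALETTE, TRUE_COLOR = 0b01, 0b10

def select_source(ctrl_bits, palette_yuv, true_color_yuv, live_yuv):
    """Sketch of the per-pixel choice driven by the 3:1 Multiplexer and
    Control 66: color palette data, true-color data, or (for a pixel
    that is not part of a moving image) the live video signal."""
    if ctrl_bits == PALETTE:
        return palette_yuv       # pixel encoded via the color palette
    if ctrl_bits == TRUE_COLOR:
        return true_color_yuv    # pixel carries true-color YC data
    return live_yuv              # invalid pixel: pass the live video through
```

Any other bit pattern falls through to live video, matching the rule that the live video signal is used wherever the pixel is not part of a moving image.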
Referring now to Figure 7, a block diagram of an example Multiplexer/Fader circuit 64 (shown within a dotted-line rectangle) is shown, comprising a 2:1 Multiplexer (MUX.) 72 and a Fader 74 (shown within a dotted-line rectangle). The Fader 74 comprises an A-B adder 75, a Multiplier with Sign (MULT. SIGN) 77, and an A+B adder 78. The 2:1 Multiplexer 72 receives the graphics data signals from the Color Palette circuit 60 on the bus 61 at a first input terminal (A), and the graphics data signals from the Demultiplexer from YC to YUV 62 on the bus 63 at a second input terminal (B). A control signal on the conductor 67 from the 3:1 Multiplexer and Control 66 selects which of the two graphics input signals (from the input terminal A or B) will be produced from the 2:1 Multiplexer 72 at the output terminal (O). The pixel graphics output signals (Y, U, or V) from the output terminal (O) of the 2:1 Multiplexer 72 (designated as G) on a bus 70 are received at a first input terminal (A) of the A-B adder 75 of the Fader 74. A live video YUV signal (Y, U, or V) (designated as L) is received from a bus 59 at a second input terminal (B) of the A-B adder 75. The pixel data values of the terminal A input data from the 2:1 Multiplexer 72, minus the data values of the live video YUV pixel data received at the input terminal B, are provided as an output at an output terminal (O) of the A-B adder 75. The Multiplier with Sign 77 receives, for example from a register (not shown), a changeable 9-bit ratio control value (R) on a bus 71 at a first input terminal (A), and the output from the A-B adder 75 on a bus 76 at a second input terminal (B). The resulting product of the ratio control value (R) on the bus 71 and the graphics signal output data from the A-B adder 75 on the bus 76 is produced at an output terminal (O) on a bus 79 to a first input terminal (A) of the A+B adder 78. 
The live video signal (Y, U, or V) on the bus 59 is received at a second input terminal (B) of the A+B adder 78, and the sum of the two input signal values is provided as an output signal (designated as Q) on the bus 65 to the Convolver 68 (shown in Figure 3). The Fader 74 functions to fade a graphics signal for a moving image in or out, such that the graphic does not appear or disappear instantaneously over the live video signal. In other words, for a fade-in of the graphic, the Fader 74 causes the graphic to appear with increasing intensity on a television receiver, while the live video signal decreases in intensity in the area of the graphic over a short period of time, until the graphic is fully visible. In a similar manner, for a fade-out of the graphic, the Fader 74 causes the graphic to appear with decreasing intensity on a television receiver, while the live video signal increases in intensity in the area of the graphic over a short period of time, until the graphic disappears. The operation of the Fader 74 can be explained according to the following algorithm. For the following, the sample 9-bit fade multiplier value (R) on the bus 71 is defined as follows: R is the fade control value, and ranges from 0 to 256. From the above definitions, Q = [(R/256) * G] + [(1 - R/256) * L] = L + [(G - L) * R]/256, Eq. 1, where "L" is a live video pixel value, "G" is a pixel value of the moving image overlay, and the symbol "*" represents a multiplication. From Equation 1 above, when the ratio used for the multiplier value R changes, the intensities of the graphics and live video signals change in opposite directions. Referring now to Figure 8, a block diagram of the Convolver 68 shown in Figure 3 is shown. The Convolver 68 (shown within a dotted-line rectangle) comprises a Bypass circuit 80, a Convolver circuit 82, and a Multiplexer (MUX.) 84. 
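Equation 1 above can be sketched in a few lines of integer arithmetic, mirroring the subtract-multiply-add datapath of the Fader 74 (the function name and variable names are illustrative only):

```python
def fade(g, l, r):
    """Mix a graphics pixel value g over a live-video pixel value l.

    r is the 9-bit fade control value, ranging from 0 to 256:
    r = 0 gives pure live video, r = 256 gives pure graphics.
    Implements Q = L + ((G - L) * R) / 256 from Equation 1,
    as the A-B adder, signed multiplier, and A+B adder would.
    """
    return l + ((g - l) * r) // 256
```

Sweeping r from 0 to 256 over successive frames produces the fade-in described above: the graphics contribution rises while the live video contribution falls by the same amount.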
The Bypass circuit 80 receives the pixel data in sequence from the Multiplexer/Fader 64 (shown in Figures 3 and 7) on the bus 65, and concurrently generates from them, on buses 81, three vertically aligned pixels of a moving image that is to be displayed on a television receiver. In a more particular way, the three pixels are obtained from the corresponding pixels in three adjacent lines of both fields of a frame that forms a moving image. The three pixel data values are obtained by any suitable configuration, such as a plurality of delay circuits operating from a pixel clock, or from a clock at three times the pixel rate. The three pixel data values are received by means of the buses 81 by the Convolver circuit 82. The pixel data in sequence from the Multiplexer/Fader 64 are received by the Bypass circuit 80 on the bus 65, pass through the Bypass circuit 80, and are provided to a first input (A) of the Multiplexer 84 by means of a bus 85. Still further, the Bypass circuit 80 transmits sequential series of three pixel data values from its separate outputs to the separate inputs of the Convolver circuit 82 on the buses 81. The Convolver circuit 82 provides an 8-bit weighted output signal for the pixel data of a central pixel in a 3-by-3 matrix of corresponding pixels in three adjacent lines of a television image, at an output thereof, to a second input (B) of the Multiplexer 84 by means of a bus 86. The Multiplexer 84 selects the signals at the first (A) or second (B) inputs to be transmitted to the output terminal (O) and to the bus 47, depending on the control signals from the 3:1 Multiplexer and Control 66 on a conductor 69. 
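The weighting of a vertical pixel triple can be sketched as below. The patent states only that the weights are previously determined and applied using adders and delays; the symmetric (1, 2, 1)/4 kernel here is an illustrative assumption chosen because it reduces to shifts and adds, not the patent's actual weight values.

```python
def convolve_vertical(p0, p1, p2):
    """Weighted average for the central pixel of a vertical triple
    (LINE 0, LINE 1, LINE 2), using only adds and shifts as the
    Convolver circuit 82 does.  The (1, 2, 1)/4 weights are an
    assumption for illustration; the patent's predetermined weights
    are not given in this excerpt."""
    return (p0 + (p1 << 1) + p2) >> 2
```

A uniform area passes through unchanged, while a single bright line between two dark ones is attenuated, which is the softening effect desired before interlaced display.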
As described in the copending Patent Application Serial Number filed on the same date as the present application by the present inventors, and incorporated herein by reference, the Convolver circuit 82 effectively multiplies (using only adders and delays) the three vertical pixels received on the buses 81 of a 3-by-3 pixel array with previously determined weight values, and provides a weighted-average output signal for the central pixel of the 3-by-3 matrix to a second input (B) of the Multiplexer 84. This process continues for each pixel of a central row (output from LINE 1 of the Pixel Assembly Intermediate Memory Area 52 of Figure 2), as the pixel data for the corresponding pixels of the three adjacent lines progress (shift) horizontally through the moving image. It should be appreciated and understood that the specific embodiments of the invention described hereinabove are merely illustrative of the general principles of the invention. Those skilled in this field can make various modifications that are consistent with the principles set forth. For example, although the present invention has been described above for use in a subscriber cable box unit 10, it should be understood that the present invention can be used, for example, in a production editing station before the television signal is broadcast. In other words, the present invention can be used in television productions to create initial products before they are broadcast, rather than to manipulate the television signal later at a remote subscriber location. This is possible because the quality and resolution of the image will be the same, regardless of whether the editing is done during production or later at the subscriber's location. 
Therefore, it does not matter whether the quality or the resolution might be better in an unedited television production, since the editing is done at some point before the production is seen on the subscriber's interlaced television set.
Claims (22)
1. An apparatus for processing mixed video and graphics signals to be displayed on a standard interlaced television receiver, which comprises: a buffer and processing zone element that responds to a reception of first and second horizontal lines of moving image pixel data from first and second fields, respectively, of each of the moving image(s), to be selectively displayed in selected predetermined separate areas of a visual display of the standard interlaced television receiver, to generate therefrom first, second, and third output signals including moving image pixel data in sequence for three adjacent horizontal lines of an image to be displayed on the television receiver, wherein the pixel data for the moving image having the highest priority are produced for each pixel position of the first, second and third output signals at any instant of time when the moving images overlap an area of the visual display on the television receiver, and the first and third output signals include moving image pixel data for the lines of one of the first and second fields, and the second output signal includes moving image pixel data for the lines of the other of the first and second fields, and includes chrominance data when the data of the moving image is related to a true-color format; and a convolver element for receiving the first, second, and third output signals from the buffer and processing zone element, and for generating therefrom an output signal that provides a weighted average for a central pixel of a previously determined array of pixel data to be transmitted to the television receiver when a moving image is included in a pixel area of the visual display of the television receiver, wherein the central pixel is part of the second output signal from the buffer and processing zone element, and for transmitting a live video signal to the visual display of the television receiver when a moving image is not included in a pixel area 
of the visual display of the television receiver.
2. The apparatus of claim 1, wherein the apparatus further comprises a digital memory element for storing, and subsequently transmitting, an output signal comprising moving image data for each of the moving image(s) to be selectively displayed in selected predetermined areas of a visual display of the standard interlaced television receiver, the moving image data comprising either a true-color video format or a color palette data format for first and second fields of a moving image that is to be displayed on the television receiver, and each of the moving image(s) has a previously determined priority when there is more than one moving image.
3. The apparatus of claim 1, wherein the buffer and processing zone element comprises: an element for concurrently reading a first and a second of the adjacent horizontal lines of pixel data from the digital memory element for each of the moving image(s) to be displayed on the first and second lines, and for producing pixel data for each pixel of the first and second lines that are part of the moving image having the highest priority of the moving image(s), wherein the first line is a line from one of the first and second fields, and the second line is a line from the other of the first and second fields, and each of the first and second lines of the one and the other field are produced in sequence.
4. The apparatus of claim 3, wherein the buffer and processing zone element further comprises a pixel assembly buffer zone comprising: a first double buffer zone element for storing pixel data having the highest priority for each pixel of the moving image(s) of a first and a second of the first horizontal lines that are received in sequence, and for producing pixel data for the first of the horizontal lines previously received in sequence on a first output bus to the convolver element, while a following one of the first horizontal lines is being received in sequence; a second double buffer zone element for storing pixel data having the highest priority for each pixel of the moving image(s) of two of the second horizontal lines that are received in sequence, and for producing pixel data for the second of the horizontal lines previously received in sequence on a second output bus to the convolver element, while a following one of the second horizontal lines is being received in sequence; and a delay buffer zone element, for concurrently receiving pixel data for the second of the horizontal lines being produced by the second double buffer zone element on the second output bus, and for storing and delaying the second of the horizontal lines received by one horizontal line period before transmitting the stored pixel data for the stored second horizontal line on a third bus to the convolver element, while another second horizontal line is received from the second double buffer zone element.
5. The apparatus of claim 4, wherein the first and second double buffer zone elements store pixel data related to a luminance component (Y) of each pixel of the first and second horizontal lines, respectively.
6. The apparatus of claim 5, wherein the pixel assembly buffer zone further comprises: a third double buffer zone element for storing chrominance (C) pixel data having the highest priority for each pixel of the moving image(s) of a first and a second of the first horizontal lines that are received in sequence when the moving image data is stored in a true-color video format, and for producing chrominance pixel data for the first of the horizontal lines previously received in sequence on a fourth output bus, while chrominance pixel data are being received for a next in sequence of the first horizontal lines.
7. The apparatus of claim 6, wherein the buffer and processing zone element further comprises: a color palette converter element that responds to the horizontal line signals on the first, second and third buses from the pixel assembly buffer zone, to convert the pixel data in a color palette format into a YUV color palette output signal; a demultiplexer from YC to YUV that responds to the horizontal line signals of luminance and chrominance on the first, second, third and fourth buses from the pixel assembly buffer zone, to convert the pixel data in a true-color YC format into a true-color YUV output signal; and a multiplexer element that responds to the output signals from the color palette converter element and the demultiplexer from YC to YUV, and to a live video signal, to select one of these three signals for each pixel of each of three adjacent horizontal lines of data of the first and second fields, for transmission to the convolver element according to the data provided for each pixel received on the first, second, and third buses from the pixel assembly buffer zone.
8. The apparatus of claim 7, wherein: a luminance value of each pixel of horizontal line data for the moving image(s) is transmitted on the first, second, and third buses from the pixel assembly buffer zone as a first plurality of bits of a pixel data word, and a designation of whether each pixel of the moving image(s) is encoded in a color palette or true-color format, and whether the pixel forms part of the moving image(s), is encoded in a second plurality of bits of the pixel data word; and the second plurality of bits of the pixel data word is used by the multiplexer element to select one of the output signals from the color palette converter element and the demultiplexer from YC to YUV, and a live video signal, to be transmitted to the convolver element.
9. An apparatus for processing mixed video and graphics signals to be displayed on a standard interlaced television receiver, which comprises: a digital memory element for storing and subsequently transmitting an output signal comprising moving image data for each of the moving image(s) to be selectively displayed in selected predetermined areas of a visual display of the standard interlaced television receiver, the moving image data comprising either a true-color video format or a color palette data format for first and second fields of a moving image to be displayed on the television receiver, and each of the moving image(s) has a previously determined priority when there is more than one moving image; a buffer and processing zone element that responds to the output signal from the digital memory element to generate therefrom first, second, and third output signals including moving image pixel data in sequence for three adjacent horizontal lines of an image to be displayed on the television receiver, wherein the pixel data for the higher-priority moving images are produced for each pixel position of the first, second and third output signals at any instant of time when the moving images overlap an area of the visual display on the television receiver, and the first and third output signals include moving image pixel data for the lines of one of the first and second fields, and the second output signal includes moving image pixel data for the lines of the other of the first and second fields, and includes chrominance data when the moving image data is related to a true-color format; and a convolver element for receiving the first, second, and third output signals from the buffer and processing zone element, and for generating therefrom an output signal which provides a weighted average for a central pixel of a previously determined array of pixel data to be transmitted to the television receiver when a moving image is included in a pixel 
area of the visual display of the television receiver, wherein the central pixel is part of the second output signal from the buffer and processing zone element, and for transmitting a live video signal to the visual display of the television receiver when a moving image is not included in a pixel area of the visual display of the television receiver.
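The weighted-average operation performed by the convolver element can be sketched as a vertical filter over the three adjacent lines carried on the three output signals. The 1-2-1 weights and integer arithmetic below are illustrative assumptions; the claim only requires a weighted average over a previously determined array of pixel data:

```python
def convolve_pixel(above, center, below, weights=(1, 2, 1)):
    """Weighted average of a central pixel and its vertical neighbors.

    The three samples come from three adjacent horizontal lines of the
    interlaced frame (the center line from one field, the outer lines
    from the other).  The 1-2-1 weights are an illustrative choice for
    reducing interlace flicker, not values taken from the patent.
    """
    w_above, w_center, w_below = weights
    total = w_above * above + w_center * center + w_below * below
    return total // (w_above + w_center + w_below)
```

Because the outer lines come from the opposite field, folding them into the center pixel softens line-to-line differences that would otherwise flicker at the field rate on an interlaced display.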
10. The apparatus of claim 9, wherein the buffer and processing zone element comprises: an element for concurrently reading a first and a second of the adjacent horizontal lines of pixel data from the digital memory element for each of the moving image(s) to be displayed on the first and second lines, and for producing pixel data for each pixel of the first and second of the lines that are part of the moving image that has the highest priority of the moving image(s), wherein the first line is a line from one of the first and second fields, and the second line is a line from the other of the first and second fields, and each of the first and second lines of either field is produced in sequence.
11. The apparatus of claim 10, wherein the buffer and processing zone element further comprises a pixel assembly buffer zone comprising: a first double buffer zone element for storing pixel data having a higher priority for each pixel of the moving image(s) of two of the first horizontal lines that are received in sequence, and for producing pixel data for the first of the horizontal lines previously received in sequence on a first output busbar to the convolver element, while a next one of the first horizontal lines is being received in sequence; a second double buffer zone element for storing pixel data having a higher priority for each pixel of the moving image(s) of two of the second horizontal lines that are received in sequence, and for producing pixel data for the second of the horizontal lines previously received in sequence on a second output busbar to the convolver element, while a next one of the second horizontal lines is being received in sequence; and a delay buffer zone element for concurrently receiving pixel data for the second of the horizontal lines being produced by the second double buffer zone element on the second output busbar, and for storing and delaying the second of the horizontal lines received by a horizontal line period before transmitting the stored pixel data for the second horizontal line on a third busbar to the convolver element, while another second horizontal line is received from the second double buffer zone element.
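The arrangement of double-buffered line stores plus a one-line-period delay can be sketched in software as follows. The function and variable names, and the exact source of the delayed line, are simplifying assumptions; the claim describes hardware busbars and buffer zones rather than Python lists:

```python
def assemble_triples(field1_lines, field2_lines):
    """Sketch of the pixel assembly buffer zone: one line from each
    interlaced field is received concurrently during each line period,
    and a one-line-period delay supplies the third adjacent display
    line, so the first and third lines come from one field and the
    middle line from the other.

    The real circuit uses double-buffered line stores so that a new
    line can be written while the previously received one is read out.
    """
    delayed = None                       # one-line-period delay store
    triples = []
    for f1_line, f2_line in zip(field1_lines, field2_lines):
        if delayed is not None:
            # (top, middle, bottom) adjacent display lines for the convolver
            triples.append((delayed, f2_line, f1_line))
        delayed = f1_line                # becomes the top line next period
    return triples
```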
12. The apparatus of claim 11, wherein the first and second double buffer zone elements store pixel data related to a luminance component (Y) of each pixel of the first and second horizontal lines, respectively.
13. The apparatus of claim 12, wherein the pixel assembly buffer zone further comprises: a third double buffer zone element for storing chrominance pixel data (C) having a higher priority for each pixel of the moving image(s) of a first and a second of the first horizontal lines that are received in sequence when the moving image data is stored in a true color video format, and for producing chrominance pixel data for the first of the horizontal lines previously received in sequence on a fourth output busbar, while chrominance pixel data is being received for a next one in sequence of the first horizontal lines.
14. The apparatus of claim 13, wherein the buffer and processing zone element further comprises: a color palette converter element that responds to the horizontal line signals on the first, second, and third busbars from the pixel assembly buffer zone to convert the pixel data in a color palette format into a YUV color palette output signal; a demultiplexer from YC to YUV that responds to the horizontal line signals of luminance and chrominance on the first, second, third, and fourth busbars from the pixel assembly buffer zone to convert the pixel data in a true color YC format into a true color YUV output signal; and a multiplexer element that responds to the output signals from the color palette converter element and the demultiplexer from YC to YUV, and a live video signal, to select one of these three signals for each pixel of each of three adjacent horizontal lines of data from the first and second fields to be transmitted to the convolver element, according to the data provided for each pixel received on the first, second, and third busbars from the pixel assembly buffer zone.
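The demultiplexer from YC to YUV can be sketched as splitting a multiplexed chrominance stream back into per-pixel U and V values. The 4:2:2-style layout below, in which each U/V sample pair is shared by two adjacent pixels, is an assumption for illustration; the claim only states that luminance and chrominance are recombined into a true color YUV signal:

```python
def demux_yc_to_yuv(y_samples, c_samples):
    """Sketch of the YC-to-YUV demultiplexer for true color pixels.

    Assumes a 4:2:2-style chrominance stream that alternates U and V
    samples, each pair shared by two horizontally adjacent pixels.
    Returns one (Y, U, V) triple per pixel.
    """
    pixels = []
    for i, y in enumerate(y_samples):
        pair = (i // 2) * 2              # index of this pixel pair's U sample
        u = c_samples[pair]
        v = c_samples[pair + 1]
        pixels.append((y, u, v))
    return pixels
```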
15. The apparatus of claim 14, wherein: a luminance value of each pixel of horizontal line data for the moving image(s) is transmitted on the first, second, and third busbars from the pixel assembly buffer zone as a first plurality of bits of a pixel data word, and a designation of whether each pixel of the moving image(s) is encoded in a color palette or true color format, and whether the pixel forms part of the moving image(s), is encoded as a second plurality of bits of the pixel data word; and the second plurality of bits of the pixel data word is used by the multiplexer element to select one of the output signals from the color palette converter element and the demultiplexer from YC to YUV, and a live video signal, to be transmitted to the convolver element.
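The pixel data word of claim 15 can be sketched as a small bit field: a first plurality of bits carrying the luminance value and a second plurality steering the multiplexer. The 8-bit luminance field, the 2-bit selector, and the code values below are all hypothetical; the claims do not fix any widths or encodings:

```python
def decode_pixel_word(word):
    """Decode a hypothetical 10-bit pixel data word.

    Low 8 bits: luminance (Y) value for the pixel.
    Top 2 bits: multiplexer select code telling which source supplies
    this pixel.  Field widths and code values are assumptions for
    illustration only.
    """
    SOURCES = {0b00: "live_video",      # pixel is not part of any moving image
               0b01: "color_palette",   # palette-indexed moving image pixel
               0b10: "true_color"}      # YC true color moving image pixel
    luminance = word & 0xFF
    source = SOURCES[(word >> 8) & 0b11]
    return luminance, source
```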
16. An apparatus for processing mixed video and graphic signals to be displayed on a standard interlaced television receiver, which comprises: a digital memory element for storing and subsequently transmitting an output signal comprising moving image data for each of the moving image(s) to be selectively displayed in selected predetermined areas of a visual display of the standard interlaced television receiver, the image data comprising either true color video data or color palette video data for first and second fields of a moving image that is to be displayed on the television receiver, and each of the moving image(s) has a previously determined priority when there is more than one moving image; a memory controller for causing the moving image data of the first and second fields to be stored in predetermined locations of the digital memory element, and to subsequently be read in a selective manner from the digital memory element as a memory output signal including a horizontal line of moving image data of the first field and an adjacent line of moving image data of the second field; a buffer and processing zone element that responds to the memory output signal from the memory controller to generate therefrom first, second, and third output signals that include moving image pixel data in sequence for three adjacent horizontal lines of an image to be displayed on the television receiver, wherein the pixel data for the higher priority moving images are produced for each pixel position of the first, second, and third output signals at any instant of time when the moving images overlap an area of the visual display on the television receiver, and the first and third output signals include moving image pixel data for the lines of one of the first and second fields, and the second output signal includes moving image pixel data for the lines of the other of the first and second fields, and include chrominance data when
the moving image data is related to a true color format; and a convolver element for receiving the first, second, and third output signals from the buffer and processing zone element, and for generating therefrom an output signal that provides a weighted average for a central pixel of a previously determined array of pixel data to be transmitted to the television receiver when a moving image is included in a pixel area of the visual display of the television receiver, wherein the central pixel is part of the second output signal from the buffer and processing zone element, and for transmitting a live video signal to the visual display of the television receiver when a moving image is not included in a pixel area of the visual display of the television receiver.
17. The apparatus of claim 16, wherein the buffer and processing zone element comprises: an element for concurrently reading a first and a second of the adjacent horizontal lines of pixel data from the digital memory element for each of the moving image(s) to be displayed on the first and second lines, and for producing pixel data for each pixel of the first and second of the lines that are part of the moving image that has the highest priority of the moving image(s), wherein the first line is a line from one of the first and second fields, and the second line is a line from the other of the first and second fields, and each of the first and second lines of either field is produced in sequence.
18. The apparatus of claim 17, wherein the buffer and processing zone element further comprises a pixel assembly buffer zone comprising: a first double buffer zone element for storing pixel data having a higher priority for each pixel of the moving image(s) of two of the first horizontal lines that are received in sequence, and for producing pixel data for the first of the horizontal lines previously received in sequence on a first output busbar to the convolver element, while a next one of the first horizontal lines is being received in sequence; a second double buffer zone element for storing pixel data having a higher priority for each pixel of the moving image(s) of two of the second horizontal lines that are received in sequence, and for producing pixel data for the second of the horizontal lines previously received in sequence on a second output busbar to the convolver element, while a next one of the second horizontal lines is being received in sequence; and a delay buffer zone element for concurrently receiving pixel data for the second of the horizontal lines being produced by the second double buffer zone element on the second output busbar, and for storing and delaying the second of the horizontal lines received by a horizontal line period before transmitting the stored pixel data for the second horizontal line on a third busbar to the convolver element, while another second horizontal line is received from the second double buffer zone element.
19. The apparatus of claim 18, wherein the first and second double buffer zone elements store pixel data related to a luminance component (Y) of each pixel of the first and second horizontal lines, respectively.
20. The apparatus of claim 19, wherein the pixel assembly buffer zone further comprises: a third double buffer zone element for storing chrominance pixel data (C) having a higher priority for each pixel of the moving image(s) of a first and a second of the first horizontal lines that are received in sequence when the moving image data is stored in a true color video format, and for producing chrominance pixel data for the first of the horizontal lines previously received in sequence on a fourth output busbar, while chrominance pixel data is being received for a next one in sequence of the first horizontal lines.
21. The apparatus of claim 20, wherein the buffer and processing zone element further comprises: a color palette converter element that responds to the horizontal line signals on the first, second, and third busbars from the pixel assembly buffer zone to convert the pixel data in a color palette format into a YUV color palette output signal; a demultiplexer from YC to YUV that responds to the horizontal line signals of luminance and chrominance on the first, second, third, and fourth busbars from the pixel assembly buffer zone to convert the pixel data in a true color YC format into a true color YUV output signal; and a multiplexer element that responds to the output signals from the color palette converter element and the demultiplexer from YC to YUV, and a live video signal, to select one of these three signals for each pixel of each of three adjacent horizontal lines of data of the first and second fields to be transmitted to the convolver element, according to the data provided for each pixel received on the first, second, and third busbars from the pixel assembly buffer zone.
22. The apparatus of claim 21, wherein: a luminance value of each pixel of horizontal line data for the moving image(s) is transmitted on the first, second, and third busbars from the pixel assembly buffer zone as a first plurality of bits of a pixel data word, and a designation of whether each pixel of the moving image(s) is encoded in a color palette or true color format, and whether the pixel forms part of the moving image(s), is encoded as a second plurality of bits of the pixel data word; and the second plurality of bits of the pixel data word is used by the multiplexer element to select one of the output signals from the color palette converter element and the demultiplexer from YC to YUV, and a live video signal, to be transmitted to the convolver element.

SUMMARY

An apparatus processes mixed YUV and color palette video signals to be displayed on an NTSC or PAL interlaced television receiver by storing first and second fields of one or more moving images in a graphics memory. Moving images are stored as YUV data or color palette data, and each moving image is given a priority. A higher priority moving image overwrites the pixel data of lower priority moving images where the moving images overlap. First and second adjacent horizontal lines of pixel data of the first and second fields, respectively, are stored concurrently in respective first and second double buffer zones of a pixel assembly buffer zone during each line period. A delay buffer zone, together with the first and second double buffer zones, produces first, second, and third adjacent horizontal lines of pixel data, respectively, at the output of the pixel assembly buffer zone, while the third and fourth adjacent horizontal lines of pixel data are written into the first and second double buffer zones.
A convolver element receives the first, second, and third output signals from the pixel assembly buffer zone and generates therefrom an output signal that provides a weighted average for a central pixel of a previously determined matrix of pixel data, to be transmitted to the television receiver when a moving image is included in a pixel area of the visual display of the television receiver. A live video signal is transmitted to the visual display of the television receiver when a moving image is not included in a pixel area of the visual display of the television receiver. * * * * *
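The priority rule described in the summary, where a higher priority moving image overwrites lower priority pixel data wherever the images overlap, can be sketched per horizontal line as follows. The sprite record layout (`priority`, `x`, `pixels`) is a hypothetical format for illustration; the patent stores the images in graphics memory rather than Python dictionaries:

```python
def composite_line(sprites, width, background=None):
    """Composite one horizontal line of moving images (sprites).

    For each pixel, the sprite with the highest priority covering that
    pixel supplies the data; uncovered pixels fall through to the
    background (None here, standing in for live video).
    """
    line = [background] * width
    # Paint lowest priority first so higher priority overwrites overlaps.
    for sprite in sorted(sprites, key=lambda s: s["priority"]):
        x0 = sprite["x"]
        for i, value in enumerate(sprite["pixels"]):
            if 0 <= x0 + i < width:
                line[x0 + i] = value
    return line
```

A pixel left as `None` corresponds to the case where the multiplexer passes the live video signal straight through instead of moving image data.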
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08523396 | 1995-08-31 | ||
US08/523,396 US5739868A (en) | 1995-08-31 | 1995-08-31 | Apparatus for processing mixed YUV and color palettized video signals |
Publications (2)
Publication Number | Publication Date |
---|---|
MXPA96003750A true MXPA96003750A (en) | 1997-06-01 |
MX9603750A MX9603750A (en) | 1997-06-28 |
Family
ID=24084825
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
MX9603750A MX9603750A (en) | 1995-08-31 | 1996-08-29 | Apparatus for processing mixed yuv and color palletized video signals. |
Country Status (7)
Country | Link |
---|---|
US (1) | US5739868A (en) |
EP (1) | EP0762330A3 (en) |
JP (1) | JPH09107559A (en) |
KR (1) | KR100208552B1 (en) |
CA (1) | CA2179784C (en) |
MX (1) | MX9603750A (en) |
NO (1) | NO963576L (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB9512565D0 (en) * | 1995-06-21 | 1995-08-23 | Sgs Thomson Microelectronics | Video signal processor |
IT1285258B1 (en) * | 1996-02-26 | 1998-06-03 | Cselt Centro Studi Lab Telecom | HANDLING DEVICE FOR COMPRESSED VIDEO SEQUENCES. |
US6229296B1 (en) * | 1996-02-27 | 2001-05-08 | Micron Technology, Inc. | Circuit and method for measuring and forcing an internal voltage of an integrated circuit |
US5877802A (en) * | 1996-05-21 | 1999-03-02 | Asahi Kogaku Kogyo Kabushiki Kaisha | Video-signal processing device connectable to an electronic endoscope |
US6946863B1 (en) | 1998-02-27 | 2005-09-20 | Micron Technology, Inc. | Circuit and method for measuring and forcing an internal voltage of an integrated circuit |
JP4131052B2 (en) | 1998-07-17 | 2008-08-13 | ソニー株式会社 | Imaging device |
US6999089B1 (en) * | 2000-03-30 | 2006-02-14 | Intel Corporation | Overlay scan line processing |
US6906725B2 (en) * | 2002-02-22 | 2005-06-14 | L-3 Communications Corporation | Apparatus and method for simulating sensor imagery |
JP2004356939A (en) * | 2003-05-29 | 2004-12-16 | Fujitsu Component Ltd | Remote unit and remote system |
US20080106646A1 (en) * | 2006-11-06 | 2008-05-08 | Media Tek Inc. | System, apparatus, method, and computer program product for generating an on-screen display |
Family Cites Families (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4420770A (en) * | 1982-04-05 | 1983-12-13 | Thomson-Csf Broadcast, Inc. | Video background generation system |
US4754270A (en) * | 1984-02-16 | 1988-06-28 | Nintendo Co., Ltd. | Apparatus for varying the size and shape of an image in a raster scanning type display |
US4580165A (en) * | 1984-04-12 | 1986-04-01 | General Electric Company | Graphic video overlay system providing stable computer graphics overlayed with video image |
US5089811A (en) * | 1984-04-16 | 1992-02-18 | Texas Instruments Incorporated | Advanced video processor having a color palette |
JPS60254190A (en) * | 1984-05-31 | 1985-12-14 | 株式会社 アスキ− | Display controller |
CA1243779A (en) * | 1985-03-20 | 1988-10-25 | Tetsu Taguchi | Speech processing system |
DE3702220A1 (en) * | 1987-01-26 | 1988-08-04 | Pietzsch Ibp Gmbh | METHOD AND DEVICE FOR DISPLAYING A TOTAL IMAGE ON A SCREEN OF A DISPLAY DEVICE |
US4951038A (en) * | 1987-05-15 | 1990-08-21 | Hudson Soft Co., Ltd. | Apparatus for displaying a sprite on a screen |
US5258843A (en) * | 1987-09-04 | 1993-11-02 | Texas Instruments Incorporated | Method and apparatus for overlaying displayable information |
GB2210540A (en) * | 1987-09-30 | 1989-06-07 | Philips Electronic Associated | Method of and arrangement for modifying stored data,and method of and arrangement for generating two-dimensional images |
US5179642A (en) * | 1987-12-14 | 1993-01-12 | Hitachi, Ltd. | Image synthesizing apparatus for superposing a second image on a first image |
US5185597A (en) * | 1988-06-29 | 1993-02-09 | Digital Equipment Corporation | Sprite cursor with edge extension and clipping |
US5065231A (en) * | 1988-09-26 | 1991-11-12 | Apple Computer, Inc. | Apparatus and method for merging input RGB and composite video signals to provide both RGB and composite merged video outputs |
US5235677A (en) * | 1989-06-02 | 1993-08-10 | Atari Corporation | Raster graphics color palette architecture for multiple display objects |
US4965670A (en) * | 1989-08-15 | 1990-10-23 | Research, Incorporated | Adjustable overlay display controller |
US5227863A (en) * | 1989-11-14 | 1993-07-13 | Intelligent Resources Integrated Systems, Inc. | Programmable digital video processing system |
US5097257A (en) * | 1989-12-26 | 1992-03-17 | Apple Computer, Inc. | Apparatus for providing output filtering from a frame buffer storing both video and graphics signals |
US5389947A (en) * | 1991-05-06 | 1995-02-14 | Compaq Computer Corporation | Circuitry and method for high visibility cursor generation in a graphics display |
KR940001439B1 (en) * | 1991-08-30 | 1994-02-23 | 삼성전자 주식회사 | Tv screen title superimposing circuit |
US5258826A (en) * | 1991-10-02 | 1993-11-02 | Tandy Corporation | Multiple extended mode supportable multimedia palette and multimedia system incorporating same |
US5313231A (en) * | 1992-03-24 | 1994-05-17 | Texas Instruments Incorporated | Color palette device having big/little endian interfacing, systems and methods |
JP3059302B2 (en) * | 1992-06-03 | 2000-07-04 | 株式会社ハドソン | Video mixing device |
JPH05336441A (en) * | 1992-06-03 | 1993-12-17 | Pioneer Electron Corp | Video synthesis effect device |
JP2593427B2 (en) * | 1992-10-14 | 1997-03-26 | 株式会社ハドソン | Image processing device |
EP0702878A4 (en) * | 1993-06-07 | 1997-01-02 | Scientific Atlanta | Display system for a subscriber terminal |
- 1995-08-31: US US08/523,396 patent US5739868A (en), not_active Expired - Fee Related
- 1996-06-24: CA CA002179784A patent CA2179784C (en), not_active Expired - Fee Related
- 1996-08-02: EP EP96305724A patent EP0762330A3 (en), not_active Ceased
- 1996-08-27: NO NO963576A patent NO963576L (en), unknown
- 1996-08-28: KR KR1019960035943A patent KR100208552B1 (en), not_active IP Right Cessation
- 1996-08-29: MX MX9603750A patent MX9603750A (en), unknown
- 1996-08-30: JP JP8229733A patent JPH09107559A (en), active Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0762325B1 (en) | Video magnification apparatus | |
US5587928A (en) | Computer teleconferencing method and apparatus | |
CN100502477C (en) | Method and apparatus displaying double screen | |
US5579057A (en) | Display system for selectively overlaying symbols and graphics onto a video signal | |
US6166772A (en) | Method and apparatus for display of interlaced images on non-interlaced display | |
US4568981A (en) | Font recall system and method of operation | |
KR100261638B1 (en) | Video System and Method of Using Same | |
US4956707A (en) | Adaptive graphics video standards format converter user-interface | |
JP2762287B2 (en) | Television receiver with switching signal in memory | |
EP0484981B1 (en) | Image data processing apparatus | |
MXPA96003750A (en) | Apparatus for processing yuv video signals and mezcl color envelope | |
US5739868A (en) | Apparatus for processing mixed YUV and color palettized video signals | |
JPS60165883A (en) | Methods for transmission/reception and reception of television signal | |
US5835103A (en) | Apparatus using memory control tables related to video graphics processing for TV receivers | |
US5784116A (en) | Method of generating high-resolution video | |
MXPA96003752A (en) | Apparatus using memorielelated control tables with devideo graphics processing for televis receivers | |
WO1997001929A1 (en) | Circuit for interpolating scan lines of a video signal and method of using same | |
JP4357239B2 (en) | Video signal processing device and video display device | |
CA2074548A1 (en) | Video switcher apparatus for wide screen edtv signals | |
JPH05300446A (en) | Master/slave picture display circuit | |
JPH0630350A (en) | Character and graphic information synthesizer | |
JPH07135641A (en) | Scanning line converter |