MXPA99005598A - Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display - Google Patents

Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display

Info

Publication number
MXPA99005598A
MXPA99005598A MXPA/A/1999/005598A MX9905598A
Authority
MX
Mexico
Prior art keywords
image
auxiliary
region
signal
video
Prior art date
Application number
MXPA/A/1999/005598A
Other languages
Spanish (es)
Inventor
Mark Francis Rumreich
Mark Robert Zukas
Original Assignee
Thomson Licensing Sa
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing Sa filed Critical Thomson Licensing Sa
Publication of MXPA99005598A publication Critical patent/MXPA99005598A/en

Abstract

Method and apparatus for generating a signal representing a multi-image video display including a main image (200) and an auxiliary image (202), e.g., a picture-in-picture (PIP) image, provides for positioning auxiliary information, such as closed caption text, proximate the auxiliary image. The auxiliary information (208) is located within a border region (206) for the auxiliary image and positioned for indicating to a user that the auxiliary information is associated with the auxiliary image. The region containing the auxiliary information moves in response to movement of the auxiliary image such that the auxiliary information remains proximate the auxiliary image.

Description

METHOD AND APPARATUS FOR PLACING AUXILIARY INFORMATION NEXT TO AN AUXILIARY IMAGE IN A MULTI-IMAGE DISPLAY

CROSS REFERENCE TO RELATED APPLICATIONS

This invention is related to the following commonly assigned United States patent applications: Serial No. 08/769,329, entitled "TELEVISION APPARATUS FOR SIMULTANEOUS DECODING OF AUXILIARY DATA INCLUDED IN MULTIPLE TELEVISION SIGNALS"; Serial No. 08/769,333, entitled "VIDEO SIGNAL PROCESSING SYSTEM PROVIDING INDEPENDENT IMAGE MODIFICATION IN A MULTI-IMAGE DISPLAY"; Serial No. 08/769,331, entitled "METHOD AND APPARATUS FOR PROVIDING A MODULATED SCROLL RATE FOR TEXT DISPLAY"; and Serial No. 08/769,332, entitled "METHOD AND APPARATUS FOR FORMATTING AUXILIARY INFORMATION INCLUDED IN A TELEVISION SIGNAL". All were filed by Mark F. Rumreich and co-inventors on the same date as the present application.

FIELD OF THE INVENTION

The invention relates to television receivers capable of generating a multi-image display having a main image and an auxiliary image, such as picture-in-picture (PIP) and picture-outside-picture (POP) displays. More particularly, the invention relates to a method and apparatus for displaying auxiliary information, such as closed caption information, proximate an auxiliary image in a multi-image display.

BACKGROUND OF THE INVENTION

A television signal may include auxiliary information in addition to audio and video program information. For example, in the United States, an NTSC (National Television Standards Committee) television signal may include two bytes of closed caption data during the last half of each occurrence of line 21 of field 1. The closed caption data can be decoded and displayed to provide a visible text representation of the audio content of a television program. Additional closed caption data, and other types of similarly encoded auxiliary information such as extended data services (XDS) information, may be included in other line intervals, such as line 21 of field 2.
United States law requires closed caption decoders in all television receivers having screens larger than 33.02 centimeters, and most television programming (including video tapes) now includes closed caption data. Although closed captioning was developed as an aid for the hearing impaired, it can also provide a benefit to other viewers. Captioning in a multi-image display, such as a picture-in-picture (PIP) display or a picture-outside-picture (POP) display, is an example of this type of additional benefit. For example, activating a picture-in-picture feature produces an auxiliary image representing the video content of an auxiliary television program signal. The auxiliary image is a small image inserted into a portion of the main image. However, only the audio program associated with the main image is processed and coupled to the television's loudspeakers; the audio content of the auxiliary signal is lost. Because the audio program is important to the understanding of a television program, the utility of a multi-image display feature such as picture-in-picture is severely limited by the lack of an associated audio program. One approach to solving this problem is to display captions, i.e., visible text, representing the picture-in-picture audio program on a portion of the screen. However, the caption decoder in most television receivers processes only the caption information associated with the "main" image, not the small-image signal. An exception to this general rule is found in certain television receivers manufactured by Sharp Corporation, such as models 31HX-1200 and 35HX-1200. These Sharp television receivers display captions representing the audio of the picture-in-picture image by providing a switching capability that allows the picture-in-picture signal to be coupled to the main caption decoder.
Picture-in-picture captions are displayed at full size (up to 4 rows of 32 large characters) at the top or bottom of the screen (a position the user can select). An example of picture-in-picture captioning produced by Sharp television receivers is shown in Figure 1, which shows a multi-image display including the main image 100, the picture-in-picture image 102 and the picture-in-picture captions 104.

BRIEF DESCRIPTION OF THE INVENTION

The invention resides, in part, in the inventors' recognition of a number of problems associated with the described implementation of picture-in-picture captioning. First, captions for the main image and captions for the small image cannot be displayed simultaneously. Second, the small image combined with the caption display for the small image may obscure the main image to a degree that is objectionable to the user. For example, full-size picture-in-picture captioning as in the Sharp implementation (up to 20% of the screen area), combined with a normal-sized picture-in-picture image (one-ninth of the screen area), may obscure more than 30% of the main video image. Third, the small-image captions are difficult to follow simultaneously with the small-image video because the captions, located at the top or bottom of the screen, are physically disconnected from the small image and may be a significant distance from it. Fourth, the appearance of the small-image captions is virtually identical to that of the main-image captions, which leaves users confused about which image the captions are associated with. The combination of these problems can make auxiliary-image captioning implemented in the manner described above objectionable to a degree that renders it useless for many viewers. The invention also resides, in part, in providing a method and apparatus for solving the described problems associated with the prior art.
More specifically, the present invention provides for placing auxiliary information, such as caption text characters, associated with an auxiliary image in a multi-image display proximate the auxiliary image. One aspect of the invention involves combining signals representing an auxiliary image, a border region for the auxiliary image, and auxiliary information with a signal representing the main image to produce a combined signal representing a composite image having the auxiliary information in the border region and proximate the auxiliary image. Another aspect of the invention involves producing a signal representing an image having first, second and third regions representing a main image, an auxiliary image and auxiliary information, respectively, and producing a change in the location of the second region such that the third region changes location in response to the change in the location of the second region. Another aspect of the invention involves positioning the third region in the image to indicate to a user that the auxiliary information is associated with an auxiliary video program included in the second region. Another aspect of the invention involves a method of generating a multi-image display by combining main and auxiliary image signals with auxiliary information and border signals so that the auxiliary information is included in a border region and proximate the auxiliary image.
BRIEF DESCRIPTION OF THE DRAWINGS

The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which: Figure 1 shows a picture-in-picture caption orientation as implemented in the prior art; Figure 2 shows an orientation of auxiliary information relative to an auxiliary image and a main image in accordance with the present invention; Figure 3 shows exemplary circuitry for generating small-image captions in accordance with the present invention; and Figures 4 and 5 illustrate various orientations of small-image captions with respect to a small image and the main image. To facilitate understanding, identical reference numbers have been used, where possible, to designate identical elements that are common to the figures.

DETAILED DESCRIPTION OF THE INVENTION

For ease of description, the exemplary embodiments shown in the drawings are described in the context of a picture-in-picture (PIP) display system in which a small auxiliary image is inserted into a large main image. However, the principles of the invention are applicable to other multi-image display systems, such as a picture-outside-picture (POP) system in which the auxiliary image is arranged outside, for example beside, the main image. Figure 2 shows the orientation of a picture-in-picture image 202 relative to a main image 200 produced by a picture-in-picture captioning system according to the present invention. The position of the picture-in-picture image 202 within the confines of the main image 200 is defined by a viewer in a conventional manner. Specifically, the viewer, via a remote control, defines a vertical line number (vertical position) and a pixel location (horizontal position) at which a corner (e.g., the upper left corner) of the picture-in-picture image is to be placed.
The active region 210 of the picture-in-picture image 202, where the picture-in-picture video is displayed, is nominally one-third by one-third the size of the main image 200. The picture-in-picture image area 210 (active region) is circumscribed by a border region 204. In normal operation, i.e., without captioning, the border of the picture-in-picture image is approximately 0.64 centimeters wide on all sides of the active image area 210. When captioning is activated for the picture-in-picture image, the lower border area 206 extends to a height of approximately 5 centimeters. The caption information is displayed in this 5-centimeter-high region (termed the caption window) as two lines of caption text 208. The invention provides a method and apparatus for producing this extended border area 206 and placing the caption information 208 within the extended border area 206 (i.e., placing the captions for the picture-in-picture image proximate the active picture-in-picture image area 210). Although the disclosed embodiment places the caption information for the picture-in-picture image at the bottom of the picture-in-picture image area, the picture-in-picture caption information could just as easily be placed in an extended border area at the top of the picture-in-picture image area, or in any other location proximate the picture-in-picture image area 210.
Figure 3 shows circuitry 300 for placing picture-in-picture caption information proximate the active picture-in-picture image region as shown in Figure 2. The circuitry contains a main image timing generator 312 coupled to a multiplexer matrix 314 and a picture-in-picture generator 302. The multiplexer matrix contains three multiplexers 306, 308 and 310. These multiplexers are actively switched, pixel by pixel, to combine pixel values (e.g., luminance and color-difference signals) and produce the images shown in Figure 2. Specifically, the third multiplexer 310 inserts the picture-in-picture image with its captions and border into the main image; the second multiplexer 308 inserts the active picture-in-picture video into the border region; and the first multiplexer combines the caption character values with border values to form the picture-in-picture caption window. More specifically, the timing generator 312 receives as inputs a vertical position 324 and a horizontal position 326 defined by the user to place the picture-in-picture image within the borders of the main image. For example, a user can determine the location of the picture-in-picture image by activating a "MOVE" key on a remote control. In a typical application, each activation of the MOVE key moves the picture-in-picture image to a different corner of the main image, as indicated by the vertical and horizontal position values. The system shown in Figure 3 is controlled, for example, by a microcomputer (not shown in Figure 3). The microcomputer responds to the user-selected picture-in-picture position by generating two digital values representing the vertical and horizontal coordinates of that position. The microcomputer stores the digital values in memory and, in a typical system, communicates them to the system of Figure 3 via a data bus to provide the vertical position 324 and the horizontal position 326.
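The pixel-by-pixel switching performed by the multiplexer matrix can be sketched as a simple selection chain. The following is an illustrative model only; the function and variable names are ours, not the patent's:

```python
def composite_pixel(main_px, pip_px, caption_px, border_px,
                    insert_character_value, insert_pip, fsw):
    """Select one output pixel value, mirroring the multiplexer matrix 314.

    First multiplexer (306):  caption character value vs. border value.
    Second multiplexer (308): active PIP video vs. the border layer.
    Third multiplexer (310):  PIP-with-border vs. the main image.
    """
    border_layer = caption_px if insert_character_value else border_px
    pip_with_border = pip_px if insert_pip else border_layer
    return pip_with_border if fsw else main_px
```

Evaluated over every (line, pixel) position of a frame, with the three control inputs driven by the timing generator, this chain yields the layered display of Figure 2.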
In addition to the vertical and horizontal position inputs, the timing generator 312 receives a vertical count 328 and a horizontal count 330 as input signals. These count values indicate the current line and pixel of the main image. The count values are generated in a conventional manner by counters (not shown in Figure 3) that count in response to timing signals, including vertical and horizontal synchronization signals. Conventional synchronization signal generation circuitry (not shown in Figure 3) produces the synchronization signals in response to a sync component of a composite television signal.
In response to the count values, the timing generator produces three control signals, namely INSERT_CAPTION, INSERT_PIP and FSW (fast switch). In general, these are timing signals that are active for certain portions (e.g., a predetermined number of pixels) of certain lines. For example, the location of the captions within the main image is defined by an inclusive set of pixels and lines. Thus, for all count values falling within these lines and pixels, the INSERT_CAPTION signal is active to define a rectangular caption window. The start of the window, e.g., its upper left corner, is defined as an offset of a number of lines and pixels from the vertical and horizontal position values (324 and 326) that define the location of the picture-in-picture image. The INSERT_CAPTION signal is coupled to the caption generator 304, which generates the INSERT_CHARACTER_VALUE signal on path 320 to control the first multiplexer 306 as described below. Similarly, the INSERT_PIP and FSW signals are active for certain pixels and lines to control, respectively, the insertion of the active picture-in-picture video into the border region, and the insertion of the picture-in-picture image with its border and captions into the main image. The INSERT_PIP signal is also coupled to the picture-in-picture generator 302, which must place the picture-in-picture pixels in relation to the main image.
The picture-in-picture generator 302 contains a caption character generator 304 that produces caption characters. The EIA-608 closed caption standard specifies a caption format comprising a display character grid of 15 rows by 32 columns, with up to four rows of characters displayed at any one time. Although these standard characters could be displayed proximate the picture-in-picture image area using the present invention, the invention generally uses reformatted characters produced by the caption character generator 304. The reformatting performed by unit 304 comprises translating the standard caption characters into reduced-size characters, using a smaller font, and displaying only two rows of 18 characters each in the picture-in-picture caption window, i.e., the 5-centimeter-high border extension. The picture-in-picture generator 302 produces a control signal INSERT_CHARACTER_VALUE on path 320, which is coupled to the control terminal of the first multiplexer 306. In addition to the control signal, the picture-in-picture generator produces a picture-in-picture video signal (ACTIVE_PIP_VIDEO) that is coupled to the second multiplexer 308. Within the picture-in-picture generator 302 and its caption character generator, the picture-in-picture image and its captions are extracted in a conventional manner from an auxiliary video signal (AUXILIARY VIDEO). The placement of the picture-in-picture image is controlled by the INSERT_PIP signal generated by the main image timing generator 312; that is, the picture-in-picture generator produces the picture-in-picture pixels during the period when the INSERT_PIP signal is active. Additionally, the timing generator 312 produces an INSERT_CAPTION signal that is coupled to the caption character generator.
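The reformatting step above can be illustrated with a small sketch. The patent states only the target format (two rows of 18 reduced-size characters); the word-wrapping and overflow policy used here is our assumption:

```python
def reformat_captions(rows, cols_per_row=18, max_rows=2):
    """Reflow standard EIA-608 caption rows (up to 32 characters each) into
    the smaller PIP caption window: two rows of 18 characters.

    Wrapping policy is an assumption; older text is discarded when the
    window overflows, in the spirit of roll-up captioning.
    """
    words = " ".join(rows).split()
    out, line = [], ""
    for w in words:
        candidate = (line + " " + w).strip()
        if len(candidate) <= cols_per_row:
            line = candidate            # word fits on the current row
        else:
            out.append(line)            # start a new row
            line = w
    if line:
        out.append(line)
    return out[-max_rows:]              # keep only the most recent rows
```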
This signal controls the position of the caption window with respect to the main image; that is, the caption character pixels are placed at the pixels and lines where the INSERT_CAPTION signal is active.
The INSERT_CHARACTER_VALUE control signal (path 320) selects as the output of the first multiplexer either a character value 316 (e.g., a white-level pixel value) or a border value 318 (e.g., a gray-level pixel value). The result is a matrix of character values and border values, e.g., white pixels on a gray background, which taken together as an array of values display one or more text characters on a gray background. The output of the first multiplexer 306 is coupled via path 322 to the first input of the second multiplexer 308. The output of the first multiplexer is essentially an image (a rectangular border layer) having a constant luminance value throughout, except in a region where caption characters are inserted. The characters are located in a caption window defined by the INSERT_CAPTION signal. The second multiplexer 308 combines the active picture-in-picture video with the border layer. Accordingly, the second input of multiplexer 308 is the picture-in-picture video (ACTIVE_PIP_VIDEO 332) produced by the picture-in-picture generator 302. The second multiplexer 308 is controlled by the INSERT_PIP signal produced by the timing generator 312. The timing generator 312 produces the INSERT_PIP signal to create the active picture-in-picture image area, i.e., a signal that is "high" during the pixels in each line that will contain the picture-in-picture image. Specifically, the INSERT_PIP signal selects the first input of the second multiplexer for all horizontal and vertical count values outside the active picture-in-picture image area. For all horizontal and vertical count values within that area, the INSERT_PIP signal selects the picture-in-picture video as the output of the second multiplexer 308. As a result, the active picture-in-picture video is inserted into the border layer adjacent to the picture-in-picture caption window.
A similar effect is achieved if the first and second multiplexers are arranged in reverse order, i.e., the active picture-in-picture video is first combined with the border values and the result is then multiplexed with the character values. The timing generator 312 includes conventional logic devices, comprising, for example, gates, flip-flops, etc., that generate active states on the INSERT_CAPTION, FSW (fast switch) and INSERT_PIP control signals during the intervals described above. The specific time intervals used in the exemplary embodiment are defined by the following relationships between the horizontal count 330 (designated "HC" below), the vertical count 328 ("VC"), the horizontal position 326 ("HP") and the vertical position 324 ("VP"). The INSERT_CAPTION signal is active (i.e., high, or logic 1) when: 4HP < HC < (4HP + 220); and (VP + 75) < VC < (VP + 72 + 18CAP). That is, the INSERT_CAPTION signal is active when HC is greater than 4HP and less than 4HP + 220, and VC is greater than (VP + 75) and less than (VP + 72 + 18CAP), where "CAP" is a binary value (1 or 0) indicating whether picture-in-picture captioning is activated. That is, when a user activates picture-in-picture captioning, e.g., by selecting "PIP CAPTIONS ON" from a setup menu, CAP has a value of 1. Similarly, the FSW (fast switch) signal is active when: 4HP < HC < (4HP + 232); and VP < VC < (VP + 75 + 18CAP). The INSERT_PIP signal is active when: 4HP < HC < (4HP + 220); and (VP + 3) < VC < (VP + 72). Values such as 4 multiplied by HP, and 220 added to 4HP, define horizontal offsets (i.e., in pixels) that control the horizontal position and width of the border, the picture-in-picture window and the picture-in-picture caption window.
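The window relationships above translate directly into comparisons on the count and position values. A minimal sketch (function and argument names are ours):

```python
def control_signals(hc, vc, hp, vp, cap):
    """Compute the three timing-generator outputs for one pixel position.

    hc, vc : current horizontal (pixel) and vertical (line) counts
    hp, vp : user-selected horizontal and vertical PIP position
    cap    : 1 if picture-in-picture captioning is enabled, else 0
    """
    insert_caption = (4 * hp < hc < 4 * hp + 220) and (vp + 75 < vc < vp + 72 + 18 * cap)
    fsw = (4 * hp < hc < 4 * hp + 232) and (vp < vc < vp + 75 + 18 * cap)
    insert_pip = (4 * hp < hc < 4 * hp + 220) and (vp + 3 < vc < vp + 72)
    return insert_caption, fsw, insert_pip
```

Note that with CAP = 0 the caption window inequality (VP + 75 < VC < VP + 72) can never be satisfied, so disabling captioning automatically suppresses the caption window.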
Similarly, the values added to VP define vertical offsets (i.e., in lines) that control the vertical position and height of the border, the picture-in-picture window, and the picture-in-picture caption window. It will be apparent that these offset values can be modified to vary the position and size of the windows as necessary. Regardless of the order of the first and second multiplexers, the system operates to keep the picture-in-picture captions in close proximity to the picture-in-picture image. If the location of the picture-in-picture image changes, e.g., when a user moves the picture-in-picture image (such as by using the "MOVE" key on a remote control mentioned above), the location of the picture-in-picture captions moves automatically to stay in close proximity to the picture-in-picture image. That is, the location of the picture-in-picture captions is determined in response to the location of the picture-in-picture image. Figure 4 illustrates four exemplary locations of a picture-in-picture image and an exemplary orientation of the picture-in-picture captions for each location. A variation of the configuration of Figure 4 is illustrated in Figure 5, in which the picture-in-picture captions automatically change their orientation with respect to the main image and move within the border layer. For example, moving the picture-in-picture image from an upper portion of the main image to a lower portion causes the picture-in-picture captions to move within the border as shown in Figures 5A and 5B, or as shown in Figures 5C and 5D. Moving the picture-in-picture captions within the border layer can, for example, improve the readability of the picture-in-picture captions and/or minimize their interference with the main image. The particular manner in which the picture-in-picture captions move within the border layer can be selected by a user from a setup menu.
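The tracking behavior follows from the fact that every window boundary is a fixed offset from the user-selected (HP, VP) coordinates, so relocating the picture-in-picture image relocates the caption window by the same amount. A sketch under that reading (function name ours, offsets taken from the caption-window relations stated above):

```python
def caption_window(hp, vp, cap=1):
    """Caption window rectangle (left, top, right, bottom) in
    pixel/line coordinates, derived from the PIP position (HP, VP)."""
    return (4 * hp, vp + 75, 4 * hp + 220, vp + 72 + 18 * cap)
```

Moving the picture-in-picture image down by 40 lines, for example, moves the caption window down by exactly 40 lines with no separate bookkeeping.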
Returning to Figure 3, a third multiplexer 310 selects between the picture-in-picture image with its border layer and the main image 334. The third multiplexer 310 is controlled by the FSW (fast switch) signal generated by the timing generator 312. The FSW signal selects the first input of the third multiplexer 310 (the picture-in-picture image and border) for all horizontal and vertical count values within the picture-in-picture image area, including the border region. For all horizontal and vertical count values outside the picture-in-picture image and border region, the FSW signal selects the main image. As such, the picture-in-picture image and its border layer are inserted into the main image, and the FSW signal defines the width of the border. The signals at the output of multiplexer 310 are coupled to a display driver (not shown, but well known in the art). Using the circuitry of Figure 3, the display of Figure 2 is produced. The circuitry, in essence, uses a layered approach to image generation. Specifically, a caption text character value is combined with a border value to produce a border layer (a gray layer having a previously defined size and containing caption text); then the active picture-in-picture video is combined with the border layer; and finally the main image is multiplexed with the picture-in-picture image, its border and its text to create the multi-image display of Figure 2. Because the system operates to place the caption text in close proximity to the picture-in-picture image, a viewer can easily relate the caption text to the picture-in-picture image. Although various embodiments incorporating the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other embodiments that incorporate these teachings.
For example, several configurations of the border region shown in Figure 2 are possible. First, various orientations of the border region are possible, as described and shown above in Figures 4 and 5. Additionally, the extent of the border region may vary. The border containing the auxiliary information may be adjacent to the picture-in-picture image, as shown in Figures 2, 4 and 5, or may be spaced slightly apart from the picture-in-picture image, for example, by a region of a different color and/or brightness.

Claims (16)

CLAIMS

1. Apparatus comprising: means for processing (302) a first television signal to generate a first signal representing auxiliary information included in said first television signal; means for generating a control signal (312); and means, responsive to said control signal, for combining said first signal, a signal representing an auxiliary image, a signal representing a border region for said auxiliary image, and a second television signal representing a main image to produce a combined signal representing a composite image including said main image, said auxiliary image, said border region and said auxiliary information, wherein said auxiliary information is displayed in said border region proximate said auxiliary image.
2. The apparatus of claim 1, wherein said auxiliary information comprises text.
3. The apparatus of claim 2, wherein said text comprises closed caption information.
4. The apparatus of claim 1, wherein said means for combining signals comprises a multiplexer; said control signal generating means comprises a timing generator; and said control signal comprises a timing signal for causing said multiplexer to include said first signal, said signal representing said auxiliary image, and said signal representing said border region in said combined signal, wherein the auxiliary information is displayed in said border region proximate said auxiliary image.
5. The apparatus of claim 4, wherein said signal representing said border region comprises border values; said auxiliary information comprises closed caption information; and said multiplexer combines the border values with said closed caption information to produce a border layer, combines said signal representing said auxiliary image with said border layer to produce an intermediate signal, and combines said intermediate signal with said signal representing said main image to produce said combined signal.
6. The apparatus of claim 4, wherein said signal representing said border region comprises a border value; said auxiliary information comprises a caption character value; and said multiplexer comprises: a first multiplexer (306) for combining said border value with said caption character value to produce a border layer representing said border region including said auxiliary information; a second multiplexer (308), coupled to said first multiplexer, for combining said signal representing said auxiliary image with said border layer to produce an intermediate signal; and a third multiplexer (310), coupled to said second multiplexer, for combining said intermediate signal with said signal representing said main image.
7. The apparatus of claim 6, wherein said timing generator produces said control signal for said multiplexer in response to a user-defined horizontal and vertical coordinate position (324, 326) for said auxiliary image, a vertical count value and a horizontal count value, wherein the horizontal and vertical count values (328, 330) indicate a particular pixel location being displayed in said main image.
8. The apparatus of claim 7, wherein said auxiliary image comprises a picture-in-picture image or a picture-outside-picture image.
9. Apparatus comprising: means for extracting auxiliary information from an auxiliary video signal; means for processing (306, 308, 310) a main video signal and said auxiliary video signal to generate an output signal representing a video image having a first region representing a main video program included in said main video signal, a second region representing an auxiliary video program included in said auxiliary video signal, and a third region representing said auxiliary information; and means for producing (312) a change in location of said second region in said video image; said third region exhibiting a change in location in said video image in response to said change in location of said second region.
10. The apparatus of claim 9, wherein said third region is positioned proximate said second region both before and after said change in location of said second region.
11. The apparatus of claim 10, wherein said second and third regions are located in said video image with a first orientation prior to said change in location of said second region; said first orientation being maintained after said change in location of said second region.
12. The apparatus of claim 10, wherein said second and third regions are located in said video image with a first orientation prior to said change in location of said second region; said first orientation changing to a second orientation after said change in location of said second region.
13. Apparatus comprising: means for extracting (302) auxiliary information from an auxiliary video signal; means for processing (306, 308, 310) a main video signal and said auxiliary video signal to generate an output signal representing a video image having a first region representing a main video program included in said main video signal, a second region representing an auxiliary video program included in said auxiliary video signal, and a third region representing said auxiliary information; and means for placing (312) said third region at a location in said video image, said location of said third region with respect to said second region indicating to a user that said auxiliary information is associated with said auxiliary video program included in said second region.
  14. The apparatus of claim 13, wherein said means for placing said third region also places said second region in said video image; said means for placing said second and third regions producing a change in position of said second region in said video image; said third region being placed subsequent to said change in position so that said user associates said auxiliary information with said auxiliary video program subsequent to said change in position.
  15. The apparatus of claim 14, wherein said second region of said video image represents a picture-in-picture (PIP) image or an out-of-picture (POP) image.
  16. A method for generating a multi-image display having character information located proximate an auxiliary image in a main image, comprising the steps of: extracting (302) auxiliary information from an auxiliary video signal representing said auxiliary image; combining (306) a border value with said auxiliary information to produce a border layer containing said auxiliary information; combining (308) said auxiliary video signal with said border layer to produce an intermediate signal representing said auxiliary image, a border region, and said auxiliary information positioned proximate said auxiliary image and within said border region; and combining (310) said intermediate signal and a main video signal representing said main image.
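The combining steps of claim 16 amount to layered compositing: a border layer carrying the caption text is produced first, the auxiliary image is keyed over it to form the intermediate signal, and that result is keyed into the main image. A minimal sketch follows, using nested Python lists as stand-in frames; the pixel labels, dimensions, and the `overlay` helper are illustrative assumptions, not the claimed signal-processing hardware.

```python
MAIN, BORDER, AUX, TEXT = "main", "border", "aux", "text"

def blank(w, h, fill):
    """Create a w-by-h frame filled with a single value."""
    return [[fill] * w for _ in range(h)]

def overlay(dst, src, x0, y0):
    """Key src onto dst at (x0, y0); None pixels in src are transparent."""
    for dy, row in enumerate(src):
        for dx, px in enumerate(row):
            if px is not None:
                dst[y0 + dy][x0 + dx] = px
    return dst

# Step 1: extract auxiliary information (caption text) -- stubbed here.
caption_row = [TEXT, TEXT, None, None]

# Step 2: combine a border value with the caption to produce a border layer.
border_layer = blank(4, 3, BORDER)
overlay(border_layer, [caption_row], 0, 2)  # text inside the border region

# Step 3: key the auxiliary image over the border layer (intermediate signal).
intermediate = overlay(border_layer, blank(2, 2, AUX), 1, 0)

# Step 4: combine the intermediate signal with the main image.
frame = overlay(blank(8, 6, MAIN), intermediate, 4, 0)

assert frame[0][5] == AUX   # auxiliary image framed by the border
assert frame[2][4] == TEXT  # caption placed within the border region
```

Because the caption lives in the border layer rather than in the main image, moving the whole intermediate signal to a new position automatically keeps the caption proximate the auxiliary image, which is the behavior the apparatus claims above recite.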
MXPA/A/1999/005598A 1996-12-19 1999-06-16 Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display MXPA99005598A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08770770 1996-12-19

Publications (1)

Publication Number Publication Date
MXPA99005598A true MXPA99005598A (en) 2000-04-24
