WO1998027725A1 - Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display - Google Patents

Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display

Info

Publication number
WO1998027725A1
WO1998027725A1 (PCT/US1997/022750)
Authority
WO
WIPO (PCT)
Prior art keywords
image
auxiliary
region
signal
border
Prior art date
Application number
PCT/US1997/022750
Other languages
French (fr)
Inventor
Mark Francis Rumreich
Mark Robert Zukas
Original Assignee
Thomson Consumer Electronics, Inc.
Priority date
Filing date
Publication date
Application filed by Thomson Consumer Electronics, Inc. filed Critical Thomson Consumer Electronics, Inc.
Priority to AU55222/98A (AU5522298A)
Priority to EP97951632A (EP0945005B1)
Priority to DE69733961T (DE69733961T2)
Priority to JP52782798A (JP4105769B2)
Publication of WO1998027725A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/44Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/445Receiver circuitry for the reception of television signals according to analogue transmission standards for displaying additional information
    • H04N5/45Picture in picture, e.g. displaying simultaneously another television channel in a region of the screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/485End-user interface for client configuration
    • H04N21/4858End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles

Definitions

  • When PIP captioning is enabled, the bottom border area 206 is extended to a height of approximately 2 inches (5 cm). The closed caption information is displayed in this 2-inch-high region (referred to as a closed caption window) as two lines of closed caption text 208.
  • The invention provides a method and apparatus for producing this extended border area 206 and positioning the closed caption information 208 within it (i.e., positioning the caption for the PIP image proximate to the PIP active image area 210).
  • Although the depicted embodiment positions the closed caption information for the PIP image at the bottom of the PIP image area, the PIP closed caption information could as easily be placed in an extended border area at the top of the PIP image area, or anywhere else proximate the PIP image area 210.
  • FIG. 3 depicts circuitry 300 for positioning the PIP closed caption information proximate the active PIP image region as depicted in FIG. 2.
  • The circuitry contains a main picture timing generator 312 coupled to a multiplexer array 314 and a PIP image generator 302.
  • The multiplexer array contains three multiplexers 306, 308 and 310. These multiplexers are actively switched, on a pixel-by-pixel basis, to combine pixel values (e.g., luminance and color difference signals) and produce the images depicted in FIG. 2.
  • The third multiplexer 310 inserts the PIP image border and caption into the main picture; the second multiplexer 308 inserts the active PIP video imagery into the border region; and the first multiplexer 306 combines closed caption character values with border values to form a PIP captioning window.
  • The timing generator 312 has as its inputs a vertical position 324 and a horizontal position 326 that are user defined for locating the PIP image within the boundaries of the main picture.
  • A user can determine the location of the PIP image by activating a "MOVE" key on a remote control.
  • Each activation of the MOVE key moves the PIP image to a different corner of the main display, as indicated by the vertical and horizontal position values.
  • The system shown in Figure 3 is controlled, for example, by a microcomputer (not shown in Figure 3).
  • The microcomputer responds to the user-selected PIP image position by generating two digital values representing the horizontal and vertical coordinates of the PIP position.
  • The microcomputer stores the digital values in memory and, in a typical system, communicates the digital values to the system in Figure 3 via a data bus to provide vertical position 324 and horizontal position 326.
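As a rough illustration (not taken from the patent), the MOVE-key handling described above can be modeled as cycling through four stored corner coordinates; the class name and all coordinate values below are invented placeholders:

```python
# Hypothetical sketch of the microcomputer's MOVE-key handling: each
# key press advances the PIP image to the next corner and yields the
# (vertical, horizontal) pair that would be written out as vertical
# position 324 / horizontal position 326. Values are illustrative.
CORNERS = [
    (20, 10),    # upper-left  (VP in lines, HP in pixel units)
    (20, 140),   # upper-right
    (150, 10),   # lower-left
    (150, 140),  # lower-right
]

class PipPositioner:
    def __init__(self):
        self.corner = 0  # start at the upper-left corner

    def move(self):
        """Handle one press of the MOVE key; return the new (VP, HP)."""
        self.corner = (self.corner + 1) % len(CORNERS)
        return CORNERS[self.corner]
```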
  • Timing generator 312 receives vertical count 328 and horizontal count 330; these count values indicate the present main picture line and pixel.
  • The count values are generated in a conventional manner by counters (not shown in Figure 3) that count in response to timing signals including horizontal and vertical sync.
  • Conventional sync signal generation circuitry (not shown in Figure 3) produces the sync signals in response to a composite sync component of a television signal.
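A software sketch of how such counters track the present line and pixel (the default dimensions are assumptions for illustration, not values from the patent):

```python
class SyncCounters:
    """Models the conventional counters that produce horizontal count
    (HC) and vertical count (VC): HC advances with the pixel clock and
    wraps at horizontal sync; VC advances once per line and wraps at
    vertical sync. The default dimensions are illustrative only."""

    def __init__(self, pixels_per_line=858, lines_per_field=262):
        self.pixels_per_line = pixels_per_line
        self.lines_per_field = lines_per_field
        self.hc = 0   # horizontal count 330
        self.vc = 0   # vertical count 328

    def tick(self):
        """Advance one pixel clock; return the current (VC, HC)."""
        self.hc += 1
        if self.hc == self.pixels_per_line:            # horizontal sync
            self.hc = 0
            self.vc = (self.vc + 1) % self.lines_per_field  # vertical sync
        return self.vc, self.hc
```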
  • The timing generator produces three control signals, namely, CAPTION_INSERT, PIP_INSERT and FSW (FAST SWITCH).
  • These signals are timing signals that are active for certain portions (e.g., a predefined number of pixels) within certain lines. For example, the location for the caption within the main picture is defined by a number of inclusive lines and pixels.
  • The CAPTION_INSERT signal is active to define a rectangular caption window.
  • The beginning of the window, e.g., its upper left corner, is defined as an offset of a number of lines and pixels from the vertical and horizontal position values (324 and 326) that define the location of the PIP image.
  • The CAPTION_INSERT signal is coupled to closed caption generator 304, which generates signal INSERT CHARACTER VALUE on path 320 for controlling first multiplexer 306 as described further below.
  • PIP_INSERT and FSW signals are active for certain pixels and lines to control insertion of the active PIP image into the border region, as well as insertion of the PIP image, with its border and captioning, into the main picture.
  • Signal PIP_INSERT is also coupled to PIP generator 302 for defining where PIP generator 302 should position the PIP image pixels relative to the main picture.
  • The PIP image generator 302 contains a closed caption character generator 304 that produces closed caption characters.
  • Closed captioning standard EIA-608 specifies a closed caption character format comprising a display character grid of 15 rows by 32 columns, with up to four rows of characters being displayed at any one time. Although these standard characters could be displayed proximate the image area of the PIP image using the present invention, the invention generally uses reformatted characters produced by character generator 304. Reformatting performed by unit 304 comprises translating the standard closed caption character set into a reduced character set, utilizing a smaller font size, and displaying only two rows of 18 characters each within the PIP captioning window, e.g., the 2-inch-high border extension. The reformatting facilitates viewer comprehension and minimizes main picture obstruction.
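The row/column reduction can be sketched as a simple reflow; the wrapping and truncation policy below is an assumption, since the patent does not specify how overflow text is handled:

```python
import textwrap

def reformat_caption(text, cols=18, rows=2):
    """Reflow caption text (standard EIA-608 captions allow up to four
    rows of 32 characters) into the reduced PIP caption window of
    `rows` rows by `cols` columns. Overflow is truncated -- an
    illustrative policy, not one specified by the patent."""
    squeezed = " ".join(text.split())           # collapse whitespace
    return textwrap.wrap(squeezed, width=cols)[:rows]
```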
  • The PIP generator 302 produces a control signal INSERT CHARACTER VALUE on path 320 that is coupled to the control terminal of the first multiplexer 306. In addition to the control signal, the PIP generator produces a PIP picture signal (ACTIVE PIP PIX) that is coupled to the second multiplexer 308. Using the PIP generator 302 and its accompanying closed caption character generator, the PIP picture or image as well as the closed caption data is extracted in a conventional manner from an auxiliary video signal (AUX VIDEO). Positioning of the PIP image is controlled by the PIP_INSERT signal that is generated by the main picture timing generator 312, e.g., the PIP generator produces the PIP image pixels during a period when the PIP_INSERT signal is active.
  • The timing generator 312 produces a CAPTION_INSERT signal that is coupled to the closed caption character generator. This signal controls the position of the closed caption window with respect to the main picture, e.g., the caption character pixels are positioned at pixels and lines where the CAPTION_INSERT signal is active.
  • The INSERT CHARACTER VALUE control signal selects as the output of the first multiplexer either a character value 316 (e.g., a white level pixel value) or a border value 318 (e.g., a gray level pixel value).
  • The result is an array of character values and border values, e.g., white pixels on a gray background, that when taken together form the PIP captioning window.
  • The output of the first multiplexer 306 is coupled via path 322 to the first input of the second multiplexer 308.
  • The output of the first multiplexer is essentially an image (a rectangular border layer) having a constant luminance value across the entire image except in a region where the closed caption characters are inserted. The characters are located in a caption window defined by the CAPTION_INSERT signal.
  • The second multiplexer 308 combines the active PIP image video with the border layer.
  • The second input of multiplexer 308 is the active PIP image video (ACTIVE PIP PIX 332) produced by the PIP generator 302.
  • The second multiplexer 308 is controlled by the PIP_INSERT signal produced by the timing generator 312.
  • The timing generator 312 produces the PIP_INSERT signal to create the active PIP image area, e.g., a "high" signal during a number of pixels in each line that is to contain the PIP pix.
  • The PIP_INSERT signal selects the first input to the second multiplexer for all vertical and horizontal count values outside of the active PIP image area. For all vertical and horizontal count values within that region, the PIP_INSERT signal selects the active PIP image video for output from the second multiplexer 308. As such, the active PIP video is inserted into the border layer proximate the PIP captioning window. A similar effect is accomplished if the first and second multiplexers are in reverse order, e.g., the active PIP image is combined with the border and then multiplexed with the character value.
  • Timing generator 312 includes conventional logic devices comprising, e.g., gates, flip-flops, etc., that produce the control signals CAPTION_INSERT, FSW, and PIP_INSERT during the time intervals described above.
  • The specific time intervals utilized in the exemplary embodiment are defined by the following relationships between horizontal count 330 (referred to below as "HC"), vertical count 328 ("VC"), horizontal position 326 ("HP"), and vertical position 324 ("VP").
  • Signal CAPTION_INSERT is active (e.g., high or logic 1) when HC is greater than 4HP and less than 4HP + 220, and VC is greater than (VP + 75) and less than (VP + …), where CAP is a binary value (either 1 or 0) indicating whether PIP captioning is enabled, i.e., CAP is set when a user enables PIP captioning, e.g., by selecting "PIP CAPTIONING ON" from a setup menu.
  • Signal FSW is active when the count values fall within the PIP image area including its border region.
  • Values such as the 4 that is multiplied times HP and the 220 that is added to 4HP define horizontal offsets (e.g., in pixels) that control the horizontal position and width of the border, PIP image and PIP caption windows.
  • Values that are added to VP define vertical offsets (e.g., in lines) that control the vertical position and height of the border, PIP image, and PIP caption windows. It will be apparent that these offset values can be modified to vary the position and size of the windows as needed.
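Putting the published offset relationships into code gives a sketch of the CAPTION_INSERT timing predicate. CAPTION_HEIGHT is an assumed placeholder: the upper vertical bound is garbled in the source text.

```python
CAPTION_WIDTH = 220   # horizontal extent in pixels (from the relationships above)
CAPTION_TOP = 75      # vertical offset from VP in lines (from the relationships above)
CAPTION_HEIGHT = 36   # ASSUMED value; the actual bound is illegible in the source

def caption_insert(hc, vc, hp, vp, cap=1):
    """Model of the CAPTION_INSERT timing signal: logic 1 inside the
    rectangular caption window, gated by the CAP enable bit."""
    in_h = 4 * hp < hc < 4 * hp + CAPTION_WIDTH
    in_v = vp + CAPTION_TOP < vc < vp + CAPTION_TOP + CAPTION_HEIGHT
    return int(bool(cap) and in_h and in_v)
```

Because the window bounds are expressed as offsets from HP and VP, moving the PIP image automatically moves the caption window with it.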
  • The system provides for keeping the PIP captioning in close proximity to the PIP image. If the location of the PIP image changes, for example, when the user moves the PIP image (such as by using the above-mentioned "MOVE" key on a remote control), the location of the PIP caption moves automatically to remain in close proximity to the PIP image. That is, the location of the PIP captioning is determined in response to the location of the PIP image.
  • Figure 4 illustrates four exemplary locations of a PIP image and an exemplary orientation of the PIP captioning for each PIP image location. A variation of the arrangement of Figure 4 is illustrated in Figure 5 in which PIP captioning automatically changes its orientation with respect to the main image and moves within the border layer.
  • Moving the PIP image from a top portion of the main image to a bottom portion of the main image causes the PIP captioning to move within the border, as shown in Figures 5A and 5B or as shown in Figures 5C and 5D.
  • Moving the PIP captioning within the border layer can, for example, improve readability of the PIP captioning and/or minimize interference of the PIP captioning with the main image.
  • The particular manner in which the PIP captioning moves within the border layer can be selected by a user from a setup menu.
  • A third multiplexer 310 selects between the PIP image with its border layer and the main picture 334.
  • The third multiplexer 310 is driven by a fast switch (FSW) signal generated by the timing generator 312.
  • The FSW signal selects the first input to the third multiplexer 310 (the PIP image and border) for all horizontal and vertical count values within the PIP image area including the border region. For all vertical and horizontal count values outside of the image and border region for the PIP image, the FSW signal selects the main picture. As such, the PIP image and its border layer are inserted into the main picture, and the FSW signal defines the width of the border.
  • The signals at the output of the multiplexer 310 are coupled to a display driver (not shown but well known in the art).
  • In this manner, the display of FIG. 2 is produced.
  • The circuitry uses a layered approach to image generation. Specifically, a closed caption text character value is combined with a border value to produce a border layer (a gray layer having a predefined size and containing closed caption text); then the active PIP pix is combined with the border layer; and lastly, the main pix is multiplexed with the PIP image, its border and text to create the comprehensive PIP display of FIG. 2. Because the system provides for locating the closed caption text in close proximity to the PIP image, a viewer can easily comprehend the closed caption text in reference to the PIP image.
  • The caption window may also be distinguished slightly from the PIP image, e.g., with a region of different color and/or brightness.
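The three-multiplexer layering described above can be summarized as a per-pixel sketch. The control-signal names follow the text; the pixel value labels and region flags are illustrative stand-ins for the actual luminance/color-difference samples:

```python
CHAR, BORDER, PIP_PIX, MAIN = "char", "border", "pip", "main"

def compose_pixel(insert_character, pip_insert, fsw):
    """One pixel of the layered composition performed by multiplexers
    306, 308 and 310, driven by the INSERT CHARACTER VALUE, PIP_INSERT
    and FSW control signals respectively."""
    # First multiplexer 306: character value vs. border value.
    layer = CHAR if insert_character else BORDER
    # Second multiplexer 308: active PIP video replaces the border
    # layer inside the active PIP image area.
    layer = PIP_PIX if pip_insert else layer
    # Third multiplexer 310: the PIP image plus border is inserted
    # into the main picture; elsewhere the main picture passes through.
    return layer if fsw else MAIN
```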

Abstract

Method and apparatus for generating a signal representing a multi-image video display including a main image (200) and an auxiliary image (202), e.g., a picture-in-picture (PIP) image, provides for positioning auxiliary information, such as closed caption text, proximate the auxiliary image. The auxiliary information (208) is located within a border region (206) for the auxiliary image and positioned for indicating to a user that the auxiliary information is associated with the auxiliary image. The region containing the auxiliary information moves in response to movement of the auxiliary image such that the auxiliary information remains proximate the auxiliary image.

Description

METHOD AND APPARATUS FOR POSITIONING AUXILIARY INFORMATION
PROXIMATE AN AUXILIARY IMAGE IN A MULTI-IMAGE DISPLAY
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is related to the following commonly-assigned U.S. Patent Applications: Serial No. 08/769,329 entitled "TELEVISION APPARATUS FOR SIMULTANEOUS DECODING OF AUXILIARY DATA INCLUDED IN MULTIPLE TELEVISION SIGNALS", Serial No. 08/769,333 entitled "VIDEO SIGNAL PROCESSING SYSTEM PROVIDING INDEPENDENT IMAGE MODIFICATION IN A MULTI-IMAGE DISPLAY", Serial No. 08/769,331 entitled "METHOD AND APPARATUS FOR PROVIDING A MODULATED SCROLL RATE FOR TEXT DISPLAY", and Serial No. 08/769,332 entitled "METHOD AND APPARATUS FOR REFORMATTING AUXILIARY INFORMATION INCLUDED IN A TELEVISION SIGNAL", all of which were filed in the name of Mark F. Rumreich et al. on the same date as the present application.
FIELD OF THE INVENTION
The invention relates to television receivers capable of generating a multi-image display having main and auxiliary images such as picture-in-picture (PIP) or picture-outside-picture (POP) displays.
More particularly, the invention relates to a method and apparatus for displaying auxiliary information, such as closed caption information, proximate an auxiliary image in a multi-image display.
BACKGROUND
A television signal may include auxiliary information in addition to video program and audio program information. For example, an NTSC (National Television Standards Committee) television signal may include two bytes of closed captioning data during the latter half of each occurrence of line 21 of field 1. Closed caption data may be decoded and displayed to provide a visible text representation of a television program's audio content. Additional closed caption data and other types of similarly encoded auxiliary information, such as extended data services information (XDS), may be included in other line intervals such as line 21 of field 2. United States law requires caption decoders in all television
receivers having displays larger than 13 inches and most television programming (including video tapes) now includes captioning data.
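Each of the two line-21 bytes carries seven data bits plus an odd-parity bit in the most significant position (per EIA-608); a minimal sketch of recovering the character codes:

```python
def decode_cc_byte(b):
    """Validate the odd-parity bit of a line-21 caption byte and
    return the 7-bit character code, or None on a parity error."""
    if bin(b & 0xFF).count("1") % 2 != 1:   # EIA-608 uses odd parity
        return None
    return b & 0x7F

def decode_cc_pair(byte1, byte2):
    """Decode the two caption bytes from one occurrence of line 21."""
    return decode_cc_byte(byte1), decode_cc_byte(byte2)
```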
Although captioning was developed to aid the hearing impaired, captioning can also provide a benefit to non-hearing impaired viewers as well. Captioning for a multi-image display such as picture-in- picture (PIP) or picture-outside-picture (POP) displays is an example of this type of additional benefit. For example, activating a PIP feature produces an auxiliary image representing the video content of a secondary television program signal. The auxiliary image is a small picture that is inset into a portion of the main picture. However, only the audio program associated with the main picture is processed and coupled to the speakers of the television. The audio content of the secondary signal is lost. Because the audio program is important to the comprehension of a television program, the usefulness of a multi-image display feature such as a PIP display is severely limited by the lack of an associated audio program. An approach to solving this problem is to display captions, i.e., visible text, representing the PIP audio program in a portion of the display. However, the closed caption decoder in most television receivers processes only the caption information associated with the "main" picture, not the small picture signal.
An exception to this general rule can be found in certain television receivers manufactured by Sharp Corporation such as models 31H-X1200 and 35H-X1200. These Sharp television receivers display captions representing the audio of the PIP image by providing a switching capability that permits coupling the PIP signal to the main caption decoder. PIP captions are displayed full size (up to four rows of 32 large characters) at the top or bottom of the screen (a user selectable position). An example of PIP captioning produced by Sharp television receivers is shown in FIG. 1 which depicts a display including main image 100, PIP image 102 and PIP caption 104.
SUMMARY OF THE INVENTION
The invention resides, in part, in the inventors' recognition of a number of problems associated with the described PIP captioning implementation. First, main-picture captioning and small-picture captioning cannot be displayed simultaneously. Second, the small image combined with the caption display for the small image may obscure the
main image to an extent that is objectionable to a user. For example, a PIP caption as in the Sharp implementation (up to 20% of the screen area) combined with a normal size PIP image (one-ninth of the screen area) may obscure more than 30% of the main video display. Third, the small-picture caption is difficult to follow simultaneously with small-picture video because the location of the caption at the top or bottom of the screen is physically disconnected from the small picture and may be a significant distance from the small picture. Fourth, the appearance of small-picture captions is virtually identical to main-picture captions, causing users to become confused as to which image is associated with the caption. The combination of these problems may make auxiliary-picture captioning that is implemented in the manner described above objectionable to an extent that renders auxiliary-picture captioning useless for many viewers. The invention also resides, in part, in providing apparatus and a method for solving the described problems associated with the prior art. More specifically, the present invention provides for positioning auxiliary information, such as closed captioning text characters, that is associated with an auxiliary picture in a multi-image display proximate the auxiliary picture. One aspect of the invention involves combining signals representing an auxiliary image, a border region for the auxiliary image, and auxiliary information with a signal representing the main image to produce a combined signal representing a composite image having the auxiliary information within the border region and proximate the auxiliary image.
Another aspect of the invention involves producing a signal representing an image having first, second and third regions representing a main image, an auxiliary image and auxiliary information, respectively, and producing a change in the location of the second region such that the third region changes location in response to the change in location of the second region. Another aspect of the invention involves positioning the third region within the image for indicating to a user that the auxiliary information is associated with an auxiliary video program included in the second region. Another aspect of the invention involves a method of generating a multi-image display by combining main and auxiliary image signals with border and auxiliary information such that the auxiliary information is included within a border region and proximate the auxiliary image.
BRIEF DESCRIPTION OF THE DRAWING
The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
Figure 1 depicts a PIP captioning orientation as implemented in the prior art;
Figure 2 depicts an orientation of auxiliary information relative to an auxiliary picture and a main picture in accordance with the present invention;
Figure 3 depicts circuitry for generating an exemplary small- picture caption in accordance with the present invention; and
Figures 4 and 5 illustrate various orientations of small-picture captioning with respect to a small image and to the main image.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
DETAILED DESCRIPTION
For ease of description, the exemplary embodiments depicted in the drawing will be described in the context of a picture-in-picture (PIP) display system having a small auxiliary picture inset into a large main picture. However, the principles of the invention are applicable to other multi-image display systems such as a picture-outside-picture (POP) system in which an auxiliary picture is located outside of, e.g., beside, the main picture.
FIG. 2 depicts the image orientation of a PIP image 202 in relation to a main picture 200 as produced by a PIP captioning image generation system of the present invention. The position of the PIP image 202 within the confines of the main picture 200 is conventionally defined by a viewer. Specifically, the viewer, through a remote control, defines a vertical line number (vertical position) and a pixel location (horizontal position) where one corner (e.g., upper left corner) of the PIP image is to be located. The active region 210 of the PIP image 202, where the PIP video is displayed, has a typical dimension of one third by one third of the size of the main picture 200. The PIP image area 210 (active region) is circumscribed by a border region 204. This border region is approximately 0.25 inches (0.64 cm) wide. In the normal operating mode, e.g., without closed captioning, the border of the PIP image is approximately 0.25 inches wide on all sides of the active image area 210.
Upon activation of the closed captioning for the PIP image, the bottom border area 206 is extended to a height of approximately 2 inches (5 cm).
The closed caption information is displayed in this 2-inch-high region (referred to as a closed caption window) as two lines of closed caption text 208. The invention provides a method and apparatus for producing this extended border area 206 and positioning the closed caption information 208 within the extended border area 206 (i.e., positioning the caption for the PIP image proximate to the PIP active image area 210). Although the depicted embodiment of the display positions the closed caption information for the PIP image at the bottom of the PIP image area, the PIP closed caption information could as easily be placed in an extended border area at the top of the PIP image area or anywhere else that is proximate the PIP image area 210.
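As an illustration of this geometry only (the lines-per-inch figure is a hypothetical display parameter; only the 0.25-inch and 2-inch dimensions come from the description above), the border extension can be sketched as:

```python
# Illustrative computation of the PIP bottom-border heights described above.
# Normal border: about 0.25 inch on all sides; with captioning enabled,
# the bottom border extends to about 2 inches to hold two caption lines.

NORMAL_BORDER_IN = 0.25
CAPTION_BORDER_IN = 2.0

def bottom_border_height(captioning_on, lines_per_inch=50):
    """Return the bottom border height in scan lines for a
    hypothetical display resolving 50 lines per inch."""
    inches = CAPTION_BORDER_IN if captioning_on else NORMAL_BORDER_IN
    return round(inches * lines_per_inch)

assert bottom_border_height(False) == 12   # ~0.25 in of normal border
assert bottom_border_height(True) == 100   # ~2 in caption window
```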
FIG. 3 depicts circuitry 300 for positioning the PIP closed caption information proximate the active PIP image region as depicted in FIG. 2. The circuitry contains a main picture timing generator 312 coupled to a multiplexer array 314 and a PIP image generator 302. The multiplexer array contains three multiplexers 306, 308 and 310. These multiplexers are actively switched, on a pixel-by-pixel basis, to combine pixel values (e.g., luminance and color difference signals) and produce the images depicted in FIG. 2. Specifically, the third multiplexer 310 inserts the PIP image border and caption into the main picture; the second multiplexer 308 inserts the active PIP video imagery into the border region; and the first multiplexer 306 combines closed caption character values with border values to form a PIP captioning window.
More specifically, the timing generator 312 has as its input a vertical position 324 and a horizontal position 326 that is user defined for locating the PIP image within the boundaries of the main picture. For example, a user can determine the location of the PIP image by activating a "MOVE" key on a remote control. In a typical application, each activation of the MOVE key moves the PIP image to a different corner of the main display as indicated by the vertical and horizontal position values. The system shown in Figure 3 is controlled, for example, by a microcomputer (not shown in Figure 3). The microcomputer responds to the user-selected PIP image position by generating two digital values representing the horizontal and vertical coordinates of the PIP position.
The microcomputer stores the digital values in memory and, in a typical system, communicates the digital values to the system in Figure 3 via a data bus to provide vertical position 324 and horizontal position 326.
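As an illustration only (the corner coordinates are hypothetical values, not from the specification), the MOVE-key behavior of stepping the PIP image through the corners of the main display can be modeled as a small state machine that yields the horizontal and vertical position values:

```python
# Illustrative sketch of the MOVE-key behavior described above.
# The corner coordinates are hypothetical; a real receiver would derive
# them from the display geometry. Each press advances to the next corner
# and yields the (horizontal position, vertical position) pair that the
# microcomputer would communicate over the data bus.

CORNER_POSITIONS = [
    (10, 20),    # upper-left  (hypothetical pixel/line offsets)
    (150, 20),   # upper-right
    (150, 180),  # lower-right
    (10, 180),   # lower-left
]

class PipPositioner:
    def __init__(self):
        self.index = 0  # start at the upper-left corner

    def press_move(self):
        """Advance to the next corner, as each MOVE activation does."""
        self.index = (self.index + 1) % len(CORNER_POSITIONS)
        return CORNER_POSITIONS[self.index]

pip = PipPositioner()
print(pip.press_move())  # first press moves to the upper-right corner
```

Four presses return the PIP image to its starting corner, matching the cycling behavior described for the remote control.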
In addition to the horizontal and vertical position inputs, timing generator 312 receives vertical count 328 and horizontal count 330 as input signals. These count values indicate the present main picture line and pixel. The count values are generated in a conventional manner by counters (not shown in Figure 3) that count in response to timing signals including horizontal and vertical sync. Conventional sync signal generation circuitry (not shown in Figure 3) produces the sync signals in response to a composite sync component of a television signal. In response to the count values, the timing generator produces three control signals, namely, CAPTION_INSERT, PIP_INSERT and FSW (FAST SWITCH). In general, these signals are timing signals that are active for certain portions (e.g., a predefined number of pixels) within certain lines. For example, the location for the caption within the main picture is defined by a number of inclusive lines and pixels. As such, for all count values that include these lines and pixels, the CAPTION_INSERT signal is active to define a rectangular caption window. The beginning of the window, e.g., its upper left corner, is defined as an offset of a number of lines and pixels from the vertical and horizontal position values (324 and 326) that define the location of the PIP image. The CAPTION_INSERT signal is coupled to closed caption generator 304 which generates signal INSERT CHARACTER VALUE on path 320 for controlling first multiplexer 306 as described further below.
Similarly, the PIP_INSERT and FSW signals are active for certain pixels and lines to control insertion of the active PIP image into the border region as well as insertion of the PIP image with its border and captioning into the main picture. Signal PIP_INSERT is also coupled to PIP generator 302 for defining where PIP generator 302 should position the PIP image pixels relative to the main picture.
The PIP image generator 302 contains a closed caption character generator 304 that produces closed caption characters. Closed captioning standard EIA-608 specifies a closed caption character format comprising a display character grid of 15 rows by 32 columns with up to four rows of characters being displayed at any one time. Although these standard characters could be displayed proximate the image area of the PIP image using the present invention, the invention generally uses reformatted characters produced by character generator 304. Reformatting performed by unit 304 comprises translating the standard closed caption character set into a reduced character set, utilizing a smaller font size, and displaying only two rows of 18 characters each within the PIP captioning window, e.g., the two-inch border extension. The reformatting facilitates viewer comprehension and minimizes main picture obstruction. One example of a closed caption character generator that provides reformatted characters is disclosed in United States Patent Application Serial No. 08/769,332 entitled "METHOD AND APPARATUS FOR REFORMATTING AUXILIARY INFORMATION INCLUDED IN A TELEVISION SIGNAL", which was filed in the name of Mark F. Rumreich et al. on the same date as the present application, is commonly assigned, and is incorporated herein by reference.
The PIP generator 302 produces a control signal INSERT CHARACTER VALUE on path 320 that is coupled to the control terminal of the first multiplexer 306. In addition to the control signal, the PIP generator produces a PIP picture signal (ACTIVE PIP PIX) that is coupled to the second multiplexer 308. Using the PIP generator 302 and its accompanying closed caption character generator, the PIP picture or image as well as the closed caption data is extracted in a conventional manner from an auxiliary video signal (AUX VIDEO). Positioning of the PIP image is controlled by the PIP_INSERT signal that is generated by the main picture timing generator 312, e.g., the PIP generator produces the PIP image pixels during a period when the PIP_INSERT signal is active. Furthermore, the timing generator 312 produces a CAPTION_INSERT signal that is coupled to the closed caption character generator. This signal controls the position of the closed caption window with respect to the main picture, e.g., the caption character pixels are positioned at pixels and lines where the CAPTION_INSERT signal is active.
The INSERT CHARACTER VALUE control signal (path 320) selects as the output of the first multiplexer either a character value 316 (e.g., a white level pixel value) or a border value 318 (e.g., a gray level pixel value). The result is an array of character values and border values, e.g., white pixels on a gray background, that when taken together as an array of values depict one or more text characters on a gray background. The output of the first multiplexer 306 is coupled via path 322 to the first input of the second multiplexer 308. The output of the first multiplexer is essentially an image (a rectangular border layer) having a constant luminance value across the entire image except in a region where the closed caption characters are inserted. The characters are located in a caption window defined by the CAPTION_INSERT signal.
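As a software sketch of that border layer (the array dimensions and luminance values here are illustrative assumptions, not figures from the specification), the first multiplexer's output can be modeled as a rectangle of border values with character values overwritten where the caption characters fall:

```python
# Sketch of the border-layer image produced by the first multiplexer:
# a constant "gray" border value everywhere, with "white" character
# values inserted where INSERT CHARACTER VALUE is active.
# Dimensions and luminance levels are illustrative assumptions.

BORDER_VALUE = 0x80   # mid-gray luminance (assumed)
CHAR_VALUE = 0xFF     # white luminance (assumed)

def make_border_layer(width, height, char_pixels):
    """char_pixels: set of (x, y) positions within the caption window
    where a caption character pixel should appear."""
    layer = [[BORDER_VALUE] * width for _ in range(height)]
    for (x, y) in char_pixels:
        layer[y][x] = CHAR_VALUE  # mux selects the character value here
    return layer

layer = make_border_layer(8, 4, {(1, 3), (2, 3)})
assert layer[3][1] == CHAR_VALUE and layer[0][0] == BORDER_VALUE
```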
The second multiplexer 308 combines the active PIP image video with the border layer. As such, the second input of multiplexer 308 is the active PIP image video (ACTIVE PIP PIX 332) produced by the PIP generator 302. The second multiplexer 308 is controlled by the PIP_INSERT signal produced by the timing generator 312. The timing generator 312 produces the PIP_INSERT signal to create the active PIP image area, e.g., a "high" signal during a number of pixels in each line that is to contain the PIP pix.
Specifically, the PIP_INSERT signal selects the first input to the second multiplexer for all vertical and horizontal count values outside of the active PIP image area. For all vertical and horizontal count values within that region, the PIP_INSERT signal selects the active PIP image video for output from the second multiplexer 308. As such, the active PIP video is inserted into the border layer proximate the PIP captioning window. A similar effect is accomplished if the first and second multiplexers are in reverse order, e.g., the active PIP image is combined with the border and then multiplexed with the character value. Timing generator 312 includes conventional logic devices comprising, e.g., gates, flip-flops, etc. that generate active states on control signals CAPTION_INSERT, FSW, and PIP_INSERT during the time intervals described above. The specific time intervals utilized in the exemplary embodiment are defined by the following relationships between horizontal count 330 (referred to below as "HC"), vertical count 328 ("VC"), horizontal position 326 ("HP"), and vertical position 324 ("VP").

Signal CAPTION_INSERT is active (e.g., high or logic 1) when:

4HP < HC < (4HP + 220); and (VP + 75) < VC < (VP + 72 + 18CAP).

That is, signal CAPTION_INSERT is active when HC is greater than 4HP and less than (4HP + 220), and VC is greater than (VP + 75) and less than (VP + 72 + 18CAP), where "CAP" is a binary value (either 1 or 0) indicating whether PIP captioning is enabled. That is, when a user enables PIP captioning, e.g., by selecting "PIP CAPTIONING ON" from a setup menu, CAP has a value of 1.

Similarly, signal FSW is active when:

4HP < HC < (4HP + 232); and VP < VC < (VP + 75 + 18CAP).

Signal PIP_INSERT is active when:

4HP < HC < (4HP + 22); and (VP + 3) < VC < (VP + 72).
Values such as the 4 that multiplies HP and the 220 that is added to 4HP define horizontal offsets (e.g., in pixels) that control the horizontal position and width of the border, PIP image and PIP caption windows. Similarly, values that are added to VP define vertical offsets (e.g., in lines) that control the vertical position and height of the border, PIP image, and PIP caption windows. It will be apparent that these offset values can be modified to vary the position and size of the windows as needed.
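The three control-signal conditions can be expressed directly as predicates over the count and position values (a behavioral sketch of timing generator 312's combinational logic; the Python function names are ours, while the inequalities are taken exactly as stated above):

```python
# Behavioral sketch of timing generator 312: each control signal is a
# predicate over horizontal count (HC), vertical count (VC), horizontal
# position (HP), vertical position (VP) and the captioning-enable bit CAP,
# using the inequalities stated in the text.

def caption_insert(hc, vc, hp, vp, cap):
    return (4 * hp < hc < 4 * hp + 220) and (vp + 75 < vc < vp + 72 + 18 * cap)

def fsw(hc, vc, hp, vp, cap):
    return (4 * hp < hc < 4 * hp + 232) and (vp < vc < vp + 75 + 18 * cap)

def pip_insert(hc, vc, hp, vp):
    return (4 * hp < hc < 4 * hp + 22) and (vp + 3 < vc < vp + 72)

# With captioning enabled (CAP = 1), a pixel inside the caption window
# asserts both CAPTION_INSERT and FSW:
assert caption_insert(100, 100, 10, 20, 1)
assert fsw(100, 100, 10, 20, 1)
# With captioning disabled (CAP = 0), the caption window is empty,
# because the vertical range (VP + 75, VP + 72) contains no lines:
assert not caption_insert(100, 100, 10, 20, 0)
```

Note how the 18CAP term simultaneously opens the caption window and stretches the FSW (border) region downward, which is what keeps the caption inside the extended border.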
Regardless of the ordering of the first and second multiplexers, the system provides for keeping the PIP captioning in close proximity to the PIP image. If the location of the PIP image changes, for example, when the user moves the PIP image (such as using the above- mentioned "MOVE" key on a remote control), the location of the PIP caption moves automatically to remain in close proximity to the PIP image. That is, the location of the PIP captioning is determined in response to the location of the PIP image. Figure 4 illustrates four exemplary locations of a PIP image and an exemplary orientation of the PIP captioning for each PIP image location. A variation of the arrangement of Figure 4 is illustrated in Figure 5 in which PIP captioning automatically changes its orientation with respect to the main image and moves within the border layer. For example, moving the PIP image from a top portion of the main image to a bottom portion of the main image causes the PIP captioning to move within the border as shown in Figures 5A and 5B or as shown in Figures 5C and 5D. Moving the PIP captioning within the border layer can, for example, improve readability of the PIP captioning and/or minimize interference of the PIP captioning with the main image. The particular manner in which the PIP captioning moves within the border layer can be selected by a user from a setup menu.
Returning to Figure 3, a third multiplexer 310 selects between the PIP image with its border layer and the main picture 334. The third multiplexer 310 is driven by a fast switch (FSW) signal generated by the timing generator 312. The FSW signal selects the first input to the third multiplexer 310 (the PIP image and border) for all horizontal and vertical count values within the PIP image area including the border region. For all vertical and horizontal count values outside of the image and border region for the PIP image, the FSW signal selects the main picture. As such, the PIP image and its border layer is inserted into the main picture and the FSW signal defines the width of the border. The signals at the output of the multiplexer 310 are coupled to a display driver (not shown but well known in the art).
Using the circuitry of FIG. 3, the display of FIG. 2 is produced. The circuitry, in essence, uses a layered approach to image generation. Specifically, a closed caption text character value is combined with a border value to produce a border layer (a gray layer having a predefined size and containing closed caption text), then the active PIP pix is combined with the border layer, and lastly, the main pix is multiplexed with the PIP image, its border and text to create the comprehensive PIP display of FIG. 2. Because the system provides for locating the closed-caption text in close proximity to the PIP image, a viewer can easily comprehend the closed caption text in reference to the PIP image. Although various embodiments which incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. For example, various configurations of the border region shown in Figure 2 are possible. First, various orientations of the border region are possible as discussed above and shown in Figures 4 and 5. In addition, the border region extension containing the auxiliary information can be adjacent to the PIP image as shown in Figures 2, 4 and 5 or can be located spaced slightly from the PIP image, e.g., with a region of different color and/or brightness.
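The layered, per-pixel selection performed by multiplexers 306, 308 and 310 can be summarized as a chain of two-way selections (a behavioral sketch only; the pixel values and argument names are illustrative):

```python
# Per-pixel sketch of the three-multiplexer chain of FIG. 3.
# Each stage selects between two pixel values under its control signal,
# mirroring the layered approach: caption/border layer first, then the
# active PIP video, then the main picture. Pixel values are illustrative.

def compose_pixel(main_pix, pip_pix, char_value, border_value,
                  insert_character, pip_insert_active, fsw_active):
    # First multiplexer (306): caption character value vs. border value.
    border_layer = char_value if insert_character else border_value
    # Second multiplexer (308): active PIP video vs. border layer.
    intermediate = pip_pix if pip_insert_active else border_layer
    # Third multiplexer (310): PIP image plus border vs. main picture.
    return intermediate if fsw_active else main_pix

# Outside the PIP area the main picture passes through:
assert compose_pixel(10, 20, 255, 128, False, False, False) == 10
# Inside the border but outside the active PIP area, the border shows:
assert compose_pixel(10, 20, 255, 128, False, False, True) == 128
# Inside the active PIP area, the PIP video shows:
assert compose_pixel(10, 20, 255, 128, False, True, True) == 20
```

Swapping the first and second stages, as the text notes, yields the same composite image because the active PIP video and the caption window occupy disjoint regions of the border layer.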


CLAIMS:
1. Apparatus comprising: means for processing (302) a first television signal for generating a first signal representing auxiliary information included in said first television signal; means for generating a control signal (312); and means responsive to said control signal for combining (306,308,310) said first signal, a signal representing an auxiliary image, a signal representing a border region for said auxiliary image and a second television signal representing a main image to produce a combined signal representing a composite image including said main image, said auxiliary image, said border region and said auxiliary information such that said auxiliary information is displayed within said border region and proximate said auxiliary image.
2. The apparatus of claim 1 wherein said auxiliary information comprises text.
3. The apparatus of claim 2 wherein said text comprises closed caption information.
4. The apparatus of claim 1 wherein said means for combining signals comprises a multiplexer; said control signal generating means comprises a timing generator and said control signal comprises a timing signal for causing said multiplexer to include said first signal, said signal representing said auxiliary image, and said signal representing said border region in said combined signal such that said auxiliary information is displayed within said border region and proximate said auxiliary image.
5. The apparatus of claim 4 wherein said signal representing said border region comprises border values; said auxiliary information comprises closed caption information; said multiplexer array combines said border values with said closed caption information to produce a border layer, combines said signal representing said auxiliary image with said border layer to produce an intermediate signal, and combines said intermediate signal with said signal representing said main image to produce said combined signal.
6. The apparatus of claim 4 wherein said signal representing said border region comprises a border value; said auxiliary information comprises a closed caption character value; said multiplexer comprises: a first multiplexer (306) for combining said border value with said closed caption character value to produce a border layer representing said border region including said auxiliary information; a second multiplexer (308), coupled to said first multiplexer, for combining said signal representing said auxiliary image with said border layer to produce an intermediate signal; and a third multiplexer (310), coupled to said second multiplexer, for combining said intermediate signal with said signal representing said main image.
7. The apparatus of claim 6 wherein said timing generator produces said control signal for said multiplexer in response to a user defined horizontal and vertical coordinate position (324,326) for said auxiliary image, a vertical count value and a horizontal count value, where the horizontal and vertical count values (328,330) indicate a particular pixel location being displayed in said main image.
8. The apparatus of claim 7 wherein said auxiliary image comprises a PIP image or a POP image.
9. Apparatus comprising: means for extracting (302) auxiliary information from an auxiliary video signal; means for processing (306,308,310) a main video signal and said auxiliary video signal for generating an output signal representing a video image having a first region representing a main video program included in said main video signal, having a second region representing an auxiliary video program included in said auxiliary video signal, and having a third region representing said auxiliary information; and means for producing (312) a change in location of said second region within said video image; said third region exhibiting a change in location within said video image in response to said change in location of said second region.
10. The apparatus of claim 9 wherein said third region being located proximate said second region before and after said change in location of said second region.
11. The apparatus of claim 10 wherein said second and third regions being located in said video image with a first orientation before said change in location of said second region; said first orientation being maintained following said change in location of said second region.
12. The apparatus of claim 10 wherein said second and third regions being located in said video image with a first orientation before said change in location of said second region; said first orientation changing to a second orientation following said change in location of said second region.
13. Apparatus comprising: means for extracting (302) auxiliary information from an auxiliary video signal; means for processing (306,308,310) a main video signal and said auxiliary video signal for generating an output signal representing a video image having a first region representing a main video program included in said main video signal, having a second region representing an auxiliary video program included in said auxiliary video signal, and having a third region representing said auxiliary information; and means for positioning (312) said third region in said video image for indicating to a user that said auxiliary information is associated with said auxiliary video program included in said second region.
14. The apparatus of claim 13 wherein said means for positioning said third region also positions said second region in said video image; said means for positioning said second and third regions being responsive to a user input for producing a change in position of said second and third regions; said third region being positioned subsequent to said change in position such that said user associates said auxiliary information with said auxiliary video program subsequent to said change in position.
15. The apparatus of claim 14 wherein said second region of said video image represents a PIP image or a POP image.
16. A method for generating a multi-image display having character information located proximate an auxiliary image within a main picture, comprising the steps of: extracting (302) auxiliary information from an auxiliary video signal representing said auxiliary image; combining (306) a border value with said auxiliary information to produce a border layer containing said auxiliary information; combining (308) said auxiliary video signal with said border layer to produce an intermediate signal representing said auxiliary image, a border region and said auxiliary information located proximate said auxiliary image and within said border region; and combining (310) said intermediate signal and a main video signal representing said main picture.
PCT/US1997/022750 1996-12-19 1997-12-10 Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display WO1998027725A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
AU55222/98A AU5522298A (en) 1996-12-19 1997-12-10 Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display
EP97951632A EP0945005B1 (en) 1996-12-19 1997-12-10 Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display
DE69733961T DE69733961T2 (en) 1996-12-19 1997-12-10 METHOD AND DEVICE FOR POSITIONING ADDITIONAL INFORMATION NEXT TO AN ADDITIONAL IMAGE IN A MULTI-PICTURE DISPLAY
JP52782798A JP4105769B2 (en) 1996-12-19 1997-12-10 Multi-image display apparatus and method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/770,770 1996-12-19
US08/770,770 US6088064A (en) 1996-12-19 1996-12-19 Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display

Publications (1)

Publication Number Publication Date
WO1998027725A1 true WO1998027725A1 (en) 1998-06-25

Family

ID=25089625

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1997/022750 WO1998027725A1 (en) 1996-12-19 1997-12-10 Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display

Country Status (10)

Country Link
US (1) US6088064A (en)
EP (1) EP0945005B1 (en)
JP (1) JP4105769B2 (en)
KR (1) KR100514540B1 (en)
CN (1) CN1246238A (en)
AU (1) AU5522298A (en)
DE (1) DE69733961T2 (en)
ES (1) ES2245007T3 (en)
MY (1) MY118079A (en)
WO (1) WO1998027725A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001006770A1 (en) * 1999-07-15 2001-01-25 Koninklijke Philips Electronics N.V. Methods and apparatus for presentation of multimedia information in conjunction with broadcast programming
EP1089561A2 (en) * 1999-09-29 2001-04-04 Nec Corporation Picture-border frame generating circuit and digital television system using the same
US6252906B1 (en) * 1998-07-31 2001-06-26 Thomson Licensing S.A. Decimation of a high definition video signal
KR100407978B1 (en) * 2002-01-18 2003-12-03 엘지전자 주식회사 Method and apparatus for processing teletext of display device
US7158109B2 (en) 2001-09-06 2007-01-02 Sharp Kabushiki Kaisha Active matrix display
EP1820336A1 (en) * 2004-12-06 2007-08-22 Thomson Licensing Multiple closed captioning flows and customer access in digital networks
US7350138B1 (en) 2000-03-08 2008-03-25 Accenture Llp System, method and article of manufacture for a knowledge management tool proposal wizard
EP2373045A1 (en) * 2010-03-04 2011-10-05 Fujifilm Corporation Medical image generating apparatus, medical image display apparatus, medical image generating method and program
US20170110152A1 (en) * 2015-10-16 2017-04-20 Tribune Broadcasting Company, Llc Video-production system with metadata-based dve feature

Families Citing this family (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050052484A (en) * 1997-03-17 2005-06-02 마츠시타 덴끼 산교 가부시키가이샤 Data processing method
KR100516431B1 (en) * 1997-12-18 2005-09-22 톰슨 라이센싱 소시에떼 아노님 Program signal blocking system
US6563547B1 (en) * 1999-09-07 2003-05-13 Spotware Technologies, Inc. System and method for displaying a television picture within another displayed image
KR20030036160A (en) * 2000-05-22 2003-05-09 가부시키가이샤 소니 컴퓨터 엔터테인먼트 Information processing apparatus, graphic processing unit, graphic processing method, storage medium, and computer program
US7206029B2 (en) * 2000-12-15 2007-04-17 Koninklijke Philips Electronics N.V. Picture-in-picture repositioning and/or resizing based on video content analysis
US7006151B2 (en) * 2001-04-18 2006-02-28 Sarnoff Corporation Video streams for closed caption testing and the like
SG103289A1 (en) * 2001-05-25 2004-04-29 Meng Soon Cheo System for indexing textual and non-textual files
KR100909505B1 (en) * 2001-12-19 2009-07-27 엘지전자 주식회사 And stored digital data broadcasting display apparatus and method
JP2004121813A (en) * 2002-07-29 2004-04-22 Seiko Epson Corp Display method, display device for game machine, game machine, and information display system
KR100471089B1 (en) * 2003-04-15 2005-03-10 삼성전자주식회사 Display Device And Image Processing Method Thereof
WO2005002431A1 (en) * 2003-06-24 2005-01-13 Johnson & Johnson Consumer Companies Inc. Method and system for rehabilitating a medical condition across multiple dimensions
WO2005003902A2 (en) * 2003-06-24 2005-01-13 Johnson & Johnson Consumer Companies, Inc. Method and system for using a database containing rehabilitation plans indexed across multiple dimensions
WO2005002433A1 (en) * 2003-06-24 2005-01-13 Johnson & Johnson Consumer Compagnies, Inc. System and method for customized training to understand human speech correctly with a hearing aid device
US20050232190A1 (en) * 2003-09-22 2005-10-20 Jeyhan Karaoguz Sharing of user input devices and displays within a wireless network
KR101000924B1 (en) 2004-02-03 2010-12-13 삼성전자주식회사 Caption presentation method and apparatus thereof
US20080298614A1 (en) * 2004-06-14 2008-12-04 Johnson & Johnson Consumer Companies, Inc. System for and Method of Offering an Optimized Sound Service to Individuals within a Place of Business
EP1769412A4 (en) * 2004-06-14 2010-03-31 Johnson & Johnson Consumer Audiologist equipment interface user database for providing aural rehabilitation of hearing loss across multiple dimensions of hearing
US20080165978A1 (en) * 2004-06-14 2008-07-10 Johnson & Johnson Consumer Companies, Inc. Hearing Device Sound Simulation System and Method of Using the System
US20080269636A1 (en) * 2004-06-14 2008-10-30 Johnson & Johnson Consumer Companies, Inc. System for and Method of Conveniently and Automatically Testing the Hearing of a Person
US20080240452A1 (en) * 2004-06-14 2008-10-02 Mark Burrows At-Home Hearing Aid Tester and Method of Operating Same
US20080212789A1 (en) * 2004-06-14 2008-09-04 Johnson & Johnson Consumer Companies, Inc. At-Home Hearing Aid Training System and Method
WO2005125281A1 (en) * 2004-06-14 2005-12-29 Johnson & Johnson Consumer Companies, Inc. System for and method of optimizing an individual’s hearing aid
EP1767053A4 (en) * 2004-06-14 2009-07-01 Johnson & Johnson Consumer System for and method of increasing convenience to users to drive the purchase process for hearing health that results in purchase of a hearing aid
EP1767057A4 (en) * 2004-06-15 2009-08-19 Johnson & Johnson Consumer A system for and a method of providing improved intelligibility of television audio for hearing impaired
EP1767061A4 (en) * 2004-06-15 2009-11-18 Johnson & Johnson Consumer Low-cost, programmable, time-limited hearing aid apparatus, method of use and system for programming same
JP4189883B2 (en) * 2004-06-24 2008-12-03 インターナショナル・ビジネス・マシーンズ・コーポレーション Image compression apparatus, image processing system, image compression method, and program
JP4081772B2 (en) * 2005-08-25 2008-04-30 ソニー株式会社 REPRODUCTION DEVICE, REPRODUCTION METHOD, PROGRAM, AND PROGRAM STORAGE MEDIUM
CN101222592B (en) * 2007-01-11 2010-09-15 深圳Tcl新技术有限公司 Closed subtitling display equipment and method
JP5239469B2 (en) * 2008-04-07 2013-07-17 ソニー株式会社 Information presenting apparatus and information presenting method
CA2651464C (en) * 2008-04-30 2017-10-24 Crim (Centre De Recherche Informatique De Montreal) Method and apparatus for caption production
US8275232B2 (en) * 2008-06-23 2012-09-25 Mediatek Inc. Apparatus and method of transmitting / receiving multimedia playback enhancement information, VBI data, or auxiliary data through digital transmission means specified for multimedia data transmission
US20100259684A1 (en) * 2008-09-02 2010-10-14 Panasonic Corporation Content display processing device and content display processing method
US8555167B2 (en) * 2009-03-11 2013-10-08 Sony Corporation Interactive access to media or other content related to a currently viewed program
EP2334088A1 (en) * 2009-12-14 2011-06-15 Koninklijke Philips Electronics N.V. Generating a 3D video signal
JP5376685B2 (en) * 2011-07-13 2013-12-25 Necビッグローブ株式会社 CONTENT DATA DISPLAY DEVICE, CONTENT DATA DISPLAY METHOD, AND PROGRAM
US10007676B2 (en) 2013-03-22 2018-06-26 Accenture Global Services Limited Geospatial smoothing in web applications
WO2014155710A1 (en) * 2013-03-29 2014-10-02 Rakuten, Inc. Communication control system, communication control method, communication control program, terminal, and program for terminal
KR20150009252A (en) * 2013-07-16 2015-01-26 삼성전자주식회사 Multi contents view display apparatus and method for displaying multi contents view contents
WO2015059793A1 (en) * 2013-10-24 2015-04-30 Toshiba Corporation Display apparatus, display method and display program
WO2015089746A1 (en) * 2013-12-17 2015-06-25 Intel Corporation Techniques for processing subtitles
US9916077B1 (en) * 2014-11-03 2018-03-13 Google Llc Systems and methods for controlling network usage during content presentation
US9940739B2 (en) 2015-08-28 2018-04-10 Accenture Global Services Limited Generating interactively mapped data visualizations
US9578351B1 (en) * 2015-08-28 2017-02-21 Accenture Global Services Limited Generating visualizations for display along with video content

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0301488A2 (en) * 1987-07-28 1989-02-01 Sanyo Electric Co., Ltd. Television receiver having a memorandum function
GB2232033A (en) * 1989-05-26 1990-11-28 Samsung Electronics Co Ltd Synchronising video signals
EP0401930A2 (en) * 1989-06-08 1990-12-12 Koninklijke Philips Electronics N.V. An interface for a TV-VCR system
EP0660602A2 (en) * 1993-12-24 1995-06-28 Kabushiki Kaisha Toshiba Character information display apparatus
JPH07236100A (en) * 1994-02-22 1995-09-05 Victor Co Of Japan Ltd Display device
EP0762751A2 (en) * 1995-08-24 1997-03-12 Hitachi, Ltd. Television receiver

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5537151A (en) * 1994-02-16 1996-07-16 Ati Technologies Inc. Close caption support with timewarp
KR960028297A (en) * 1994-12-04 1996-07-22 Fumio Sato Multiscreen TV receiver
JP3589720B2 (en) * 1994-12-07 2004-11-17 Toshiba Corporation Multi-screen TV receiver

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 096, no. 001, 31 January 1996 (1996-01-31) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6252906B1 (en) * 1998-07-31 2001-06-26 Thomson Licensing S.A. Decimation of a high definition video signal
WO2001006770A1 (en) * 1999-07-15 2001-01-25 Koninklijke Philips Electronics N.V. Methods and apparatus for presentation of multimedia information in conjunction with broadcast programming
EP1089561A2 (en) * 1999-09-29 2001-04-04 Nec Corporation Picture-border frame generating circuit and digital television system using the same
EP1089561A3 (en) * 1999-09-29 2005-03-09 NEC Electronics Corporation Picture-border frame generating circuit and digital television system using the same
US7350138B1 (en) 2000-03-08 2008-03-25 Accenture Llp System, method and article of manufacture for a knowledge management tool proposal wizard
US7158109B2 (en) 2001-09-06 2007-01-02 Sharp Kabushiki Kaisha Active matrix display
KR100407978B1 (en) * 2002-01-18 2003-12-03 엘지전자 주식회사 Method and apparatus for processing teletext of display device
EP1820336A1 (en) * 2004-12-06 2007-08-22 Thomson Licensing Multiple closed captioning flows and customer access in digital networks
EP1820336A4 (en) * 2004-12-06 2010-04-28 Thomson Licensing Multiple closed captioning flows and customer access in digital networks
US8135041B2 (en) 2004-12-06 2012-03-13 Thomson Licensing Multiple closed captioning flows and customer access in digital networks
EP2373045A1 (en) * 2010-03-04 2011-10-05 Fujifilm Corporation Medical image generating apparatus, medical image display apparatus, medical image generating method and program
US20170110152A1 (en) * 2015-10-16 2017-04-20 Tribune Broadcasting Company, Llc Video-production system with metadata-based dve feature
US10622018B2 (en) * 2015-10-16 2020-04-14 Tribune Broadcasting Company, Llc Video-production system with metadata-based DVE feature

Also Published As

Publication number Publication date
KR100514540B1 (en) 2005-09-13
EP0945005A1 (en) 1999-09-29
ES2245007T3 (en) 2005-12-16
JP2001506451A (en) 2001-05-15
DE69733961D1 (en) 2005-09-15
MY118079A (en) 2004-08-30
CN1246238A (en) 2000-03-01
JP4105769B2 (en) 2008-06-25
AU5522298A (en) 1998-07-15
EP0945005B1 (en) 2005-08-10
KR20000057340A (en) 2000-09-15
DE69733961T2 (en) 2006-06-01
US6088064A (en) 2000-07-11

Similar Documents

Publication Publication Date Title
US6088064A (en) Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display
EP0656727B1 (en) Teletext receiver
US5929927A (en) Method and apparatus for providing a modulated scroll rate for text display
EP0945013B1 (en) Apparatus for reformatting auxiliary information included in a television signal for pip display
US6008860A (en) Television system with provisions for displaying an auxiliary image of variable size
EP2559230B1 (en) Method for displaying a video stream according to a customised format
JP2655305B2 (en) Caption subtitle display control device and method
EP1225762A2 (en) Dynamic adjustment of on screen graphic displays to cope with different video display and/or display screen formats
JPH05304641A (en) Television receiver
JPS60165883A (en) Methods for transmission/reception and reception of television signal
KR100638186B1 (en) Apparatus for controlling on screen display
JPH0946657A (en) Closed caption decoder
MXPA99005598A (en) Method and apparatus for positioning auxiliary information proximate an auxiliary image in a multi-image display
JP3424057B2 (en) Television receiver for teletext broadcasting
JPH06217199A (en) Caption decoder device
KR19980020291A Method for moving a viewer-selectable caption display position
KR100462448B1 (en) Television system with provisions for displaying an auxiliary image of variable size
KR100188272B1 Viewer-selectable caption display method with a function for adjusting the column spacing between lines of vertically written text
JP2003023579A (en) Digital broadcast display device
JP2003046963A (en) Information display system and method therefor
JPH09181994A (en) Video signal processor
KR20000056148A Method for caption control in a Korean caption decoder when the cursor position is at the 39th or 40th column
KR20010004144A Method for controlling the display of caption broadcasting in picture-in-picture mode
JPH10290408A (en) Television receiver and teletext multiplex broadcast display method

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 97181822.3

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE GH HU ID IL IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG UZ VN YU ZW AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW SD SZ UG ZW AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 1019997004838

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 1997951632

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: PA/a/1999/005598

Country of ref document: MX

ENP Entry into the national phase

Ref document number: 1998 527827

Country of ref document: JP

Kind code of ref document: A

WWP Wipo information: published in national office

Ref document number: 1997951632

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1019997004838

Country of ref document: KR

WWG Wipo information: grant in national office

Ref document number: 1019997004838

Country of ref document: KR

WWG Wipo information: grant in national office

Ref document number: 1997951632

Country of ref document: EP