GB2290430A - Subtitling video data - Google Patents

Subtitling video data

Info

Publication number
GB2290430A
Authority
GB
United Kingdom
Prior art keywords
image data
video
subtitling
characters
video signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9411615A
Other versions
GB2290430A8 (en)
GB2290430B (en)
GB9411615D0 (en)
Inventor
David John Atkins
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SCREEN SUBTITLING SYSTEMS Ltd
Original Assignee
SCREEN SUBTITLING SYSTEMS Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SCREEN SUBTITLING SYSTEMS Ltd filed Critical SCREEN SUBTITLING SYSTEMS Ltd
Priority to GB9411615A priority Critical patent/GB2290430B/en
Publication of GB9411615D0 publication Critical patent/GB9411615D0/en
Priority to CA 2192439 priority patent/CA2192439A1/en
Priority to GB9625754A priority patent/GB2304016A/en
Priority to PCT/GB1995/001353 priority patent/WO1995034165A1/en
Priority to AU26785/95A priority patent/AU2678595A/en
Publication of GB2290430A publication Critical patent/GB2290430A/en
Publication of GB2290430A8 publication Critical patent/GB2290430A8/en
Application granted granted Critical
Publication of GB2290430B publication Critical patent/GB2290430B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/41Bandwidth or redundancy reduction
    • H04N1/411Bandwidth or redundancy reduction for the transmission or storage or reproduction of two-tone pictures, e.g. black and white pictures
    • H04N1/413Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information
    • H04N1/419Systems or arrangements allowing the picture to be reproduced without loss or modification of picture-information in which encoding of the length of a succession of picture-elements of the same value along a scanning line is the only encoding step
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/005Statistical coding, e.g. Huffman, run length coding
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03MCODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
    • H03M7/46Conversion to or from run-length codes, i.e. by representing the number of consecutive digits, or groups of digits, of the same kind by a code word and a digit indicative of that kind
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/08Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division
    • H04N7/087Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only
    • H04N7/088Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital
    • H04N7/0884Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital for the transmission of additional display-information, e.g. menu for programme or channel selection
    • H04N7/0885Systems for the simultaneous or sequential transmission of more than one television signal, e.g. additional information signals, the signals occupying wholly or partially the same frequency band, e.g. by time division with signal insertion during the vertical blanking interval only the inserted signal being digital for the transmission of additional display-information, e.g. menu for programme or channel selection for the transmission of subtitles

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Television Systems (AREA)

Description

TRANSMITTING IMAGE DATA

The present invention relates to a
method of adding subtitling data to a video signal, apparatus for adding subtitling data to a video signal and apparatus for keying subtitles over video images at a receiving station.
The addition of subtitles to video images is known. Traditionally, video material is processed by a translating and subtitling department, resulting in new video material having subtitles included as part of the video image. The subtitling data is therefore said to be "burnt" into the video pictures.
It is also known to include subtitling data as a teletext transmitted page. Under the teletext system, character data is supplied during vertical blanking intervals of the video transmission. As is known in the art, each video frame includes blanking intervals at the top and bottom of each image, which may be considered as transmitted lines which do not contain video information.
This in turn may be considered as additional available bandwidth, allowing data to be transmitted therein, such as the aforementioned teletext data and Nicam stereo audio data etc.
In conventional teletext systems, character codes are transmitted during the blanking periods and these codes are translated into displayable characters at the receiving station. Known teletext systems therefore have restricted character sets, and the definition of those characters is severely limited. Thus, it is possible using these character sets to generate characters in most Roman-based languages, including English and most European languages, but they do not lend themselves to conveying characters in many other languages.
Thus, for example, it is not possible to convey Arabic, Chinese and Japanese characters using teletext techniques; subtitles for these languages are therefore, using conventional methods, burnt into the video images.
An advantage of the teletext approach is that the subtitles may be selectively switched on or off at the receiving station. The characters generated at the receiving station are effectively keyed over the video images, which are in turn transmitted to the receiving stations fully intact. Thus, in the United Kingdom for example, subtitling of this type is conventionally used for transmitting subtitles in the English language, allowing the hard of hearing to select subtitling if required. However, the technique cannot be used for subtitling source material originally recorded with, say, an English sound track in many foreign languages, such as Chinese. It should be appreciated that, with the ever-increasing availability of satellite broadcast transmission over an ever-increasing area of the globe, there is an increasing demand for a back catalogue of available titles to be subtitled in a wide range of languages.
According to a first aspect of the invention, there is provided a method of adding subtitling data to a video signal, characterised by generating image data representing a bit map of a line of subtitling characters; and combining said image data with said video signal such that characters derived from said image data are keyable over video images derived from said video signal.
The invention will now be described by way of example only, with reference to the accompanying drawings, in which:
Figure 1 shows a video monitor, displaying a video image with subtitles added thereto;
Figure 2 shows a system for generating subtitles, of the type shown in Figure 1, including a processing unit;
Figure 3 details the processing unit shown in Figure 2, including a combiner for combining subtitling data with a video signal;
Figure 4 details the combiner shown in Figure 3;
Figure 5 illustrates the way in which subtitling data is combined within a video frame; and
Figure 6 shows a receiving system for receiving video signals with subtitles generated by the system shown in Figure 2.
A video monitor 15 is shown in Figure 1, which may form part of a conventional television receiver. The video monitor is displaying a conventional television picture 16, over which subtitles 17 have been overlaid, by a process known as keying.
The subtitles are placed within a horizontal band or strap towards the bottom half of the picture, in response to signals received by the receiver itself or by a separate decoding apparatus. Thus, at the receiving station, it is possible to selectively decide whether the subtitles are to be combined with the displayed video image. Furthermore, in a more sophisticated embodiment, it is possible to select from a plurality of available languages of subtitles, each of which may be keyed onto the video image in synchronism with appropriate frames of a displayed video.
Video data keyed within the strap region 17 is derived from pixel data representing a region 50 lines high and 720 pixel locations across. Subject to bandwidth availability, it is possible to transmit any image within this region, such that the generation of keyable subtitling characters is not restricted to characters available from a character generator provided at the reception end. Thus, with a region of this size, it is possible to display characters from any desired character set, such as Chinese, Japanese and Arabic etc.
Subtitling characters tend to be displayed in solid colours, therefore with a definition of 720 pixel positions over 50 lines, it is possible to supply single bit data for each pixel location. Originating data of this type is usually referred to as a bit map or a one-bit plane. In alternative embodiments, subject to available bandwidth, it is possible to transmit keyable image data having multi-bit values, for example to indicate image data having a plurality of colours or brightnesses. However, it should be appreciated that the one-bit image data provides a useful key signal, facilitating the keying of a solid colour over the video image so as to effect the display of the required subtitles.
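As a rough illustration of what a one-bit plane of this size implies for storage, the following Python sketch packs each strap line into bytes; the 720 x 50 dimensions come from the description above, while the byte-packing layout itself is an assumption made only for the example.

```python
# Minimal sketch of a one-bit-per-pixel subtitle strap, assuming the
# 720 x 50 dimensions given in the description. The byte-packing layout
# is an illustrative assumption, not a format defined by the patent.

STRAP_WIDTH = 720   # pixel positions per line
STRAP_HEIGHT = 50   # lines in the keyable strap

def pack_strap(pixels):
    """Pack a 50 x 720 array of 0/1 values into bytes, one line at a time."""
    packed = bytearray()
    for line in pixels:
        assert len(line) == STRAP_WIDTH
        for byte_start in range(0, STRAP_WIDTH, 8):
            value = 0
            for bit, pixel in enumerate(line[byte_start:byte_start + 8]):
                value |= (pixel & 1) << (7 - bit)
            packed.append(value)
    return bytes(packed)

# 720 x 50 single-bit pixels = 36,000 bits = 4,500 bytes uncompressed,
# which matches the "36 kbits per subtitle" figure quoted later on.
empty_strap = [[0] * STRAP_WIDTH for _ in range(STRAP_HEIGHT)]
assert len(pack_strap(empty_strap)) == 4500
```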
A system for generating subtitling is shown in Figure 2. Input video signals are supplied to a processor 21, which in turn supplies displayable video image frames to a video monitor 22. The video data is received from a video source 23, such as a video tape recorder, a digital video tape recorder, a magnetic disc or an optical disc. The video signals may be supplied to the processor 21 as full bandwidth digital signals, conventional composite video signals or compressed video signals. For example, video source material may be read from compact optical discs in a compressed format, such as a format consistent with the MPEG compression recommendations.
At the processor 21 the video data is decompressed, if required, and each frame is identified with a unique time code.
In response to operations made on a manual keyboard 24, selected portions of video source material are displayed on the video monitor 22, allowing clips of video, identified via the allocated time codes, to be associated with particular subtitles.
Subtitles are created by manual entry on the keyboard 24, resulting in characters of text being displayed on a VDU 25. The system may be programmable so as to operate using many languages and, consequently, to generate characters from a wide range of possible character sets. Thus, characters will be identified via the keyboard 24, resulting in a representation of said characters being shown on the VDU 25. In this way, a complete subtitle is built up on the VDU 25, allowing subtitling data to be associated with video signals.
As previously described, a subtitling line is assembled as a bit map image. Thus, individual characters are rendered into a 720 x 50 pixel bit map, in response to individual bit maps stored for each renderable character.
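The rendering of individual characters into the strap can be pictured with the following sketch; the glyph dictionary, glyph dimensions and spacing rule are assumptions introduced for illustration, not details taken from the patent.

```python
# Illustrative sketch of compositing per-character bit maps into the
# 720 x 50 strap buffer. The glyph dictionary, its dimensions and the
# spacing rule are assumptions made for the example only.

def render_line(text, glyphs, strap_width=720, strap_height=50, spacing=2):
    """Return a strap bit map (list of rows of 0/1) with `text` rendered
    left to right from per-character glyph bit maps."""
    strap = [[0] * strap_width for _ in range(strap_height)]
    x = 0
    for ch in text:
        glyph = glyphs[ch]                 # glyph: list of rows of 0/1
        glyph_h, glyph_w = len(glyph), len(glyph[0])
        if x + glyph_w > strap_width:
            break                          # line full; a real system would wrap or clip
        y0 = (strap_height - glyph_h) // 2 # centre the glyph vertically in the strap
        for row in range(glyph_h):
            for col in range(glyph_w):
                strap[y0 + row][x + col] = glyph[row][col]
        x += glyph_w + spacing
    return strap
```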
Once a bit map strap, representing the image of a particular subtitle, has been created by an operator and said operator has identified the clip of video for which this subtitle is to be made available, the image data representing the bit map is combined with the video signal and the combined video and encoded text is supplied to an output video recorder 26.
However, it should be noted that the subtitling image data is combined with the video image data in such a way that characters derived from said image data are keyable over the video image derived from the video data. Thus, this keying operation is selectable and the subtitling, although capable of representing any selected character, is not actually burnt into the displayable video frames.
The processor 21 is detailed in Figure 3. Operation of the processor 21 is controlled by an overall control processor 31, such as an XXX having YYY megabytes of internal memory. Video data received from the video source 23 is supplied to a decoding device 32, which decodes the video source material into a conventional format. For example, the decoder 32 may perform decompression if MPEG video is being received.
The decoded video signal is supplied to a time code generator 33 which applies a unique time code to the incoming video signals. This time code is generated in response to signals from the control processor 31, which in turn generates similar time code representations for subtitling information.
As previously stated, the keyboard 24 generates representations of characters and a character generator 34 supplies individual bit map data to a subtitling buffer 35 in response to operation of said keyboard.
Within the subtitling buffer 35, bit maps of complete keyable straps are defined, in a region equivalent to 50 lines with 720 pixel positions per line.
In non-compressed form, this would result in a total of 36 kbits per subtitle which, when transmitted outside the displayable video image region, would require a significant degree of bandwidth.
The bandwidth requirement is reduced by scanning data written to the subtitling bit map buffer 35 by a run length encoding device 36. The run length encoding device 36 scans each line of the bit map buffer 35 and produces codes representing the number of pixel positions which are set to a particular level along each of the lines. In this way, most languages and character sets can be transmitted using in the order of XXX bits per strap, thereby significantly facilitating the combination of said data with the transmitted video signal.
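A minimal sketch of the run length encoding performed by device 36 is given below; the code format (the first pixel value followed by a list of run lengths) is an assumption, since the description does not fix the exact code words.

```python
# Hedged sketch of run length encoding one line of the one-bit strap.
# The output format (initial pixel value followed by run lengths) is an
# illustrative assumption; the patent does not specify the exact codes.

def rle_encode_line(line):
    """Encode a list of 0/1 pixels as (first_value, [run lengths])."""
    runs = []
    current, count = line[0], 0
    for pixel in line:
        if pixel == current:
            count += 1
        else:
            runs.append(count)
            current, count = pixel, 1
    runs.append(count)
    return line[0], runs

def rle_encode_strap(strap):
    return [rle_encode_line(line) for line in strap]

# A blank line compresses to a single run of 720, which is why typical
# subtitle straps need far fewer bits than the 36,000-bit raw bit map.
assert rle_encode_line([0] * 720) == (0, [720])
```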
The combining of the subtitling image data with the conventional video data takes place at a combiner 37, whereafter said combined video data is encoded, as required, at an encoder 38, which may perform the reverse process to that effected by the decoder 32.
The combiner 37 is detailed in Figure 4. The run length encoded data from encoder 36 is supplied to a data buffer 41 and data is readable from said buffer at the frame rate. Similarly, video data is written to a video buffer 42 which again may be accessed line by line at the appropriate video rate.
Lines of data are read sequentially from buffer 41 and buffer 42. Switch 43 is shown to schematically represent the way in which lines of data may be read from buffer 41, followed by lines of data being read from buffer 42 within the same field period.
The data read from buffers 41 and 42 is supplied to a video modulator 44, which generates the video signal having image data derived from the actual video data itself and also from the encoded data read from buffer 41.
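The action of switch 43 can be modelled schematically as follows; the number of data lines carried in each field is an assumption chosen only to illustrate the interleaving.

```python
# Schematic sketch of combiner 37: switch 43 selects lines from the data
# buffer (41) and then from the video buffer (42) within one field period.
# The number of data lines per field is an assumption for illustration.

DATA_LINES_PER_FIELD = 2   # assumed number of blanking lines used per field

def combine_field(data_buffer, video_buffer):
    """Return the lines of one output field: encoded data lines first
    (during the blanking interval), then the ordinary video lines."""
    field_lines = []
    for _ in range(DATA_LINES_PER_FIELD):
        if data_buffer:
            field_lines.append(("data", data_buffer.pop(0)))
    for video_line in video_buffer:
        field_lines.append(("video", video_line))
    return field_lines
```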
The way in which the data is combined within a video frame is illustrated in Figure 5.
Each frame of video data is made up of two interlaced fields. Each field contains information for a two-dimensional image scan, consisting of a plurality of scan lines. During reproduction, an electron beam scans across a phosphor screen in a cathode ray tube, tracing out lines starting from the top of the picture and, over the field period, traversing to the bottom of the picture. Before the next field can be scanned in this way, the electron beam must track back to the top of the field area which, due to the nature of deflecting elements within the tube itself, takes a finite period of time. The time during which the electron beam is retracing back to its original starting position is known as the field blanking period or vertical blanking period, and this period may be considered as a period during which lines of video could be transmitted but, due to the retracing of the scanning beam, are not. These lines may therefore be considered as blank intervals at the top and bottom of the displayed picture, which in turn provide additional bandwidth in which other signals may be conveyed.
In the present embodiment bandwidth is identified within the vertical blanking interval, into which the run length encoded data is inserted. In this way, a typical strap of keyable subtitles may be transmitted in the equivalent of X lines, thereby allowing new titles to be transmitted and displayed for each Y number of transmitted frames.
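The insertion of the encoded data into the vertical blanking interval might be sketched as follows, under the assumption that each available blanking line carries a fixed-size data payload; the payload size, the length prefix and the serialisation of the run length codes are all illustrative assumptions.

```python
# Sketch of packing the run length encoded strap into data lines carried
# in the vertical blanking interval. The payload size per line and the
# simple length-prefixed header are assumptions for illustration only.

PAYLOAD_BYTES_PER_VBI_LINE = 40   # assumed capacity of one data line

def pack_into_vbi_lines(encoded_strap_bytes):
    """Split serialised run length codes (bytes) into fixed-size chunks,
    one chunk per blanking line."""
    header = len(encoded_strap_bytes).to_bytes(2, "big")  # assumed 2-byte length prefix
    payload = header + encoded_strap_bytes
    lines = []
    for start in range(0, len(payload), PAYLOAD_BYTES_PER_VBI_LINE):
        chunk = payload[start:start + PAYLOAD_BYTES_PER_VBI_LINE]
        # pad the final chunk so every blanking line carries a full payload
        chunk = chunk.ljust(PAYLOAD_BYTES_PER_VBI_LINE, b"\x00")
        lines.append(chunk)
    return lines
```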
A receiver suitable for receiving signals generated by the system shown in Figure 2, is shown in Figure 6.
A decoder 61 decodes the incoming video signals into a form suitable for reproduction on conventional television equipment. Thus, for example, said incoming signals may be encrypted in a satellite system or, alternatively, may be compressed in a limited bandwidth system. The encoded video signals may also be read from optical disc or other suitable data-carrying media.
Conventional video or television signals are supplied to a video demodulator 62, arranged to separate audio signals and to construct RGB colour signals from the luminance and colour difference signals of the composite video signal.
In parallel with this, a timing extraction circuit 63 receives the video signal generated by the decoder 61 and generates timing information from the incoming data, in particular identifying the start of each field period. Thus, the timing extraction circuit 63 is arranged to supply timing data to a keying circuit 64 and to a run length decoding circuit 65.
The run length decoding circuit 65 also receives the output from the video decoder 61 and processes the information supplied during the vertical blanking intervals, while rejecting the actual video picture information. The run length encoded subtitling data is decoded so as to reconstitute the 720 x 50 pixel bit map for application to the keying circuit 64.
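The run length decoder performs the mirror of the encoding sketched earlier; again the code layout is an assumption, the only requirement being that the 720 x 50 key bit map is reconstituted exactly.

```python
# Sketch of the complementary run length decoder, reconstituting one
# 720-pixel line of the key bit map from (first_value, run lengths).

def rle_decode_line(first_value, runs, expected_width=720):
    line = []
    value = first_value
    for run in runs:
        line.extend([value] * run)
        value ^= 1                      # runs alternate between 0 and 1
    assert len(line) == expected_width  # sanity check on the reconstituted line
    return line

def rle_decode_strap(encoded_strap):
    return [rle_decode_line(value, runs) for value, runs in encoded_strap]

# Round trip with the encoder sketched earlier: a blank line survives intact.
assert rle_decode_line(0, [720]) == [0] * 720
```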
The bit map generated by the run length decoder 65 effectively constitutes a key signal which is supplied to the keyer 64. A switch 66 is shown, illustrating that the operation of keyer 64 may effectively be disabled, thereby allowing the original video data to be displayed without the subtitles being keyed thereover.
With switch 66 in its closed position, the decoded key signal is supplied to the keyer 64 which reconstitutes the original subtitling text. As scanning occurs within the strap area 17, the output signal will switch between the original video signal and a solid colour, in response to the key signal generated by the decoder 65. Thus, it is possible for a user to select whether the subtitles are to be displayed and it is also possible for the user to select the colour of said subtitles. Alternatively, colour may be pre-set at the decoding station.
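The keying operation itself reduces to a per-pixel switch between the incoming video and a solid colour, as in the sketch below; the strap position, the subtitle colour and the RGB frame representation are assumptions for the example.

```python
# Sketch of keying the decoded one-bit strap over a video frame. The
# strap position, subtitle colour and RGB frame layout are assumptions.

SUBTITLE_COLOUR = (255, 255, 255)     # assumed pre-set colour (white)

def key_subtitles(frame, key_strap, strap_top, key_enabled=True):
    """Overwrite frame pixels with the subtitle colour wherever the key
    bit is set; with the key disabled the frame passes through untouched."""
    if not key_enabled:               # corresponds to switch 66 being open
        return frame
    for row, key_line in enumerate(key_strap):
        for col, key_bit in enumerate(key_line):
            if key_bit:
                frame[strap_top + row][col] = SUBTITLE_COLOUR
    return frame
```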

Claims (22)

CLAIMS:
1. A method of adding subtitling data to a video signal, characterised by generating image data representing a bit-map of a line of subtitling characters; and combining said image data with said video signal such that said characters derived from said image data are keyable over video images derived from said video signal.
2. A method according to claim 1, wherein image data representing bit maps are rendered from character codes.
3. A method according to claim 2, wherein said character codes are manually selected and assembled into lines of text.
4. A method according to claim 3, wherein said lines of text are given time codes which relate to time coded video frames.
5. A method according to any of claims 1 to 4, wherein said image data is a compressed representation of said bit map data.
6. A method according to claim 5, wherein said image data is compressed by a process of run length encoding.
7. A method according to any of claims 1 to 6, wherein said image data is combined with conventional modulated composite video signals.
8. A method according to claim 7, wherein said image data is transmitted during vertical blanking intervals.
9. A method according to any of claims 1 to 6, wherein said image data is combined with compressed video signals.
10. A method according to claim 9, wherein said compressed video signals are compressed in accordance with MPEG or JPEG recommendations.
11. Apparatus for adding subtitling data to a video signal at a transmitting station, characterised by image data generating means arranged to generate image data representing a bit map of a line of subtitling characters; and combining means arranged to combine said image data with said video signals without corrupting video image data.
12. Apparatus for keying subtitles over video images at a receiving station arranged to receive video signals combined with image data representing a bit map of subtitling characters, comprising means for converting said image data into keyable video characters; and means for keying said characters over displayable video frames.
Amendments to the claims have been filed as follows:

1. Apparatus for adding subtitling data to video signals, comprising image data generating means arranged to generate image data representing a bit-map of a line of subtitling characters; and combining means for combining said image data with a video signal such that said image data is conveyed during vertical blanking periods of said video signal.
2. Apparatus according to claim 1, including means for rendering the bitmapped image data from character codes.
3. Apparatus according to claim 2, including means configured to allow manual selection and assembly of character codes into lines of text.
4. Apparatus according to claim 3, including means for allocating time codes to said lines of text, wherein said time codes are related to time coded video frames.
5. Apparatus according to claim 1, including means for compressing bitmapped data to produce said image data representation.
6. Apparatus according to claim 1, including means for modulating said video signal and transmission means for transmitting said modulated video signal having encoded image data substantially contained within vertical blanking periods.
7. Apparatus according to claim 1, including means for modulating said video signal and storage means for storing said modulated video signal having encoded image data substantially contained within vertical blanking periods.
8. Apparatus for processing video signals having subtitling data added thereto, wherein image data representing subtitling characters in bit-mapped form has been added to vertical blanking periods of a transmitted or recorded video signal, comprising means for processing said image data to re-constitute bit-mapped characters; and means for keying said re-constituted bit-mapped characters over viewable frames of said video signal.
9. Apparatus according to claim 8, wherein said processing means is arranged to de-compress compressed video data.
10. Apparatus according to claim 8, wherein said keying means is arranged to key subtitling characters over frames specified by time codes.
11. A method of adding subtitling data to a video signal, comprising steps of generating image data representing a bit-map of a line of subtitling characters; and combining said image data with a video signal such that said image data is conveyed during vertical blanking periods of said video signals.
12. A method according to claim 11, wherein bit-mapped image data is rendered from character codes.
13. A method according to claim 11, wherein time codes are allocated to lines of text, so as to associate said text with related time coded video frames.
14. A method according to claim 11, wherein said bit-mapped data is compressed to produce said image data representation.
15. A method according to claim 14, wherein said bit-mapped data is compressed by a process of run length encoding.
16. A method according to claim 11, wherein said video signal is modulated and transmitted with encoded image data substantially contained within vertical blanking periods.
17. A method according to claim 11, wherein said video signal is modulated and stored with encoded image data substantially contained within vertical blanking periods.
18. A method of processing video signals having subtitling data added thereto, wherein image data representing subtitling characters in bit mapped form has been added to vertical blanking periods of a transmitted or recorded video signal, comprising steps of processing said image data to re-constitute bit-mapped characters; and keying said re-constituted bit-mapped characters over viewable frames of said video signal.
19. A method according to claim 18, wherein the compressed image data is de-compressed.
20. A method according to claim 19, wherein said de-compressed image data is keyed over frames specified by time codes.
21. Apparatus for adding subtitling data to a video signal substantially as herein described with reference to the accompanying Figures.
22. A method of keying subtitles over a video image substantially as herein described with reference to the accompanying Figures.
GB9411615A 1994-06-09 1994-06-09 Subtitling video data Expired - Fee Related GB2290430B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
GB9411615A GB2290430B (en) 1994-06-09 1994-06-09 Subtitling video data
AU26785/95A AU2678595A (en) 1994-06-09 1995-06-09 Subtitling video data
GB9625754A GB2304016A (en) 1994-06-09 1995-06-09 Subtitling video data
PCT/GB1995/001353 WO1995034165A1 (en) 1994-06-09 1995-06-09 Subtitling video data
CA 2192439 CA2192439A1 (en) 1994-06-09 1995-06-09 Subtitling video data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9411615A GB2290430B (en) 1994-06-09 1994-06-09 Subtitling video data

Publications (4)

Publication Number Publication Date
GB9411615D0 GB9411615D0 (en) 1994-08-03
GB2290430A true GB2290430A (en) 1995-12-20
GB2290430A8 GB2290430A8 (en) 1998-05-21
GB2290430B GB2290430B (en) 1998-08-05

Family

ID=10756500

Family Applications (2)

Application Number Title Priority Date Filing Date
GB9411615A Expired - Fee Related GB2290430B (en) 1994-06-09 1994-06-09 Subtitling video data
GB9625754A Withdrawn GB2304016A (en) 1994-06-09 1995-06-09 Subtitling video data

Family Applications After (1)

Application Number Title Priority Date Filing Date
GB9625754A Withdrawn GB2304016A (en) 1994-06-09 1995-06-09 Subtitling video data

Country Status (4)

Country Link
AU (1) AU2678595A (en)
CA (1) CA2192439A1 (en)
GB (2) GB2290430B (en)
WO (1) WO1995034165A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2296148A (en) * 1994-11-28 1996-06-19 Snell & Wilcox Ltd Retaining sub-titles in a wide screen television display

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8599064B2 (en) * 2008-02-29 2013-12-03 Honeywell International Inc. Systems and methods for radar data communication

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2170371A (en) * 1984-12-21 1986-07-30 Mitsumi Electric Co Ltd Superimposing apparatus
EP0400990A2 (en) * 1989-05-30 1990-12-05 Sharp Kabushiki Kaisha Apparatus for superimposing character patterns in accordance with dot-matrix on video signals

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4264933A (en) * 1977-07-27 1981-04-28 Canon Kabushiki Kaisha Method and apparatus for facsimile recording
JPS5951653A (en) * 1982-09-17 1984-03-26 Fujitsu Ltd Data compression coding processing system
US4610027A (en) * 1983-12-30 1986-09-02 International Business Machines Corporation Method for converting a bit map of an image to a run length or run end representation
US4646356A (en) * 1984-06-29 1987-02-24 International Business Machines Corporation Method for converting a bit map of an image to a run length or run end representation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2170371A (en) * 1984-12-21 1986-07-30 Mitsumi Electric Co Ltd Superimposing apparatus
EP0400990A2 (en) * 1989-05-30 1990-12-05 Sharp Kabushiki Kaisha Apparatus for superimposing character patterns in accordance with dot-matrix on video signals

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2296148A (en) * 1994-11-28 1996-06-19 Snell & Wilcox Ltd Retaining sub-titles in a wide screen television display

Also Published As

Publication number Publication date
WO1995034165A1 (en) 1995-12-14
GB2290430A8 (en) 1998-05-21
GB2304016A (en) 1997-03-05
GB9625754D0 (en) 1997-01-29
GB2290430B (en) 1998-08-05
CA2192439A1 (en) 1995-12-14
GB9411615D0 (en) 1994-08-03
AU2678595A (en) 1996-01-04

Similar Documents

Publication Publication Date Title
US5969770A (en) Animated "on-screen" display provisions for an MPEG video signal processing system
EP1079610B1 (en) Moving-picture processing method, and apparatus therefor
KR100375800B1 (en) Animated "on-screen" display provisions for an video signal processing system
RU2129758C1 (en) System for transmitting closed captions in compressed digital video signal
EP0745307B3 (en) Subtitling transmission system
US7274407B2 (en) Method and apparatus for encoding video content
US6335763B1 (en) Television receiver and additional information transmitting method
MY120196A (en) On screen display arrangement for a digital video signal processing system
EP0711486A1 (en) High resolution digital screen recorder and method
Mothersole et al. Broadcast Data Systems: Teletext and RDS
CN110036646B (en) Decoder, encoder, computer program and method
US20020154245A1 (en) Digital broadcast receiving apparatus and control method therefor
JP3555457B2 (en) Encoding device and decoding device for television signal
GB2290430A (en) Subtitling video data
JP4347275B2 (en) Receiver
JP3464229B2 (en) Method and apparatus for synchronizing control function to video signal in television receiver
KR100606692B1 (en) Method of caption data code processing of caption broadcasting system
JPH0918842A (en) Text signal decoding device
JPS6182590A (en) Picture transmitter
JPH0775053A (en) Character program recording system

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20020609