AU1842299A - Subtitle positioning method and apparatus - Google Patents


Info

Publication number
AU1842299A
AU1842299A (application AU18422/99A)
Authority
AU
Australia
Prior art keywords
subtitle
data
subtitles
video
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
AU18422/99A
Other versions
AU726256B2 (en)
Inventor
Ikuo Tsukagoshi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP9943695A external-priority patent/JPH08275205A/en
Application filed by Sony Corp filed Critical Sony Corp
Priority to AU18422/99A priority Critical patent/AU726256B2/en
Publication of AU1842299A publication Critical patent/AU1842299A/en
Application granted granted Critical
Publication of AU726256B2 publication Critical patent/AU726256B2/en
Anticipated expiration legal-status Critical
Expired legal-status Critical Current


Landscapes

  • Studio Circuits (AREA)
  • Television Signal Processing For Recording (AREA)

Description

AUSTRALIA
PATENTS ACT 1990 COMPLETE SPECIFICATION FOR A STANDARD PATENT
ORIGINAL
Name and Address of Applicant: Sony Corporation, 7-35, Kitashinagawa 6-chome, Shinagawa-ku, Tokyo, JAPAN
Actual Inventor: Ikuo Tsukagoshi
Address for Service: Spruson & Ferguson, Patent Attorneys, Level 33, St Martins Tower, 31 Market Street, Sydney, New South Wales, 2000, Australia
Invention Title: Subtitle Colorwiping and Positioning Method and Apparatus

The following statement is a full description of this invention, including the best method of performing it known to me/us:

SUBTITLE COLORWIPING AND POSITIONING METHOD AND APPARATUS

BACKGROUND OF THE INVENTION

The present invention relates to subtitles and, more particularly, to colorwiping and positioning the subtitles.
Subtitles are superimposed on a video image to convey information to a viewer which supplements the video image. In Karaoke, for example, lyrics of songs are displayed on the video image as subtitles while a viewer sings along to an audio track of an accompanying video image. The subtitles also convey information to the viewer in the manner in which they are displayed. Highlighting the lyrics of songs in Karaoke, for example, cues the singer to sing, while moving the lyrics off the video screen indicates to the viewer to stop singing.

Television broadcasting or video reproduction (such as from a video disk) provides subtitles for display with the video image. However, the subtitles are permanently combined with the underlying video image and can be manipulated only at the transmitting (or recording) end and not at the receiving (or reproducing) end. That is, subtitles displayed in television broadcasting or video reproduction are "fixed" and cannot be highlighted or moved at the receiving (or reproduction) end. The subtitles also cannot be turned off, which is particularly important in Karaoke where a singer wants to test his/her singing abilities or enjoy the music video without the interruption of the subtitles.
The television broadcasting and reproduction systems cannot adequately manipulate the subtitles at the transmitting (or recording) end. The television broadcasting and reproduction systems require painstaking trial and error creation and manipulation of subtitles. In Karaoke, for example, where sing-along music videos are mass produced, it is desirable that each music video be produced quickly and efficiently. This is not possible with the television broadcasting and reproduction systems, which require slow and tedious work to custom tailor each music video. Notably, dynamic positioning in a fixed-type television broadcast or recording is not possible because the subtitles are an integral part of the video picture. Moving the subtitles, therefore, would leave a blank space where the subtitles were once superimposed.
Compact Disc Graphics (CD-G) provides more flexibility in displaying subtitles because this technique records graphics on a compact disc (CD) in the form of subcodes. However, CD-G has a serious disadvantage because this technique is limited to CD applications, which are slow by television standards. That is, the CD-G technique does not lend itself to creation and manipulation of subtitles in real-time television broadcasts or video reproductions.
CD-G is successful for computer applications because the graphics are programmed in advance and the large processing time required to create the graphics is largely unseen by the end user. As will be shown with reference to Figs. 16a-16c and 17, however, the lead time required to generate a full CD-G screen is 10.24 seconds, which is grossly inadequate for normal television or video broadcasts.
Fig. 16a depicts the CD-G data format, in which one frame includes 1 byte of a subcode and 32 bytes of audio channel data. Of the 32 bytes, 24 bytes are allocated for L and R audio channel data (each channel having 6 samples with 2 bytes per sample) and 8 bytes are allocated to an error correction code.
The frames are grouped as a block of 98 frames (Frame 0, Frame 1, ..., Frame 96 and Frame 97) as shown in Fig. 16b. Eight blocks P, Q, R, S, T, U, V and W are transmitted as shown in Fig. 16c. The subcodes for Frames 0 and 1 in each block are defined as sync patterns S0, S1, whereas the remaining 96 frames store various subcode data. Among a group of 8 blocks, the first 2 blocks P, Q are allocated to search data employed for searching through record tracks; and graphic data can be allocated to the subcodes in the remaining 6 blocks R, S, T, U, V and W.
Since each block of 98 frames is transmitted at a repeating frequency of 75 Hz, the data transmission rate for 1 block is 75 x 98 bytes = 7,350 bytes per second, resulting in a subcode data rate of 7.35 kbytes/s. The transmission format for transmitting the information present in blocks R, S, T, U, V and W is shown in Fig. 17. Each of the 96 frames (2 to 97) of the 6 blocks (R, S, T, U, V and W) is arranged as a packet including 6 channels (R to W) of 96 symbols per channel. The packet is further subdivided into 4 packs of 24 symbols each (symbol 0 to symbol 23), with each symbol representing a frame.
A CD-G character is made up of 6 x 12 pixels. Since each pack is 6 x 24, a 6 x 12 character is easily accommodated in each pack. The CD-G format allocates the six channels (R, S, T, U, V and W) and the 12 symbols (symbols 8 to 19) to a character. The remainder of the symbols in each of the packs store information about the character.
Mode information is stored in the first 3 channels (R, S, T) of symbol 0 in each pack, and item information is stored in the last 3 channels (U, V, W) of symbol 0. A combination of the mode information and the item information defines the mode for the characters stored in the corresponding pack as follows:

Table 1
Mode  Item
000   000   0 mode
001   000   graphics mode
001   001   TV-graphics mode
111   000   user's mode

An instruction is stored in all of the channels of symbol 1. Corresponding mode, item, parity or additional information for the instruction is stored in all of the channels of symbols 2 to 7. Parity for all of the data in the channels of symbols 0 to 19 is stored in all of the channels of the last 4 symbols (symbols 20 to 23) of each pack.
As discussed, the data is transmitted at a repeating frequency of 75 Hz. Therefore, a packet which contains 4 packs is transmitted at a rate of 300 packs per second (75 Hz x 4 packs). That is, with 1 character allocated to the range of 6 x 12 pixels, 300 characters can be transmitted in 1 second.
However, a CD-G screen requires more than 300 characters. A CD-G screen is defined as 288 horizontal picture elements x 192 vertical picture elements and requires more than twice the 300 characters transmitted in 1 second. The total transmission time for a 288 x 192 screen is, therefore, 2.56 seconds, as shown by the following equation:

(288/6) x (192/12) / 300 = 2.56 seconds

This is extremely long to regenerate each screen when it is considered that screens are usually refreshed every 0.6 seconds. This problem is compounded when hexadecimal codes are used for the characters because each hexadecimal expression requires 4 bits to represent 1 pixel. As a result, 4 times the data described above is transmitted, increasing the transmission time to 10.24 seconds (4 x 2.56 seconds). Since each screen requires a sluggish 10.24 seconds for transmission, a continual transmission of screens means that a lag time of 10.24 seconds is experienced when transmitting screens using the CD-G technique.
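The timing figures above can be checked with a short arithmetic sketch. All constants come from the text; the variable names are merely illustrative.

```python
# Back-of-the-envelope check of the CD-G timing quoted above.
BLOCK_RATE_HZ = 75             # blocks of 98 frames per second
PACKS_PER_PACKET = 4           # each packet is subdivided into 4 packs
SCREEN_W, SCREEN_H = 288, 192  # CD-G screen in picture elements
CHAR_W, CHAR_H = 6, 12         # one character is 6 x 12 pixels

packs_per_second = BLOCK_RATE_HZ * PACKS_PER_PACKET             # 300 characters/s
chars_per_screen = (SCREEN_W // CHAR_W) * (SCREEN_H // CHAR_H)  # 48 x 16 = 768
screen_time = chars_per_screen / packs_per_second               # 2.56 s
hex_screen_time = 4 * screen_time                               # 4 bits/pixel -> 10.24 s

print(packs_per_second, chars_per_screen, screen_time, hex_screen_time)
```

Running the sketch reproduces the 2.56-second screen time and the 10.24-second figure for 4-bit hexadecimal pixels.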
Thus, the CD-G technique is not performed in real time and is unacceptably slow for use in a real time broadcast. In generating Karaoke music videos, for example, it would be nearly impossible to synchronize the subtitles with the precise moment the lyrics are to be sung because the subtitles would have to be generated 10.24 seconds in advance of the music video.
The CD-G system also suffers from defects in reproducing the subtitles. The CD-G system displays subtitles only upon normal reproduction and not during special reproduction such as a fast forward or fast reverse reproduction. CD-G pictures are also subject to sing phenomena (in which oblique portions of a character are ragged) or flickering because this system allocates only one bit of data for each picture element. The lag time of the CD-G picture also prevents switching the subtitle display on or off at a high speed.
In one type of system (known as the CAPTAIN system), dot patterns, as well as character codes, represent the subtitles. This system, however, does not appear to be any better than the CD-G system and suffers from some of the same disadvantages. In both systems, for example, the subtitles lack refinement because these systems do not provide sufficient resolution power in displaying the subtitles. The CAPTAIN system, for example, is developed for a 248 (horizontal picture elements) by 192 (vertical picture elements) display and not for high resolution video pictures of 720 x 480.
OBJECTS OF THE INVENTION

An object of the invention, therefore, is to provide a subtitle method and apparatus for colorwiping subtitles.

A further object of the invention is to provide a subtitle method and apparatus which colorwipes the subtitles at the command of an operator and in real time.

A further object of the invention is to provide a subtitle method and apparatus for dynamically positioning the subtitles.

An even further object of the invention is to provide a subtitle method and apparatus which dynamically positions the subtitles at the command of an operator and in real time.
SUMMARY OF THE INVENTION

In accordance with the above objectives, the present invention provides a colorwiping encoding apparatus and method. A subtitle generator generates subtitles which are to be superimposed on a video image. The subtitles are encoded separately from the video image using an encoder. A colorwiping unit colorwipes at least a portion of the subtitles, leaving the remaining portion in a different color.
A colorwiping decoding method and apparatus decodes the subtitles and video image encoded by the colorwiping encoding. A video decoder decodes the video data encoded at the encoding end. A buffer stores the subtitles for the video image, including decoding information. A controller times the precise moment when the subtitles are to be read out from the buffer during a real time display of said video image, and a colorwiping unit causes the color of at least a portion of the subtitles to be a different color than the remaining portion.
A position decoding method and apparatus dynamically positions the subtitles in any region of the video image. A video decoder decodes video data of a video image to be displayed. A buffer stores the subtitles for the video image including decoding information. A controller times the precise moment when the subtitles are to be read out from the buffer during a real time display of said video image, and a positioning unit dynamically changes the position where the subtitles are superimposed on the video image.
The present invention, thus, provides colorwiping and dynamic positioning of the subtitles. Since the subtitles are encoded and decoded separately from the video image, the subtitles may be manipulated with great control and in real time. Colorwiping is achieved quickly and efficiently, allowing an operator to mass produce subtitled video pictures custom tailored to satisfaction. Dynamic positioning of the subtitles is equally as quick and efficient. Applying the colorwiping and positioning over a period of frames, the end viewer is provided with the sensation of motion as the subtitles are gradually colorwiped or repositioned over a period of time. These and other advantages will be noted upon a review of the description of the preferred embodiments below with reference to the figures.
BRIEF DESCRIPTION OF THE DRAWINGS

A more complete appreciation of the present invention and many of its attendant advantages will be readily obtained by reference to the following detailed description considered in connection with the accompanying drawings, in which:

Fig. 1 is a block diagram of a data decoding apparatus of the present invention;
Fig. 2 is a block diagram of the subtitle decoder of Fig. 1;
Fig. 3 is a table of communications between the system controller of Fig. 1 and the controller of Fig. 2;
Fig. 4 is a table of parameters for the communications between components of Fig. 1 and Fig. 2;
Figs. 5a to 5c are signal diagrams demonstrating data encoding of the present invention;
Fig. 6 is a color look up table referred to when encoding subtitle data;
Figs. 7a and 7b constitute a block diagram of the encoding apparatus of the present invention;
Figs. 8a and 8b depict a block diagram for the wipe data sampler of Fig. 7a;
Fig. 9 is a color look up table referred to when conducting a color wipe operation;
Fig. 10 is a graph for the explanation of a code buffer operation;
Fig. 11 is a block diagram describing the internal operation of the code buffer in Fig. 2;
Figs. 12a to 12c depict a scheme for the colorwiping operation;
Fig. 13 is a block diagram depicting the colorwiping operation according to Figs. 12a to 12c;
Figs. 14a to 14c depict a scheme for the dynamic positioning operation;
Fig. 15 is a block diagram depicting the dynamic positioning operation according to Figs. 14a to 14c;
Figs. 16a to 16c depict the arrangement of data according to a CD-G format; and
Fig. 17 depicts a transmission format of the data in the CD-G format.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout, the present invention will be described.
Decoding Apparatus

The data decoding apparatus shown in Fig. 1, which incorporates the present invention, decodes a reproduction signal. The system controller 14 of the data decoding apparatus causes the reproduction signal to be processed and sent to a subtitle decoder 7. The system controller communicates with the controller 35 (Fig. 2) of the subtitle decoder to decode the subtitles and superimpose them onto a decoded video image for display on a television screen.
A data decoder and demultiplexer 1 receives a digital reproduction signal from, for example, a VCR. The data decoder and demultiplexer 1 error decodes the reproduction signal, preferably employing an Error Correcting Code (ECC) technique, and demultiplexes the error decoded reproduction signal into video, subtitle and audio data. A memory 2 may be used, for example, as a buffer memory and a work area for the purpose of error decoding and demultiplexing the reproduction signal.
A video decoder 3 decodes the demultiplexed video data from a video data stream. A memory 4 may be employed for the operation of decoding the video data, similar to the operation of the memory 2 employed with the data decoder and demultiplexer 1.
A letter box circuit 5 converts a video picture with a 4:3 aspect ratio (a squeeze mode) to a 16:9 letter box ratio.
The conversion is performed using a 4 to 3 decimation process, whereby every four horizontal lines are decimated to three horizontal lines, thus squeezing the video picture into a 3/4 picture. According to the letter box format, a vertical resolution component is derived from the remaining 1/4 of the video picture, which is employed to enhance the vertical resolution of the decimated video picture. A timing control memory 6 ensures that the 1/4 of the letter box picture is not transmitted. When the decoded video data generated by the video decoder 3 is already in a 16:9 letter box format, the letter box circuit bypasses the decimation operation and sends the decoded video data directly to the subtitle decoder 7.
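The 4-to-3 decimation can be pictured as reducing each group of four horizontal lines to three. The blending weights below are an assumption made for illustration only; the text does not specify the decimation filter.

```python
def decimate_4_to_3(lines):
    """Squeeze each group of 4 horizontal lines into 3.

    Keeps the outer two lines of each group and blends the middle pair;
    the exact weighting is illustrative, not taken from the patent.
    """
    out = []
    for i in range(0, len(lines) - len(lines) % 4, 4):
        a, b, c, d = lines[i:i + 4]
        out.append(a)
        out.append([(x + y) // 2 for x, y in zip(b, c)])  # blend middle lines
        out.append(d)
    return out

# An 8-line picture squeezes to 6 lines, i.e. 3/4 of its height.
print(len(decimate_4_to_3([[100, 100]] * 8)))
```

The length check shows the 3/4 height relationship the letter box format relies on.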
Meanwhile, the decoded subtitle data demultiplexed by the data decoder and demultiplexer 1 is directly sent to the subtitle decoder 7. The subtitle decoder 7 decodes the subtitle data according to instructions from the system controller 14 and mixes the decoded subtitle data with the decoded video data.
A composite encoder 8 encodes the mixed subtitle data and video data into a suitable video picture format, such as NTSC or PAL. A mode display 9 interfaces with a user and indicates, for example, the mode of the television monitor connected thereto. A D/A converter 10 converts the encoded signal received from the composite encoder 8 into an analog signal suitable for display in the indicated mode, such as NTSC or PAL.
The audio portion of the audio/video signal decoded by the data decoder and demultiplexer 1 is decoded by an audio decoder 11, which decodes the demultiplexed audio data using a memory 12, for example. The decoded audio data output from the audio decoder is converted into an analog audio signal appropriate for broadcast through a television monitor by a D/A converter 13.
Subtitle Decoder

The subtitle decoder 7, as will be discussed with reference to Fig. 2, decodes the encoded subtitle data and mixes the decoded subtitle data with the appropriate video data. A controller 35 controls the operations of the subtitle decoder and communicates with the system controller 14 of the decoder (Fig. 1) using the command signals shown in Fig. 2 (as listed in Fig. 3). Together, the controller and system controller time the decoding of the subtitle data so that the subtitle data is mixed with the video image data at the precise position the subtitles are to appear on the video image.
The word detector 20 of the subtitle decoder 7 receives the subtitle data in groups of bit streams. Each group of bit streams makes up one frame (or page) of subtitles to be superimposed on a video image. Different groups of streams may represent subtitles displayed in different playback modes, such as normal playback, fast-reverse or fast-forward. The system controller indicates to the word detector, using a stream_select signal, which playback mode to display, and the word detector selects the appropriate stream of signals for the indicated playback mode. In the case where different video images are displayed on different channels, the system controller indicates the appropriate channel to the word detector correspondingly in a ch_select signal, and the word detector changes channels to receive only those streams on the selected channel.
A group of bit streams making up one frame and received by the word detector includes header information (s.header) which describes the format of the group of bit streams. The header information is accompanied with header error information (header error) and data error information (data error). The system controller uses the header to determine how to parse the group of bit streams and extract the relevant subtitle data. The system controller uses the header error information to correct anomalies in the header information and uses the data error information to correct anomalies in the subtitle data.
The word detector forwards the subtitle data (Bitmap) along with other decoding information (including a presentation time stamp PTS, position data position_data and color look up table data CLUT_data) to the code buffer 22. The PTS is a signal that indicates the length of time the subtitles are to be displayed. The position_data indicates the horizontal and vertical position where the subtitles are to be superimposed on the video image. The CLUT_data indicates which colors are to be used for the pixels making up the subtitles. For example, the system controller determines that a video image is being displayed and causes the code buffer to output the corresponding subtitle data (Bitmap) at a position in the video image represented by the horizontal and vertical position indicated by the position_data, in the color indicated by the CLUT_data and for a period of time indicated by the PTS.
A scheduler 21 is provided to ensure that the data received from the demultiplexer 1 (Fig. 1) does not overflow the code buffer 22. The scheduler controls read/write access to the code buffer by determining a bandwidth for an I/O port (not shown) which receives the bit streams selected by the word detector. The bandwidth refers to the number of parallel bits supplied to the I/O port at one time and is calculated by dividing the rate at which the demultiplexer demultiplexes data by the rate at which data is read from the code buffer. For example, a data rate from the demultiplexer of 20 Mbps divided by a 2.5 Mbps rate of data read from the code buffer is equal to 8 bits. Therefore, the scheduler will set the I/O port to receive 8 bits in parallel in order to maintain a consistent flow rate of data into and out of the code buffer. The code buffer, thus, receives the subtitle data (Bitmap) and awaits a decode start signal from the system controller to read out the data.
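The scheduler's bandwidth rule reduces to a single division, as in the 20 Mbps / 2.5 Mbps example above; the function name is illustrative.

```python
def io_port_width(demux_rate_mbps: float, read_rate_mbps: float) -> int:
    """Parallel bits the scheduler sets on the I/O port: the rate at
    which the demultiplexer supplies data divided by the rate at which
    data is read from the code buffer."""
    return int(demux_rate_mbps / read_rate_mbps)

print(io_port_width(20, 2.5))  # 8 bits in parallel, as in the text's example
```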
The system controller executes reading in real time when it is determined from the horizontal and vertical sync signals that the television scanner is at a position corresponding to the position indicated by the position_data. For real time display, the reading rate should correspond to a picture element sampling rate, preferably 13.5 MHz. As discussed, the subtitle data preferably is written into the code buffer at a rate of 2.5 MHz or more. Thus, the 13.5 MHz sampling clock is divided into four clock cycles of 3.375 MHz each. One of these 3.375 MHz clock cycles is allocated to writing (because writing requires at least 2.5 MHz) and the remaining three clock cycles are allocated to reading data from the code buffer, thus satisfying the requirement for real time display.
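The clock allocation above can be verified with a couple of lines; the rates are taken directly from the text.

```python
SAMPLING_CLOCK_MHZ = 13.5      # picture element sampling rate
WRITE_RATE_MHZ = 2.5           # minimum rate required for writing

cycle_mhz = SAMPLING_CLOCK_MHZ / 4   # four clock cycles of 3.375 MHz each
write_cycles, read_cycles = 1, 3     # one cycle writes, three cycles read

# A single 3.375 MHz cycle exceeds the 2.5 MHz writing requirement.
print(cycle_mhz, cycle_mhz >= WRITE_RATE_MHZ)
```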
The read/write operation described is not only performed in real time, but also provides high resolution. Eight bits of the subtitle data are read from the code buffer 22 for each of three clock cycles, or twenty-four bits per sampling clock. When display of the picture is conducted by the television monitor every fourth clock cycle, one-fourth of the twenty-four bits (24/4 = 6 bits) are displayed at every clock cycle. That is, each subtitle picture element may comprise six bits, which is more than sufficient to achieve a high quality of resolution for the subtitles.

The operation of the code buffer 22 and corresponding components of Fig. 2 is depicted as a block diagram in Fig. 11.
The code buffer 22-1 accumulates streams of subtitle data until at least one page of subtitle data is accumulated in the code buffer. The subtitle data for one page is transferred from the code buffer 22-1 to the display memory 22-2 (which acts as a buffer for the subtitle decoder) when the display time stamp (PTS) is aligned with the synchronizing clock (SCR). The synchronizing clock advances a pointer in the display memory 22-2 during reading, indicating which address of subtitle data is being currently read. It will be noted that placing the code buffer and display memory in a single unit is preferred, since the code buffer need only increment one pointer pointing to the current address in the display memory 22-2 which stores the next set of subtitle data. Thus, no delay is caused due to a transfer, resulting in a high speed transfer of the subtitle data.
When the code buffer is read during a normal playback mode, the synchronizing clock advances the pointer of the display memory 22-2 at each pulse. However, during special reproduction (such as fast-forward or fast-reverse playback modes), the pointer must be advanced at a different rate. A special command is first sent to the controller 35 and the controller sends back an acknowledge signal (special_ack), acknowledging that special reproduction is to be initiated. To uniformly speed up (or slow down) the operations of the subtitle decoder according to the special reproduction rate, the system clock reference (SCR) can be altered by adding or subtracting clock pulses. Subtraction pulses are created at an n times rate corresponding to the rate of fast-feeding or fast-reverse feeding. For example, at the time when special reproduction is commenced, real time subtraction is performed on the bit stream of subtitle data read out from the code buffer at the n times rate, and the pointer advances at the desired rate to effect the special playback mode.

When the special reproduction operation corresponds to a pause operation, on the other hand, no subtraction pulses are created. Instead, an identical frame is continuously read from the code buffer repeatedly, thus providing the illusion that the subtitles are paused.
The reading operation is ended when the subtitle decoder 7 determines that an end of page (EOP) of the subtitle frame is reached. The system controller 14 sends a repeat time signal to the controller 35 which indicates the length of a page. An inverse run-length circuit 24 includes a counter and sends a display end signal to the controller 35 when the count value of the counter reaches the value indicated by the repeat time signal. When the controller 35 determines that the repeat time is reached, the reading operation of the code buffer is stopped. For purposes of this invention, the code buffer preferably stores at least two pages of subtitle data because one page will be read as another page is written into the code buffer.
An underflow condition exists when the code buffer has I~ completed reading the subtitle data for an entire page and no 18 further data exists in the code buffer. A code buffer with a 19 capacity of two pages is depicted by the "code buffer size" line in Fig. 10. Graphically, an underflow would appear in Fig. 10 as 21 one of the vertical portions of line whic extends below tne S22 lower limit of the code buffer. By contrast, an overflow 23 condition is graphically depicted in Fig. 10 when the subtitle 0 0*
__II
-'i 1 data read into the code buffer is too large, the horizontal 2 portion of line extends beyond line 3 Fig. 10 graphically demonstrates the data flow into and 4 out of the code buffer 22. The T-axis (abscissa) represents time, while the D-axis (ordinate) represents data size for each 6 page of data. Thus, the gradient (rise/run) represents the data flow rate of the subtitles into the code buffer. Graph (C) "8 represents the data flow of the subtitle data. The vertical ortions of graph indicate a transfer of subtitle data from 1 0 the code buffer when the display time stamp (PTS) is aligned with 11 the synchronizing clock (SCR) generated internally by the subtitle decoder 7. The horizontal portions of the graph 13 indicate the transfer of subtitle data into the code buffer. For example, at a time that the display time stamp (PTS) for page S (SO) is received by the code buffer, the previous page of subtitle data is transferred from the code buffer and page (SO) is written into the code buffer. When another display time stamo 18 (PTS) is received by the code buffer, the subtitle data of page 19 (SO) is transferred out of the code buffer and page (Sl) is written in. Similarly, the remaining pages (S3) are 21 written into and read out of the code buffer as indicated.
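The page flow of Fig. 10 can be modelled as a toy two-page buffer: a page is read out whenever its PTS aligns with the clock, freeing space for the next page to be written. The page names and timestamps below are invented for illustration.

```python
from collections import deque

code_buffer = deque()                                # holds at most two pages
incoming = [("S0", 100), ("S1", 200), ("S2", 300)]   # (page, PTS), illustrative
displayed = []

for scr in range(0, 400, 100):                       # synchronizing clock ticks
    # transfer a page out when its display time stamp matches the clock
    if code_buffer and code_buffer[0][1] == scr:
        displayed.append(code_buffer.popleft()[0])
    # write the next page into the space freed by the read
    if incoming and len(code_buffer) < 2:
        code_buffer.append(incoming.pop(0))

print(displayed)  # pages leave the buffer in order as their PTSs come due
```

The alternation of a write following each read mirrors the horizontal and vertical segments of graph (C).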
To precisely time the reading of the subtitle data from the code buffer with the display of the video image, delay compensation must be performed to allow for delays within the subtitle decoder. This is especially important where an external memory is employed as the display memory, because an external memory increases the delay factor. Delay compensation is achieved by controlling the timing of the decode start command from the system controller 14. The system controller 14 delays the decode start command by a time equal to the processing of a letter box picture (approximately one field) and a delay caused by video decoding at the instant the synchronizing clock of the controller (SCR) is aligned with the display time stamp (PTS). Delay compensation is particularly useful, since the video, audio and subtitle data are multiplexed on the premise that the decode delay in each of the video, audio and subtitle data signals is zero in the data encoding apparatus.

When the subtitle data for one page is read out of the display memory 22-2 (Fig. 11), the headers of the bit streams are separated therefrom by a parser 22-3 and forwarded to the inverse variable-length coder or run-length decoder 23, 24 during a vertical blanking period. The inverse VLC (Variable Length Coding) circuit 23 (Fig. 2) subjects the subtitle data to variable length decoding. The variable length decoded subtitle data is composed of level data and run data as paired data. In the case where variable length decoding is not employed, the inverse VLC circuit may be bypassed and the subtitle data read from the code buffer will be directly output to the inverse run-length circuit 24.
The inverse run-length circuit 24 conducts run-length decoding by generating the level data from the number of run data elements. Thus, the VLC circuit 23 and the run-length circuit 24 decompress the subtitle data which had been stored as compressed data in the code buffer 22.
The decompressed subtitle data is then sent to a 3:4 filter 25. The 3:4 filter receives an xsqueeze signal from the system controller 14 indicating the aspect ratio of the corresponding television monitor. Where the signal indicates that the monitor has a 4:3 aspect ratio, the 3:4 filter applies 3:4 filtration processing to the subtitle data to match the size of the subtitles to the size of the video picture. In the preferred embodiment, the controller 35 reads out subtitle data from the code buffer 22 before the H sync pulse is generated. In the case where the television monitor already has a 16:9 aspect ratio, or the decompressed subtitle data represents fonts, the 3:4 filter is bypassed as shown in Fig. 11.
S21 A color look-up table 26 (CLUT) receives the subtitle S22 data from the 3:4 filter 25 and the CLUTdata from the code S23 buffer 22. The color look up table generates a suitable color S24 from the CLUT data for the subtitle data. The color look up 23 i 1 table selects an address corresponding to the subtitle data for 2 each pixel and forwards a mixing ratio K and color components Y 3 (luminance), C, (color difference signal R-Y) and C, (color 4 difference signal 3-Y) to the mixer 34. The color components Y, C, and when mixed by the mixer, at the mixing ratio K create I 6 a pixel with the color indicated by the color look up table.
Background video data is incorporated in the arrangement of the color look-up table. For example, address 0 of the look-up table includes key data K having the value of 00h, which means that the subtitle data will not be seen and the background video data will manifest, as shown by regions T1 and T5 in Fig. 5c. Addresses 1h to 6h of the look-up table include values of the key data K which increase linearly (20, 40 ... C0 hexadecimal); which means that the subtitle pixels according to these addresses are mixed with the background data as shown by the regions T2 and T4 in Fig. 5c. Finally, addresses 8h to Fh of the look-up table include values of key data K of E0h; which means that the components Y, Cr and Cb are mixed without any background video data as shown by region T3 in Fig. 5c. The color look-up table data is generated from the system controller and is previously downloaded to the CLUT circuit before decoding.
With the color look-up table, the filtered subtitle data is transformed into the appropriate color pixel for display on the television monitor.
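The key-data mixing that the look-up table drives can be sketched as follows. The table entries, the normalization of K against the E0h maximum, and the restriction to the luminance component are illustrative assumptions, not values from Fig. 6.

```python
# Sketch of CLUT-driven mixing: each subtitle pixel selects an entry
# holding a key (mixing ratio) K and a color value, and the mixer
# blends subtitle color with background video in proportion to K.
CLUT = {
    0x0: {"K": 0x00, "Y": 0x00},   # transparent: background only (T1, T5)
    0x3: {"K": 0x60, "Y": 0x60},   # partially mixed edge region (T2, T4)
    0x9: {"K": 0xE0, "Y": 0x90},   # opaque subtitle body (T3)
}

def mix(addr, background_y):
    entry = CLUT[addr]
    k = entry["K"] / 0xE0          # normalize key to the range 0..1
    return round(k * entry["Y"] + (1 - k) * background_y)

assert mix(0x0, 100) == 100        # subtitle invisible
assert mix(0x9, 100) == 0x90       # background fully muted
```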
Fig. 6 shows an example of a color look-up table where the components Y, Cr, Cb and K are arranged according to the addresses (hexadecimal). As will be explained, color wiping is performed by changing the CLUT_data, thereby replacing part of the color look-up table by the color wiping color look-up table, shown in Fig. 9. Normally, a particular subtitle frame is refreshed several times because frames are refreshed on a television several times a second. When the subtitles are refreshed, the same subtitle data will be employed. However, the color will be different due to the changed color look-up table.
Thus, the subtitles will appear to be color wiped as they are refreshed with each consecutive frame.
A mixer 34 (Fig. 2) mixes the pixels from the color look-up table 26 with video data from the video decoder 3 (Fig. 1). The resulting mixed data represents a video picture with superimposed subtitles and is ready to be output to a television monitor. The mixer 34 is controlled to position the subtitles within the video picture. The system controller 14 sends a u_position signal generated by the commands of an operator to the mixer via controller 35 which designates the vertical position for display on the screen. The u_position value may be varied (either by a user, the transmitter, or otherwise) allowing a user to place the subtitles anywhere along a vertical axis.
The decoding apparatus of the present invention may be practiced with the parameters for the different signals shown in Fig. 4. However, the present invention is not limited to the parameters set forth in that figure and may be employed in different video systems.
With the present invention, a viewer has control over the display of the subtitle through the mode display device 9.
The system controller 14, upon command from the user, sends a control signal to the mixer 34 (Fig. 2), turning the subtitles on or off. Since the present invention decodes subtitles in real time, the user does not experience any unpleasant delay when turning the subtitles on or off. In addition, the subtitles can be controlled, by the user or otherwise, to fade in/fade out at a variable rate. This is achieved by multiplying the pattern data representing the subtitles by a fade coefficient at a designated speed. This function also allows an editor of the subtitles to present viewers with different sensations according to the broadcast of the audio/video picture. For example, news information may be "flashed" rapidly to draw the attention of the viewer, whereas subtitles in a slow music video "softly" appear in order not to detract from the enjoyment of the music video.
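The fade-in/fade-out idea, multiplying the pattern data by a fade coefficient that ramps at a chosen rate, can be sketched as a small generator. The frame loop, the step count and the 8-bit sample range are assumptions for illustration.

```python
# Sketch of a fade-in: scale the pattern data by a coefficient that
# rises from 0 to 1 over a designated number of frames (the rate).
def fade_frames(pattern, steps):
    """Yield the pattern scaled by a coefficient rising 0 -> 1."""
    for i in range(steps + 1):
        coeff = i / steps
        yield [round(p * coeff) for p in pattern]

frames = list(fade_frames([0xE0, 0x80], 4))
assert frames[0] == [0, 0]          # fully faded out
assert frames[-1] == [0xE0, 0x80]   # fully visible
```

A fade-out is the same loop with the coefficient falling instead of rising; a faster "flash" simply uses fewer steps.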
Encoding Technique

The encoding technique employed in the present invention will be described in more particular detail with reference to Figs. 5a, 5b and 5c and Fig. 6. As an example, the technique for encoding the letter "A" of Fig. 5a will be explained. The letter "A" is scanned along successive horizontal lines and the fill data of Fig. 5b is generated for the letter "A" along each horizontal line. It will be noted that the level "E0" demarks the highest level for recreating a color pixel from the color look-up table shown in Fig. 6, whereas level "0" represents a lack of subtitle data.
The key data (or mixing ratio) determines the degree to which the fill data is mixed with background video.
Regions T1 and T5 of the key data correspond to areas in the video picture that are not superimposed with the fill data; therefore, these areas are designated as level 0 as indicated by address 0 in Fig. 6. Regions T2 and T4 are mixed areas where the subtitles are gradually mixed with the background video picture so that the subtitles blend into the background video picture and do not abruptly contrast therewith. Any of the fill data in this area is stored in addresses 1 through 6 of the color look-up table. The main portion of the letter "A" is displayed within the T3 region where the background information is muted. The subtitle information in region T3 is stored as addresses 7 to F (hexadecimal). The color look-up table of Fig. 6 is arranged in varying degrees of the luminance component Y. When a pixel in the region T3 is to be stored, for example, and the level of the luminance component Y for that particular pixel is 6 (hexadecimal), the color information for that pixel is obtained from address 9. In this manner, the remaining pixels for the subtitle characters are encoded.
Encoding Apparatus

The encoding apparatus of the present invention is depicted in Figs. 7a and 7b. Audio and video information is received by a microphone 53 and video camera 51, respectively, and forwarded to a multiplexer 58. The subtitle data are entered through either a character generator 55 or a flying spot scanner 56 and encoded by a subtitle encoding circuit 57. The encoded subtitle information is sent to the multiplexer 58 and combined with the audio/video information onto a record disc 91 or channel for transmission, display, recording or the like.
The video camera 51 generates the video signal and supplies the same to a video encoding unit 52 which converts the video signal from analog to digital form. The digitized video signal is then compressed for video transmission and forwarded to a rate controller 52a, which controls the rate that the compressed video data is transferred to the multiplexer in synchronism with the rate that the subtitles are sent to the multiplexer. In this manner, the compressed video data is combined with the subtitle data at the correct time. Similarly, audio information is obtained by the microphone 53 and encoded by an audio encoding unit 54 before being sent to the multiplexer.
The audio encoding unit does not necessarily include a rate controller because the audio data may ultimately be recorded on a different track or transmitted over a different channel from the video data.
The subtitles are generated by either character generator 55 or flying spot scanner 56. The character generator includes a monitor and a keyboard which allows an operator to manually insert subtitles into a video picture. The operator edits the subtitles by typing the subtitles through the keyboard.
The flying spot scanner 56, on the other hand, is provided in the situation where subtitles are already provided in an external video picture. The flying spot scanner scans the video picture and determines where the subtitles are positioned and generates corresponding subtitle data therefrom. The subtitles from the flying spot scanner are pre-processed by the processing circuit 63 to conform with subtitles generated by the character generator and forwarded to the subtitle encoding circuit.
The subtitle data from either the character generator 55 or the flying spot scanner are then selected for compression.
The character generator outputs blanking data, subtitle data and key data. The subtitle data and key data are forwarded to a switch 61 which is switched according to a predetermined timing to select either the subtitle or key data. The selected data from switch 61 is filtered by a filter 72 and supplied to another switch 62. Switch 62 switches between the blanking data, the filtered data from the character generator and the processed data from the flying spot scanner. When it is determined that no subtitles are present, the blanking data is chosen by the switch 62. Where subtitles are present, the switch 62 chooses between the character generator data or the flying spot scanner data, depending upon which device is being used to generate the subtitle data.
The data selected by switch 62 is then quantized by a quantization circuit 64, using a quantization based on data fed back from a subtitle buffer verifier 68. The quantized data, which may be compressed, is supplied to a switch 69 and (during normal operation) forwarded to a differential pulse code modulation (DPCM) circuit 65 for pulse code modulation. The modulated data is run-length encoded by a run-length coding circuit 66 and variable-length encoded by a variable-length encoding circuit 67 and forwarded to the subtitle buffer verifier 68 for final processing before being sent to the multiplexer 58.
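The ordering of the compression stages (quantization, then DPCM, then run-length coding) can be illustrated with a toy pipeline. The variable-length stage is omitted, and the function names, quantizer step and sample values are invented for the example.

```python
# Sketch of the encoding chain order: quantize, store differences
# between successive samples (DPCM), then run-length encode the result.
def quantize(samples, step):
    return [s // step for s in samples]

def dpcm(samples):
    # each output is the difference from the previous sample
    prev = 0
    out = []
    for s in samples:
        out.append(s - prev)
        prev = s
    return out

def run_length(samples):
    out = []
    for s in samples:
        if out and out[-1][0] == s:
            out[-1] = (s, out[-1][1] + 1)   # extend the current run
        else:
            out.append((s, 1))              # start a new run
    return out

line = [0, 0, 224, 224, 224, 0]
coded = run_length(dpcm(quantize(line, 32)))
assert coded == [(0, 2), (7, 1), (0, 2), (-7, 1)]
```

A coarser quantization step shortens the runs' alphabet and so reduces the encoded size, which is exactly the lever the buffer verifier pulls in the feedback loop described next.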
The subtitle buffer verifier 68 verifies that the buffer is sufficiently filled with data without overflowing.
This is done by feeding a control signal (referred to in Fig. 7a as a filter signal) back to the quantization circuit 64. The control signal changes the quantization level of the quantization circuit, thereby changing the amount of data encoded for a particular subtitle. By increasing the quantization level, the amount of data required for the subtitle data is reduced and the bit rate of data flowing to the subtitle buffer verifier is consequently reduced. When the subtitle buffer verifier determines that there is an underflow of data, the control signal decreases the quantization level and the amount of data output from the quantization circuit increases, thereby filling the subtitle buffer verifier.
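The feedback described above can be sketched as a simple controller that compares buffer occupancy against watermarks and raises or lowers the quantization level. The watermark values and step size are invented for the example.

```python
# Sketch of the buffer-verifier feedback: coarser quantization when the
# buffer nears overflow, finer quantization on underflow.
def adjust_quantization(q_level, occupancy, low=0.25, high=0.75):
    if occupancy > high:       # nearly overflowing: coarser quantization,
        return q_level + 1     # fewer bits per subtitle
    if occupancy < low:        # underflow: finer quantization, more bits
        return max(0, q_level - 1)
    return q_level             # occupancy in range: leave level alone

assert adjust_quantization(4, 0.9) == 5
assert adjust_quantization(4, 0.1) == 3
assert adjust_quantization(4, 0.5) == 4
```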
The subtitle buffer verifier is also responsible for preparing the subtitle data for transmission (over television airwaves, for example). The subtitle buffer verifier inserts information necessary to decode the encoded subtitle data. This information includes a normal/special play signal which indicates whether the subtitles are recorded in a normal or special (fast-forward/reverse) mode. An upper limit value signal is inserted which indicates the upper limit for the memory size of the subtitle data for a frame. An EOP signal marks the end of page for the subtitle data frame and also is inserted. A time code signal is inserted which is used as the time stamp PTS in decoding. Subtitle encoding information is inserted and includes information used in encoding the subtitle data, such as the quantization factor. Positional information is inserted and is used as the position_data upon decoding. A static/dynamic signal is inserted which indicates whether the subtitle data is in static or dynamic mode. The subtitle buffer verifier also inserts the color look-up table address for transmission to the decoder so that the colors of the display will match the colors employed in creating the subtitles.
The subtitle buffer verifier is preferably a code buffer similar to the code buffer 22 of the decoder (Fig. 2). To that end, it is useful to think of the operation of the subtitle buffer verifier to be in symmetry (i.e., performing the inverse functions of the code buffer) with the code buffer operational diagram of Fig. 11. For example, the color pixels of the subtitles are converted into digital representations; the digital subtitles are encoded by a run-length encoder and a variable-length encoder; header information is added; and the resultant subtitle information is stored in a buffer and forwarded to a multiplexer for multiplexing with the audio and video data.
The multiplexer 58 multiplexes the encoded subtitle data with the video and audio data, preferably employing a time-sliced multiplexing encoding unit. The multiplexer also provides error correction processing (e.g., error correction coding) and modulation processing (e.g., EFM, eight-to-fourteen modulation).
The multiplexed data is then transmitted (via television broadcasting, recording, or other means of transference) to the decoding apparatus for decoding and display.
Colorwiping Encoding

Colorwiping refers to a process by which an image, such as the subtitles, is gradually overlaid with another image. An exemplary application of colorwiping is highlighting, wherein a frame of subtitles is dynamically highlighted from left to right with the passage of time. Highlighting is particularly useful in, for example, Karaoke where the displayed lyrics are highlighted from left to right as the lyrics are sung. The present invention performs colorwiping by changing the color look-up table at different points of the subtitle display. For example, an initial subtitle frame is generated with the standard color look-up table in Fig. 6. When colorwiping is performed, the color look-up table is changed to the colorwiping look-up table of Fig. 9. With the passage of each frame, the gradual change of the position at which the color look-up table is changed from the colorwiping to the standard color look-up table provides the sensation that the subtitles are changing color dynamically over time from left to right.
An encoding operation for colorwiping will now be discussed with reference to Figs. 7a, 8a and 8b. During the course of encoding subtitles, an operator may desire to colorwipe the previously encoded subtitles. To that end, the operator is provided with a wipe lever 81 to control the colorwiping and a monitor 84 to view the colorwiping in real time. The wipe lever is connected to an adapter 82 to adapt the analog voltages of the wipe lever to digital impulses suitable for digital manipulation.
The digital output of the adapter is fed to both a switcher 83 and a wipe data sampler 70. The switcher switches the color look-up table to values represented by the position of the wipe lever and generates color pixels of the subtitles for display on the monitor. Thus, the operator can visually inspect the colorwiping procedure while it occurs and adjust the speed or color of the wiping to satisfaction.
The wipe data and position sampler 70 determines from the adapter signals where in the video picture the color look-up table is to be changed and outputs this information to the encoding circuits 65, 66 and 67 (via switch 69) for encoding and transmission to the multiplexer 58. Figs. 8a and 8b depict a block diagram of the operation of the wipe data and position sampler. A comparator compares a present pixel signal generated by the adapter with a previous pixel signal from the adapter. This is achieved by transmitting the present pixel value to input A of a comparator 301 while supplying the previous pixel value latched in a register 300 to input B of the comparator 301. The comparator outputs a boolean "true" value to a counter 302 (which is reset at every horizontal or vertical sync pulse) when the present and previous pixels have the same value and the counter increments a count value. That is, the comparator registers a true condition when the pixels up until that point are generated from the same color look-up table. At the point where the color look-up table changes, therefore, the present and previous pixels become unequal (i.e., their color changes) and the comparator generates a "false" boolean condition. The count value, thus, is equal to the number of matches between the present and previous values, which is the same as the position at which the color look-up table changes. The count value is latched by a register 303 upon the following vertical sync pulse and transferred to the encoding circuits (via switch 69) for transmission.
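The comparator-and-counter arrangement of Figs. 8a and 8b can be flattened into a small loop: count while the present pixel matches the previous one, and stop at the first change, which marks the look-up table boundary. The pixel values here stand in for the adapter signals; the function name is invented.

```python
# Sketch of the wipe-position sampler: the count equals the number of
# matches between present and previous pixel values, i.e. the position
# at which the color look-up table changes.
def wipe_position(pixels):
    count = 0
    prev = pixels[0]
    for cur in pixels[1:]:
        if cur != prev:        # comparator goes "false": table boundary
            break
        count += 1             # comparator "true": counter increments
        prev = cur
    return count

# six pixels from one table, then three from the other:
assert wipe_position([1, 1, 1, 1, 1, 1, 2, 2, 2]) == 5
```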
Colorwiping Decoding

Colorwiping decoding will now be discussed with reference to Figs. 12a-c and 13. Fig. 12a shows the position where the color look-up table is switched at point A from a colorwiping look-up table (Fig. 9) to the standard color look-up table (Fig. 6). Fig. 12b depicts a pattern of subtitle and colorwipe data arranged in discrete blocks of presentation time stamps (PTS(n) ... PTS(n+t)). The first presentation time stamp PTS(n) corresponds to normal subtitle data and the remaining presentation time stamps PTS(n+1 ... n+t) correspond to colorwiping data (WPA ... WPZ). Fig. 12c shows successive frames (n ... n+t) which correspond to the presentation time stamps. To execute colorwiping, each successive colorwiping frame (WPA ... WPZ) sets the point where the color look-up table is switched (point A) further along the displayed subtitle, thereby dynamically performing colorwiping as a function of time.
17 13 shows a colorwiping frame WP being latched). The colorwiping 18 data latched by the register indicates the position of the color 19 look up table switching. A pixel counter decrements the value indicated by the colorwiping data at each horizontal sync pulse 21 and outputs a boolean "true" flag to the color look up table 26.
22 While the flag is "true" the color look up table employs the 23 colorwiping table (Fig. 9) to decode the colors of the subtitle 24 pixels. When the pixel counter reaches zero, the position of l *i f p c a -ii 1 color look table switching is reached and the pixel counter 2 issues a boolean "false" flag to the color look up table 26. At 3 this time, the color look table switches the colorwiping color 4 look up table (Fig. 9) to the standard look up table (Fig. 6), and the remainder of the subtitle frame is displayed in standard 6 color mode. Each successive color-wiping frame (WPA WPZ) moves the position of switching; thus, each refreshed subtitle frame advances (or retreats) the colorwiping, thus performing dynamic colorwiping.
The colorwiping color look up table in Fig. 9 I1 incorporates two sets of colors (one set for addresses Oh to 7h and a second set for addresses 8h to Fh). Thus, the colorwiping 13 color can be changed to a secondary color simply by changing the S most significant bit (MSB) of the color look up table address.
For example, the first set of colorwiping colors has an MSB of "0", while the second set has an MSB of "1". Changing the MSB of address 7h to a "1" transforms the address to Fh and the colorwiping color changes. This may be done, for example, by setting the MSB equal to the flag of pixel counter 208.
Employing the MSB to change between color sets has the advantage of reducing the number of bits required to be encoded. Since the MSB is known, only the three lower order bits need to be encoded where 4 bits are employed for every pixel. Where two bits are employed for every pixel, the subtitle data is coded only for the least significant bit. In a 4 bits per pixel format, only the MSB is employed for color control and the remaining three bits can be reserved for pixel information.
Thus, by using the MSB the number of bits encoded can be decreased and the overall processing time for encoding and decoding is optimized.
Dynamic Subtitle Positioning

The subtitles are repositioned dynamically, as a function of time, by employing a similar technique as described above with reference to colorwiping. As shown in Figs. 14a-c, the position data is measured along the horizontal axis (Fig. 14a) and is transferred to the subtitle decoder with the subtitle data during the appropriate frame (Fig. 14c) corresponding to a presentation time stamp (PTS(n), for example; Fig. 14b).
The positioning operation will now be explained with reference to Fig. 15. The position data is a value representing the position of the subtitle frame along the horizontal axis and is read out from the display buffer and latched by register 205 on each vertical sync pulse. Pixel counter 208 decrements the position data on each horizontal sync pulse and sends a boolean flag to the controller 35 (Figs. 2 and 15) to indicate that the position of the subtitle frame has not been reached. When the pixel counter reaches zero, the position of the subtitle frame has been reached and the boolean flag is toggled to indicate this to the controller. The controller, which has been delaying the read operation of the code buffer 22 (Fig. 2), then causes the code buffer to read out the subtitle data to the run-length decoder 24 (Fig. 2). The subtitle data is then decoded as described above and displayed with the corresponding video image.
T The present invention, thus, provides subtitle S colorwiping and dynamic positioning. Since the subtitles are 2. encoded and decoded in real time separately from the audio/video 3 data, the subtitles can be controlled with great flexibility. In S Karaoke, for example, the subtitles may be turned off at any time and instantaneously when it is desired to test the singer:s skill in singing the song;. Colorwiping and dynamic positioning of the 7 subtitles is also performed in real time, allowing an operator to A quickly and easily produce video pictures in mass. Moreover, the 9 results of colorwiping and dynamic positioning may be instantly .0 viewed by an operator and adjusted to satisfaction, providing :1 custom tailoring of each audio/video picture.
2- It will'be appreciated that the present invention is .3 applicable to other applications, such as television or video .4 -graphics. It is, therefore, to be understood that, within the ;i' 39
.I
.I
2 scope of the appended claims, the invention may be practiced otherwise than as specifically described lierei n.

Claims (13)

1. A subtitle position decoding apparatus supplied with multiplexed subtitle data and encoded video data, comprising:
video decoding means for decoding the encoded video data of a video image to be displayed;
buffer means for storing the subtitle data to be displayed as a frame of subtitles contemporaneously with said video image;
control means for timing a read out operation of said subtitle data from said buffer means during a real time display of said video image; and
means for dynamically changing a position in the video image where said frame of subtitles is superimposed during display.
2. The subtitle position decoding apparatus of claim 1 wherein the means for dynamically changing comprises:
latching means for latching a value indicative of said position where said frame of subtitles is to be superimposed; and
counting means for decrementing said value each time a pixel of said video image is displayed,
wherein said control means performs said read out operation when said counting means reaches zero, thereby causing said frame of subtitles to be superimposed with said video image at that time.
3. The subtitle position decoding apparatus of claim 2 further comprising:
delay compensation means for compensating a delay caused by components of the subtitle position decoding apparatus so as to display said frame of subtitles with said video image at a position indicated by decoding information included in said subtitle data.
4. The subtitle position decoding apparatus of claim 2 further comprising subtitle decoding means for decoding said subtitle data stored in said buffer means.
5. The subtitle position decoding apparatus of claim 4 further comprising mixing means for mixing said video data decoded by said video decoding means with said subtitle data decoded by said subtitle decoding means.
6. A subtitle position decoding method for decoding subtitle data multiplexed with encoded video data comprising the steps of:
video decoding the encoded video data of a video image to be displayed;
storing the subtitle data in a buffer to be displayed as a frame of subtitles contemporaneously with said video image;
timing a read out operation of said subtitle data from said buffer during a real time display of said video image; and
dynamically changing a position in the video image where said frame of subtitles is superimposed during display.
7. The subtitle position decoding method of claim 6 wherein the position of said frame of subtitles is dynamically changed by:
latching a value indicative of said position where said frame of subtitles is to be superimposed; and
decrementing said value each time a pixel of said video image is displayed,
wherein said read out operation is performed when said value is decremented to zero, causing said frame of subtitles to be superimposed with said video image at that time.
8. The subtitle position decoding method of claim 7 further comprising delay compensating an inherent delay caused by the subtitle position decoding method.
9. The subtitle position decoding method of claim 7 further comprising decoding said subtitle data stored in said buffer.
10. The subtitle position decoding method of claim 9 further comprising mixing said decoded video data with said decoded subtitle data.
11. The subtitle position decoding method of claim 6 further comprising repeating said steps of video decoding, storing, reading and dynamically changing for different frames of subtitles having different positions whereat the frame of subtitles is to be superimposed on the video image.
12. A subtitle position decoding apparatus substantially as herein described with reference to any one of the embodiments of the invention shown in the is accompanying drawings.
13. A subtitle position decoding method substantially as herein described with reference to any one of the embodiments of the invention shown in the accompanying drawings. DATED this Twenty-second Day of February 1999 Sony Corporation SPatent Attorneys for the Applicant SPRUSON FERGUSON 2-- i .SilN rMli|o|oiq7,MXI~ H
AU18422/99A 1995-04-03 1999-02-25 Subtitle positioning method and apparatus Expired AU726256B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU18422/99A AU726256B2 (en) 1995-04-03 1999-02-25 Subtitle positioning method and apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP7-99436 1995-04-03
JP9943695A JPH08275205A (en) 1995-04-03 1995-04-03 Method and device for data coding/decoding and coded data recording medium
AU50457/96A AU707272B2 (en) 1995-04-03 1996-04-02 Subtitle colorwiping and positioning method and apparatus
AU18422/99A AU726256B2 (en) 1995-04-03 1999-02-25 Subtitle positioning method and apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU50457/96A Division AU707272B2 (en) 1995-04-03 1996-04-02 Subtitle colorwiping and positioning method and apparatus

Publications (2)

Publication Number Publication Date
AU1842299A true AU1842299A (en) 1999-04-29
AU726256B2 AU726256B2 (en) 2000-11-02

Family

ID=25628924

Family Applications (1)

Application Number Title Priority Date Filing Date
AU18422/99A Expired AU726256B2 (en) 1995-04-03 1999-02-25 Subtitle positioning method and apparatus

Country Status (1)

Country Link
AU (1) AU726256B2 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1158810B1 (en) * 1993-06-30 2003-05-14 Sony Corporation Recording medium
US5489947A (en) * 1994-06-17 1996-02-06 Thomson Consumer Electronics, Inc. On screen display arrangement for a digital video signal processing system

Also Published As

Publication number Publication date
AU726256B2 (en) 2000-11-02

Similar Documents

Publication Publication Date Title
AU707272B2 (en) Subtitle colorwiping and positioning method and apparatus
AU700439B2 (en) Multiple data stream searching method and apparatus
US6424792B1 (en) Subtitle encoding/decoding method and apparatus
EP1301043B1 (en) Subtitle encoding/decoding
KR100390593B1 (en) Method and apparatus for encoding / decoding subtitle data and recording medium therefor
US6115077A (en) Apparatus and method for encoding and decoding digital video data operable to remove noise from subtitle date included therewith
MXPA96002842A (en) Method and multip data current search system
MXPA96003105A (en) Method and device for coding / decoding data and decoding means of data codifica
AU726256B2 (en) Subtitle positioning method and apparatus
JPH07231435A (en) Caption data encoding device and decoding device for it and recording medium
JP4391187B2 (en) Data decoding method and apparatus

Legal Events

Date Code Title Description
FGA Letters patent sealed or granted (standard patent)