JP5110521B2 - Subtitled video playback device and program - Google Patents

Publication number: JP5110521B2
Authority: JP (Japan)
Prior art keywords: subtitle, time, reproduction, sentence, caption
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: JP2008004281A
Other languages: Japanese (ja)
Other versions: JP2009171015A (en)
Inventors: Masahiro Yamazaki, Tomoya Ozaki, Masaki Wakabayashi
Original assignee: NEC CASIO Mobile Communications, Ltd.
Application filed by NEC CASIO Mobile Communications, Ltd.
Priority to JP2008004281A
Publication of JP2009171015A (application) and JP5110521B2 (grant)

Description

  The present invention relates to a captioned video playback apparatus and program for playing back captioned video.

Conventionally, there is a subtitled video playback device that extracts subtitle text, with a display time attached, from a stream and, when the user selects a subtitle text, combines and displays that subtitle text with the video corresponding to its display time (see, for example, Patent Document 1).
Patent Document 1: Japanese Patent Laid-Open No. 2003-18491 (page 3, FIG. 1)

  In conventional subtitled video playback devices, when subtitle texts are provided in multiple languages, the displayed subtitle text switches when the elapsed time from the start of program playback reaches a predetermined time. As a result, when the language is switched, the content of the subtitle text before switching may differ from the content of the subtitle text after switching.

  For example, consider a conventional subtitled video playback device that plays back Japanese and English subtitle sentences. As a premise, in the Japanese subtitle text, "Good morning." is played from the start of program reproduction (elapsed time 0) until the elapsed time Tj1, and "Hello." is played from the elapsed time Tj1 until Tj2. In the English subtitle text, "Good Morning." is played from elapsed time 0 until Te1, and "Hello." is played from the elapsed time Te1 until Te2. Further, assume that Tj1 < Te1 < Tj2.

  In this case, at an elapsed time T (Tj1 < T < Te1), the Japanese subtitle sentence is "Hello." while the English subtitle sentence is "Good Morning.". Therefore, if the subtitle language is switched from Japanese to English at the elapsed time T, the displayed subtitle sentence switches from "Hello." to "Good Morning.".

  As described above, in conventional subtitled video playback devices, when the language of the subtitle text is switched, the content of the subtitle text before switching may differ from the content of the subtitle text after switching.

  The present invention has been made in view of the above circumstances, and an object thereof is to provide a subtitled video playback apparatus in which, even if the language of the subtitle text is switched, the content of the subtitle text before switching does not conflict with the content of the subtitle text after switching.

In order to achieve the above object, a video playback device with subtitles according to the first aspect of the present invention provides:
Multiple videos,
A plurality of first subtitle sentences composed of a first language;
A plurality of second subtitle sentences composed of a second language;
A plurality of first time information indicating a reproduction start time for starting reproduction of each first subtitle sentence;
A plurality of second time information indicating a reproduction start time for starting reproduction of each second subtitle sentence;
Storage means for storing the plurality of videos, the plurality of first subtitle sentences, and the plurality of second subtitle sentences in association with each other based on the reproduction order, for storing the plurality of first subtitle sentences and the plurality of first time information in association with each other, and for storing the plurality of second subtitle sentences and the plurality of second time information in association with each other;
Subtitle sentence selecting means for selecting any subtitle sentence from a plurality of subtitle sentences stored in the storage means;
Subtitle sentence reproduction means for sequentially reproducing subtitle sentences in accordance with the reproduction order from the subtitle sentence selected by the subtitle sentence selection means;
Video playback means for playing back video corresponding to the caption text being played back by the caption text playback means;
Switching means for switching the language constituting the caption sentence selected by the caption sentence selecting means;
And control means for, when the switching means switches languages, obtaining the first reproduction start time corresponding to the subtitle sentence selected by the subtitle sentence selecting means and a plurality of second reproduction start times corresponding to the plurality of subtitle sentences composed in the language switched to by the switching means, determining the second reproduction start time having the smallest difference from the first reproduction start time among the plurality of second reproduction start times, and, when it is determined that the difference between the first reproduction start time and that second reproduction start time is smaller than a predetermined time, controlling the subtitle sentence selecting means so as to select the subtitle sentence corresponding to that second reproduction start time.
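The selection performed by the control means can be sketched in Python as follows. This is an illustrative sketch, not part of the patent disclosure: the function name, the list representation of the reproduction start times, and the numeric values are all invented for the example.

```python
def select_subtitle_after_switch(current_start, new_lang_starts, threshold):
    """Select a subtitle sentence in the switched-to language.

    current_start: first reproduction start time (the sentence
        currently selected in the original language).
    new_lang_starts: second reproduction start times of the subtitle
        sentences in the switched-to language.
    threshold: the 'predetermined time' of the claim.

    Returns the index of the sentence to select, or None if no start
    time is close enough.
    """
    # Determine the second reproduction start time with the smallest
    # difference from the first reproduction start time.
    best_index = min(range(len(new_lang_starts)),
                     key=lambda i: abs(new_lang_starts[i] - current_start))
    if abs(new_lang_starts[best_index] - current_start) < threshold:
        return best_index
    return None

# Using the example from the description: the Japanese "Hello." starts
# at Tj1 = 10, the English "Hello." starts at Te1 = 12, Tj1 < Te1.
print(select_subtitle_after_switch(10.0, [0.0, 12.0], threshold=5.0))
# selects index 1 ("Hello."), since |12 - 10| = 2 < 5
```

With this logic, switching languages at elapsed time T lands on the sentence with the same content ("Hello.") rather than on "Good Morning.".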

In order to achieve the above object, a program according to the second aspect of the present invention causes a computer to execute:
A storage step of storing a plurality of videos, a plurality of first subtitle sentences, and a plurality of second subtitle sentences in association with each other based on the playback order, of storing the plurality of first subtitle sentences and a plurality of first time information, indicating reproduction start times for starting reproduction of each first subtitle sentence, in association with each other, and of storing the plurality of second subtitle sentences and a plurality of second time information, indicating reproduction start times for starting reproduction of each second subtitle sentence, in association with each other;
A subtitle sentence selection step of selecting any subtitle sentence from a plurality of subtitle sentences stored in the storage step;
A subtitle sentence reproduction step of sequentially reproducing subtitle sentences from the subtitle sentence selected in the subtitle sentence selection step according to the reproduction order;
A video playback step of playing back a video corresponding to the subtitle text being played back in the subtitle text playback step;
A switching step of switching languages constituting the subtitle sentence selected in the subtitle sentence selection step;
And a control step of, when the language is switched in the switching step, obtaining the first playback start time corresponding to the subtitle sentence selected in the subtitle sentence selection step and a plurality of second playback start times corresponding to the plurality of subtitle sentences composed in the language switched to in the switching step, determining the second playback start time having the smallest difference from the first playback start time among the plurality of second playback start times, and, when it is determined that the difference between the first playback start time and that second playback start time is smaller than a predetermined time, controlling so as to select the subtitle sentence corresponding to that second playback start time.

  According to the present invention, it is possible to provide a subtitled video playback device in which the content of the subtitle text before the language is switched does not conflict with the content of the subtitle text after switching.

(Embodiment 1)
Hereinafter, a captioned video reproduction apparatus according to an embodiment of the present invention will be described with reference to the drawings. In the present embodiment, the video playback device with captions will be described as a mobile phone.
An appearance of the mobile phone according to the present embodiment is shown in FIG.
The mobile phone 1 is, for example, a folding type that is divided into an upper housing 1a and a lower housing 1b.

FIG. 2 shows the configuration of the mobile phone 1 according to this embodiment.
The cellular phone 1 according to this embodiment includes an antenna 11, a communication unit 12, an audio microphone 13, an audio speaker 14, an antenna 15, a tuner 16, a decoding unit 17, a storage unit 18, a DAC 19, a speaker 20, a display unit 21, an operation unit 22, a control unit 23, and a bus 24.

  The mobile phone 1 is capable of displaying a caption text and a video and displaying a caption list, and is configured to be able to select a video based on the caption text selected in the caption list.

  The bus 24 transmits data between the units. The control unit 23, the storage unit 18, the DAC 19, and the display unit 21 are connected via a bus 24.

  The antenna 11 transmits and receives radio signals, such as call voice and various data, to and from a base station (not shown), and is provided in the mobile phone 1 as shown in FIG. 1.

The communication unit 12 performs signal processing of radio signals transmitted and received by the antenna 11.
As shown in FIG. 1, the audio microphone 13 is provided in the lower housing 1b; it collects audio during a call or the like, converts it into an audio signal, and supplies the converted audio signal to the communication unit 12.

  The audio speaker 14 is provided in the upper housing 1a as shown in FIG. 1, and outputs audio based on an audio signal during a call or the like.

  The antenna 15 receives radio signals of one-segment broadcasting and is built into the mobile phone 1. A one-segment broadcast signal includes video data, audio data, and caption data as broadcast program content data. The caption data is data for displaying caption sentences in a plurality of languages (for example, a Japanese caption sentence and an English caption sentence). The radio signal received by the antenna 15 has a format conforming to the data transfer method used in one-segment broadcasting, for example.

  The tuner 16 and the decoding unit 17 analyze the broadcast signal received by the antenna 15 and acquire video data, audio data, and caption data. The tuner 16 demodulates the one-segment broadcasting signal from the radio signal received by the antenna 15, and supplies the demodulated demodulated signal to the decoding unit 17.

  The decoding unit 17 decodes the supplied demodulated signal and generates a TS (Transport Stream) as shown in FIG. 3A.

  The TS is a stream in which TSPs (Transport Stream Packets) are multiplexed; the TSPs include video packets V carrying video data, audio packets A carrying audio data, and subtitle packets C carrying subtitle data.

  Each TSP is composed of a TS header and a payload, as shown in FIG. 3B. The TS header is an area for storing data indicating the start of the TSP, a packet identifier, and the like.

  The payload is an area for storing data; a PES (Packetized Elementary Stream) packet as shown in FIG. 3C is divided and stored in it. The PES is composed of the data stored in the successive payloads.

  The PES is composed of a PES header and a PES payload. The PES header is an area in which data indicating the start of PES, data such as a packet identifier and a packet length are stored. The PES payload is an area in which ES (Elementary Stream) is divided and stored, and video data, audio data, and caption data are compression-coded and stored in the PES payload.

  Based on the information stored in the TS header, the decoding unit 17 separates the video packet V, the audio packet A, and the caption packet C from the TS, and extracts the PES from the payload of each TSP.

  Further, based on the data stored in the PES headers, the decoding unit 17 generates a video PES containing the video data, an audio PES containing the audio data, and a subtitle PES containing the subtitle data, as shown in FIG. 3D.

  A TSP has a fixed length, whereas a PES has a variable length. For this reason, a TSP may include an adaptation field to keep the length of the TSP constant.
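The separation of packets by the decoding unit can be illustrated with a small Python sketch of TS header parsing. The field layout follows the MPEG-2 TS format (ISO/IEC 13818-1), which one-segment broadcasting also uses; the patent does not give this byte-level detail, so this is background, and the sample packet below is synthetic.

```python
def parse_ts_header(packet):
    """Parse the 4-byte TS header of one 188-byte TSP (MPEG-2 TS layout)."""
    assert len(packet) == 188 and packet[0] == 0x47, "sync byte must be 0x47"
    pid = ((packet[1] & 0x1F) << 8) | packet[2]    # packet identifier
    payload_unit_start = bool(packet[1] & 0x40)    # a PES packet starts here?
    adaptation = (packet[3] >> 4) & 0x3            # adaptation field control
    return {"pid": pid, "pusi": payload_unit_start,
            "adaptation_field_control": adaptation}

# A minimal synthetic TSP: sync byte, PUSI set, PID 0x0111, payload only.
tsp = bytes([0x47, 0x41, 0x11, 0x10]) + bytes(184)
print(parse_ts_header(tsp))
```

Demultiplexing video packets V, audio packets A, and subtitle packets C then amounts to grouping TSPs by their PID and reassembling each PES from the payloads.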

  The caption PES is composed of a PES header and a PES payload as shown in FIG. 4A.

  As shown in FIG. 4B, the caption text data of each language is stored as a data group, which is composed of a data group header, data group data, and a CRC (Cyclic Redundancy Check) code. The data group header stores a parameter identifying the language type and a parameter indicating the size of the data group data.

  In particular, data_group_id, the parameter that identifies the language type, takes values such as 0x01. If data_group_id is 0x01, there is one subtitle language. If data_group_id is 0x21 or 0x22, there are two subtitle languages: 0x21 indicates a subtitle sentence in the first language, and 0x22 indicates a subtitle sentence in the second language.

  As shown in FIG. 4C, the caption management data referred to when managing caption text stores data indicating the time control mode (TMD), the number of languages (num_language), the language identification (language_tag), and the like. The maximum number of languages included in one-segment broadcasting data is two per ES. Therefore, the number of languages (num_language) is 1 or 2, and the language identification (language_tag) is 0 (first language) or 1 (second language).
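The mapping from data_group_id to language information described above can be written out directly. This is a sketch of the check later used in step S21; only the values mentioned in the text are handled, and the dictionary keys are invented names.

```python
def interpret_data_group_id(data_group_id):
    """Map data_group_id to the number of subtitle languages and the
    language the data group belongs to, per the values in the text."""
    if data_group_id == 0x01:
        return {"num_languages": 1, "language": "first"}
    if data_group_id == 0x21:
        return {"num_languages": 2, "language": "first"}
    if data_group_id == 0x22:
        return {"num_languages": 2, "language": "second"}
    raise ValueError(f"unhandled data_group_id: {data_group_id:#x}")

print(interpret_data_group_id(0x21))
```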

  In the data group data, as shown in FIG. 4D, data indicating the text of the caption text is stored as caption data.

  FIG. 5 shows an example of a TV signal SG containing video and captions displayed on the display unit 21. The mobile phone 1 converts the content recording data into the format of the TV signal SG, further converts it into displayable video data, and reproduces the video.

  Data in one-segment broadcasting conforms to a predictive coding method, and the video PES includes IDR (Instantaneous Decoder Refresh) frames and P frames.

  In FIG. 5, Id_1, Id_2, Id_3, and Id_q indicate IDR frames, and P11, P12, P21, P22, and Pq1 indicate P frames.

  IDR frames Id_1 to Id_q are frames that do not refer to other video frames and correspond to independent video packets VI. P frames P11 to Pq1 are frames that refer to IDR frames Id_1 to Id_q that are reproduced prior to their own reproduction timing, and correspond to the reference video packet VP.

  The playback timing Tf is stored in the PES header of the video PES. The reproduction timing Tf is data indicating the reproduction time of the first video of the TV signal SG to be constructed. Further, the TSP header includes a seek point TR indicating the display time of the IDR frames Id_1 to Id_q.

  In FIG. 5, TR1, TR2, TR3, and TRq indicate elapsed times of the IDR frames Id_1, Id_2, Id_3, and Id_q from the reproduction timing Tf, respectively.

  In the PES payload of the caption PES, compression-coded captions (data) M1 to Mp (p: integer) are stored. The caption texts M1 to Mp are data indicating the contents of the caption to be added to the video.

  Each PES header of the caption PES includes playback start times T1 to Tp and control codes CS1 to CSp.

  The reproduction start times T1 to Tp are time information indicating the timing for displaying the caption texts M1 to Mp, respectively. Control codes CS1 to CSp are information indicating delimiters of the caption texts M1 to Mp, respectively, and indicate timings for switching the display of the caption texts M1 to Mp.

  In the TS header shown in FIG. 3B, reference time information called PCR (Program Clock Reference) is stored. A PTS (Presentation Time Stamp) is stored in the PES header. This PTS is time information indicating a time for reproducing an ES (Elementary Stream) in the PES, and is expressed based on a clock frequency of 90 kHz.

  The mobile phone 1 interprets the moment it receives the PCR as the time indicated by the PCR value and synchronizes its STC (System Time Clock; reference time) with the PCR. As a result, the clocks of all receivers (mobile phones 1) are synchronized.

  The reproduction timing Tf, reproduction start times T1 to Tp, and elapsed times TR1 to TRq are stored as data in units of PTS.
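Because PTS values count ticks of a 90 kHz clock, converting them to an elapsed time in hours, minutes, and seconds (as done later in steps S13 and S14) is a matter of subtracting the reproduction timing Tf and dividing by the clock rate. A minimal sketch, with invented function names:

```python
PTS_CLOCK_HZ = 90_000  # PTS values count ticks of a 90 kHz clock

def pts_to_hms(pts, reproduction_timing_tf):
    """Convert a PTS value into an elapsed time (h, m, s) measured
    from the reproduction timing Tf."""
    elapsed_ticks = pts - reproduction_timing_tf
    total_seconds = elapsed_ticks // PTS_CLOCK_HZ
    hours, rest = divmod(total_seconds, 3600)
    minutes, seconds = divmod(rest, 60)
    return hours, minutes, seconds

# A subtitle whose PTS lies 125 seconds' worth of ticks after Tf:
tf = 900_000
print(pts_to_hms(tf + 125 * PTS_CLOCK_HZ, tf))  # (0, 2, 5)
```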

  The decoding unit 17 supplies the TS header and PES header information to the control unit 23 together with the decoded video PES, audio PES, and subtitle PES.

  The storage unit 18 stores various data, and includes a nonvolatile memory such as a flash memory, an internal memory such as a RAM (Random Access Memory), and an external recording medium.

  As shown in FIG. 6, the storage unit 18 includes a content recording data storage area 181, a caption management data storage area 182, a seek point management information storage area 183, and a display data storage area 184.

  The content recording data storage area 181 is an area for storing video PES, audio PES, and subtitle PES analyzed by the decoding unit 17 as data.

  The caption management data storage area 182 is an area for storing caption management data as shown in FIG. 7. The caption management data is table data that associates "reproduction start time T1-i" (i = 1 to N) with "caption sentence M1-i", and is generated by the control unit 23.

  "Reproduction start time T1-i" indicates the time at which the caption sentence M1-i is displayed. In the present embodiment, the reproduction start time T1-i, that is, the reference time for displaying a caption sentence M, is expressed as an offset from the reproduction timing Tf at the start of reproduction of the content recording data (the reproduction time of the first video frame of the TV signal).

  When the received one-segment broadcast signal includes subtitle sentence data in two languages, the subtitle management data storage area 182 of the storage unit 18 stores first language subtitle management data 1821 and second language subtitle management data 1822, as shown in FIG. 7.

  The seek point management information storage area 183 is an area for storing seek point management information. As shown in FIG. 8, the seek point management information is table data that associates the seek point TRj (j = 1 to q) with the video position PRj, and is generated by the control unit 23.

  “Seek point TRj” indicates the time at which each video frame is displayed, and is expressed in units of PTS. “Video position PRj” is data indicating the position of the video frame in the content recording data. In the case of one-segment broadcasting, “video position PRj” is usually a TSP number including the head of the IDR frame.
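The seek point management information supports jumping playback to the IDR frame nearest a chosen time: given a target time, look up the last seek point TRj at or before it and read off the video position PRj. The sketch below assumes the seek points are kept sorted; the table layout and function names are illustrative, not taken from the patent.

```python
import bisect

def find_video_position(seek_points, video_positions, start_time):
    """Return the video position PRj of the last IDR frame whose seek
    point TRj is at or before start_time (all times in PTS units)."""
    j = bisect.bisect_right(seek_points, start_time) - 1
    if j < 0:
        return video_positions[0]  # before the first IDR frame
    return video_positions[j]

# Seek points TRj in PTS units, video positions PRj as TSP numbers:
tr = [0, 90_000, 180_000, 270_000]
pr = [0, 150, 310, 480]
print(find_video_position(tr, pr, 200_000))  # IDR at TR=180_000 -> TSP 310
```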

  The display data storage area 184 is an area for storing data such as video and subtitles displayed on the display unit 21.

  Further, the storage unit 18 stores data of an operation control program for the control unit 23 and also functions as a work memory for the control unit 23.

  The DAC 19 converts the digital audio signal (audio data) from the control unit 23 into an analog audio signal, and supplies the converted analog audio signal to the speaker 20.

  The speaker 20 outputs sound based on the analog audio signal supplied from the DAC 19, and is provided in the upper housing 1a as shown in FIG.

  The display unit 21 includes a dot matrix type LCD (Liquid Crystal Display) 31 and a driver circuit (not shown) shown in FIG. 1 and displays subtitles reproduced together with video on the LCD 31. The LCD 31 is provided in the upper housing 1a.

  The display unit 21 reads data from the display data storage area 184 of the storage unit 18 under the control of the control unit 23, and displays the subtitles reproduced along with the video based on the read data on the LCD 31.

  The mobile phone 1 has two modes of “viewing mode with subtitle list” and “normal viewing mode”, and the display unit 21 displays a screen corresponding to the mode on the LCD 31.

  The “viewing mode with subtitle list” is a mode in which subtitles corresponding to the video are displayed and the subtitle list is displayed so that the video can be selected from the subtitle list. “Normal viewing mode” is a mode in which subtitles and video are displayed without displaying a subtitle list.

  When “viewing mode with subtitle list” is selected, the display unit 21 displays a screen D1 as shown in FIG.

  In the “viewing mode with subtitle list” screen, the video area RA is an area for displaying video. The display unit 21 displays a video with a reproduction timing corresponding to the reproduction timing of the subtitle selected from the subtitle list in the video area RA.

  The caption area MA is an area for displaying captions. The display unit 21 displays the caption text selected by the user from the caption list in the caption area MA.

  The subtitle list area LA is an area for displaying a plurality of subtitle sentences arranged in a list format. The display unit 21 displays, for example, five subtitle list branch areas LA1 to LA5 in the subtitle list area LA.

  The soft key area FA is an area for displaying soft keys Soft1 and Soft2. The soft key Soft1 is used to indicate that the current mode is “viewing mode with subtitle list”. The soft key Soft2 is used to indicate that the current mode is the “normal viewing mode”. The soft keys Soft1 and Soft2 are selected by operating the keyboard 32.

  The picto area PA is an area for displaying operation information of the mobile phone 1. The display unit 21 displays the radio wave reception status, the remaining battery level, the current date and time, etc. of the mobile phone 1 in this pictogram area PA.

  The "normal viewing mode" is a mode for displaying the subtitles reproduced together with the video without displaying the subtitle list. When "normal viewing mode" is selected, the display unit 21 displays on the LCD 31 a screen D2 in which the caption list area LA is not displayed, as shown in FIG. 9B.

  In addition, when the language switching operation is performed in the viewing mode with caption list in FIG. 9A, the language of the caption text is switched as shown in FIG. 9C.

The operation unit 22 accepts input such as an instruction to start or end reception of a television broadcast, an instruction to start or end content recording when the user records a TV program, and a password entered at the time of user authentication.
The operation unit 22 includes a keyboard 32 as shown in FIG. 1. The keyboard 32 includes a power switch, cursor keys 33, a numeric keypad, and the like, and is used for turning the power of the mobile phone 1 on and off, switching modes, inputting numbers, characters, and symbols, selecting menus, and receiving one-segment broadcasting.
The operation unit 22 also includes a “language switching key” for switching the language of the displayed caption text.

  The cursor key 33 is a key for scrolling the cursor displayed on the screen and selecting an arbitrary subtitle sentence from the subtitle list.

  The control unit 23 includes a microprocessor unit and the like, and controls the operation of the entire mobile phone 1 according to operation control program data stored in the storage unit 18.

  The control unit 23 controls the operation of the mobile phone 1 based on the operation information supplied from the operation unit 22.

Next, the operation of the mobile phone 1 according to this embodiment will be described.
The control unit 23 turns on and off the power of the mobile phone 1 every time operation information indicating that the power switch of the keyboard 32 is pressed is supplied from the operation unit 22.

  When the operation information for selecting the menu for receiving the broadcast signal of the one-segment broadcast is supplied from the operation unit 22, the control unit 23 controls the antenna 15, the tuner 16, and the decoding unit 17 to start reception of the one-segment broadcast. .

  The antenna 15 receives a one-segment broadcast radio signal, and the tuner 16 demodulates the one-segment broadcast signal from the radio signal received by the antenna 15 and supplies the demodulated demodulated signal to the decoding unit 17.

  The decoding unit 17 separates the video data, audio data, and subtitle data included in the supplied content data, decodes them, and supplies the decoded video PES, audio PES, and subtitle PES to the control unit 23.

  When operation information indicating that the menu for recording one-segment broadcast content has been selected is supplied from the operation unit 22, the control unit 23 records the one-segment broadcast.

  That is, the control unit 23 stores the video PES, audio PES, and subtitle PES supplied from the decoding unit 17 in the content recording data storage area 181 of the storage unit 18.

  When storing the video PES, the audio PES, and the caption PES in the content recording data storage area 181, the control unit 23 reads the program data for the caption management data generation process and executes the caption management data generation process according to the flowchart shown in FIG.

  First, caption management data generation processing when caption text is written in one language will be described.

  The control unit 23 determines whether or not the caption PES can be acquired (step S11).

  If it is determined that the caption PES cannot be acquired (step S11: No), the control unit 23 ends the caption management data generation process.

  When it is determined that the caption PES can be acquired (step S11: Yes), the control unit 23 determines whether a caption sentence exists (step S12).

  When it is determined that the caption sentence Mi does not exist (step S12: No), the control unit 23 determines again whether the caption PES can be acquired (step S11).

  When it is determined that the caption sentence Mi exists (step S12: Yes), the control unit 23 analyzes the acquired caption PES and subtracts the reproduction timing Tf from the PTS stored in the PES header of the caption PES to obtain the elapsed time from the first video frame (step S13).

  The control unit 23 converts the obtained elapsed time in PTS units into an elapsed time in hours, minutes, and seconds (step S14), uses the converted elapsed time as the reproduction start time Ti, and registers this reproduction start time Ti in the caption management data in the caption management data storage area 182 of the storage unit 18 (step S15).

  The control unit 23 divides the data with the control code CS included in the subtitle PES, and acquires the subtitle sentence Mi immediately before the control code CS. The control unit 23 registers the acquired subtitle sentence Mi in the subtitle management data in the subtitle management data storage area 182 of the storage unit 18 (step S16).

  The control unit 23 determines whether the caption sentence Mi is the last data (step S17): if there is a control code CS, it determines that the caption sentence Mi is not the last data, and if there is no control code CS, it determines that the caption sentence Mi is the last data.

  When it is determined that the caption text Mi is not the last data (step S17: No), the control unit 23 executes steps S11 to S16 again.

  When it is determined that the caption text Mi is the last data (step S17: Yes), the control unit 23 ends the caption management data generation process.
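The single-language loop of steps S11 to S17 can be sketched as follows. This is an illustrative model, not the patent's implementation: each caption PES is represented as a (PTS, payload) pair, the control code CS is modeled as the literal marker "<CS>", and the elapsed time is kept in whole seconds rather than hours/minutes/seconds for brevity.

```python
def build_caption_management_data(caption_pes_list, tf):
    """Build caption management data: a table of (reproduction start
    time Ti, caption sentence Mi) pairs, per steps S11-S17."""
    table = []
    for pts, payload in caption_pes_list:
        if not payload:
            continue                             # step S12: No caption sentence
        elapsed = (pts - tf) // 90_000           # step S13, PTS ticks -> seconds
        sentence = payload.split("<CS>")[0]      # text immediately before CS
        table.append((elapsed, sentence))        # steps S15-S16: register
        if "<CS>" not in payload:
            break                                # step S17: no CS -> last data
    return table

pes = [(900_000, "Good morning.<CS>"), (1_800_000, "Hello.")]
print(build_caption_management_data(pes, tf=900_000))
# [(0, 'Good morning.'), (10, 'Hello.')]
```

The two-language case described next runs the same loop twice, routing each entry into the first or second language table according to data_group_id.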

  Next, caption management data generation processing in the case where a caption text is written in two languages (first language and second language) will be described with reference to the flowchart shown in FIG.

  First, the control unit 23 executes processing from step S11 to step S14 in the flowchart of FIG.

  Next, the control unit 23 determines whether the caption sentence whose elapsed time was converted in step S14 is a caption for the first language (step S21). Here, the control unit 23 refers to data_group_id to determine whether the caption sentence is written in the first language: when data_group_id is 0x21, it determines that the language is the first language, and when it is 0x22, it determines that the language is the second language.

  When it is determined that the subtitle is a subtitle for the first language (step S21: Yes), the control unit 23 uses the converted elapsed time as the reproduction start time T1-i and registers the reproduction start time T1-i in the first language subtitle management data 1821 in the subtitle management data storage area 182 of the storage unit 18 (step S22).

  The control unit 23 separates the data with the control code CS included in the caption PES, and acquires the caption text M1-i immediately before the control code CS. The control unit 23 registers the acquired subtitle sentence M1-i in the first language subtitle management data 1821 in the subtitle management data storage area 182 of the storage unit 18 (step S23).

  The control unit 23 determines whether the caption sentence M1-i is the last data (step S17): if there is a control code CS, it determines that the caption sentence M1-i is not the last data, and if there is no control code CS, it determines that the caption sentence M1-i is the last data.

  When the first language subtitle management data 1821 is generated, the control unit 23 starts to generate the second language subtitle management data 1822 (step S17: No).

  Similar to the case of generating the first language subtitle management data, the control unit 23 executes the processing from step S11 to step S14 in the flowchart of FIG.

  Next, the control unit 23 determines whether the caption sentence whose elapsed time was converted in step S14 is a caption for the first language (step S21). Since the first language subtitle management data 1821 has already been generated, the control unit 23 determines that it is not a subtitle for the first language (step S21: No).

  The control unit 23 uses the converted elapsed time as the reproduction start time T2-i, and registers the reproduction start time T2-i in the second language subtitle management data 1822 in the subtitle management data storage area 182 of the storage unit 18 ( Step S24).

  The control unit 23 divides the data with the control code CS included in the caption PES, and acquires the caption text M2-i immediately before the control code CS. The control unit 23 registers the acquired subtitle sentence M2-i in the second language subtitle management data 1822 in the subtitle management data storage area 182 of the storage unit 18 (step S25).

  The control unit 23 determines whether this caption sentence M2-i is the last data (step S17): if there is a control code CS, it determines that the caption sentence M2-i is not the last data, and if there is no control code CS, it determines that the caption sentence M2-i is the last data.

  When the subtitle management data generation process is executed in this way, the control unit 23 sequentially analyzes the TSP based on the TS header data supplied from the decoding unit 17. The control unit 23 registers the analyzed data in the seek point management information in the seek point management information storage area 183 of the storage unit 18 to generate seek point management information.

  Next, the process by which the control unit 23 switches the language of the subtitle text being played back when the subtitles of the program played back by the mobile phone 1 are bilingual will be described with reference to the flowchart shown in FIG.

  The control unit 23 reproduces the captioned video with the first language captions. At this time, the control unit 23 sets the “caption management data flag” to 0. The “caption management data flag” is a software variable stored in the storage unit 18. If the value of this variable is 0, the control unit 23 reproduces the captioned video in the first language; if it is 1, it reproduces the captioned video in the second language.

  When the “language switch key” provided on the keyboard 32 is pressed and operation information indicating that the “language switch key” has been pressed is supplied from the operation unit 22, the control unit 23 starts the processing shown in FIG.

  First, the control unit 23 acquires a currently set caption management data flag (step S31).

  When the caption management data flag is acquired, the control unit 23 determines the caption management data to be switched to (step S32).

  The control unit 23 determines whether there is subtitle management data as a switching destination (step S33). When it is determined that there is no switching destination caption management data (step S33: No), the control unit 23 ends the language switching process.

  When it is determined that there is switching destination caption management data (step S33: Yes), the control unit 23 sets the caption management data flag to the value corresponding to the switching destination caption management data (step S34).

  The control unit 23 acquires the reproduction start time T1-i of the currently selected subtitle sentence M1-i in the first language from the subtitle management data for the first language (step S35).

  The control unit 23 determines, from the switching destination second language subtitle management data, the reproduction start time T2-i having the smallest difference from the reproduction start time T1-i, and determines the caption sentence M2-i corresponding to the reproduction start time T2-i (step S36).

  When the subtitle sentence M2-i is determined, the control unit 23 selects the subtitle sentence M2-i, displays the subtitle sentence M2-i at the center of the subtitle list (step S37), and ends the language switching process.

  Specifically, for example, a case where the first language is Japanese and the second language is English will be described. At this time, the storage unit 18 stores subtitle management data 1821 and 1822 as shown in FIG.

  While the control unit 23 has the subtitle sentence M1-3 (“Today is very nice weather”) selected, the “language switch key” provided on the keyboard 32 is pressed, and operation information for switching the language is supplied from the operation unit 22 to the control unit 23.

  The control unit 23 acquires the caption management data flag, whose value is 0 (processing in step S31). Next, the control unit 23 determines whether or not there is subtitle management data to switch to (processing in step S33), and determines that the subtitle management data 1822 exists (step S33: Yes). The control unit 23 updates the subtitle management data flag from 0 to 1 (processing in step S34), refers to the subtitle management data 1821, and acquires the reproduction start time T1-3 (“00:00:13”) of the currently selected subtitle sentence M1-3 (processing in step S35).

  Next, the control unit 23 refers to the caption management data 1822 and determines the reproduction start time T2-3 (“00:00:26”) having the smallest difference from the reproduction start time T1-3, and the subtitle sentence M2-3 (“It's fine today.”) corresponding to T2-3 (processing in step S36). The control unit 23 selects the determined subtitle sentence M2-3 and displays it in the center of the subtitle list (step S37).
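The comparison above presumes the “HH:MM:SS” reproduction start times are reduced to a common unit before the difference is taken. A minimal helper sketching that conversion (the function name is illustrative, not from the patent):

```python
def to_seconds(hms):
    """Convert an 'HH:MM:SS' reproduction start time, as stored in the
    subtitle management data (e.g. '00:00:13'), into whole seconds."""
    h, m, s = (int(part) for part in hms.split(":"))
    return 3600 * h + 60 * m + s

# Difference between the two start times in the example above:
print(abs(to_seconds("00:00:26") - to_seconds("00:00:13")))  # -> 13
```

With both times in seconds, the comparison of step S36 becomes a plain absolute difference.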

  Language switching is executed by pressing a “language switching key” provided on the operation unit 22. The “language switching key” may be a soft key shown in FIGS.

  In this way, when switching languages, selecting the subtitle sentence in the switching destination language that corresponds to the reproduction start time with the smallest difference from the reproduction start time of the currently selected subtitle sentence allows the language to be switched without any mismatch between the subtitle contents before and after the switch.
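The selection rule of step S36 can be sketched as follows; this is an illustrative Python sketch, not the patent's implementation, and the start times are hypothetical. Because subtitle sentences are registered in reproduction order, the start times are sorted, so a binary search suffices:

```python
from bisect import bisect_left

def nearest_subtitle(target_times, t1):
    """Return the index i minimizing |target_times[i] - t1|, i.e. the
    switching-destination subtitle whose reproduction start time differs
    least from the currently selected subtitle's start time t1.

    target_times: start times (in seconds) of the destination language,
    sorted in reproduction order as in the subtitle management data.
    """
    pos = bisect_left(target_times, t1)
    # Only the neighbours of the insertion point can be nearest.
    candidates = [i for i in (pos - 1, pos) if 0 <= i < len(target_times)]
    return min(candidates, key=lambda i: abs(target_times[i] - t1))

print(nearest_subtitle([0, 10, 30, 45], 27))  # -> 2 (start time 30 is nearest)
```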

(Embodiment 2)
However, in the language switching process of the first embodiment, if the language is switched again before the subtitle sentence selected at the first switch advances to the next subtitle sentence, a subtitle sentence different from the original subtitle sentence may be selected.

  For example, when the caption management data are the caption management data 1821 and 1822 shown in FIG. 13, assume that the caption sentence M1-3 (“Today is very nice weather”) is selected. When the language is switched from Japanese to English, the subtitle sentence M2-3 (“It's fine today.”) is selected. Suppose the language is then switched again before the subtitle sentence M2-3 advances to the next subtitle sentence. The subtitle sentence corresponding to the reproduction start time with the smallest difference from the reproduction start time T2-3 of the subtitle sentence M2-3 is M1-4, so M1-4 is selected. That is, the subtitle sentence switches from “It's fine today.” to “Goodbye, be careful.”.

  As described above, when the language is continuously switched twice, a subtitle sentence different from the original subtitle sentence is selected. In order to avoid such a phenomenon, the control unit 23 performs the following language switching process.

  The language switching process according to the present embodiment will be described with reference to the flowchart shown in FIG.

  The control unit 23 selects the subtitle sentence M1-i in the first language. At this time, when the “language switching key” provided on the keyboard 32 is pressed and operation information indicating that the “language switching key” has been pressed is supplied from the operation unit 22, the control unit 23 starts the process shown in FIG.

  First, the control unit 23 executes the same processing as steps S31 to S34 in the flowchart of FIG.

  Next, the control unit 23 sets the index N of the currently selected caption text M1-i and stores it in the storage unit 18 (step S41). The index N is a program variable stored in the storage unit 18; a value is retained by assigning it to this variable.

  Next, the control unit 23 executes the same processing as steps S35 to S37 in the flowchart of FIG. 12, selects the subtitle sentence M2-i in the second language, and displays it in the center of the subtitle list.

  Then, the control unit 23 reproduces the selected caption text M2-i and the video corresponding thereto on the display unit 21 as captioned video (step S42).

  When operation information is supplied from the operation unit 22 while the captioned video is being reproduced, the control unit 23 determines whether the operation information instructs language switching (step S43). When it is determined that the operation information does not instruct language switching (step S43: No), the control unit 23 continues the reproduction of the captioned video and the caption text (step S42).

  When it is determined that the operation information instructs language switching (step S43: Yes), the control unit 23 determines whether or not the caption text was switched while the captioned video was being reproduced in step S42 (step S44). When it is determined that the subtitle sentence was switched (step S44: Yes), the control unit 23 performs the same processing as in steps S35 to S37, selects the subtitle sentence M1-j, and displays it in the center of the subtitle list (step S50).

  When it is determined that the caption text has not been switched (step S44: No), the control unit 23 reads the index N acquired in step S41 from the storage unit 18 (step S45).

  When the index N is read, the control unit 23 determines the caption sentence M1-i corresponding to the index N from the caption management data (step S46).

  The control unit 23 selects the determined subtitle sentence M1-i, displays it at the center of the subtitle list (step S47), and ends the language switching process.

  Specifically, for example, a case where the first language is Japanese and the second language is English will be described. At this time, the storage unit 18 stores subtitle management data 1821 and 1822 as shown in FIG.

  While the control unit 23 has the subtitle sentence M1-3 (“Today is very nice weather”) selected, the “language switch key” provided on the keyboard 32 is pressed, and operation information for switching the language is supplied from the operation unit 22 to the control unit 23.

  The control unit 23 substitutes the index value of the caption sentence M1-3 for the index N (processing in step S41). The value assigned to the index N is, for example, 0103 in the case of the caption sentence M1-3.

  The control unit 23 selects the subtitle sentence M2-3 (“It's fine today.”) corresponding to the reproduction start time T2-3 with the smallest difference from the reproduction start time of the subtitle sentence M1-3 (processing in step S37), and reproduces the corresponding captioned video (processing in step S42).

  It is assumed that operation information for switching the language is supplied from the operation unit 22 before the control unit 23 switches the subtitle sentence from the subtitle sentence M2-3 to the subtitle sentence M2-4 (“Goodbye. Be careful.”).

  When acquiring the operation information from the operation unit 22, the control unit 23 determines that there is a language switching instruction (Yes in step S43), and determines that there is no subtitle sentence switching (No in step S44).

  Then, the control unit 23 reads the value of the index N (N = 0103) stored in the storage unit 18, and determines that the subtitle sentence to select is M1-3 (processing in step S46). The control unit 23 then selects the subtitle sentence M1-3 and displays it in the center of the subtitle list (step S47).

  By executing such processing, even when the language is switched twice in succession, the selected subtitle sentence does not drift, and the user can view the captioned video without a sense of incongruity.
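The rule of this embodiment — save the index N at the first switch (step S41) and restore it if the language is switched back before the subtitle advances (steps S45 to S47) — can be sketched as follows. The class, its data, and the start times are illustrative assumptions, not taken from the patent:

```python
class SubtitleSwitcher:
    def __init__(self, times_by_lang):
        self.times = times_by_lang  # {language: start times in reproduction order}
        self.saved_index = None     # corresponds to the index N of step S41

    def nearest(self, lang, t):
        times = self.times[lang]
        return min(range(len(times)), key=lambda i: abs(times[i] - t))

    def switch(self, cur_lang, cur_index, new_lang, subtitle_advanced):
        # If the subtitle never advanced, restore the remembered index
        # (step S44: No, then steps S45 to S47) instead of matching by time.
        if not subtitle_advanced and self.saved_index is not None:
            restored, self.saved_index = self.saved_index, None
            return restored
        self.saved_index = cur_index        # step S41
        return self.nearest(new_lang, self.times[cur_lang][cur_index])

sw = SubtitleSwitcher({"ja": [3, 8, 13, 20], "en": [2, 9, 26, 40]})
i_en = sw.switch("ja", 2, "en", subtitle_advanced=False)  # first switch
i_ja = sw.switch("en", i_en, "ja", subtitle_advanced=False)
print(i_ja)  # -> 2: back to the original subtitle, no drift
```

Without the saved index, switching back would match by start time alone and land on a neighbouring subtitle, reproducing the drift described above.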

(Embodiment 3)
In addition, when switching languages, there may be no subtitle sentence in the switch-destination language near the reproduction start time of the currently selected subtitle sentence. With the process of the first embodiment, even in such a case, the subtitle sentence corresponding to the reproduction start time with the smallest difference is selected from the subtitle sentences in the switch-destination language, so a subtitle sentence completely unrelated to the subtitle sentence before switching may be selected.

  For example, when the caption management data are the caption management data 1821 and 1822 shown in FIG. 15, assume that the caption sentence M1-3 (“Today is very nice weather”) is selected. When the language is switched from Japanese to English, the subtitle sentence M2-1 (“Good Morning.”) corresponding to the reproduction start time with the smallest difference from the reproduction start time of the subtitle sentence M1-3 is selected.

  In order to avoid such a phenomenon, the control unit 23 performs the following language switching process.

  The control unit 23 selects the subtitle sentence M1-i in the first language. At this time, when the “language switching key” provided on the keyboard 32 is pressed and operation information indicating that the “language switching key” has been pressed is supplied from the operation unit 22, the control unit 23 starts the process shown in FIG.

  The control unit 23 executes processing similar to that in steps S31 to S36 in the flowchart of FIG. 12, and determines the caption text M2-i corresponding to the playback start time with the smallest difference from the playback start time T1-i of the caption text M1-i (step S36).

  The control unit 23 acquires the reproduction start time T2-i of the caption sentence M2-i (step S61).

  Next, the control unit 23 determines whether or not the difference (|T2-i − T1-i|) between the reproduction start times T1-i and T2-i is smaller than a predetermined time δT (step S62). When it is determined that the difference (|T2-i − T1-i|) is not smaller than the predetermined time δT (step S62: No), the control unit 23 ends the process without executing the language switch.

  When it is determined that the difference (|T2-i − T1-i|) between the reproduction start times T1-i and T2-i is smaller than the predetermined time δT (step S62: Yes), the control unit 23 selects the subtitle sentence M2-i, displays it in the center of the subtitle list (step S37), and ends the language switching process.

  Specifically, for example, a case where the first language is Japanese and the second language is English will be described. At this time, the storage unit 18 stores subtitle management data 1821 and 1822 as shown in FIG.

  Further, the predetermined time δT is set here to 15 seconds. The time δT may be set to a predetermined value in advance, or, for example, the user may set an arbitrary value from the menu screen.

  While the control unit 23 has the subtitle sentence M1-3 (“Today is very nice weather”) selected, the “language switch key” provided on the keyboard 32 is pressed, and operation information for switching the language is supplied from the operation unit 22 to the control unit 23.

  The control unit 23 refers to the caption management data 1822 and determines, among the second language caption sentences, the caption sentence M2-1 (“Good Morning.”) whose reproduction start time has the smallest difference from the reproduction start time T1-3 of the caption sentence M1-3 (step S36).

  Next, the control unit 23 calculates the difference (|T2-1 − T1-3|) between the reproduction start time of the caption sentence M1-3 and that of the caption sentence M2-1; the difference is 2 minutes 45 seconds (165 seconds).

  The control unit 23 compares |T2-1 − T1-3| (165 seconds) with the predetermined time δT (15 seconds) and determines that the difference in reproduction start times (|T2-1 − T1-3|) is larger than the predetermined time (No in step S62). Therefore, the control unit 23 ends the language switching process without switching to the subtitle sentence M2-1.

  By executing such processing, if there is no subtitle sentence in the switching destination language near the playback start time of the selected subtitle sentence when the language is changed, the language switch is not executed, so a switch to a completely unrelated subtitle sentence never occurs.
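The threshold check of this embodiment (steps S36, S61, S62) can be sketched as follows — an illustrative Python sketch under the assumption that start times are given in seconds; the names and time values are not from the patent:

```python
def switch_with_threshold(target_times, t1, delta_t):
    """Return the index of the nearest switching-destination subtitle only
    if its start-time difference from t1 is smaller than delta_t (step S62);
    otherwise return None, meaning the language switch is not executed."""
    i = min(range(len(target_times)), key=lambda j: abs(target_times[j] - t1))
    return i if abs(target_times[i] - t1) < delta_t else None

# As in the worked example: the nearest destination subtitle starts 165
# seconds away, which exceeds delta_t = 15 seconds, so no switch occurs.
print(switch_with_threshold([10], 175, 15))  # -> None
```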

  In the above embodiments, the captioned video playback device has been described as the mobile phone 1. However, the captioned video playback device is not limited to this; the invention can also be applied to, for example, a PHS, a PDA (Personal Digital Assistant), an electronic camera, an electronic watch, a notebook PC, a portable TV, a portable video recording/playback device, or a car navigation device.

  In the above embodiments, the program is described as being recorded in advance in a memory or the like. However, a program for causing the captioned video playback apparatus to operate as all or part of the apparatus, or to execute the above-described processing, may be stored and distributed on a computer-readable recording medium such as a flexible disk, a CD-ROM (Compact Disc Read-Only Memory), a DVD (Digital Versatile Disc), or an MO (Magneto-Optical disk), and installed on another computer so that the computer operates as the above-described means or executes the above-described steps.

  Furthermore, the program may be stored in a disk device or the like included in a server device on the Internet, and may be downloaded onto a computer by being superimposed on a carrier wave, for example.

FIG. 1 shows the external appearance of the mobile phone according to the embodiment of the present invention.
FIG. 2 is a block diagram showing the configuration of the mobile phone according to the embodiment of the present invention.
FIG. 3 shows the structure of the received signal: (a) shows a TS, (b) shows a TSP, and (c) shows a PES.
FIG. 4 shows the structure of the received signal: (a) shows a subtitle PES, (b) a data group, (c) subtitle management data, and (d) subtitle data.
FIG. 5 shows an example of the structure of the converted television signal.
FIG. 6 shows the storage areas of the storage unit shown in FIG. 2.
FIG. 7 shows an example of the data structure of the first language subtitle management data and the data structure of the second language subtitle management data.
FIG. 8 shows an example of seek point management information.
FIG. 9 shows screens displayed on the LCD by the display unit: (a) shows an example of the screen in the viewing mode with a caption list, (b) an example of the screen in the normal viewing mode, and (c) an example of the screen in the viewing mode with a caption list.
FIG. 10 is a flowchart showing the subtitle management data generation process executed by the control unit shown in FIG. 2.
FIG. 11 is a flowchart showing the subtitle management data generation process that generates the subtitle management data of bilingual subtitles, executed by the control unit shown in FIG. 2.
FIG. 12 is a flowchart showing the language switching process executed by the control unit shown in FIG. 2.
FIG. 13 shows an example of the data structure of the first language subtitle management data and the data structure of the second language subtitle management data.
FIG. 14 is a flowchart showing the language switching process executed by the control unit shown in FIG. 2.
FIG. 15 shows an example of the data structure of the first language subtitle management data and the data structure of the second language subtitle management data.
FIG. 16 is a flowchart showing the language switching process executed by the control unit shown in FIG. 2.

Explanation of symbols

1 … Mobile phone, 18 … Storage unit, 21 … Display unit, 22 … Operation unit, 23 … Control unit

Claims (2)

Multiple videos,
A plurality of first subtitle sentences composed of a first language;
A plurality of second subtitle sentences composed of a second language;
A plurality of first time information indicating a reproduction start time for starting reproduction of each first subtitle sentence;
A plurality of second time information indicating a reproduction start time for starting reproduction of each second subtitle sentence;
Storage means for storing the plurality of videos, the plurality of first subtitle sentences, and the plurality of second subtitle sentences in association with each other based on the reproduction order, storing the plurality of first subtitle sentences and the plurality of first time information in association with each other, and storing the plurality of second subtitle sentences and the plurality of second time information in association with each other;
Subtitle sentence selecting means for selecting any subtitle sentence from a plurality of subtitle sentences stored in the storage means;
Subtitle sentence reproduction means for sequentially reproducing subtitle sentences in accordance with the reproduction order from the subtitle sentence selected by the subtitle sentence selection means;
Video playback means for playing back video corresponding to the caption text being played back by the caption text playback means;
Switching means for switching the language constituting the caption sentence selected by the caption sentence selecting means;
Control means for, when the switching means switches languages, determining the first reproduction start time corresponding to the subtitle sentence selected by the subtitle sentence selecting means and the plurality of second reproduction start times corresponding to the plurality of subtitle sentences composed in the language switched to by the switching means, determining the second reproduction start time having the smallest difference from the first reproduction start time among the plurality of second reproduction start times, and, when it is determined that the difference between the first reproduction start time and the determined second reproduction start time is smaller than a predetermined time, controlling the subtitle sentence selecting means to select the subtitle sentence corresponding to that second reproduction start time,
A video playback apparatus with captions, comprising:
On the computer,
A storage step of storing a plurality of videos, a plurality of first subtitle sentences, and a plurality of second subtitle sentences in association with each other based on the reproduction order, storing the plurality of first subtitle sentences and a plurality of first time information indicating reproduction start times for starting reproduction of the respective first subtitle sentences in association with each other, and storing the plurality of second subtitle sentences and a plurality of second time information indicating reproduction start times for starting reproduction of the respective second subtitle sentences in association with each other;
A subtitle sentence selection step of selecting any subtitle sentence from a plurality of subtitle sentences stored in the storage step;
A subtitle sentence reproduction step of sequentially reproducing subtitle sentences from the subtitle sentence selected in the subtitle sentence selection step according to the reproduction order;
A video playback step of playing back a video corresponding to the subtitle text being played back in the subtitle text playback step;
A switching step of switching languages constituting the subtitle sentence selected in the subtitle sentence selection step;
A control step of, when the language is switched in the switching step, determining the first reproduction start time corresponding to the subtitle sentence selected in the subtitle sentence selection step and the plurality of second reproduction start times corresponding to the plurality of subtitle sentences composed in the language switched to in the switching step, determining the second reproduction start time having the smallest difference from the first reproduction start time among the plurality of second reproduction start times, and, when it is determined that the difference between the first reproduction start time and the determined second reproduction start time is smaller than a predetermined time, controlling so as to select the subtitle sentence corresponding to that second reproduction start time,
A program characterized by causing the computer to execute the above steps.
JP2008004281A 2008-01-11 2008-01-11 Subtitled video playback device and program Expired - Fee Related JP5110521B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008004281A JP5110521B2 (en) 2008-01-11 2008-01-11 Subtitled video playback device and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008004281A JP5110521B2 (en) 2008-01-11 2008-01-11 Subtitled video playback device and program

Publications (2)

Publication Number Publication Date
JP2009171015A JP2009171015A (en) 2009-07-30
JP5110521B2 true JP5110521B2 (en) 2012-12-26

Family

ID=40971759

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008004281A Expired - Fee Related JP5110521B2 (en) 2008-01-11 2008-01-11 Subtitled video playback device and program

Country Status (1)

Country Link
JP (1) JP5110521B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5423425B2 (en) * 2010-01-25 2014-02-19 富士通モバイルコミュニケーションズ株式会社 Image processing device
ITTO20120966A1 (en) * 2012-11-06 2014-05-07 Inst Rundfunktechnik Gmbh MEHRSPRACHIGE GRAFIKANSTEUERUNG IN FERNSEHSENDUNGEN

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10341418A (en) * 1997-06-06 1998-12-22 Matsushita Electric Ind Co Ltd Superimposed digital character signal decoder
JP3861278B2 (en) * 2000-06-28 2006-12-20 オンキヨー株式会社 Image playback device

Also Published As

Publication number Publication date
JP2009171015A (en) 2009-07-30

Similar Documents

Publication Publication Date Title
JP4569673B2 (en) Subtitled video playback device, subtitled video playback method and program
JPWO2011111321A1 (en) Voice reading apparatus and voice reading method
JP2008252358A (en) Broadcast receiver
EP1843604A2 (en) A video recording/reproducing apparatus and a television receiver including the same therein
KR100845621B1 (en) Information searching device, information receiver, and methods therefor
JP5310808B2 (en) Subtitled video playback device and subtitled video playback program
JP2009159125A (en) Reproducer for video image with caption, search result notifying method for reproducer for video image with caption, and program
JP2006245907A (en) Broadcast recording and reproducing apparatus
JP5110521B2 (en) Subtitled video playback device and program
JP5036267B2 (en) Portable electronic device
JP5649769B2 (en) Broadcast receiver
JP2009159126A (en) Apparatus for playing back video with caption, screen control method and program for the same
JP5311448B2 (en) Subtitled video playback device, subtitled video playback method and program
KR20080004344A (en) Portable terminal apparatus, computer-readable recording medium, and computer data signal
JP2009130469A (en) Captioned video playback apparatus, and program
JP5240833B2 (en) Subtitled video playback device, subtitled video playback method and program
JP5198088B2 (en) Playback device and control method
KR101285906B1 (en) Method and Apparatus for Recording and Reproducing of Broadcasting Signal capable of Searching Caption Data
JP4333560B2 (en) TV broadcast recording and playback device
JP4367535B2 (en) Subtitled video playback device and program
JP2008053991A (en) Digital broadcast receiver
JP4546425B2 (en) Mobile phone with broadcast reception function
JP2009159484A (en) Broadcast receiver
JP4821733B2 (en) Subtitled video playback terminal device and program
JP2008085940A (en) Television receiver

Legal Events

Date Code Title Description
A711 Notification of change in applicant

Free format text: JAPANESE INTERMEDIATE CODE: A712

Effective date: 20100805

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20101206

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20111128

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120104

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120305

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20120522

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120816

A911 Transfer of reconsideration by examiner before appeal (zenchi)

Free format text: JAPANESE INTERMEDIATE CODE: A911

Effective date: 20120824

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120911


A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20121002

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20151019

Year of fee payment: 3

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313113


R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313113

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

LAPS Cancellation because of no payment of annual fees