JP2010263528A - Motion picture processor, and method and program for processing motion picture - Google Patents

Motion picture processor, and method and program for processing motion picture

Info

Publication number
JP2010263528A
JP2010263528A (application number JP2009114268A)
Authority
JP
Japan
Prior art keywords
unit
moving image
subtitle
area
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2009114268A
Other languages
Japanese (ja)
Inventor
Tatsuya Inoue
Hajime Kawatake
Fuyuki Takazawa
Original Assignee
Sourcenext Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sourcenext Corp
Priority to JP2009114268A
Publication of JP2010263528A
Legal status: Pending

Classifications

    • H04N9/8238 — Recording of colour television signals involving the multiplexing of an additional character code signal for teletext with the colour video signal
    • H04N21/431 — Generation of visual interfaces for content selection or interaction; content or additional data rendering
    • H04N21/435 — Processing of additional data, e.g. decrypting of additional data, reconstructing software from modules extracted from the transport stream
    • H04N21/4858 — End-user interface for client configuration for modifying screen layout parameters, e.g. fonts, size of the windows
    • H04N21/4884 — Data services, e.g. news ticker, for displaying subtitles
    • H04N5/445 — Receiver circuitry for displaying additional information

Abstract

A moving image processing apparatus, a moving image processing method, and a program are provided that increase the flexibility of the relationship between a displayed moving image and its subtitles.
A unit caption data acquisition unit 24 acquires a plurality of unit caption data, each corresponding to a caption associated with a partial moving image, i.e. at least a part of a moving image. A screen display output unit 26 displays and outputs a screen including a moving image area, in which a frame image included in the moving image is displayed, and a caption area, in which captions are displayed. Among the plurality of unit caption data, whose associated partial moving images differ from one another, the screen display output unit 26 displays and outputs, in the caption area, the unit captions corresponding to each of the unit caption data to be displayed.
[Selection] Figure 2

Description

  The present invention relates to a moving image processing apparatus, a moving image processing method, and a program.

  There are information storage media (for example, DVD-ROMs) on which caption data (for example, caption images) is stored together with moving images and sounds, such as an information storage medium storing movie content. There are also moving image processing apparatuses (for example, DVD players) that read such an information storage medium and display and output the subtitles superimposed on the moving image. Patent Document 1 describes a disc playback apparatus in which subtitles can be moved by a user operation.

  When movie content or the like is displayed and output by a conventional moving image processing apparatus, a character string corresponding to a spoken line (for example, a character string corresponding to the script or a translation of the line) is displayed and output as a subtitle while the character is speaking that line.

JP 2004-320324 A

  In recent years, language learning has been performed using foreign movies as teaching materials. If, in a foreign movie, the text corresponding to the script or a translation of a line were displayed and output as a subtitle not only while a character is speaking the line but also after it has been spoken or before it starts, the moving image processing apparatus would be more convenient for a user learning the language.

  Increasing the flexibility of the relationship between a displayed moving image, such as a movie, and its subtitles in this way is useful not only to users who use a moving image processing apparatus for language learning but to users of moving image processing apparatuses in general.

  The present invention has been made in view of the above problems, and an object thereof is to provide a moving image processing apparatus, a moving image processing method, and a program capable of increasing the flexibility of the relationship between a displayed moving image and its subtitles.

  In order to solve the above problems, a moving image processing apparatus according to the present invention is a moving image processing apparatus that sequentially displays and outputs at least one frame image included in a moving image, comprising: unit subtitle data acquisition means for acquiring a plurality of unit subtitle data corresponding to subtitles associated with partial moving images, each partial moving image being at least a part of the moving image; and screen display output means for displaying and outputting a screen including a moving image area, in which a frame image included in the moving image is displayed, and a subtitle area, in which subtitles are displayed; wherein the screen display output means displays and outputs, in the subtitle area, the unit subtitles corresponding to each of a plurality of unit subtitle data to be displayed, the associated partial moving images of which differ from one another.

  A moving image processing method according to the present invention is a moving image processing method for sequentially displaying and outputting at least one frame image included in a moving image, comprising: a unit caption data acquisition step of acquiring a plurality of unit caption data corresponding to subtitles associated with partial moving images, each being at least a part of the moving image; and a screen display output step of displaying and outputting a screen including a moving image area, in which a frame image included in the moving image is displayed, and a caption area, in which the captions are displayed; wherein, in the screen display output step, the unit subtitles corresponding to each of a plurality of unit subtitle data to be displayed among the plurality of unit subtitle data are displayed and output in the subtitle area.

  A program according to the present invention causes a computer to function as a moving image processing apparatus that sequentially displays and outputs at least one frame image included in a moving image, the program causing the computer to function as: unit subtitle data acquisition means for acquiring a plurality of unit subtitle data corresponding to subtitles associated with partial moving images, each being at least a part of the moving image; and screen display output means for displaying and outputting a screen including a moving image area, in which a frame image included in the moving image is displayed, and a subtitle area, in which the subtitles are displayed; wherein the screen display output means displays and outputs, in the subtitle area, the unit captions corresponding to each of a plurality of unit caption data to be displayed among the plurality of unit caption data.

  According to the present invention, the unit subtitles corresponding to each of a plurality of unit subtitle data to be displayed are displayed and output in the subtitle area, so the relationship between the displayed moving image and the subtitles can be made more flexible than before.

  In one aspect of the present invention, the apparatus further includes unit subtitle data specifying means for specifying at least one unit subtitle data based on an operation received from a user, and the screen display output means displays and outputs, in the subtitle area, the unit subtitle corresponding to the unit subtitle data specified by the unit subtitle data specifying means. In this way, the unit subtitle corresponding to the unit subtitle data specified by the user is displayed in the subtitle area, so the user can control which unit subtitles are displayed there.

  In this aspect, the unit subtitle data may include language data indicating the language of the corresponding subtitle, and the unit subtitle data specifying means may specify at least one unit subtitle data based on the language data and an operation, received from the user, designating the language of the subtitles to be displayed. In this way, the unit subtitles in the language specified by the user are displayed in the subtitle area, so the user can control the language of the displayed subtitles.

  In one aspect of the invention, the screen display output means displays and outputs the unit subtitle corresponding to the frame image displayed in the moving image area in a display mode different from that of the other unit subtitles. In this way, the user can more easily identify, among the plurality of unit subtitles displayed in the subtitle area, the one corresponding to the frame image displayed in the moving image area.

  In one aspect of the present invention, the apparatus further includes unit subtitle selection operation receiving means for receiving, from the user, a unit subtitle selection operation that selects one of the plurality of unit subtitles displayed in the subtitle area, and when the unit subtitle selection operation is received, the screen display output means displays and outputs, in the moving image area, the frame image corresponding to the selected unit subtitle. In this way, a frame image corresponding to the unit subtitle selected by the user is displayed and output in the moving image area.

  In one aspect of the invention, when the unit subtitle corresponding to the frame image displayed in the moving image area is not displayed in the subtitle area, the screen display output means updates the unit subtitles displayed in the subtitle area. In this way, the unit subtitle corresponding to the frame image displayed and output in the moving image area is displayed and output in the subtitle area.

  In one aspect of the present invention, the screen display output means displays and outputs content related to at least a part of the unit subtitle data in response to a user operation designating that part. In this way, the content related to the part of the unit subtitle data designated by the user is displayed and output.

  In one aspect of the present invention, the screen display output means repeatedly displays and outputs, a predetermined number of times in the moving image area, the partial moving image associated with the unit subtitle data specified based on an operation received from the user. In this way, the partial moving image corresponding to a subtitle can be displayed and output repeatedly.

  In one aspect of the present invention, the apparatus further includes means for displaying and outputting dictionary data on a word corresponding to the unit subtitle data specified by the user. In this way, dictionary data on each word corresponding to a subtitle can be displayed and output.

FIG. 1 is a diagram showing an example of the hardware configuration of a moving image processing apparatus according to one embodiment of the present invention.
FIG. 2 is a functional block diagram showing an example of the functions realized by the moving image processing apparatus according to one embodiment of the present invention.
FIG. 3 is a diagram showing an example of the data structure of unit caption data.
FIG. 4 is a diagram showing an example of a moving image output screen.
FIG. 5 is a flowchart showing an example of the flow of the display output process performed by the moving image processing apparatus according to the present embodiment.
FIGS. 6 to 11 are diagrams each showing an example of a moving image output screen.
FIG. 12 is a diagram showing an example of a moving image management screen.

  Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings.

  FIG. 1 is a diagram illustrating an example of the hardware configuration of a moving image processing apparatus 10 according to the present embodiment. As shown in FIG. 1, the moving image processing apparatus 10 according to the present embodiment includes, for example, a control unit 12, a storage unit 14, a user interface (UI) unit 16, a communication unit 18, and an input / output unit 20. These elements are connected to one another via a bus or the like.

  The control unit 12 is a program control device such as a CPU, and operates according to a program installed in the moving image processing apparatus 10. The storage unit 14 is a storage element such as a ROM or RAM, a hard disk, or the like. The storage unit 14 stores the program executed by the control unit 12 and also operates as a work memory for the control unit 12. The UI unit 16 is a display, a mouse, a microphone, a speaker, and the like; it outputs the content of operations performed by the user and voice input by the user to the control unit 12, and displays and outputs information according to instructions input from the control unit 12. The communication unit 18 is, for example, a communication interface such as a network board, and transmits and receives information to and from various servers (not shown) connected via a network such as a LAN. The input / output unit 20 is, for example, a USB interface, a CD-ROM drive, or a DVD drive, and inputs and outputs data to and from an information storage medium such as a USB memory, a CD-ROM, or a DVD-ROM.

  FIG. 2 is a functional block diagram illustrating an example of the functions realized by the moving image processing apparatus 10 according to the present embodiment. As illustrated in FIG. 2, the moving image processing apparatus 10 functions as a moving image acquisition unit 22, a unit subtitle data acquisition unit 24, a screen display output unit 26, a unit subtitle data specification unit 28, a unit subtitle selection operation reception unit 30, a region change instruction receiving unit 32, a region changing unit 34, a subtitle changing unit 36, a frame image changing unit 38, a display mode changing unit 40, a subtitle change instruction receiving unit 42, and a moving image management screen control unit 44. These elements are realized mainly by the control unit 12.

  These elements are realized by the control unit 12 of the moving image processing apparatus 10 executing a program installed in the moving image processing apparatus 10, which is a computer. This program is supplied to the moving image processing apparatus 10 via a computer-readable information storage medium such as a USB memory, a CD-ROM, or a DVD-ROM, or via a communication network such as the Internet.

  In the present embodiment, a USB memory connected via the input / output unit 20 stores a moving image, audio data, at least one unit subtitle data 46, dictionary data in which words or phrases that differ in language but correspond in meaning are associated with one another, word data indicating words, and the like. FIG. 3 is a diagram illustrating an example of the data structure of the unit caption data 46.

  In the present embodiment, the moving image includes at least one frame image 48 (see FIG. 4). Each frame image 48 is associated with data indicating the order of display output. In the present embodiment, each frame image 48 is associated with, for example, a time code indicating an elapsed time (hour, minute, second, number of frames, etc.) from the start point of the moving image. In this embodiment, the audio data is associated with the moving image so as to be reproduced and output in synchronization with the moving image.

  As shown in FIG. 3, the unit subtitle data 46 associates with one another: a unit subtitle data ID 50, which is an identifier of the unit subtitle data 46; subtitle character string data 52 indicating the subtitle character string; related content data 54 indicating a character string or image of content related to at least a part of the subtitle character string data 52; data indicating the section of at least one frame image 48 corresponding to the subtitle (in this embodiment, for example, a start time code 56 and an end time code 58); and language data 60 indicating the language of the subtitle (for example, English or Japanese). Thus, in this embodiment, the unit caption data 46 is associated, by the start time code 56 and the end time code 58, with the partial moving image that is at least a part of the moving image. The data structure of the unit caption data 46 is not limited to the one described above.
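As a non-normative sketch, the record described above might be modelled as follows; the field names, types, and the seconds-based time codes are illustrative assumptions, since the patent does not prescribe any particular encoding:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UnitCaptionData:
    """One unit subtitle data record (element 46 in FIG. 3) -- illustrative only."""
    caption_id: int                         # unit subtitle data ID 50
    text: str                               # subtitle character string data 52
    start_time_code: float                  # start time code 56 (seconds from movie start)
    end_time_code: float                    # end time code 58
    language: str                           # language data 60, e.g. "en" or "ja"
    related_content: Optional[str] = None   # related content data 54

    def covers(self, time_code: float) -> bool:
        """True when a frame at `time_code` lies in the associated partial moving image."""
        return self.start_time_code <= time_code <= self.end_time_code
```

The `covers` helper expresses the association between a unit caption and its partial moving image that the start and end time codes establish.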

  The moving image acquisition unit 22 acquires a moving image, for example a moving image stored on an information storage medium such as a USB memory or DVD-ROM connected via the input / output unit 20, or a moving image stored on a server connected via the communication unit 18.

  The unit caption data acquisition unit 24 acquires unit caption data 46 corresponding to the moving image acquired by the moving image acquisition unit 22. In the present embodiment, the unit caption data acquisition unit 24 acquires a plurality of unit caption data 46.

  The screen display output unit 26 outputs the moving image output screen 62 illustrated in FIG. 4 to the UI unit 16, such as a display. The moving image output screen 62 includes, for example, a moving image area 64, in which a frame image 48 included in the moving image is displayed, a subtitle area 66, in which subtitles are displayed (in the example of FIG. 4, a vertically long band-shaped area), and a boundary image 68 (in the example of FIG. 4, a vertically long bar-shaped image) indicating the boundary between the moving image area 64 and the caption area 66. In the example of FIG. 4, the subtitle area 66 is arranged on the right side of the moving image area 64, but it may instead be arranged on the left side.

  The screen display output unit 26 displays and outputs, in the caption area 66, unit subtitles 70 corresponding to each of a plurality of unit subtitle data 46 whose associated partial moving images differ from one another (for example, at least one of the start time code 56 and the end time code 58 differs). In the present embodiment, the screen display output unit 26 generates a whole subtitle image made up of a plurality of unit subtitles 70 arranged from top to bottom in the order of the start time codes 56 included in the corresponding unit subtitle data 46, and displays and outputs at least a part of it in the subtitle area 66. At this time, the screen display output unit 26 may generate a whole subtitle image in which Japanese unit subtitles 70 and English unit subtitles 70 are arranged alternately, as shown in FIG. 4. In other words, for unit subtitle data 46 whose language data 60 differ from one another but whose start time codes 56 correspond to one another, the screen display output unit 26 may generate a whole subtitle image in which the corresponding unit captions 70 are arranged side by side.
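The ordering rule just described (rows sorted by start time code, with same-start captions in different languages kept adjacent) can be sketched as a single sort; the `Caption` tuple and the fixed ja-then-en language order are assumptions made for illustration:

```python
from collections import namedtuple

# Minimal stand-in for unit caption data 46 (illustrative fields only).
Caption = namedtuple("Caption", "start language text")

def build_whole_subtitle_order(captions, language_order=("ja", "en")):
    """Order unit captions as rows of the whole subtitle image: primarily by
    start time code 56, and for equal start codes by a fixed language order,
    which yields the alternating Japanese/English layout of FIG. 4."""
    rank = {lang: i for i, lang in enumerate(language_order)}
    return sorted(captions,
                  key=lambda c: (c.start, rank.get(c.language, len(rank))))
```

Sorting by the `(start, language-rank)` tuple keeps the two language versions of the same line next to each other without a separate interleaving pass.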

  Further, as shown in FIG. 4, in the present embodiment the caption area 66 includes a scroll bar 72. The position of the knob 74 on the scroll bar 72 corresponds to the position, within the whole caption image, of the image (at least a part of the whole caption image) displayed in the caption area 66. When the user drags the scroll bar 72 up or down with the UI unit 16, such as a mouse, the screen display output unit 26 changes, in response to the operation, the position within the whole caption image of the image displayed in the caption area 66 (that is, scrolls the whole caption image). In this way, the screen display output unit 26 updates the image displayed in the caption area 66.

  In the present embodiment, the screen display output unit 26 sequentially displays and outputs the frame images 48 included in the moving image in the moving image area 64, in the order of their associated time codes, at a predetermined interval (for example, every 1/60 second). Thus, in this embodiment, the screen display output unit 26 reproduces and outputs the moving image. The screen display output unit 26 also reproduces and outputs, together with the moving image, the audio data associated with it, and changes the display output interval (frame rate) of the frame images 48 in response to a frame-rate change operation received from the user.
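A minimal sketch of this timing, assuming time codes measured in seconds and a user-adjustable rate (the function name and the seconds-based units are assumptions, not the patent's specification):

```python
def frame_index(elapsed_seconds: float, frame_rate: float = 60.0) -> int:
    """Index of the frame image 48 to show in the moving image area 64 when
    frames are emitted in time-code order at a fixed interval (1/60 s by
    default); changing `frame_rate` models the user's frame-rate change."""
    return int(elapsed_seconds * frame_rate)
```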

  Then, the screen display output unit 26 displays and outputs subtitles in the subtitle area 66 at predetermined time intervals (which may or may not correspond to the frame rate of the frame image 48). Here, an example of a subtitle display output process performed by the screen display output unit 26 at predetermined time intervals will be described with reference to a flowchart illustrated in FIG.

  First, the screen display output unit 26 acquires the time code of the frame image 48 displayed in the moving image area 64 (S101). Then, the screen display output unit 26 specifies at least one unit subtitle data 46 whose start time code 56 and end time code 58 bracket the time code acquired in S101 (S102). Here, the screen display output unit 26 may specify a plurality of unit subtitle data 46 whose language data 60 differ from one another (for example, English unit subtitle data 46 and Japanese unit subtitle data 46).
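Steps S101 to S102 amount to an interval lookup. A sketch, where the `Caption` tuple is an illustrative stand-in for unit caption data 46:

```python
from collections import namedtuple

# Illustrative stand-in for unit caption data 46.
Caption = namedtuple("Caption", "start end language text")

def find_active_captions(captions, time_code):
    """S101-S102: given the time code of the frame image 48 currently shown in
    the moving image area 64, pick every unit caption data whose start/end
    time-code span contains it. The result may hold one entry per language,
    or be empty during scenes without dialogue."""
    return [c for c in captions if c.start <= time_code <= c.end]
```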

  Then, the screen display output unit 26 checks whether the unit caption 70 corresponding to the unit caption data 46 specified in S102 is displayed in the caption area 66 (S103). If it is not displayed (S103: N), the screen display output unit 26 updates the image displayed in the caption area 66 so that that unit subtitle 70 is displayed there (S104). For example, the screen display output unit 26 changes the position, within the whole caption image, of the image displayed in the caption area 66; more specifically, it scrolls the whole subtitle image so that the unit subtitle 70 corresponding to the unit subtitle data 46 specified in S102 is placed at the lower end of the subtitle area 66. As described above, in this embodiment, when the unit subtitle 70 corresponding to the frame image 48 displayed in the moving image area 64 is not displayed in the subtitle area 66, the screen display output unit 26 updates the unit subtitles 70 displayed in the subtitle area 66.
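The visibility check and scroll of S103 to S104 might look like the following, treating the whole subtitle image as a list of rows and the subtitle area 66 as a window of `rows_visible` rows (row-based scrolling is an assumption made for the sketch):

```python
def scroll_offset_for(target_row: int, rows_visible: int, offset: int) -> int:
    """S103-S104 sketch: if the unit subtitle's row is already inside the
    window [offset, offset + rows_visible), leave the view alone (S103: Y);
    otherwise return the offset that places the target on the bottom row of
    the subtitle area 66 (S104)."""
    if offset <= target_row < offset + rows_visible:
        return offset
    return max(0, target_row - rows_visible + 1)
```

The `max(0, ...)` clamp keeps the view from scrolling above the top of the whole subtitle image when jumping backwards.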

  Then, the screen display output unit 26 updates the position of the knob 74 displayed in the subtitle area 66 in accordance with the update of the image displayed in the subtitle area 66 (S105).

  When it is confirmed in S103 that the unit caption 70 corresponding to the unit caption data 46 is displayed in the caption area 66 (S103: Y), or when the process of S105 is completed, the screen display output unit 26 displays and outputs, on the unit subtitle 70 corresponding to the unit subtitle data 46 specified in S102, a highlighted image 76 indicating that this is the unit subtitle 70 corresponding to the frame image 48 displayed in the moving image area 64 (S106).

  As described above, in the present embodiment, the screen display output unit 26 displays the unit subtitle 70 corresponding to the frame image 48 displayed in the moving image area 64 in a display mode different from that of the other unit subtitles 70 (for example, with the emphasized image 76 displayed on that unit caption 70).

  In this processing example, if no unit caption data 46 satisfies the condition of S102, the screen display output unit 26 does not display and output the emphasized image 76 in S106. Thus, when the moving image processing apparatus 10 reproduces and outputs a movie, the emphasized image 76 is not displayed during periods in which no character speaks a line (for example, a shooting battle scene), so the user can tell from whether the emphasized image 76 is displayed in the caption area 66 whether a line is currently being spoken.

  Further, for example, the screen display output unit 26 may control, in response to a request from the user, whether to execute the processes of S103 to S105. When the user sets the screen display output unit 26 not to execute S103 to S105, the subtitle area 66 is not scrolled as playback of the moving image proceeds, so the user can view the unit subtitles 70 displayed in the subtitle area 66 without being affected by the progress of playback.

  Note that the flow of the subtitle display output process is not limited to the processing example described above. For example, each time the screen display output unit 26 displays and outputs a frame image 48 in the moving image area 64, it may check whether there exists unit caption data 46 whose start time code 56 (or end time code 58) corresponds to the time code of that frame image 48. When the screen display output unit 26 confirms the presence of such unit caption data 46, it may update the image displayed in the caption area 66 by the same processing as S103 to S106 described above.

  Further, the screen display output unit 26 may detect the switching timing of the unit caption 70 based on the time code of the frame image 48 and the start time code 56 and the end time code 58 included in the unit caption data 46. Then, when the screen display output unit 26 detects the switching timing of the unit caption 70, the image displayed in the caption area 66 may be updated by the same processing as in the above-described S103 to S106.
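This switching-timing variant can be sketched by comparing the set of active captions at consecutive frame time codes (`Caption` is again an illustrative stand-in for unit caption data 46):

```python
from collections import namedtuple

# Illustrative stand-in for unit caption data 46.
Caption = namedtuple("Caption", "cid start end")

def caption_switched(captions, prev_time, curr_time):
    """Detect a unit caption 70 switching timing: True when the set of unit
    caption data active at `curr_time` differs from the set active at
    `prev_time`, i.e. a start time code 56 or end time code 58 was crossed."""
    def active(t):
        return frozenset(c.cid for c in captions if c.start <= t <= c.end)
    return active(prev_time) != active(curr_time)
```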

  In the present embodiment, when the screen display output unit 26 receives a moving image playback start request from the user (for example, a request to start playback from the beginning of the moving image or from a point partway through it), it specifies at least one piece of unit caption data 46 by the same processing as in S102 described above, based on the time code of the frame image 48 to be reproduced first (for example, the first frame image 48 of the moving image, or the frame image 48 designated by the user). The screen display output unit 26 then executes an initialization process that displays, in the subtitle area 66, at least a part of the entire subtitle image including the unit subtitle 70 corresponding to that unit caption data 46 (hereinafter referred to as the start unit subtitle). Note that, in the initialization process, the screen display output unit 26 may display the start unit subtitle in the central portion of the subtitle area 66; of course, it may instead display the start unit subtitle at the upper end or lower end of the subtitle area 66. In this way, in the present embodiment, when playback of the moving image starts, the screen display output unit 26 displays in the subtitle area 66 the unit subtitle 70 corresponding to the frame image 48 at which playback starts, together with its neighbors (for example, several unit captions 70 before and after it). The screen display output unit 26 may determine the number of unit subtitles 70 to display based on the size of the characters displayed as subtitles, for example based on a comparison between the size of the subtitle area 66 and the character size of the unit subtitles 70.
Note that, in the caption display output processing performed at predetermined time intervals, the screen display output unit 26 may execute the same processing as the initialization processing described above instead of the processing described in S101 to S106.
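The initialization described above can be sketched as follows. This is a simplified model under stated assumptions: captions are `(start, end, text)` tuples sorted by start time, and the number of displayable lines comes from comparing the subtitle-area height with the character line height, as the comparison in the paragraph above suggests.

```python
def pick_initial_captions(captions, start_time, area_height_px, line_height_px):
    """Find the start unit subtitle for the playback position and include its
    neighbors, limited by how many caption lines fit in the subtitle area 66."""
    # Index of the last caption starting at or before the playback position.
    idx = 0
    for i, (start, _end, _text) in enumerate(captions):
        if start <= start_time:
            idx = i
    # Comparison of subtitle-area size vs. character size decides the count.
    max_lines = max(1, area_height_px // line_height_px)
    lo = max(0, idx - max_lines // 2)  # center the start unit subtitle
    return captions[lo:lo + max_lines]
```

Centering the start unit subtitle corresponds to displaying it in the central portion of the subtitle area 66; anchoring `lo` at `idx` instead would place it at the upper end.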

  In the present embodiment, when the user performs an operation to change the frame image 48 displayed in the moving image area 64 (for example, an operation of a slidable bar (not shown) displayed at the lower end of the moving image area 64), the screen display output unit 26 displays the frame image 48 selected by the user in the moving image area 64. Based on that frame image 48, the screen display output unit 26 then executes the scroll processing of the image displayed in the caption area 66 and the display output processing of the emphasized image 76, as in the processing example of FIG.

  In the present embodiment, when the user performs a repeated playback operation (for example, an operation of pressing a repeated playback button (not shown)) during playback of a moving image, the screen display output unit 26 specifies the unit subtitle data 46 corresponding to the frame image 48 displayed in the moving image area 64 at the time of the operation. The screen display output unit 26 then outputs the partial moving image corresponding to the period from the start time code 56 to the end time code 58 included in that unit caption data 46 to the moving image area 64 repeatedly, a predetermined number of times. At this time, the screen display output unit 26 may pause the reproduction for a predetermined time each time one repetition of the partial moving image ends. In the present embodiment, after the repeated reproduction described above, the screen display output unit 26 displays the subsequent portion of the moving image in the moving image area 64. In this manner, the screen display output unit 26 may repeatedly display, a predetermined number of times, the partial moving image associated with the unit caption data 46 specified based on the operation received from the user.
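The repeat loop above can be sketched as follows; `play_range` is a hypothetical callback standing in for output to the moving image area 64, and the parameter names are assumptions.

```python
import time

def repeat_partial_playback(play_range, start_tc, end_tc, repeats=3, pause_s=0.0):
    """Play the partial moving image for one unit caption a predetermined number
    of times, optionally pausing for a predetermined time between repetitions."""
    for i in range(repeats):
        play_range(start_tc, end_tc)      # one pass over start -> end time code
        if pause_s and i < repeats - 1:
            time.sleep(pause_s)           # pause reproduction between repeats
```

After the loop completes, the caller would resume normal playback from `end_tc`, matching the "subsequent portion of the moving image" behavior described above.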

  The unit subtitle data specifying unit 28 specifies at least one piece of unit subtitle data 46 based on an operation received from the user. In the present embodiment, for example, when the unit subtitle data specifying unit 28 receives from the user an operation designating at least one language during the display output of the moving image, it specifies the unit caption data 46 whose language data 60 indicates one of the designated languages. In this way, in the present embodiment, the unit caption data specifying unit 28 specifies the unit caption data 46 based on the language data 60 and the operation received from the user. The screen display output unit 26 then displays the unit subtitles 70 corresponding to the unit subtitle data 46 thus specified in the subtitle area 66. The user can thereby have subtitles in specific languages displayed in the subtitle area 66 (for example, only English subtitles, only Japanese subtitles, or both English and Japanese subtitles).
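A minimal sketch of this language filter, with captions modeled as `(language, text)` pairs (the shape is an assumption for illustration):

```python
def select_captions_by_language(captions, requested_languages):
    """Keep only the captions whose language data 60 matches one of the
    languages the user designated, as the unit subtitle data specifying
    unit 28 does."""
    wanted = set(requested_languages)
    return [c for c in captions if c[0] in wanted]
```

Designating both "en" and "ja" would keep both subtitle tracks, matching the "both English and Japanese subtitles" case described above.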

  The unit subtitle selection operation receiving unit 30 receives from the user a unit subtitle selection operation that selects one of the plurality of unit subtitles 70 displayed in the subtitle area 66. In the present embodiment, the unit caption selection operation receiving unit 30 then specifies the frame image 48 to be displayed in the moving image area 64 based on the received unit caption selection operation. For example, the unit subtitle selection operation receiving unit 30 specifies the start time code 56 included in the unit subtitle data 46 corresponding to the selected unit subtitle 70, and then specifies the frame image 48 corresponding to that start time code 56. The screen display output unit 26 displays the specified frame image 48 in the moving image area 64, and thereafter sequentially displays the frame images 48 of the subsequent frames in the moving image area 64 at predetermined intervals (for example, every 1/60 second). In this way, in the present embodiment, the user can view a partial moving image by selecting the subtitle image corresponding to the partial moving image to be viewed.
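The mapping from a selected subtitle to a frame image can be sketched as below. The frame rate of 60 frames per second follows the 1/60-second interval mentioned above, but treating it as a fixed `fps` parameter is an assumption.

```python
def frame_for_selected_caption(caption, fps=60.0):
    """Map a selected unit subtitle's start time code 56 to the index of the
    frame image 48 to display in the moving image area 64.
    `caption` is a (start_tc, end_tc, text) tuple; the shape is illustrative."""
    start_tc = caption[0]
    return int(round(start_tc * fps))
```

Playback then continues from this frame index, advancing one frame per interval.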

  The area change instruction receiving unit 32 receives from the user an instruction to change at least one of the shape and size, within the moving image output screen 62, of at least one of the caption area 66 and the moving image area 64. The area changing unit 34 changes at least one of the shape and size of the caption area 66 in the moving image output screen 62 in accordance with the instruction received by the area change instruction receiving unit 32. In the present embodiment, the area changing unit 34 also changes at least one of the shape and size of the moving image area 64 in the moving image output screen 62 in accordance with such an instruction.

  The subtitle changing unit 36 changes at least one of the number and size of characters displayed as subtitles in the subtitle area 66 according to the change of the subtitle area 66. The frame image changing unit 38 changes the size of the frame image 48 displayed in the moving image area 64 according to the change in the moving image area 64.

  Specifically, in the present embodiment, when the user performs an operation of moving the boundary image 68 to the left with the UI unit 16 such as a mouse during moving image display output on the moving image output screen 62 shown in FIG. 4, the area change instruction receiving unit 32 accepts this operation. The area changing unit 34 then moves the display position of the boundary image 68 to the left. FIG. 6 shows an example of the moving image output screen 62 after the display position of the boundary image 68 has moved to the left. As shown in FIG. 6, when the display position of the boundary image 68 moves to the left, the moving image area 64 becomes narrower and the caption area 66 becomes wider. The area changing unit 34 then shrinks the frame image 48 so that its width corresponds to the width of the moving image area 64 while maintaining the ratio between the height and width of the frame image 48, and further increases the width of the unit captions 70 displayed in the caption area 66. As a result, in the present embodiment, the number of characters displayed as captions in the caption area 66 increases.
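The aspect-preserving resize of the frame image 48 can be sketched as below; the function name and pixel units are assumptions for illustration.

```python
def fit_frame(frame_w, frame_h, area_w, area_h):
    """Scale the frame image 48 to the moving image area 64 while maintaining
    the ratio between its height and width, as the area changing unit 34 does
    when the boundary image 68 is moved."""
    scale = min(area_w / frame_w, area_h / frame_h)
    return round(frame_w * scale), round(frame_h * scale)
```

Taking the minimum of the two scale factors ensures the resized frame fits the area in both dimensions without distorting its aspect ratio.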

  In this way, the present embodiment can meet the various needs of users of the moving image processing apparatus 10, such as users who simply want to enjoy moving images and users who use subtitles to study a language.

  In the present embodiment, the screen display output unit 26 displays contents related to a part of the unit caption data 46 in response to a user operation designating at least that part. Specifically, as shown in FIG. 7, for example, in response to a user operation designating at least a part of the unit caption 70, the screen display output unit 26 displays the related content image 78 corresponding to the related content data 54 associated with that part in the caption area 66, overlapping the unit caption 70. The display position of the related content image 78 is not limited to this example; the screen display output unit 26 may, for example, display the related content image 78 in the vicinity of the frame image 48 (for example, above or below it) in the moving image area 64.

  In the present embodiment, the area change instruction receiving unit 32 receives from the user an instruction to change the positional relationship between the moving image area 64 and the caption area 66, and the area changing unit 34 changes the positional relationship between the two areas in accordance with this instruction. Specifically, the area change instruction receiving unit 32 receives, for example, an instruction to change the moving image output screen 62 from a state in which the moving image area 64 and the caption area 66 are arranged side by side, as shown in FIGS. 4, 6, and 7, to a state in which they are arranged vertically, as shown in FIG. 8. In response to this instruction, the area changing unit 34 arranges the moving image area 64 and the subtitle area 66 vertically and outputs them to the UI unit 16 such as a display. In the example of FIG. 8, the screen display output unit 26 displays the unit caption data ID 50 and the start time code 56 in addition to the character strings displayed as captions in the caption area 66.

  Similarly, when the user performs an operation of moving the boundary image 68 upward with the UI unit 16 such as a mouse while the moving image area 64 and the subtitle area 66 are arranged vertically, the area change instruction receiving unit 32 accepts this operation, and the area changing unit 34 moves the display position of the boundary image 68 upward. FIG. 9 shows an example of the moving image output screen 62 after the display position of the boundary image 68 has moved upward. The area changing unit 34 shrinks the frame image 48 so that its height corresponds to the height of the moving image area 64, and increases the number of unit subtitles 70 displayed in the subtitle area 66. As a result, the number of characters displayed as subtitles in the subtitle area 66 increases. Although the subtitle area 66 is arranged below the moving image area 64 in FIGS. 8 and 9, the subtitle area 66 may instead be arranged above the moving image area 64.

  In the present embodiment, the area change instruction receiving unit 32 receives from the user an instruction to superimpose the caption area 66 on the moving image area 64. In response to this instruction, the area changing unit 34 superimposes the caption area 66 on the moving image area 64 and outputs the result to the UI unit 16 such as a display, as shown in FIG. 10. In the example of FIG. 10, the caption area 66 and the moving image area 64 overlap completely, but only a part of the caption area 66 and a part of the moving image area 64 may overlap instead.

  The display mode changing unit 40 changes the display mode (for example, the color, brightness, or transparency) of at least one pixel of the frame image 48 displayed in the region where the moving image area 64 and the caption area 66 overlap. For example, as shown in FIG. 10, when the subtitle area 66 and the moving image area 64 overlap completely, the display mode changing unit 40 decreases the brightness of all pixels included in the frame image 48 displayed in the moving image area 64. In this way, even when the caption area 66 and the moving image area 64 overlap, the user can clearly read the characters shown as captions.
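The brightness reduction can be sketched as below; pixels are modeled as `(r, g, b)` tuples and the dimming factor is an illustrative assumption.

```python
def dim_pixels(pixels, factor=0.4):
    """Lower the brightness of the frame-image pixels in the region where the
    caption area 66 overlaps the moving image area 64, as the display mode
    changing unit 40 does, so the subtitle characters remain legible."""
    return [(int(r * factor), int(g * factor), int(b * factor)) for r, g, b in pixels]
```

Adjusting transparency or color instead of brightness would follow the same per-pixel pattern.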

  The caption change instruction receiving unit 42 receives an instruction to change at least one of the number and size of the characters displayed as captions in the caption area 66. In the present embodiment, the subtitle changing unit 36 changes at least one of the number and size of the characters displayed as subtitles in the subtitle area 66 in accordance with the instruction received by the subtitle change instruction receiving unit 42. For example, when the subtitle change instruction receiving unit 42 receives an instruction to enlarge the characters displayed as subtitles, the subtitle changing unit 36 enlarges the characters displayed as subtitles in the subtitle area 66, as shown in FIG. 11. In this way, the user can control the size and number of characters displayed as subtitles.

  The moving image management screen control unit 44 performs display output of the moving image management screen 80 illustrated in FIG. 12 in response to a request from the user.

  When the user enters a search character string in the search form 82 and presses the search button 84, the moving image management screen control unit 44 searches the word data, unit subtitle data 46, and the like stored in a USB memory or the like, acquires the data containing the search character string, and displays it as a list on the list display unit 86.

  When the user then selects one of the data items displayed in the list, the moving image management screen control unit 44 displays the detailed contents of the selected data (its phrase, memo, translation, corresponding partial moving image, and so on) on the detail display unit 88. In addition, the moving image management screen control unit 44 displays dictionary data related to the words included in the selected data on the dictionary display unit 90. In the present embodiment, the moving image management screen control unit 44 (or the screen display output unit 26) displays dictionary data related to the phrase corresponding to the unit caption data 46 based on, for example, the unit subtitle 70 or unit subtitle data 46 designated by the user. More specifically, when the moving image management screen control unit 44 receives from the user, for example, an operation of pressing a dictionary display button (not shown) displayed in the subtitle area 66 in association with a unit subtitle 70, it displays on the dictionary display unit 90 a list of dictionary data related to the phrases (for example, the words or phrases included in the character string indicated by the subtitle character string data 52) specified based on the subtitle character string data 52 included in the unit subtitle data 46 corresponding to that unit subtitle 70.

  When the user selects a label from the pull-down menu of the label setting tag 92 included in the moving image management screen 80, the moving image management screen control unit 44 outputs the label data corresponding to the selected label to the storage unit 14 in association with the word data or unit caption data 46 corresponding to the data displayed on the detail display unit 88. The moving image management screen control unit 44 may also associate, in advance, the label corresponding to a predetermined condition with the unit caption data 46 satisfying that condition, and output the association to the storage unit 14. When at least one label filter 94 is selected at the time the user enters a search character string in the search form 82 and presses the search button 84, the data containing the search character string is specified from among the word data and unit caption data 46 associated with the labels corresponding to the selected label filters 94.
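The label-filtered search can be sketched as below; entries are modeled as `(text, labels)` pairs, which is an assumed shape for illustration, not the patent's storage format.

```python
def search_with_labels(entries, query, selected_labels=None):
    """Search sketch for the moving image management screen 80: if label
    filters 94 are selected, restrict the search to entries carrying one of
    those labels, then keep entries containing the search character string."""
    results = []
    for text, labels in entries:
        if selected_labels and not set(labels) & set(selected_labels):
            continue  # entry carries none of the selected labels
        if query in text:
            results.append(text)
    return results
```

With no label filter selected, the function degenerates to the plain substring search performed by the search form 82.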

  In the present embodiment, the screen display output unit 26 controls, in accordance with an operation received from the user, whether or not to display the bookmark selection image 96 corresponding to each unit caption 70 in the caption area 66 (see FIGS. 8, 9, and 10). When the user performs an operation of selecting a bookmark selection image 96 with the UI unit 16 such as a mouse, the screen display output unit 26 outputs, to the storage unit 14, bookmark data indicating that a bookmark has been attached, in association with the unit subtitle data 46 corresponding to that bookmark selection image 96. In the present embodiment, when the moving image management screen control unit 44 displays on the list display unit 86 data corresponding to unit subtitle data 46 associated with bookmark data (for example, the subtitle character string data 52 included in the unit subtitle data 46), it also displays an image indicating that a bookmark is attached to that data.

  The present invention is not limited to the above embodiment.

  For example, the moving image processing apparatus 10 may provide a quiz function. Specifically, the moving image processing apparatus 10 hides a part of the character string indicated by the subtitle character string data 52 included in the unit subtitle data 46 and displays the result on the UI unit 16 such as a display as a question. The moving image processing apparatus 10 also displays the partial moving image associated with that unit caption data 46 on the UI unit 16 such as a display. The moving image processing apparatus 10 then receives an input of a character string from the user and determines whether or not the user has answered correctly based on whether or not the input corresponds to the character string to be entered in the blank. Alternatively, the moving image processing apparatus 10 may display a plurality of options on the UI unit 16 such as a display and determine whether or not the user has answered correctly based on the option the user selects from among them. The moving image processing apparatus 10 then displays, on the UI unit 16 such as a display, an image indicating the user's correct answer rate and whether or not the user answered each question correctly.
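The fill-in-the-blank part of the quiz function can be sketched as below. Blanking one randomly chosen word is an illustrative strategy, an assumption rather than the patent's exact method of hiding part of the subtitle character string.

```python
import random

def make_blank_question(caption_text, rng=None):
    """Hide one word of the subtitle character string and return the question
    text together with the hidden answer."""
    rng = rng or random.Random()
    words = caption_text.split()
    i = rng.randrange(len(words))
    answer = words[i]
    words[i] = "____"  # the blank shown to the user
    return " ".join(words), answer

def check_answer(expected, given):
    """Judge correctness; case and surrounding whitespace are ignored here,
    which is an illustrative choice."""
    return expected.strip().lower() == given.strip().lower()
```

A multiple-choice variant would present several candidate strings and call `check_answer` on the selected option.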

  DESCRIPTION OF SYMBOLS: 10 moving image processing apparatus, 12 control unit, 14 storage unit, 16 user interface (UI) unit, 18 communication unit, 20 input/output unit, 22 moving image acquisition unit, 24 unit subtitle data acquisition unit, 26 screen display output unit, 28 unit subtitle data specifying unit, 30 unit subtitle selection operation receiving unit, 32 area change instruction receiving unit, 34 area changing unit, 36 subtitle changing unit, 38 frame image changing unit, 40 display mode changing unit, 42 subtitle change instruction receiving unit, 44 moving image management screen control unit, 46 unit subtitle data, 48 frame image, 50 unit subtitle data ID, 52 subtitle character string data, 54 related content data, 56 start time code, 58 end time code, 60 language data, 62 moving image output screen, 64 moving image area, 66 subtitle area, 68 boundary image, 70 unit subtitle, 72 scroll bar, 74 knob, 76 emphasized image, 78 related content image, 80 moving image management screen, 82 search form, 84 search button, 86 list display unit, 88 detail display unit, 90 dictionary display unit, 92 label setting tag, 94 label filter, 96 bookmark selection image.

Claims (11)

  1. A moving image processing apparatus for sequentially displaying and outputting at least one frame image included in a moving image,
    Unit subtitle data acquisition means for acquiring a plurality of unit subtitle data corresponding to subtitles associated with a partial video that is at least a part of the video;
    Screen display output means for displaying and outputting a screen including a moving image area in which a frame image included in the moving image is displayed and a subtitle area in which subtitles are displayed;
    The screen display output means displays and outputs, in the subtitle area, unit subtitles corresponding to each of a plurality of pieces of unit subtitle data to be displayed, from among the plurality of pieces of unit subtitle data whose associated partial moving images differ from each other,
    A moving image processing apparatus.
  2. Further comprising unit subtitle data specifying means for specifying at least one unit subtitle data based on an operation received from a user,
    The screen display output means displays and outputs the unit subtitle corresponding to the unit subtitle data specified by the unit subtitle data specifying means in the subtitle area;
    The moving image processing apparatus according to claim 1.
  3. The unit caption data includes language data indicating the language of the caption corresponding to the unit caption data;
    The unit subtitle data specifying means specifies at least one unit subtitle data based on the language data and an operation for specifying a language to be displayed as a subtitle received from a user;
    The moving image processing apparatus according to claim 2.
  4. The screen display output means displays and outputs the unit caption corresponding to the frame image displayed in the moving image area in a display mode different from that of the other unit captions,
    The moving image processing apparatus according to any one of claims 1 to 3, wherein
  5. Unit subtitle selection operation accepting means for accepting a unit subtitle selection operation for selecting any of the plurality of unit subtitles displayed in the subtitle area from the user,
    When the screen display output means accepts the unit subtitle selection operation, the frame image corresponding to the unit subtitle selected by the unit subtitle selection operation is displayed and output in the moving image area;
    The moving image processing apparatus according to any one of claims 1 to 4, wherein
  6. The screen display output means updates the unit subtitle displayed in the subtitle area when the unit subtitle corresponding to the frame image displayed in the moving image area is not displayed in the subtitle area;
    The moving image processing apparatus according to claim 1, wherein the moving image processing apparatus is a moving image processing apparatus.
  7. The screen display output means displays and outputs contents related to at least a part of the unit subtitle data in response to a user operation specifying at least a part of the unit subtitle data;
    The moving image processing apparatus according to any one of claims 1 to 6, wherein
  8. The screen display output means displays and outputs, in the moving image area, a partial moving image associated with the unit subtitle data specified based on an operation received from a user, repeatedly a predetermined number of times,
    The moving image processing apparatus according to claim 1, wherein the moving image processing apparatus is a moving image processing apparatus.
  9. Means for displaying and outputting dictionary data relating to a phrase corresponding to the unit caption data based on the unit caption data specified by the user;
    The moving image processing apparatus according to claim 1, wherein:
  10. A moving image processing method for sequentially displaying and outputting at least one frame image included in a moving image,
    A unit subtitle data acquisition step of acquiring a plurality of unit subtitle data corresponding to subtitles associated with at least a part of the moving image;
    A screen display output step of displaying and outputting a screen including a moving image area in which a frame image included in the moving image is displayed and a subtitle area in which subtitles are displayed;
    In the screen display output step, a unit subtitle corresponding to each of a plurality of unit subtitle data to be displayed among the plurality of unit subtitle data is displayed and output in the subtitle area.
    And a moving image processing method.
  11. A program that causes a computer to function as a moving image processing apparatus that sequentially displays and outputs at least one frame image included in a moving image,
    Unit subtitle data acquisition means for acquiring a plurality of unit subtitle data corresponding to subtitles associated with at least a part of the moving image;
    Causing the computer to function as screen display output means for displaying and outputting a screen including a moving image area in which a frame image included in the moving image is displayed and a subtitle area in which subtitles are displayed,
    The screen display output means displays and outputs a unit caption corresponding to each of a plurality of unit caption data to be displayed among the plurality of unit caption data in the caption area;
    A program characterized by that.
JP2009114268A 2009-05-11 2009-05-11 Motion picture processor, and method and program for processing motion picture Pending JP2010263528A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009114268A JP2010263528A (en) 2009-05-11 2009-05-11 Motion picture processor, and method and program for processing motion picture

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2009114268A JP2010263528A (en) 2009-05-11 2009-05-11 Motion picture processor, and method and program for processing motion picture
PCT/JP2010/050590 WO2010131493A1 (en) 2009-05-11 2010-01-20 Video image processing device, video image processing method, information storage medium, and program
TW99108417A TW201041386A (en) 2009-05-11 2010-03-22 Moving image processing apparatus, moving image processing method, and information storage medium

Publications (1)

Publication Number Publication Date
JP2010263528A true JP2010263528A (en) 2010-11-18

Family

ID=43084875

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009114268A Pending JP2010263528A (en) 2009-05-11 2009-05-11 Motion picture processor, and method and program for processing motion picture

Country Status (3)

Country Link
JP (1) JP2010263528A (en)
TW (1) TW201041386A (en)
WO (1) WO2010131493A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5765592B2 (en) * 2012-12-20 2015-08-19 カシオ計算機株式会社 Movie playback device, movie playback method, movie playback program, movie playback control device, movie playback control method, and movie playback control program
JP5765593B2 (en) * 2012-12-20 2015-08-19 カシオ計算機株式会社 Movie playback device, movie playback method, movie playback program, movie playback control device, movie playback control method, and movie playback control program

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08331525A (en) * 1995-05-30 1996-12-13 Matsushita Electric Ind Co Ltd Closed caption decoder
JP2001186446A (en) * 1999-12-24 2001-07-06 Toshiba Corp Information reproducing device and information reproducing method
JP2003018534A (en) * 2001-07-03 2003-01-17 Sony Corp Reproducing equipment and method, recording medium and program
JP2003018491A (en) * 2001-07-04 2003-01-17 Sony Corp Caption display device and method
JP2005012499A (en) * 2003-06-19 2005-01-13 Matsushita Electric Ind Co Ltd Method and device for reproducing data

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006337490A (en) * 2005-05-31 2006-12-14 Matsushita Electric Ind Co Ltd Content distribution system
JP2007165981A (en) * 2005-12-09 2007-06-28 Toshiba Corp Information processing apparatus and control program thereof

Also Published As

Publication number Publication date
WO2010131493A1 (en) 2010-11-18
TW201041386A (en) 2010-11-16

Similar Documents

Publication Publication Date Title
US10257576B2 (en) Global speech user interface
JP6004497B2 (en) How to display dynamic menu buttons
US20150302651A1 (en) System and method for augmented or virtual reality entertainment experience
US10083151B2 (en) Interactive mobile video viewing experience
US8704948B2 (en) Apparatus, systems and methods for presenting text identified in a video image
JP4843872B2 (en) Television receiver
US6204885B1 (en) Method and apparatus for displaying textual or graphic data on the screen of television receivers
JP4870576B2 (en) Recording medium for recording interactive graphic stream and reproducing apparatus thereof
RU2316063C1 (en) Data carrier for storing text data of subtitles, including style information, and device and method for its reproduction
US8724027B2 (en) Video output device and video output method
US8761574B2 (en) Method and system for assisting language learning
KR101321859B1 (en) Information processing apparatus, method and computer readable recording medium
JP4550044B2 (en) Audio visual playback system and audio visual playback method
KR100763189B1 (en) Apparatus and method for image displaying
US8529264B2 (en) Method facilitating language learning
JP2013536528A (en) How to create and navigate link-based multimedia
KR100654455B1 (en) Apparatus and method for providing addition information using extension subtitle file
KR100884144B1 (en) Multimedia reproduction device and menu screen display method
JP4492462B2 (en) Electronic device, video processing apparatus, and video processing method
JP4102847B2 (en) Image data providing apparatus, image display apparatus, image display system, image data providing apparatus control method, image display apparatus control method, control program, and recording medium
US20130177891A1 (en) Audio-visual learning system
US8434007B2 (en) Multimedia reproduction apparatus, menu screen display method, menu screen display program, and computer readable recording medium recorded with menu screen display program
JP5042490B2 (en) Personalization of user accessibility options
CN104247442B (en) Display device, television receiver, and search method
Udo et al. The rogue poster-children of universal design: Closed captioning and audio description

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20120511

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20130528

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20130725

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20130910

A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20140121