MXPA06007710A - Reproduction device, reproduction method, program, recording medium, and data structure - Google Patents

Reproduction device, reproduction method, program, recording medium, and data structure

Info

Publication number
MXPA06007710A
MXPA06007710A MXPA/A/2006/007710A
Authority
MX
Mexico
Prior art keywords
sub
reproduction
stream
file
main
Prior art date
Application number
MXPA/A/2006/007710A
Other languages
Spanish (es)
Inventor
Kato Motoki
Hamada Toshiya
Original Assignee
Sony Corporation
Priority date
Filing date
Publication date
Application filed by Sony Corporation filed Critical Sony Corporation
Publication of MXPA06007710A publication Critical patent/MXPA06007710A/en

Abstract

There are provided a reproduction device, a reproduction method, a program, a recording medium, and a data structure enabling interactive operation when reproducing AV content. A controller (34) acquires an ordered list of audio stream numbers in advance. When audio switching is performed by a user, the controller acquires the audio stream number following the one currently being reproduced, checks which of the main clip and the sub clip contains the stream judged reproducible by the reproduction device, and reads out the clip in which the corresponding audio stream is multiplexed together with the main clip referenced by the main path. The audio stream file of the corresponding clip and the file contained in the main clip to be reproduced are selected by switches (57 to 59, 77), combined by a video data processing unit (96) and an audio data processing unit (97), and output. The present invention can be applied to a reproduction device.

Description

REPRODUCTION DEVICE, REPRODUCTION METHOD, PROGRAM, RECORDING MEDIUM, AND DATA STRUCTURE
Technical Field The present invention relates to reproduction apparatuses, reproduction methods, programs, recording media, and data structures. More particularly, the invention relates to a reproduction apparatus, a reproduction method, a program, a recording medium, and a data structure that allow interactive operations when AV content is reproduced.
Background Art Under the DVD-Video (Digital Versatile Disc) standard, interactive operations can be performed; that is, users can switch the sound or subtitles while playing AV (Audio Visual) content, such as a movie, recorded on an information recording medium (see Non-Patent Document 1). More specifically, the user operates a sound switching button 11 or a subtitle switching button 12 of a remote controller 2 to switch the sound or subtitles of AV content displayed on a display device 1 shown in Figure 1. For example, if the user operates the sound switching button 11 when sound 1 is set in the initial state on the display device 1, sound 1 is switched to sound 2, as shown in Figure 2. AV content on a DVD-Video disc is recorded in the form of an MPEG-2 (Moving Picture Experts Group) program stream. In the MPEG-2 program stream, as shown in Figure 3, a video stream (Video in Figure 3), a plurality of audio streams (indicated by audio 1, 2, and 3 in Figure 3), and a plurality of sub-picture streams (sub-pictures 1, 2, and 3) are multiplexed so that the audio streams and the sub-picture streams are AV-synchronized with the video stream. The sub-picture streams (sub-pictures 1, 2, and 3) are streams in which bitmap images are run-length encoded, and are used mainly for subtitles. Generally, a plurality of audio streams are used to record sound in different languages, and a plurality of sub-picture streams are used to record subtitles in different languages. The user can interactively select sound or subtitles of a desired language by using the remote controller 2 while the video is playing. DVD-Video defines a table structure, provided to users, that indicates the relationship between sound numbers and subtitle numbers on the one hand, and the plurality of audio streams (audio 1, 2, and 3) and the plurality of sub-picture streams (sub-pictures 1, 2, and 3) in a program stream on the other.
Figure 4 illustrates a stream number table indicating the relationship between the audio signals and the subtitle signals. In this table, sound numbers are referred to as "A_SN" (Audio Stream Number), and subtitle numbers are referred to as "S_SN" (Sub-picture Stream Number). In Figure 4, each of the plurality of audio streams is provided with an A_SN, and each of the plurality of sub-picture streams is provided with an S_SN. More specifically, A_SN = 1: audio 2, A_SN = 2: audio 1, and A_SN = 3: audio 3. Also, S_SN = 1: sub-picture 3, S_SN = 2: sub-picture 1, and S_SN = 3: sub-picture 2. In this case, a smaller A_SN or S_SN number indicates an audio signal or a subtitle signal that is provided to users with higher priority. That is, A_SN = 1 is the audio stream played by default, and S_SN = 1 is the sub-picture stream played by default. More specifically, sound 1 played in the initial state in Figure 1 is audio 2, which is A_SN = 1 (Figure 4), and sound 2 played after switching from sound 1 in Figure 2 is audio 1, which is A_SN = 2. Non-Patent Document 1: DVD Specifications for Read-Only Disc, Part 3; Version 1.1.
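The mapping of Figure 4 can be pictured as a simple lookup table in which the smallest number is the default selection. The sketch below is purely illustrative; the names `a_sn_table`, `s_sn_table`, and `next_audio` are assumptions, not part of any specification.

```python
# Stream number table from Figure 4: smaller numbers have higher priority,
# and number 1 is the default selection.
a_sn_table = {1: "audio 2", 2: "audio 1", 3: "audio 3"}            # A_SN -> audio stream
s_sn_table = {1: "sub-picture 3", 2: "sub-picture 1", 3: "sub-picture 2"}

def next_audio(current_a_sn: int) -> int:
    """Cycle to the next audio stream number, wrapping around to 1."""
    numbers = sorted(a_sn_table)
    i = numbers.index(current_a_sn)
    return numbers[(i + 1) % len(numbers)]

print(a_sn_table[1])              # default sound: audio 2
print(a_sn_table[next_audio(1)])  # after one switch: audio 1
```

This reproduces the behavior described in Figures 1 and 2: the initial sound is audio 2 (A_SN = 1), and one press of the sound switching button moves to audio 1 (A_SN = 2).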
Description of the Invention Problems to be Solved by the Invention However, with DVD-Video, when the user switches sound or subtitles while a program stream is playing, the user can select only from the audio streams or sub-picture streams multiplexed in the program stream currently being played. That is, when playing an MPEG-2 program stream such as that shown in Figure 3, the user can select only from audio 1 through audio 3 when switching sound. Accordingly, even if another stream having audio streams and subtitles different from those in the currently playing program stream is available, the user cannot switch the sound or subtitles to the audio streams or subtitles in that different stream. The extensibility of stream selection is therefore low. The present invention has been made in view of the above background. An object of the present invention is to allow sound and subtitles to be selected from streams or data files other than a main AV stream when the user switches the sound or subtitles.
Means for Solving the Problems A reproduction apparatus of the present invention includes: obtaining means for obtaining playback management information including first information having a main playback path indicating the position of an AV stream file recorded on a recording medium and second information having a plurality of playback sub-paths indicating the positions of sub-files that include appended data to be reproduced simultaneously with the reproduction of the main image data included in the AV stream file; selection means for selecting appended data to be reproduced, based on a user's instruction, from among the appended data to be reproduced simultaneously with the main image data included in the AV stream file referenced by the main playback path and the appended data included in the sub-files referenced by the playback sub-paths; reading means for reading, if the appended data selected by the selection means is included in a sub-file referenced by a playback sub-path, the sub-file referenced by the playback sub-path together with the AV stream file referenced by the main playback path; and reproduction means for reproducing the main image data included in the AV stream file read by the reading means and the appended data included in the sub-file selected by the selection means and read by the reading means. The first information may include a table defining the appended data included in the AV stream file referenced by the main playback path and the appended data referenced by the playback sub-paths, and the selection means may select the appended data to be reproduced, based on the user's instruction, from among the appended data defined in the table. The reproduction apparatus may further include determining means for determining whether the reproduction apparatus has a function to reproduce the appended data selected by the selection means.
If it is determined by the determining means that the reproduction apparatus has a function to reproduce the appended data, and if the appended data is included in a sub-file referenced by a playback sub-path, the reading means can read the sub-file referenced by the playback sub-path together with the AV stream file referenced by the main playback path, and the reproduction means can reproduce the main image data included in the AV stream file read by the reading means and the appended data included in the sub-file selected by the selection means and read by the reading means. The table may further define attribute information with respect to the appended data, and the determining means may determine whether the reproduction apparatus has a function to reproduce the appended data based on the attribute information with respect to the appended data defined in the table. The second information may include type information regarding the types of the playback sub-paths, the file names of the sub-files referenced by the playback sub-paths, and the IN points and OUT points of the sub-files referenced by the playback sub-paths. The second information may also include specification information for specifying the AV stream file referenced by the main playback path in order to play the playback sub-paths simultaneously with the main playback path, and a time on the time axis of the main playback path at which the IN points start in synchronization with the main playback path. A reproducing method of the present invention includes: an obtaining step of obtaining playback management information including first information having a main playback path indicating the position of an AV stream file recorded on a recording medium and second information having a plurality of playback sub-paths indicating the positions of sub-files that include appended data to be reproduced simultaneously with the reproduction of the main image data included in the AV stream file; a selection step of selecting the appended data to be reproduced, based on a user's instruction, from among the appended data to be reproduced simultaneously with the main image data included in the AV stream file referenced by the main playback path and the appended data included in the sub-files referenced by the playback sub-paths; a reading step of reading, if the appended data selected in the selection step is included in a sub-file referenced by a playback sub-path, the sub-file referenced by the playback sub-path together with the AV stream file referenced by the main playback path; and a reproduction step of reproducing the main image data included in the AV stream file read in the reading step and the appended data included in the sub-file selected in the selection step and read in the reading step.
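The four steps of the reproducing method (obtain, select, read, reproduce) can be sketched in a few lines. Everything below is an illustrative assumption: the dictionary layout and the names `reproduce`, `main_path`, `sub_paths`, and `stream_id` do not come from the patent.

```python
# Hypothetical sketch of the obtain/select/read/reproduce steps described above.
def reproduce(playlist, user_choice):
    # Obtaining step: playback management information has a main path and sub-paths.
    main_path = playlist["main_path"]   # references the AV stream file
    sub_paths = playlist["sub_paths"]   # reference sub-files with appended data

    # Selection step: pick the appended data named by the user's instruction.
    selected = None
    for sp in sub_paths:
        if sp["stream_id"] == user_choice:
            selected = sp
            break

    # Reading step: if the selection lives in a sub-file, read it together
    # with the AV stream file referenced by the main playback path.
    files_to_read = [main_path["clip"]]
    if selected is not None:
        files_to_read.append(selected["clip"])

    # Reproduction step: combine main image data with the selected appended data.
    return files_to_read

playlist = {
    "main_path": {"clip": "main_clip.m2ts"},
    "sub_paths": [{"stream_id": 2, "clip": "audio_en.m2ts"}],
}
print(reproduce(playlist, 2))  # ['main_clip.m2ts', 'audio_en.m2ts']
```

If the user's choice is multiplexed in the main clip rather than a sub-file, only the main AV stream file is read, matching the conditional reading step above.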
A program of the present invention includes: an obtaining step of obtaining playback management information including first information having a main playback path indicating the position of an AV stream file recorded on a recording medium and second information having a plurality of playback sub-paths indicating the positions of sub-files that include appended data to be reproduced simultaneously with the reproduction of the main image data included in the AV stream file; a selection step of selecting the appended data to be reproduced, based on a user's instruction, from among the appended data to be reproduced simultaneously with the main image data included in the AV stream file referenced by the main playback path and the appended data included in the sub-files referenced by the playback sub-paths; a reading step of reading, if the appended data selected in the selection step is included in a sub-file referenced by a playback sub-path, the sub-file referenced by the playback sub-path together with the AV stream file referenced by the main playback path; and a reproduction step of reproducing the main image data included in the AV stream file read in the reading step and the appended data included in the sub-file selected in the selection step and read in the reading step. According to a first aspect of the present invention, playback management information is obtained, the information including first information having a main playback path indicating the position of an AV stream file recorded on a recording medium and second information having a plurality of playback sub-paths indicating the positions of sub-files that include appended data to be reproduced simultaneously with the reproduction of the main image data included in the AV stream file.
Then, the appended data to be reproduced is selected, based on a user's instruction, from among the appended data to be reproduced simultaneously with the main image data included in the AV stream file referenced by the main playback path and the appended data included in the sub-files referenced by the playback sub-paths. If the selected appended data is included in a sub-file referenced by a playback sub-path, the sub-file referenced by the playback sub-path is read together with the AV stream file referenced by the main playback path. Then, the main image data included in the read AV stream file and the appended data included in the selected and read sub-file are reproduced. Association data recorded on a first recording medium of the present invention indicates whether the appended data is included in a clip used by a main playback path indicating the position of the AV stream file, or in clips used by a plurality of playback sub-paths indicating the positions of sub-files that include the appended data reproduced simultaneously with the AV stream file. If the association data indicates that the appended data is included in the clips used by the plurality of playback sub-paths, the association data includes at least an ID of the playback sub-path to be reproduced, selected from among an ID for specifying the playback sub-path to be reproduced, an ID for specifying a clip used by the playback sub-path, and an ID for specifying an elementary stream to be reproduced from the clip.
According to a second aspect of the present invention, the association data indicates whether the appended data is included in a clip used by a main playback path indicating the position of the AV stream file, or in clips used by a plurality of playback sub-paths indicating the positions of sub-files that include the appended data reproduced simultaneously with the AV stream file. If the association data indicates that the appended data is included in the clips used by the plurality of playback sub-paths, the association data includes at least an ID of the playback sub-path to be reproduced, selected from among an ID for specifying the playback sub-path to be reproduced, an ID for specifying a clip used by the playback sub-path, and an ID for specifying an elementary stream to be reproduced from the clip. A playback control file recorded on a second recording medium of the present invention includes a playback sub-path indicating the position of a sub-file that includes appended data to be reproduced simultaneously with the main image data included in an AV stream file. The main playback path includes a table that defines a list of elementary streams that can be selected while the main playback path is playing. The table includes data indicating whether each selectable elementary stream is included in the AV stream file selected by the main playback path or in the sub-file selected by the playback sub-path. According to a data structure of the present invention, a playback control file includes a playback sub-path indicating the position of a sub-file that includes appended data to be reproduced simultaneously with the main image data included in an AV stream file. The main playback path includes a table that defines a list of elementary streams that can be selected while the main playback path is playing.
The table includes data indicating whether each selectable elementary stream is included in the AV stream file selected by the main playback path or in the sub-file selected by the playback sub-path. According to a third aspect of the present invention, a playback control file includes a playback sub-path indicating the position of a sub-file that includes appended data to be reproduced simultaneously with the main image data included in an AV stream file. The main playback path includes a table that defines a list of elementary streams that can be selected while the main playback path is playing. The table includes data indicating whether each selectable elementary stream is included in the AV stream file selected by the main playback path or in the sub-file selected by the playback sub-path.
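The table described above (STN_table() in the syntax figures) can be pictured as a list of entries, each tagged with where its elementary stream lives. The field names below (`stream_number`, `kind`, `origin`, `pid`, `sub_path_id`) are illustrative assumptions, not the actual syntax of the figures.

```python
# Hypothetical sketch of the stream-definition table: each selectable
# elementary stream records whether it comes from the clip referenced by
# the main playback path or from a sub-file referenced by a playback sub-path.
stn_table = [
    {"stream_number": 1, "kind": "audio",    "origin": "main_path", "pid": 0x1100},
    {"stream_number": 2, "kind": "audio",    "origin": "sub_path",  "sub_path_id": 0},
    {"stream_number": 1, "kind": "subtitle", "origin": "main_path", "pid": 0x1200},
]

def origin_of(kind, stream_number):
    """Return where the selected elementary stream is multiplexed."""
    for entry in stn_table:
        if entry["kind"] == kind and entry["stream_number"] == stream_number:
            return entry["origin"]
    return None

print(origin_of("audio", 2))  # sub_path
```

The `origin` field is the point of the third aspect: when the user selects audio stream number 2, the player learns from the table that it must read the sub-file referenced by playback sub-path 0 in addition to the main AV stream file.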
Advantages In accordance with the present invention, interactive operations can be performed when an AV stream file is played. In particular, according to the present invention, interactive operations can also be performed on sub-files referenced by playback sub-paths, which are different from the AV stream files referenced by the main playback path.
Brief Description of the Drawings Figure 1 illustrates sound switching in a known system. Figure 2 illustrates sound switching in a known system. Figure 3 illustrates the structure of an MPEG-2 program stream. Figure 4 illustrates a stream number table indicating the relationship between the sound signals and the subtitle signals provided to a user. Figure 5 illustrates an example of an application format on a recording medium installed in a reproduction apparatus to which the present invention is applied. Figure 6 illustrates the structure of the Main Path and the Sub Path. Figure 7 illustrates an example of the Main Path and the Sub Path. Figure 8 illustrates another example of the Main Path and the Sub Path. Figure 9 illustrates yet another example of the Main Path and the Sub Path. Figure 10 illustrates another example of the Main Path and the Sub Path. Figure 11 illustrates the syntax of PlayList(). Figure 12 illustrates the syntax of SubPath(). Figure 13 illustrates the syntax of SubPlayItem(i). Figure 14 illustrates the syntax of PlayItem(). Figure 15 illustrates the syntax of STN_table(). Figure 16 illustrates an example of the stream_entry() syntax. Figure 17 illustrates the syntax of stream_attribute(). Figure 18 illustrates stream_coding_type. Figure 19 illustrates video_format. Figure 20 illustrates frame_rate. Figure 21 illustrates aspect_ratio. Figure 22 illustrates audio_presentation_type. Figure 23 illustrates sampling_frequency. Figure 24 illustrates character_code. Figure 25 is a block diagram illustrating an example of the configuration of a reproduction apparatus to which the present invention is applied. Figure 26 is a flowchart illustrating playback processing performed by the reproduction apparatus shown in Figure 25. Figure 27 is a flowchart illustrating playback processing performed by the reproduction apparatus shown in Figure 25.
Figure 28 is a flowchart illustrating playback processing performed by the reproduction apparatus shown in Figure 25. Figure 29 is a flowchart illustrating processing performed when an instruction to switch audio is given by a user. Figure 30 is a flowchart illustrating processing performed when an instruction to switch subtitles is given by a user. Figure 31 illustrates the configuration of a personal computer. Figure 32A illustrates another example of the PlayList() syntax. Figure 32B illustrates another example of the PlayList() syntax. Figure 33 illustrates another example of the STN_table() syntax. Figure 34 illustrates the types in the STN_table() shown in Figure 33.
Reference Numerals 20 reproduction apparatus, 31 storage unit, 32 switch, 33 AV decoder, 34 controller, 51 to 54 buffers, 55, 56 PID filters, 57 to 59 switches, 71 background decoder, 72 MPEG-2 video decoder, 73 presentation graphics decoder, 74 interactive graphics decoder, 75 audio decoder, 76 Text-ST composition unit, 77 switch, 91 background plane generator, 92 video plane generator, 93 presentation graphics plane generator, 94 interactive graphics plane generator, 95 buffer, 96 video data processor, 97 audio data processor.
Best Mode for Carrying Out the Invention An embodiment of the present invention is described below with reference to the accompanying drawings. Figure 5 illustrates an example of an application format on a recording medium to be installed in a reproduction apparatus 20 (discussed below with reference to Figure 25) to which the present invention is applied. The recording medium may be an optical disc, a magnetic disk, or a semiconductor memory, as discussed below. The application format has two layers, that is, PlayList and Clip, for handling AV (Audio Visual) streams. In this case, a pair consisting of an AV stream and an item of clip information accompanying the AV stream is considered as one object, which is referred to as a "clip". An AV stream is also referred to as an "AV stream file". The clip information is also referred to as a "clip information file". Generally, files used in a computer are handled as byte strings. On the other hand, the content of an AV stream file is expanded on a time axis, and access points in a clip are mainly specified by the PlayList using time stamps. That is, the PlayList and the Clip form two layers for handling AV streams. If access points in a clip are indicated by the PlayList using time stamps, a clip information file is used to find, from the time stamps, address information at which decoding is to start in the AV stream file. The PlayList is a set of playback zones of an AV stream. One playback zone in an AV stream is referred to as a "PlayItem", which is indicated by a pair of an IN point (playback start point) and an OUT point (playback end point) on the time axis. Therefore, the PlayList has one or a plurality of PlayItems, as shown in Figure 5.
In Figure 5, the first PlayList from the left has two PlayItems, which refer to the first half and the second half of the AV stream contained in the clip on the left side of Figure 5. The second PlayList from the left has one PlayItem, which refers to the entire AV stream contained in the clip on the right side. The third PlayList from the left has two PlayItems, which refer to a certain portion of the AV stream contained in the clip on the left side and a certain portion of the AV stream contained in the clip on the right side. If a disc navigation program shown in Figure 5 designates the left PlayItem contained in the first PlayList from the left as information regarding the current playback position, the first half of the AV stream contained in the left clip, which is referenced by the designated PlayItem, is played. The disc navigation program has a function for controlling the playback order specified by the PlayList and interactive playback operations using the PlayList. The disc navigation program also has a function for displaying a menu screen to allow a user to give instructions for performing various types of playback operations. The disc navigation program is described in a programming language, for example, Java™, and recorded on a recording medium. In this embodiment, a playback path that includes at least one PlayItem (sequential PlayItems) in a PlayList is referred to as a "Main Path", and a playback path that is formed by at least one SubPlayItem (sequential or non-sequential SubPlayItems) and arranged in parallel with the Main Path in a PlayList is referred to as a "Sub Path". That is, the application format on a recording medium installed in the playback apparatus 20 (discussed below with reference to Figure 25) has, in a PlayList, at least one Sub Path that is played in relation to the Main Path.
Figure 6 illustrates the structure of the Main Path and the Sub Path. The PlayList is allowed to have a single Main Path and at least one Sub Path. A Sub Path includes at least one SubPlayItem.
The PlayList shown in Figure 6 has one Main Path that includes three PlayItems, and three Sub Paths. The PlayItems forming the Main Path are provided with IDs. More specifically, the Main Path includes PlayItems such as PlayItem_id=0, PlayItem_id=1, and PlayItem_id=2. The Sub Paths are also provided with IDs, such as SubPath_id=0, SubPath_id=1, and SubPath_id=2. SubPath_id=0 has one SubPlayItem, SubPath_id=1 has two SubPlayItems, and SubPath_id=2 has one SubPlayItem. SubPath_id=1 is applied to, for example, a Director's Cut, and a predetermined portion of the AV stream file can be inserted as the director's comments. A clip AV stream file referenced by a PlayItem includes at least video stream data (main image data). The clip AV stream file may also include at least one audio stream, which is played simultaneously with (in synchronization with) the video stream (main image data) contained in the same clip AV stream file.
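The Figure 6 layout can be sketched with simple container types. The class and field names below are illustrative assumptions, not taken from the specification; only the counts (three PlayItems, three Sub Paths, two SubPlayItems in SubPath_id=1) come from the text.

```python
# Illustrative sketch of the PlayList structure in Figure 6: one Main Path
# of three PlayItems plus three Sub Paths, each holding its SubPlayItems.
from dataclasses import dataclass, field

@dataclass
class SubPath:
    subpath_id: int
    sub_play_items: list = field(default_factory=list)

@dataclass
class PlayList:
    play_items: list = field(default_factory=list)   # the Main Path
    sub_paths: list = field(default_factory=list)

playlist = PlayList(
    play_items=["PlayItem_id=0", "PlayItem_id=1", "PlayItem_id=2"],
    sub_paths=[
        SubPath(0, ["SubPlayItem"]),
        SubPath(1, ["SubPlayItem", "SubPlayItem"]),  # e.g. director's comments
        SubPath(2, ["SubPlayItem"]),
    ],
)
print(len(playlist.play_items), len(playlist.sub_paths))  # 3 3
```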
The clip AV stream file may also include at least one bitmap subtitle stream that is played in synchronization with the video stream contained in the same clip AV stream file. The clip AV stream file may also include at least one interactive graphics stream that is played in synchronization with the video stream contained in the same clip AV stream file. The video stream contained in the clip AV stream file and the audio streams, bitmap subtitle streams, or interactive graphics streams played in synchronization with it are multiplexed together. In other words, a clip AV stream file referenced by a PlayItem includes video stream data and zero or more audio streams, zero or more bitmap subtitle streams, and zero or more interactive graphics streams, which are played in synchronization with the video stream data and multiplexed into the clip AV stream file. A SubPlayItem refers, for example, to audio stream data or subtitle data contained in a stream other than the clip AV stream file referenced by the PlayItem. When playing a PlayList that includes only a Main Path, the user can select sound and subtitles only from the audio streams and sub-picture streams multiplexed in the clip referenced by that Main Path. In contrast, when playing a PlayList that includes a Main Path and a Sub Path, the user can refer to the audio streams and sub-picture streams multiplexed in a clip referenced by a SubPlayItem, in addition to the audio streams and sub-picture streams multiplexed in the clip AV stream file referenced by the Main Path. As discussed above, a plurality of Sub Paths can be included in one PlayList, and each Sub Path refers to its corresponding SubPlayItem. Accordingly, AV streams having high extensibility and high flexibility can be provided. In other words, a SubPlayItem can be added later.
Figure 7 illustrates an example of the Main Path and an example of a Sub Path. In Figure 7, an audio playback path played simultaneously with (in AV synchronization with) the Main Path is indicated by using a Sub Path. The PlayList shown in Figure 7 includes one PlayItem, that is, PlayItem_id=0, as the Main Path, and one SubPlayItem as the Sub Path. SubPlayItem() includes the following data. SubPlayItem() includes Clip_Information_file_name for specifying the clip referenced by the Sub Path in the PlayList. In the example of Figure 7, the SubPlayItem refers to an auxiliary audio stream of Sub_Clip_entry_id=0. SubPlayItem() also includes SubPlayItem_IN_time and SubPlayItem_OUT_time for specifying the playback zone of the Sub Path contained in the clip (in this case, the auxiliary audio stream). SubPlayItem() further includes sync_PlayItem_id and sync_start_PTS_of_PlayItem for specifying the time at which the playback operation of the Sub Path starts on the time axis of the Main Path. In Figure 7, sync_PlayItem_id=0 and sync_start_PTS_of_PlayItem=t1. With this information, the time t1 at which the playback operation of the Sub Path starts on the time axis of PlayItem_id=0 in the Main Path can be specified. That is, in the example of Figure 7, the playback start time of the Main Path and the playback start time of the Sub Path are the same, namely t1. The clip AV audio stream referenced by the Sub Path must not include non-sequential STC points (non-sequential points of the system time base). The clip audio sample clock used by the Sub Path is locked to the audio sample clock used by the Main Path.
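The SubPlayItem fields just listed can be modeled as a small record. Only the field names mentioned in the text (Clip_Information_file_name, SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id, sync_start_PTS_of_PlayItem) come from the document; the file name and time values below are illustrative assumptions.

```python
# Sketch of a SubPlayItem carrying the fields described above.
from dataclasses import dataclass

@dataclass
class SubPlayItem:
    clip_information_file_name: str   # clip referenced by the Sub Path
    sub_play_item_in_time: int        # playback zone start (IN point)
    sub_play_item_out_time: int       # playback zone end (OUT point)
    sync_play_item_id: int            # PlayItem on whose time axis playback syncs
    sync_start_pts_of_play_item: int  # time t1 on the Main Path time axis

# Figure 7: the Sub Path's auxiliary audio starts at t1 on PlayItem_id=0.
# The values here are hypothetical 90 kHz PTS ticks.
item = SubPlayItem("auxiliary_audio.clpi", 0, 90000, 0, 45000)
print(item.sync_play_item_id)            # 0: synchronized with PlayItem_id=0
print(item.sync_start_pts_of_play_item)  # 45000
```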
In other words, SubPlayItem() includes information for specifying the clip referenced by the Sub Path, information for specifying the playback zone of the Sub Path, and information for specifying the time at which the playback operation of the Sub Path starts on the time axis of the Main Path. Since the clip AV stream used by the Sub Path does not include non-sequential STC points, the user can refer to a clip AV audio stream different from the clip AV stream (main AV stream) referenced by the Main Path on the basis of the information included in SubPlayItem() (information for specifying the clip referenced by the Sub Path, information for specifying the playback zone of the Sub Path, and information for specifying the time at which the playback operation of the Sub Path starts on the time axis of the Main Path), and play that clip AV audio stream. As stated above, the PlayItem and the SubPlayItem individually handle clip AV stream files. The clip AV stream file handled by the PlayItem is different from the clip AV stream file handled by the SubPlayItem.
In a manner similar to the example shown in Figure 7, a subtitle stream playback path reproduced simultaneously with the Main Path can be indicated by using a Sub-Path. Figure 8 illustrates another example of the Main Path and another example of the Sub-Path. In Figure 8, an audio playback path reproduced asynchronously with the Main Path is indicated by using a Sub-Path. The main Clip AV stream file referred to by the Main Path of the PlayList is similar to that in Figure 7, and an explanation of it is thus omitted. The configuration shown in Figure 8 is used when, for example, the Main Path is used as a slideshow of still images and the audio path of the Sub-Path is used as the BGM (background music) of the Main Path. That is, the configuration shown in Figure 8 is used to allow the BGM to be played continuously while the user instructs a playback (player) apparatus to update the images of the slideshow. In Figure 8, PlayItem_id=0, 1, and 2 are arranged in the Main Path, and one SubPlayItem is arranged in the Sub-Path. The Sub-Path includes SubPlayItem_IN_time and SubPlayItem_OUT_time to specify the playback zone of the Sub-Path in the Clip (auxiliary audio stream). In the example of Figure 8, the Clip (auxiliary audio stream) is referred to by the SubPlayItem. When comparing Figure 8 with Figure 7, it can be seen that the SubPlayItem in Figure 8 does not include sync_PlayItem_id and sync_start_PTS_of_PlayItem. The reason for this is that, since the playback time of the AV stream (video data) referred to by the Main Path is not related to the audio playback time, it is not necessary to specify the time at which the Sub-Path starts its playback operation on the time axis of the Main Path. That is, it is sufficient to indicate that the AV stream referred to by the Main Path and the audio stream referred to by the Sub-Path are to be reproduced together.
It has been described that the playback time of the video stream included in the AV stream is different from that of the audio stream referred to by the Sub-Path. To put it more specifically, in the example shown in Figure 7, the playback time of the video stream is synchronous with that of the audio stream, and there is also a specific association (that is, the audio stream is associated with the video stream): while a predetermined frame in the video stream is being played, the corresponding sound is played. In contrast, in the example shown in Figure 8, although the playback time of the video stream is synchronous with that of the audio stream, no specific association is given, which means that while a predetermined frame in the video stream is being played, the corresponding sound is not necessarily played. Figure 9 illustrates another example of the Main Path and the Sub-Path. In Figure 9, the playback path of text subtitles reproduced simultaneously with the Main Path is indicated by using a Sub-Path. The main AV stream file referred to by the Main Path of the PlayList is similar to that shown in Figure 7, and an explanation of it is thus omitted. In this case, the text subtitles are defined either as a multiplexed stream of an MPEG-2 system or as a data file, which is not a multiplexed stream. The data file is a file that contains the subtitle text data (character code string) that is reproduced in synchronization with the video of the Main Path, together with the attributes of the text data. The attributes are information concerning the font type, font size, and font color used when the text data is rendered.
When comparing Figure 9 with Figure 7, it can be seen that the SubPlayItem can refer to the text-based subtitles (text subtitles) of SubClip_entry_id=0, 1, ..., N. More specifically, according to the structure shown in Figure 9, a plurality of text subtitle files can be referred to simultaneously by one SubPlayItem, and when the SubPlayItem is played, one of the plurality of text subtitle files is selected and reproduced. For example, among the text subtitles of a plurality of languages, one text subtitle file is selected and played. More specifically, one SubClip_entry_id is selected from SubClip_entry_id=0 to N (based on a user instruction), and the text-based subtitle referred to by the selected ID is reproduced. Not only text subtitle files, but also bitmap subtitle stream files, transport stream files, and various other data files can be applied to the example shown in Figure 9. Alternatively, data files that include character codes and information for rendering the character codes can also be applied. Figure 10 illustrates another example of the Main Path and the Sub-Path. In Figure 10, the playback path of an interactive graphics stream reproduced asynchronously with the Main Path by using a Sub-Path is indicated. By comparing Figure 10 with Figure 8, it can be seen that the interactive graphics streams of SubClip_entry_id=0, 1, ..., N can be referred to by the SubPlayItem. That is, according to the structure in Figure 10, one SubPlayItem can simultaneously refer to a plurality of interactive graphics stream files. When the SubPlayItem is played, one interactive graphics stream file is selected and reproduced from the plurality of interactive graphics stream files. More specifically, from SubClip_entry_id=0 to N, one SubClip_entry_id is selected (based on a user instruction), and the interactive graphics stream referred to by that ID is reproduced.
For example, based on the user's instruction, a language of the interactive graphics streams is selected, and the interactive graphics stream of the selected language is reproduced. The data structures (syntax) that implement the structure of the Main Path and the Sub-Path discussed with reference to Figures 6 to 10 are described in the following. Figure 11 illustrates the PlayList() syntax. The "length" is an unsigned 32-bit integer that indicates the number of bytes from immediately after the length field to the end of PlayList(), that is, a field indicating the number of bytes from reserved_for_future_use to the end of PlayList(). After the "length" follows a 16-bit reserved_for_future_use. The number_of_PlayItems is a 16-bit field that indicates the number of PlayItems contained in the PlayList. In the case of the example in Figure 6, the number of PlayItems is three, and numeric values are assigned to the PlayItems as PlayItem_id from 0 in the order in which PlayItem() appears in the PlayList. For example, PlayItem_id=0, 1, 2 are assigned, as shown in Figure 6, 8, or 10. The number_of_SubPaths is a 16-bit field that indicates the number of Sub-Paths (number of entries) contained in the PlayList. In the case of the example in Figure 6, the number of Sub-Paths is three, and numeric values are assigned to the Sub-Paths as SubPath_id from 0 in the order in which SubPath() appears in the PlayList. For example, SubPath_id=0, 1, 2 are assigned, as shown in Figure 6. Then, in the subsequent FOR statement, the PlayItems are referred to the same number of times as the number of PlayItems, and the Sub-Paths are referred to the same number of times as the number of Sub-Paths. As an alternative to the PlayList() syntax shown in Figure 11, the syntax shown in Figure 32 can be considered.
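The numbering convention just described, PlayItem_id and SubPath_id assigned from 0 in order of appearance in the PlayList, can be sketched as follows. The helper function and the item names are illustrative assumptions for this sketch, not part of the PlayList() syntax itself.

```python
def assign_ids(play_items, sub_paths):
    """Assign PlayItem_id and SubPath_id from 0 in the order in which
    each PlayItem() and SubPath() appears in the PlayList."""
    play_item_ids = {name: i for i, name in enumerate(play_items)}
    sub_path_ids = {name: i for i, name in enumerate(sub_paths)}
    return play_item_ids, sub_path_ids

# Example mirroring Figure 6: three PlayItems and three Sub-Paths.
pi_ids, sp_ids = assign_ids(["PlayItem_A", "PlayItem_B", "PlayItem_C"],
                            ["SubPath_X", "SubPath_Y", "SubPath_Z"])
print(pi_ids["PlayItem_C"], sp_ids["SubPath_X"])
```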
In Figure 11, the SubPath() data structure that stores the information concerning the Sub-Path is contained in PlayList(). In Figure 32, however, the SubPath() data structure is arranged independently of PlayList(). In PlayList() shown in Figure 32A, only the PlayItems of the Main Path are indicated, and in SubPaths() shown in Figure 32B, the Sub-Paths and the SubPlayItems are indicated. According to the data structure shown in Figure 32, SubPaths() can be stored in a file different from the file storing PlayList(). For example, the file that stores SubPaths() and a subtitle stream file or an audio stream file referred to by the Sub-Path can be downloaded from a network and can be played together with the Main Path stored in a recording medium. That is, the extensibility of the Sub-Path can be easily implemented. The file that stores PlayList() and the file that stores SubPaths() can be associated with each other, for example, by allowing part of the file names of the two files to be the same. Figure 12 illustrates the syntax of SubPath(). The "length" is an unsigned 32-bit integer that indicates the number of bytes from immediately after the length field to the end of SubPath(). After the "length" follows a 16-bit reserved_for_future_use. The SubPath_type is an 8-bit field that indicates the application type of the Sub-Path. The SubPath_type is used to indicate, for example, the type of Sub-Path, such as audio, bitmap subtitles, or text subtitles. That is, SubPath_type indicates the types of Sub-Paths shown in Figures 7 through 10. After SubPath_type follows a 15-bit reserved_for_future_use. The is_repeat_SubPath is a one-bit field that indicates the playback operation for the Sub-Path, and more specifically, indicates whether the Sub-Path is played repeatedly or only once while the Main Path is being played.
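The effect of is_repeat_SubPath can be illustrated with a small scheduling sketch. The function and the integer durations are illustrative assumptions; the actual field is a single bit in SubPath().

```python
def sub_path_play_count(main_duration, sub_duration, is_repeat_sub_path):
    """Return how many times the Sub-Path stream is started while the
    Main Path plays: repeatedly if the flag is set, otherwise once."""
    if not is_repeat_sub_path:
        return 1
    # Repeat the Sub-Path until the Main Path ends
    # (e.g. BGM looping under a slideshow, as in Figure 8).
    return -(-main_duration // sub_duration)  # ceiling division

print(sub_path_play_count(100, 30, True))
print(sub_path_play_count(100, 30, False))
```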
This field is used when, for example, the playback time of the stream contained in the Clip specified by the Sub-Path is different from that of the main AV stream, as shown in Figure 8 or 10. After is_repeat_SubPath follows an 8-bit reserved_for_future_use. The number_of_SubPlayItems is an 8-bit field that indicates the number of SubPlayItems (number of entries) contained in the Sub-Path. For example, the number of SubPlayItems of SubPath_id=0 in Figure 6 is one, and the number of SubPlayItems of SubPath_id=1 is two. In the subsequent FOR statement, the SubPlayItems are referred to the same number of times as the number of SubPlayItems. Figure 13 illustrates the syntax of SubPlayItem(i). The "length" is an unsigned 16-bit integer that indicates the number of bytes from immediately after the length field to the end of SubPlayItem(). In Figure 13, the syntax is divided into two portions: more specifically, a portion where the SubPlayItem refers to one Clip and a portion where the SubPlayItem refers to a plurality of Clips are shown. The portion where the SubPlayItem refers to one Clip is discussed first. The SubPlayItem() includes Clip_Information_file_name[0] to specify the Clip. The SubPlayItem() also includes Clip_codec_identifier[0] to specify the codec method for the Clip, reserved_for_future_use, is_multi_Clip_entries, which is a flag that indicates whether multiple Clips are registered, and ref_to_STC_id[0], which is information about the non-sequential points of the STC (non-sequential points of the system time base). If the is_multi_Clip_entries flag is set, the syntax of the portion where SubPlayItem() refers to a plurality of Clips is checked.
The SubPlayItem() also includes SubPlayItem_IN_time and SubPlayItem_OUT_time to specify the playback zone of the Sub-Path contained in the Clip, and sync_PlayItem_id and sync_start_PTS_of_PlayItem to specify the time at which the playback operation of the Sub-Path starts on the time axis of the Main Path. The sync_PlayItem_id and sync_start_PTS_of_PlayItem are used when the playback time of the main AV stream is the same as that of the stream contained in the file referred to by the Sub-Path, as shown in Figures 7 and 9, but they are not used in the cases shown in Figures 8 and 10 (when the playback time of the main AV stream is different from that of the stream contained in the file referred to by the Sub-Path). The SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id, and sync_start_PTS_of_PlayItem are used in common for the Clip referred to by the SubPlayItem. Next, the portion where the SubPlayItem refers to a plurality of Clips (if is_multi_Clip_entries==1b), more specifically the case where the SubPlayItem refers to a plurality of Clips as shown in Figure 9 or 10, is discussed. The num_of_Clip_entries indicates the number of Clips, and designates Clips having Clip_Information_file_name[subClip_entry_id]. That is, num_of_Clip_entries designates Clips, such as those having Clip_Information_file_name[1], Clip_Information_file_name[2], and so on, other than the one having Clip_Information_file_name[0]. The SubPlayItem() also includes Clip_codec_identifier[subClip_entry_id] to specify the codec method for the Clip, ref_to_STC_id[subClip_entry_id], which is information about the non-sequential points of the STC (non-sequential points of the system time base), and reserved_for_future_use.
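The two-portion structure of SubPlayItem() can be sketched as a small parsing routine: the per-Clip entries are read only when is_multi_Clip_entries is set, while the timing fields stay common to all Clips. The function and the dict of pre-parsed fields are illustrative assumptions, not the real bitstream syntax.

```python
def parse_sub_play_item(fields):
    """Sketch of the two-portion SubPlayItem() syntax in Figure 13.
    'fields' is a hypothetical pre-parsed dict, not a real bitstream."""
    clips = [fields["Clip_Information_file_name[0]"]]
    if fields["is_multi_Clip_entries"]:
        # Entries for subClip_entry_id = 1, 2, ... follow entry [0].
        clips += fields["extra_clip_names"]
    return {
        "clips": clips,
        # Timing fields are used in common for all Clips of the SubPlayItem.
        "in_time": fields["SubPlayItem_IN_time"],
        "out_time": fields["SubPlayItem_OUT_time"],
    }

item = parse_sub_play_item({
    "Clip_Information_file_name[0]": "subtitle_en.txtst",  # hypothetical names
    "is_multi_Clip_entries": True,
    "extra_clip_names": ["subtitle_ja.txtst", "subtitle_ko.txtst"],
    "SubPlayItem_IN_time": 0,
    "SubPlayItem_OUT_time": 180000,
})
print(len(item["clips"]))
```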
The SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id, and sync_start_PTS_of_PlayItem are used in common for the Clips referred to by the SubPlayItem. In the example of Figure 9, SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id, and sync_start_PTS_of_PlayItem are used in common for SubClip_entry_id=0 to N. The text-based subtitle for the selected subClip_entry_id is played based on the SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id, and sync_start_PTS_of_PlayItem. Numeric values are assigned sequentially to subClip_entry_id from 1 in the order in which Clip_Information_file_name[subClip_entry_id] appears in the SubPlayItem. The subClip_entry_id of Clip_Information_file_name[0] is 0. Figure 14 illustrates the syntax of PlayItem(). The "length" is an unsigned 16-bit integer that indicates the number of bytes from immediately after the length field to the end of PlayItem(). Clip_Information_file_name[0] is a field to specify the Clip referred to by PlayItem(). In the example of Figure 7, the main AV stream file is referred to by Clip_Information_file_name[0]. The PlayItem() also includes Clip_codec_identifier[0], which specifies the codec method for the Clip, reserved_for_future_use, is_multi_angle, connection_condition, and ref_to_STC_id[0], which is information about the non-sequential points of the STC (non-sequential points of the system time base). The PlayItem() also includes IN_time and OUT_time to specify the playback zone of the PlayItem in the Clip. In the example in Figure 7, IN_time and OUT_time specify the playback zone of the main AV stream file. The PlayItem() also includes UO_mask_table(), PlayItem_random_access_mode, and still_mode. A description of the case where is_multi_angle indicates a plurality of angles is not given here, since such a case does not directly relate to the present invention.
The STN_table() in PlayItem() provides a mechanism to allow a user, when the target PlayItem and at least one Sub-Path to be played in relation to the target PlayItem are provided, to select from the streams contained in the Clip referred to by the PlayItem and in the Clips referred to by the at least one Sub-Path when the user switches the sound or subtitles. Figure 15 illustrates the syntax of STN_table(). The STN_table() is set as an attribute of the PlayItem. The "length" is an unsigned 16-bit integer that indicates the number of bytes from immediately after the length field to the end of STN_table(). After the "length" follows a 16-bit reserved_for_future_use. The number_of_video_stream_entries indicates the number of streams provided with video_stream_id entered (registered) in STN_table(). The video_stream_id is information to identify the video streams. The video_stream_number is the video stream number that is visible to the user when switching video. The number_of_audio_stream_entries indicates the number of streams provided with audio_stream_id entered (registered) in STN_table(). The audio_stream_id is information to identify the audio streams. The audio_stream_number is the audio stream number that is visible to the user when switching sound. The number_of_PG_txtST_stream_entries indicates the number of streams provided with PG_txtST_stream_id entered in STN_table(). In STN_table() shown in Figure 15, the streams (PG, presentation graphics streams) in which bitmap subtitles, such as DVD sub-pictures, are run-length encoded, and the text subtitle files (txtST), are entered. The PG_txtST_stream_id is information to identify the subtitle streams, and PG_txtST_stream_number is the subtitle stream number (text subtitle stream number) that is visible to the user when switching subtitles. The number_of_IG_stream_entries indicates the number of streams provided with IG_stream_id entered in STN_table().
In STN_table() shown in Figure 15, the interactive graphics streams are entered. The IG_stream_id is information to identify the interactive graphics streams. The IG_stream_number is the graphics stream number that is visible to the user when switching graphics. The syntax of stream_entry() is discussed in the following with reference to Figure 16. The "type" is an 8-bit field that indicates the type of information required to uniquely specify the stream provided with the stream number described above. If type=1, a packet ID (PID) is designated to specify one elementary stream of a plurality of elementary streams multiplexed in the Clip (Main Clip) referred to by the PlayItem. The ref_to_stream_PID_of_mainClip indicates this PID. That is, if type=1, the stream can be determined simply by specifying the PID in the main AV stream file. If type=2, which is used when the Sub-Path refers to one Clip in which only one elementary stream is multiplexed, the SubPath_id of the Sub-Path is designated to specify that elementary stream. The ref_to_SubPath_id indicates this SubPath_id. Type=2 is used when only one audio stream is referred to by the Sub-Path, as shown in Figure 8, that is, when the SubPlayItem contains only one Clip. If type=3, which is used when the Sub-Path refers to a plurality of Clips at the same time and only one elementary stream is multiplexed in each Clip, the SubPath_id and Clip id of the Sub-Path are designated to specify the elementary stream of one Clip (Sub-Clip) referred to by the Sub-Path. The ref_to_SubPath_id indicates this SubPath_id, and ref_to_subClip_entry_id indicates this Clip id. Type=3 is used when the Sub-Path refers to a plurality of Clips (text-based subtitles), as shown in Figure 9, that is, when the SubPlayItem contains a plurality of Clips.
If type=4, which is used when the Sub-Path refers to a plurality of Clips at the same time and a plurality of elementary streams are multiplexed in each Clip, the SubPath_id, Clip id, and packet ID (PID) of the Sub-Path are designated to specify one of the plurality of elementary streams of a Clip (Sub-Clip) referred to by the Sub-Path. The ref_to_SubPath_id indicates this SubPath_id, ref_to_subClip_entry_id indicates this Clip id, and ref_to_stream_PID_of_subClip indicates this PID. Type=4 is used when a plurality of Clips are referred to by the SubPlayItem and a plurality of elementary streams are referred to by each Clip. When the PlayItem and at least one Sub-Path played in relation to the PlayItem are provided, the use of the types (type=1 to type=4) makes it possible to specify an elementary stream of the Clip referred to by the PlayItem and of the Clips referred to by the at least one Sub-Path. It should be noted that type=1 indicates the Clip (Main Clip) referred to by the Main Path, while type=2 to 4 indicate the Clips (Sub-Clips) referred to by the Sub-Path. In Figure 16, four types are provided to specify the elementary streams. However, only two types may be provided: more specifically, a type (type=1 in Figure 16) to specify the elementary stream multiplexed in the Main Clip and a type (type=2 to 4 in Figure 16) to specify an elementary stream of a Clip used by the Sub-Path. The stream_entry() syntax of such a case is described in the following with reference to Figure 33. In Figure 33, the "type" is an 8-bit field that indicates the type of information required to uniquely specify the stream provided with the stream number described above. More specifically, the 8-bit type field is used to designate the type of database used to specify the elementary stream referred to by the stream number of stream_entry().
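The four stream_entry() types just described can be summarized as a dispatch on which reference fields are carried for each type. The function is an illustrative sketch of the table in Figure 16, not the bitstream syntax itself.

```python
def stream_entry_refs(entry_type):
    """Which reference fields a stream_entry() carries for each type
    (summarizing Figure 16, as described above)."""
    refs = {
        1: ["ref_to_stream_PID_of_mainClip"],                 # PID in the Main Clip
        2: ["ref_to_SubPath_id"],                             # one Clip, one ES
        3: ["ref_to_SubPath_id", "ref_to_subClip_entry_id"],  # many Clips, one ES each
        4: ["ref_to_SubPath_id", "ref_to_subClip_entry_id",
            "ref_to_stream_PID_of_subClip"],                  # many Clips, many ES each
    }
    return refs[entry_type]

print(len(stream_entry_refs(1)), len(stream_entry_refs(4)))
```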
In the example shown in Figure 33, the type is divided into two types, as shown in Figure 34. In Figure 34, type=1 is the type (type=1 in Figure 16) to specify the elementary stream multiplexed in the Main Clip, and type=2 is the type (types=2 to 4 in Figure 16) to specify an elementary stream of a Clip used by the Sub-Path. Type=1 in Figure 33 is used specifically for an elementary stream of the Clip (Main Clip) used by the PlayItem. More specifically, when type=1, the packet ID (PID) is designated to specify one of a plurality of elementary streams multiplexed in the Clip (Main Clip) referred to by the PlayItem. The ref_to_stream_PID_of_mainClip indicates this PID. In other words, when type=1, the stream can be determined simply by specifying the PID in the main AV stream file. Type=2 in Figure 33 is used to specify an elementary stream of a Clip used by the Sub-Path that accompanies the PlayItem. In the case of type=2, for example, when the Sub-Path refers to one Clip in which only one elementary stream is multiplexed (type=2 in Figure 16), or when the Sub-Path refers to a plurality of Clips at the same time and only one elementary stream is multiplexed in each Clip (type=3 in Figure 16), or when the Sub-Path refers to a plurality of Clips at the same time and a plurality of elementary streams are multiplexed in each Clip (type=4 in Figure 16), the SubPath_id, Clip id, and packet ID (PID) are designated to specify the elementary stream. Although in Figure 33, when type=2, three IDs, namely the SubPath_id, the Clip id, and the packet ID (PID), can be specified, it is not necessary to specify all three IDs. For example, when the Sub-Path refers to one Clip in which only one elementary stream is multiplexed (type=2 in Figure 16), it is sufficient if the SubPath_id of the Sub-Path is designated to specify the elementary stream.
When the Sub-Path refers to a plurality of Clips at the same time and only one elementary stream is multiplexed in each Clip (type=3 in Figure 16), it is sufficient if the SubPath_id and Clip id of the Sub-Path are designated to specify the elementary stream of the Clip (Sub-Clip) referred to by the Sub-Path. When the Sub-Path refers to a plurality of Clips at the same time and a plurality of elementary streams are multiplexed in each Clip (type=4 in Figure 16), it is necessary that the SubPath_id, Clip id, and packet ID (PID) of the Sub-Path be designated to specify one of the plurality of elementary streams of a Clip (Sub-Clip) referred to by the Sub-Path. That is, when type=2 in Figure 33 or 34, among the SubPath_id, Clip id, and packet ID (PID), it is sufficient if at least the SubPath_id is designated. When the PlayItem and at least one Sub-Path played in relation to the PlayItem are provided, the use of the types (type=1 and 2) makes it possible, as shown in Figures 33 and 34, to specify an elementary stream of the Clip referred to by the PlayItem and of the Clips referred to by the at least one Sub-Path. Returning to the description of STN_table() in Figure 15, in the FOR loop of the video stream ID (video_stream_id), video_stream_id is assigned from 0 to each video elementary stream specified by a stream_entry(). Instead of the video stream ID (video_stream_id), the video stream number (video_stream_number) may be used, in which case video_stream_number is assigned from 1. That is, the number obtained by adding one to the video_stream_id is the video_stream_number. The video stream number is assigned from 1 since video_stream_number is the video stream number that is visible to the user when switching video.
Similarly, in the FOR loop of the audio stream ID (audio_stream_id), audio_stream_id is assigned from 0 to each audio elementary stream specified by a stream_entry(). As with the video stream, instead of the audio stream ID (audio_stream_id), the audio stream number (audio_stream_number) may be used, in which case audio_stream_number is assigned from 1. That is, the number obtained by adding one to the audio_stream_id is the audio_stream_number. The audio stream number is assigned from 1 since audio_stream_number is the audio stream number that is visible to the user when switching sound. Similarly, in the FOR loop of the subtitle stream ID (PG_txtST_stream_id), PG_txtST_stream_id is assigned from 0 to each bitmap subtitle or text subtitle elementary stream specified by a stream_entry(). As with the video stream, instead of the subtitle stream ID (PG_txtST_stream_id), the subtitle stream number (PG_txtST_stream_number) may be used, in which case PG_txtST_stream_number is assigned from 1. That is, the number obtained by adding one to the PG_txtST_stream_id is the PG_txtST_stream_number. The subtitle stream number is assigned from 1 since PG_txtST_stream_number is the text subtitle stream number that is visible to the user when switching subtitles. Similarly, in the FOR loop of the graphics stream ID (IG_stream_id), IG_stream_id is assigned from 0 to each interactive graphics elementary stream specified by a stream_entry(). As with the video stream, instead of the graphics stream ID (IG_stream_id), the graphics stream number (IG_stream_number) may be used, in which case IG_stream_number is assigned from 1. That is, the number obtained by adding one to the IG_stream_id is the IG_stream_number. The IG_stream_number is assigned from 1 since IG_stream_number is the graphics stream number that is visible to the user when switching graphics. The stream_attribute() in the STN_table() shown in Figure 15 is as follows.
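The relation between the internal stream IDs and the user-visible stream numbers described above is simply an offset of one, the same for video, audio, subtitle, and graphics streams. A minimal sketch:

```python
def stream_number(stream_id):
    """Stream numbers shown to the user start from 1, while the internal
    IDs assigned in the STN_table() FOR loops start from 0."""
    return stream_id + 1

# audio_stream_id 0, 1, 2 map to audio_stream_number 1, 2, 3
print([stream_number(i) for i in range(3)])
```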
The stream_attribute() in the FOR loop of the video stream ID (video_stream_id) provides stream attribute information concerning the video elementary stream specified by each stream_entry(). That is, in stream_attribute(), the stream attribute information concerning the video elementary stream specified by each stream_entry() is indicated. Similarly, the stream_attribute() in the FOR loop of the audio stream ID (audio_stream_id) provides stream attribute information concerning at least one audio elementary stream specified by each stream_entry(). That is, in stream_attribute(), the stream attribute information concerning at least one audio elementary stream specified by each stream_entry() is indicated. Similarly, the stream_attribute() in the FOR loop of the subtitle stream ID (PG_txtST_stream_id) provides stream attribute information concerning the bitmap subtitle elementary stream or text subtitle elementary stream specified by each stream_entry(). That is, in stream_attribute(), the stream attribute information concerning the bitmap subtitle elementary stream specified by each stream_entry() is indicated. Similarly, the stream_attribute() in the FOR loop of the graphics stream ID (IG_stream_id) provides stream attribute information concerning the interactive graphics elementary stream specified by each stream_entry(). That is, in stream_attribute(), the stream attribute information concerning the interactive graphics elementary stream specified by each stream_entry() is indicated. The syntax of stream_attribute() is discussed in the following with reference to Figure 17. The "length" is an unsigned 16-bit integer that indicates the number of bytes from immediately after the length field to the end of stream_attribute(). The stream_coding_type indicates the coding type of the elementary stream, as shown in Figure 18.
The coding types of the elementary streams include MPEG-2 video stream, HDMV LPCM audio, Dolby AC-3 audio, dts audio, presentation graphics stream, interactive graphics stream, and text subtitle stream. The video_format indicates the video format of a video elementary stream, as shown in Figure 19. The video formats of the video elementary streams include 480i, 576i, 480p, 1080i, 720p, and 1080p. The frame_rate indicates the frame rate of a video elementary stream, as shown in Figure 20. The frame rates of the video elementary streams include 24000/1001, 24, 25, 30000/1001, 50, and 60000/1001. The aspect_ratio indicates the aspect ratio of a video elementary stream, as shown in Figure 21. The aspect ratios of the video elementary streams include the 4:3 display aspect ratio and the 16:9 display aspect ratio. The audio_presentation_type indicates the presentation type of an audio elementary stream, as shown in Figure 22. The presentation types of the audio elementary streams include single mono channel, dual mono channel, stereo (2-channel), and multi-channel. The sampling_frequency indicates the sampling frequency of an audio elementary stream, as shown in Figure 23. The sampling frequencies of the audio elementary streams include 48 kHz and 96 kHz. The audio_language_code indicates the language code (for example, Japanese, Korean, or Chinese) of an audio elementary stream. The PG_language_code indicates the language code (for example, Japanese, Korean, or Chinese) of a bitmap subtitle elementary stream. The IG_language_code indicates the language code (for example, Japanese, Korean, or Chinese) of an interactive graphics elementary stream. The textST_language_code indicates the language code (for example, Japanese, Korean, or Chinese) of a text subtitle elementary stream. The character_code indicates the character code of a text subtitle elementary stream, as shown in Figure 24.
The character codes of the text subtitle elementary streams include Unicode V1.1 (ISO 10646-1), Shift JIS (Japanese), KSC 5601-1987 including KSC 5653 for Roman characters (Korean), GB18030-2000 (Chinese), GB2312 (Chinese), and BIG5 (Chinese). The stream_attribute() syntax shown in Figure 17 is specifically described in the following with reference to Figures 17 and 18 to 24. If the coding type (stream_coding_type in Figure 17) of the elementary stream is the MPEG-2 video stream (Figure 18), stream_attribute() includes the video format (Figure 19), the frame rate (Figure 20), and the aspect ratio (Figure 21) of the elementary stream. If the coding type (stream_coding_type in Figure 17) of the elementary stream is HDMV LPCM audio, Dolby AC-3 audio, or dts audio (Figure 18), stream_attribute() includes the audio presentation type (Figure 22), the sampling frequency (Figure 23), and the language code of the audio elementary stream. If the coding type (stream_coding_type in Figure 17) of the elementary stream is the presentation graphics stream (Figure 18), stream_attribute() includes the language code of the bitmap subtitle elementary stream. If the coding type (stream_coding_type in Figure 17) of the elementary stream is the interactive graphics stream (Figure 18), stream_attribute() includes the language code of the interactive graphics elementary stream. If the coding type (stream_coding_type in Figure 17) of the elementary stream is the text subtitle stream (Figure 18), stream_attribute() includes the character code (Figure 24) and the language code of the text subtitle elementary stream. The attribute information is not restricted to the types described above.
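The per-coding-type contents of stream_attribute() described above can be summarized in a small lookup. The function and the string labels for the coding types are illustrative assumptions; the recorded format uses the numeric codes of Figure 18.

```python
def attribute_fields(stream_coding_type):
    """Which stream_attribute() fields accompany each coding type,
    summarizing the description of Figures 17 to 24 above."""
    if stream_coding_type == "MPEG-2 video":
        return ["video_format", "frame_rate", "aspect_ratio"]
    if stream_coding_type in ("HDMV LPCM audio", "Dolby AC-3 audio", "dts audio"):
        return ["audio_presentation_type", "sampling_frequency",
                "audio_language_code"]
    if stream_coding_type == "presentation graphics":
        return ["PG_language_code"]
    if stream_coding_type == "interactive graphics":
        return ["IG_language_code"]
    if stream_coding_type == "text subtitle":
        return ["character_code", "textST_language_code"]
    raise ValueError("unknown coding type")

print(attribute_fields("text subtitle"))
```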
In this manner, if a PlayItem and at least one Sub-Path to be played back in association with the PlayItem are provided, then, by referring to the clip referred to by the PlayItem and the clips referred to by the at least one Sub-Path, the attribute information concerning an elementary stream specified by stream_entry() can be defined by stream_attribute(). By checking the attribute information (stream_attribute()), the playback apparatus can determine whether it has a function to reproduce the corresponding elementary stream. Also by checking the attribute information, the playback apparatus can select elementary streams according to the initial information concerning the language set in the playback apparatus. It is now assumed, for example, that the playback apparatus has a function to reproduce bitmap subtitle elementary streams but no function to reproduce text subtitle elementary streams. In this case, in response to an instruction from the user to change languages, the playback apparatus sequentially selects only the bitmap subtitle elementary streams from the FOR loop of the subtitle stream ID (PG_txtST_stream_id) and reproduces the selected elementary streams. If the initial information concerning the language set in the playback apparatus is Japanese, then, in response to an instruction from the user to change the sound, the playback apparatus sequentially selects only the audio elementary streams whose language code is Japanese from the FOR loop of the audio stream ID (audio_stream_id) and reproduces the selected elementary streams. As described above, by providing STN_table() in PlayItem(), if a PlayItem and at least one Sub-Path to be played back in association with the PlayItem are provided, the user can select a stream to be reproduced from the clip referred to by the PlayItem and the clips referred to by the at least one Sub-Path when the sound or subtitles are changed.
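The selection rule described above (keep only streams the apparatus can decode, optionally restricted to the configured initial language) can be sketched as a simple filter. The data shapes and the `can_decode` predicate are assumptions for illustration, not structures defined by the patent.

```python
# Hedged sketch of the selection rule described above: keep only streams the
# player can decode, optionally restricted to the configured initial language.
def select_playable(streams, can_decode, initial_language=None):
    """streams: list of dicts with 'coding_type' and an optional language code."""
    chosen = []
    for s in streams:
        if not can_decode(s["coding_type"]):
            continue  # e.g. skip text subtitles on a bitmap-only player
        if initial_language and s.get("language") != initial_language:
            continue  # honor the language set in the playback apparatus
        chosen.append(s)
    return chosen

subtitle_loop = [
    {"coding_type": "PG", "language": "ja"},
    {"coding_type": "textST", "language": "en"},
    {"coding_type": "PG", "language": "en"},
]
# A player without a text-subtitle decoding function:
bitmap_only = lambda t: t != "textST"
print(select_playable(subtitle_loop, bitmap_only))
# keeps only the two bitmap (PG) subtitle streams
```

With `initial_language="ja"`, the same filter would narrow the result to the Japanese bitmap subtitle stream, matching the language example in the text.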
In this manner, interactive operations can be performed on streams or data files other than a main AV stream being played back. Since a PlayList includes a plurality of Sub-Paths and each Sub-Path refers to a SubPlayItem, AV streams having high extensibility and high flexibility are implemented. That is, SubPlayItems can be added later. For example, if a PlayList including a clip AV stream file referred to by the Main Path is replaced by a PlayList including that clip AV stream file and a new Sub-Path, the user can refer, on the basis of the new PlayList, not only to the clip AV stream file referred to by the Main Path, but also to clip AV stream files different from the clip AV stream file referred to by the Main Path. In this manner, AV streams have high extensibility. A playback apparatus to which the present invention is applied is described below. Figure 25 is a block diagram illustrating an example of the configuration of the playback apparatus 20 to which the present invention is applied. The playback apparatus 20 is a playback apparatus for playing back the PlayList including the Main Path and the Sub-Path described above. The playback apparatus 20 includes a storage unit 31, a switch 32, an AV decoder 33, and a controller 34.
In the example shown in Figure 25, the controller 34 first reads a PlayList file from the storage unit 31, and then reads an AV stream or AV data from a recording medium, such as an HDD, Blu-ray Disc, or DVD, via the storage unit 31 on the basis of the information concerning the PlayList file. The user can give an instruction to switch the sound or subtitles to the controller 34 by using a user interface. The controller 34 reads the initial information concerning the language set in the playback apparatus 20 from a storage unit (not shown). The PlayList file includes not only information concerning the Main Path and the Sub-Path, but also STN_table(). The controller 34 reads, from a recording medium via the storage unit 31, a main clip AV stream file (hereinafter referred to as a "main clip") referred to by a PlayItem contained in the PlayList file, a sub-clip AV stream file (hereinafter referred to as a "sub-clip") referred to by a SubPlayItem, and text subtitle data referred to by the SubPlayItem. The controller 34 controls the playback apparatus 20 to select and reproduce elementary streams according to the playback function of the playback apparatus 20, or to select and reproduce elementary streams according to the initial information concerning the language set in the playback apparatus 20. The AV decoder 33 includes buffers 51 to 54, PID filters 55 and 56, switches 57 to 59, a background decoder 71, an MPEG (Moving Picture Experts Group) 2 video decoder 72, a presentation graphics decoder 73, an interactive graphics decoder 74, an audio decoder 75, a Text-ST composition 76, a switch 77, a background plane generator 91, a video plane generator 92, a presentation graphics plane generator 93, an interactive graphics plane generator 94, a buffer 95, a video data processor 96, and an audio data processor 97.
The file data read by the controller 34 is demodulated by a demodulator, and the demodulated multiplexed streams are then subjected to error correction by an ECC decoder. The switch 32 then divides the error-corrected multiplexed streams according to the stream types and supplies the divided streams to the corresponding buffers 51 to 54 under the control of the controller 34. More specifically, under the control of the controller 34, the switch 32 supplies the background image data to the buffer 51, the main clip data to the buffer 52, the sub-clip data to the buffer 53, and the Text-ST data to the buffer 54. The buffers 51 to 54 then store the background image data, the main clip data, the sub-clip data, and the Text-ST data, respectively. The main clip is a stream (for example, a transport stream) in which at least one stream among video, audio, bitmap subtitle (presentation graphics stream), and interactive graphics streams is multiplexed together with a video stream. The sub-clip is a stream in which at least one stream among audio, bitmap subtitle (presentation graphics stream), and interactive graphics streams is multiplexed. The text subtitle data (Text-ST) file may be a multiplexed stream, such as a transport stream, but this is not essential. When the main clip, the sub-clip, and the text subtitle data are read from the storage unit 31 (recording medium), they can be read alternately in a time-division manner. Alternatively, the sub-clip or the text subtitle data may be preloaded entirely into the corresponding buffer (buffer 53 or 54) before the main clip is read. The playback apparatus 20 reads such file data from a recording medium via the storage unit 31 and plays back video, bitmap subtitles, interactive graphics, and audio.
More specifically, the stream data read from the buffer 52, which serves as the main clip read buffer, is output to the PID (packet ID) filter 55, which is disposed subsequent to the buffer 52, at a predetermined time. The PID filter 55 allocates the streams contained in the main clip to the corresponding elementary stream decoders, which are disposed subsequent to the PID filter 55, according to the PIDs (packet IDs). More specifically, the PID filter 55 supplies the video streams to the MPEG2 video decoder 72, the presentation graphics streams to the switch 57, which supplies them to the presentation graphics decoder 73, the interactive graphics streams to the switch 58, which supplies them to the interactive graphics decoder 74, and the audio streams to the switch 59, which supplies them to the audio decoder 75. The presentation graphics streams carry, for example, bitmap subtitle data, and the Text-ST data carries text subtitle data. The stream data read from the buffer 53, which serves as the sub-clip read buffer, is output to the PID (packet ID) filter 56, which is disposed subsequent to the buffer 53, at a predetermined time. The PID filter 56 allocates the streams contained in the sub-clip to the corresponding elementary stream decoders, which are disposed subsequent to the PID filter 56, according to the PIDs (packet IDs). More specifically, the PID filter 56 supplies the presentation graphics streams to the switch 57, which supplies them to the presentation graphics decoder 73, the interactive graphics streams to the switch 58, which supplies them to the interactive graphics decoder 74, and the audio streams to the switch 59, which supplies them to the audio decoder 75. The data read from the buffer 51, which serves as the background image data buffer, is supplied to the background decoder 71 at a predetermined time.
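The PID-based allocation performed by the PID filters 55 and 56 can be sketched as a routing table keyed on each packet's PID. The specific PID values and destination names below are illustrative assumptions, not values taken from the patent.

```python
# Minimal sketch of PID-based routing performed by the PID filters 55 and 56.
# The PID values and the PID->type mapping below are illustrative only.
ROUTE_BY_TYPE = {
    "video": "mpeg2_video_decoder",          # decoder 72
    "presentation_graphics": "switch_57",    # then PG decoder 73
    "interactive_graphics": "switch_58",     # then IG decoder 74
    "audio": "switch_59",                    # then audio decoder 75
}

def demux(packets, pid_to_type):
    """Group transport-stream packets by destination, keyed on each PID."""
    routed = {}
    for pid, payload in packets:
        dest = ROUTE_BY_TYPE[pid_to_type[pid]]
        routed.setdefault(dest, []).append(payload)
    return routed

pid_map = {0x1011: "video", 0x1100: "audio", 0x1200: "presentation_graphics"}
pkts = [(0x1011, b"v0"), (0x1100, b"a0"), (0x1011, b"v1")]
print(demux(pkts, pid_map))
# {'mpeg2_video_decoder': [b'v0', b'v1'], 'switch_59': [b'a0']}
```

The two filters run the same routing logic; they differ only in whether the input buffer holds main clip or sub-clip data, which is why their outputs meet at the switches 57 to 59.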
The background decoder 71 decodes the background image data, and then supplies the decoded data to the background plane generator 91. The video streams allocated by the PID filter 55 are supplied to the video decoder 72, which is disposed subsequent to the PID filter 55. The video decoder 72 decodes the video streams and supplies the decoded video streams to the video plane generator 92. The switch 57 selects one of the presentation graphics streams contained in the main clip supplied from the PID filter 55 and the presentation graphics streams contained in the sub-clip supplied from the PID filter 56, and supplies the selected presentation graphics streams to the presentation graphics decoder 73, which is disposed subsequent to the switch 57. The presentation graphics decoder 73 decodes the presentation graphics streams and supplies them to the switch 77, which in turn supplies them to the presentation graphics plane generator 93. The switch 58 selects one of the interactive graphics streams contained in the main clip supplied from the PID filter 55 and the interactive graphics streams contained in the sub-clip, and supplies the selected interactive graphics streams to the interactive graphics decoder 74, which is disposed subsequent to the switch 58. That is, the interactive graphics streams simultaneously input into the interactive graphics decoder 74 are streams separated from either the main clip or the sub-clip. The interactive graphics decoder 74 decodes the interactive graphics streams and supplies the decoded streams to the interactive graphics plane generator 94. The switch 59 selects one of the audio streams contained in the main clip supplied from the PID filter 55 and the audio streams contained in the sub-clip, and supplies the selected audio streams to the audio decoder 75, which is disposed subsequent to the switch 59.
That is, the audio streams simultaneously input into the audio decoder 75 are streams separated from either the main clip or the sub-clip. The audio decoder 75 decodes the audio streams and supplies the decoded audio streams to the audio data processor 97. The sound data selected by the switch 32 is supplied to the buffer 95 and stored therein. The buffer 95 supplies the sound data to the audio data processor 97 at a predetermined time. The sound data is, for example, effect sound that can be selected from a menu. The data read from the buffer 54, which serves as the text subtitle read buffer, is output to the text subtitle composition (decoder) 76, which is disposed subsequent to the buffer 54, at a predetermined time. The text subtitle composition 76 decodes the Text-ST data and supplies the decoded data to the switch 77. The switch 77 selects one of the presentation graphics streams decoded by the presentation graphics decoder 73 and the Text-ST (text subtitle data), and supplies the selected data to the presentation graphics plane generator 93. That is, the subtitle images simultaneously supplied to the presentation graphics plane generator 93 are those output from either the presentation graphics decoder 73 or the text subtitle (Text-ST) composition 76. The presentation graphics streams simultaneously input into the presentation graphics decoder 73 are streams separated from either the main clip or the sub-clip (selected by the switch 57). Accordingly, the subtitle images simultaneously input into the presentation graphics plane generator 93 are presentation graphics streams from the main clip, presentation graphics streams from a sub-clip, or text subtitle data. The background plane generator 91 generates a background plane, which serves as, for example, a wallpaper image when a video image is displayed with its size reduced, on the basis of the background image data
supplied from the background decoder 71, and supplies the generated background plane to the video data processor 96. The video plane generator 92 generates a video plane on the basis of the video data supplied from the MPEG2 video decoder 72, and supplies the generated video plane to the video data processor 96. The presentation graphics plane generator 93 generates a presentation graphics plane, which serves as, for example, a rendered image, on the basis of the data (presentation graphics stream or text subtitle data) selected by the switch 77, and supplies the generated presentation graphics plane to the video data processor 96. The interactive graphics plane generator 94 generates an interactive graphics plane on the basis of the interactive graphics stream data supplied from the interactive graphics decoder 74, and supplies the generated interactive graphics plane to the video data processor 96. The video data processor 96 combines the background plane from the background plane generator 91, the video plane from the video plane generator 92, the presentation graphics plane from the presentation graphics plane generator 93, and the interactive graphics plane from the interactive graphics plane generator 94, and outputs the combined plane as a video signal. The audio data processor 97 combines the audio data decoded by the audio decoder 75 with the sound data from the buffer 95, and outputs the combined data as an audio signal. The switches 57 to 59 and the switch 77 select the data according to the selection made by the user through the user interface or according to the type of file containing the target data. For example, if audio streams are contained only in sub-clip AV stream files, the switch 59 switches the selection to the sub-clip side. The playback processing performed by the playback apparatus 20 shown in Figure 25 is described below with reference to the flowcharts in Figures 26 to 28.
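The plane mixing performed by the video data processor 96 can be sketched as a back-to-front overlay of the four planes. The per-pixel transparency rule below (an upper plane's pixel wins when present, `None` meaning transparent) is an assumption for illustration; the patent does not specify the blending details.

```python
# Hedged sketch of the plane mixing done by the video data processor 96:
# planes are stacked back to front; the transparency rule is assumed.
def compose_planes(background, video, presentation, interactive):
    """Overlay planes in fixed order; None in an upper plane means 'show below'."""
    frame = []
    for layers in zip(background, video, presentation, interactive):
        pixel = layers[0]
        for upper in layers[1:]:
            if upper is not None:  # an opaque pixel of an upper plane wins
                pixel = upper
        frame.append(pixel)
    return frame

# 4-pixel example: a subtitle (PG) covers pixel 2, a menu (IG) covers pixel 3.
bg  = ["bg", "bg", "bg", "bg"]
vid = ["v", "v", "v", "v"]
pg  = [None, None, "sub", None]
ig  = [None, None, None, "menu"]
print(compose_planes(bg, vid, pg, ig))  # ['v', 'v', 'sub', 'menu']
```

The fixed stacking order (background, video, presentation graphics, interactive graphics) reflects the order in which the generators 91 to 94 feed the processor 96.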
This processing is started when an instruction to play back a predetermined AV stream is given by the user through the user interface. In step S11, the controller 34 reads a PlayList file recorded on a recording medium or on an HDD (Hard Disk Drive) (not shown) via the storage unit 31. For example, the PlayList file discussed with reference to Figure 11 is read. In step S12, the controller 34 reads a main clip, a sub-clip, and text subtitle data (Text-ST data). More specifically, the controller 34 reads the corresponding main clip on the basis of the PlayItem contained in the PlayList discussed with reference to Figure 11. The controller 34 also reads a sub-clip and text subtitle data on the basis of the SubPlayItem discussed with reference to Figures 12 and 13, which is referred to by the Sub-Path contained in the PlayList. In step S13, the controller 34 controls the switch 32 to supply the read data (main clip, sub-clip, and text subtitle data) to the corresponding buffers 51 to 54. More specifically, the controller 34 controls the switch 32 to supply the background image data to the buffer 51, the main clip data to the buffer 52, the sub-clip data to the buffer 53, and the Text-ST data to the buffer 54.
In step S14, the switch 32 is switched under the control of the controller 34. The background image data is then supplied to the buffer 51, the main clip data is supplied to the buffer 52, the sub-clip data is supplied to the buffer 53, and the text subtitle data is supplied to the buffer 54. In step S15, the buffers 51 to 54 store the supplied data therein. More specifically, the buffer 51 stores the background image data, the buffer 52 stores the main clip data, the buffer 53 stores the sub-clip data, and the buffer 54 stores the Text-ST data. In step S16, the buffer 51 outputs the background image data to the background decoder 71. In step S17, the buffer 52 outputs the main clip stream data to the PID filter 55. In step S18, the PID filter 55 allocates the elementary streams to the corresponding elementary stream decoders on the basis of the PIDs attached to the TS packets forming the main clip AV stream file. More specifically, the PID filter 55 supplies the video streams to the MPEG2 video decoder 72, the presentation graphics streams to the switch 57, which supplies them to the presentation graphics decoder 73, the interactive graphics streams to the switch 58, which supplies them to the interactive graphics decoder 74, and the audio streams to the switch 59, which supplies them to the audio decoder 75. That is, the video streams, the presentation graphics streams, the interactive graphics streams, and the audio streams are assigned different PIDs. In step S19, the buffer 53 outputs the sub-clip stream data to the PID filter 56.
In step S20, the PID filter 56 allocates the elementary streams to the corresponding decoders on the basis of the PIDs. More specifically, the PID filter 56 supplies the presentation graphics streams to the switch 57, which supplies them to the presentation graphics decoder 73, the interactive graphics streams to the switch 58, which supplies them to the interactive graphics decoder 74, and the audio streams to the switch 59, which supplies them to the audio decoder 75. In step S21, the switches 57 to 59, which are disposed subsequent to the PID filters 55 and 56, select one of the main clip and the sub-clip under the control of the controller 34 via the user interface. More specifically, the switch 57 selects the presentation graphics streams of the main clip supplied from the PID filter 55 or those of the sub-clip supplied from the PID filter 56, and supplies the selected streams to the presentation graphics decoder 73, which is disposed subsequent to the switch 57. The switch 58 selects the interactive graphics streams of the main clip or those of the sub-clip, and supplies the selected streams to the interactive graphics decoder 74, which is disposed subsequent to the switch 58. The switch 59 selects the audio streams of the main clip or those of the sub-clip, and supplies the selected streams to the audio decoder 75, which is disposed subsequent to the switch 59. In step S22, the buffer 54 outputs the text subtitle data to the text subtitle composition 76. In step S23, the background decoder 71 decodes the background image data and supplies the decoded data to the background plane generator 91. In step S24, the MPEG2 video decoder 72 decodes the video streams and supplies the decoded streams to the video plane generator 92.
In step S25, the presentation graphics decoder 73 decodes the presentation graphics streams selected by the switch 57, and outputs the decoded streams to the switch 77, which is disposed subsequent to the presentation graphics decoder 73. In step S26, the interactive graphics decoder 74 decodes the interactive graphics streams selected by the switch 58, and outputs the decoded streams to the interactive graphics plane generator 94, which is disposed subsequent to the interactive graphics decoder 74. In step S27, the audio decoder 75 decodes the audio data selected by the switch 59, and outputs the decoded data to the audio data processor 97, which is disposed subsequent to the audio decoder 75. In step S28, the Text-ST composition 76 decodes the text subtitle data and outputs the decoded data to the switch 77, which is disposed subsequent to the Text-ST composition 76. In step S29, the switch 77 selects the data from the presentation graphics decoder 73 or the Text-ST composition 76. More specifically, the switch 77 selects the presentation graphics streams decoded by the presentation graphics decoder 73 or the Text-ST (text subtitle data) from the Text-ST composition 76, and supplies the selected data to the presentation graphics plane generator 93. In step S30, the background plane generator 91 generates a background plane on the basis of the background image data supplied from the background decoder 71. In step S31, the video plane generator 92 generates a video plane on the basis of the video data supplied from the MPEG-2 video decoder 72. In step S32, the presentation graphics plane generator 93 generates a presentation graphics plane on the basis of the data selected by the switch 77 and supplied from the presentation graphics decoder 73 or the Text-ST composition 76 in step S29. In step S33, the interactive graphics plane generator 94 generates an interactive graphics plane on the basis of the interactive graphics stream data supplied from the interactive graphics decoder 74.
In step S34, the buffer 95 stores the sound data selected and supplied in step S14, and supplies it to the audio data processor 97 at a predetermined time. In step S35, the video data processor 96 combines the planes and outputs the combined data. More specifically, the video data processor 96 combines the data from the background plane generator 91, the video plane generator 92, the presentation graphics plane generator 93, and the interactive graphics plane generator 94, and outputs the combined data as video data. In step S36, the audio data processor 97 combines the audio data with the sound data, and outputs the resulting data. According to the processing shown in Figures 26 to 28, by referring to the main clip referred to by the Main Path included in the PlayList, a sub-clip referred to by the corresponding Sub-Path included in the PlayList, and the text subtitle data, the corresponding data is played back. By providing the Main Path and the Sub-Path in the PlayList, a clip AV stream file, which is different from the main clip AV stream file specified by the Main Path, can be specified by a Sub-Path. Thus, the sub-clip data, which is different from the main clip specified by the Main Path of the PlayList, can be played back together with (in synchronization with) the main clip data contained in the main clip. In Figures 26 to 28, the order of steps S16 and S17 may be reversed, or steps S16 and S17 may be executed in parallel. Similarly, the order of steps S18 and S20 may be reversed, or they may be executed in parallel. The order of steps S23 to S28 may be reversed, or they may be executed in parallel. The order of steps S30 to S33 may be reversed, or they may be executed in parallel. The order of steps S35 and S36 may be reversed, or they may be executed in parallel.
That is, in Figure 25, the elements disposed vertically in the same layer, that is, the processing operations of the buffers 51 to 54, those of the switches 57 to 59, those of the decoders 71 to 76, those of the plane generators 91 to 94, and those of the video data processor 96 and the audio data processor 97, may be executed in parallel, and the order thereof is not particularly restricted. The processing performed by the playback apparatus 20 when an instruction to change the sound or subtitles is given is now described with reference to the flowcharts in Figures 29 and 30. Reference is first made to the flowchart in Figure 29 to discuss the processing when an instruction to change the sound is given by the user. This processing is executed while the playback processing shown in Figures 26 to 28 is being performed. In step S51, the controller 34 obtains an order list of the audio stream numbers (which may be the IDs). More specifically, the controller 34 refers to STN_table() of PlayItem() discussed with reference to Figure 14 to obtain the order list of the audio stream numbers (IDs) entered in STN_table() discussed with reference to Figure 15. This processing is executed when the playback processing shown in Figures 26 to 28 is started. In response to an instruction to change the sound given by the user through the user interface, in step S52, the controller 34 receives the instruction to change the sound given by the user. That is, in Figure 29, step S51 has already been executed, and in response to a sound-change instruction from the user, step S52 is executed. In step S53, the controller 34 obtains the audio stream number subsequent to the audio stream number currently being played back.
For example, if the audio stream (although indicated as the text-based subtitle in Figure 9, it reads as the audio stream file in this example) having SubClip_entry_id = 0 shown in Figure 9 is being played back, the audio stream number having SubClip_entry_id = 1 is obtained. In step S54, the controller 34 determines whether the playback apparatus 20 has a function to reproduce the audio stream associated with the obtained number. More specifically, the controller 34 makes this determination on the basis of the information indicated in stream_attribute() (Figure 17). If it is determined in step S54 that the function to reproduce the audio stream associated with the obtained number is not provided, the process proceeds to step S55, in which the controller 34 obtains the stream number subsequent to the current stream number. That is, if the function to reproduce the audio stream associated with the current stream number is not provided, that stream number is skipped (it will not be played back) and the subsequent stream number is obtained. After step S55, the process then returns to step S54, and the subsequent processing is repeated. That is, the processing is repeated until an audio stream number that can be reproduced by the playback apparatus 20 is obtained. If it is determined in step S54 that the function to reproduce the audio stream associated with the obtained number is provided, the process proceeds to step S56. In step S56, the controller 34 checks whether the audio stream is contained in the main clip or the sub-clip. In the example shown in Figure 9, since the obtained SubClip_entry_id = 1 is referred to by a Sub-Path, the controller 34 can determine that the audio stream associated with the obtained number is contained in the sub-clip. In step S57, the controller 34 specifies a desired audio stream.
More specifically, the controller 34 specifies a desired audio stream contained in the main clip or in the sub-clip associated with the obtained number. More specifically, type = 3 is specified in STN_table() discussed with reference to Figure 16. In step S58, the controller 34 instructs the storage unit 31 to read the clip (main clip or sub-clip) in which the desired audio stream is multiplexed. The storage unit 31 reads the target clip on the basis of this instruction. In step S59, the controller 34 instructs the AV decoder 33 to reproduce the audio stream of the read clip. In step S60, the AV decoder 33 decodes the audio stream and outputs it. More specifically, the audio data decoded by the audio decoder 75 and the sound data output from the buffer 95 are combined by the audio data processor 97, and the resulting data is output as an audio signal. According to this processing, the selection made by the switch 59 shown in Figure 25 in step S21 in Figure 27 is determined. More specifically, if the target clip in Figure 29 is the main clip, the switch 59 supplies the audio stream supplied from the main side, that is, the PID filter 55, to the audio decoder 75. If the target clip is a sub-clip, the switch 59 supplies the audio stream supplied from the sub side, that is, the PID filter 56, to the audio decoder 75. In this manner, the controller 34 can control the switching of the sound (audio) on the basis of STN_table() of the PlayItem. By referring to the stream_attribute of STN_table(), the controller 34 can control the switching of the playback operation by selecting only streams that can be reproduced by the playback apparatus 20.
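The audio-switching loop of steps S53 to S55 can be sketched as follows: starting from the stream after the one currently playing, skip entries the apparatus cannot decode until a playable stream number is found. The data shapes, the wrap-around behavior, and the `can_decode` predicate are assumptions for illustration.

```python
# Hedged sketch of the audio-switching loop (steps S53 to S55).
def next_playable_audio(stream_numbers, attributes, current, can_decode):
    """stream_numbers: ordered list from STN_table(); attributes: number -> attrs."""
    order = stream_numbers
    start = (order.index(current) + 1) % len(order)
    for i in range(len(order)):            # walk the ordered list once, wrapping
        candidate = order[(start + i) % len(order)]
        if can_decode(attributes[candidate]):  # check via stream_attribute() info
            return candidate               # first playable number wins (S54 yes)
    return current                         # nothing else playable: keep current

attrs = {1: {"coding": "AC-3"}, 2: {"coding": "dts"}, 3: {"coding": "LPCM"}}
no_dts = lambda a: a["coding"] != "dts"    # a player without a dts decoder
print(next_playable_audio([1, 2, 3], attrs, current=1, can_decode=no_dts))
# 3  (stream 2 is skipped because it is dts)
```

The subtitle-switching loop of steps S83 to S85 follows the same shape, with the subtitle stream IDs (PG_txtST_stream_id) in place of the audio stream numbers.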
Although in the processing shown in Figure 29 the audio is switched on the basis of the audio stream numbers, the audio may be switched on the basis of the audio stream IDs (audio_stream_id). In this case, the number obtained by subtracting one from the audio stream number is the audio stream ID. A description is now given, with reference to the flowchart in Figure 30, of the processing when an instruction to change subtitles is given by the user. This processing is executed while the playback processing shown in Figures 26 to 28 is being performed. In step S81, the controller 34 obtains an order list of the subtitle stream numbers (which may be the IDs). More specifically, the controller 34 refers to STN_table() of the PlayItem discussed with reference to Figure 14 to obtain the order list of the subtitle stream IDs (PG_txtST_stream_id) entered in STN_table() discussed with reference to Figure 15. This processing is executed when the playback processing shown in Figures 26 to 28 is started. In response to an instruction to change subtitles given by the user through the user interface, in step S82, the controller 34 receives the instruction to change subtitles given by the user. That is, in Figure 30, step S81 has already been executed, and in response to a subtitle-change instruction from the user, step S82 is executed. In step S83, the controller 34 obtains the subtitle stream number subsequent to the subtitle stream number currently being played back. For example, if the text-based subtitle having SubClip_entry_id = 0 shown in Figure 9 is being played back, the text-based subtitle number having SubClip_entry_id = 1 is obtained. In step S84, the controller 34 determines whether the playback apparatus 20 has a function to reproduce the subtitle stream associated with the obtained number. More specifically, the controller 34 makes this determination on the basis of the information indicated in stream_attribute() (Figure 17).
If it is determined in step S84 that the function to reproduce the subtitle stream associated with the obtained number is not provided, the process proceeds to step S85, in which the controller 34 obtains the stream number subsequent to the current stream number. That is, if the function to reproduce the subtitle stream associated with the current stream number is not provided, that stream number is skipped (it will not be played back) and the subsequent stream number is obtained. After step S85, the process then returns to step S84, and the subsequent processing is repeated. That is, the processing is repeated until a subtitle stream number that can be reproduced by the playback apparatus 20 is obtained. If it is determined in step S84 that the function to reproduce the subtitle stream associated with the obtained number is provided, the process proceeds to step S86. In step S86, the controller 34 checks whether the data corresponding to the obtained number (the subtitle stream number subsequent to the subtitle stream currently being played back) is contained in the main clip (Main Path), a sub-clip (Sub-Path), or a text subtitle data file (Sub-Path). In step S87, the controller 34 specifies a desired presentation graphics stream or text subtitle data. More specifically, the controller 34 specifies a desired presentation graphics stream contained in the main clip or sub-clip, or desired text subtitle data of the text subtitle file. In step S88, the controller 34 instructs the storage unit 31 to read the clip (main clip or sub-clip) in which the desired presentation graphics stream is multiplexed, or to read the desired text subtitle data. In step S89, the controller 34 instructs the AV decoder 33 to reproduce the presentation graphics stream of the read clip or the text subtitle data.
In step S90, the AV decoder 33 decodes the presentation graphics stream or the text subtitle data and outputs the subtitle image. More specifically, a plane is generated from the decoded presentation graphics stream or the text subtitle data by the presentation graphics plane generator 93, is combined by the video data processor 96, and is output as video. According to this processing, the selection made by the switch 77 shown in Fig. 25 is determined in step S29 in Fig. 28. More specifically, if the target data in step S87 in Fig. 30 is a presentation graphics stream, the switch 77 supplies the presentation graphics data provided from the presentation graphics decoder 73 to the presentation graphics plane generator 93. If the target data is text subtitle data, the switch 77 supplies the text subtitle data provided from the Text-ST composition unit 76 to the presentation graphics plane generator 93. The controller 34 can thus control the change of the reproduction operation by selecting only streams that can be reproduced by the reproduction apparatus 20. Although in the processing shown in Figure 30 the subtitles are changed based on subtitle stream numbers, the subtitles can also be changed based on the subtitle stream IDs (PG_txtST_stream_id). In this case, the number obtained by subtracting one from the subtitle stream number is the subtitle stream ID. By providing the Main Path and Sub-Paths in the PlayList, audio or subtitles can be selected from streams or data files other than the main AV stream when an instruction to change audio or subtitles is given by the user. The PlayItem in the Main Path includes the Stream Number Definition table (STN_table()), which defines both the data multiplexed in the AV stream file and the types of data referred to by the Sub-Paths. In this way, streams having higher extensibility can be implemented.
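The number-to-ID relation stated above, and the routing decision made by the switch 77, can be sketched as follows. The function names are illustrative and not part of the specification:

```python
def stream_id_from_number(stream_number):
    # audio_stream_id / PG_txtST_stream_id is obtained by subtracting
    # one from the one-based, user-visible stream number.
    return stream_number - 1

def switch_77_source(target_kind):
    """Sketch of the selection made by the switch 77 (step S29): route
    either decoded presentation graphics or composed text subtitle data
    to the presentation graphics plane generator 93."""
    sources = {
        "presentation_graphics": "presentation graphics decoder 73",
        "text_subtitle": "Text-ST composition unit 76",
    }
    return sources[target_kind]
```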
By referring to stream_attribute in STN_table(), the playback apparatus 20 can sequentially select and reproduce only those streams that it is capable of reproducing.
The processing described above can be summarized as follows. The reproduction apparatus 20 obtains the PlayList, which serves as reproduction management information, including the Main Path, which is the main reproduction path indicating the position of an AV stream file recorded on a recording medium, and the Sub-Paths, which serve as a plurality of reproduction sub-paths indicating the positions of sub-clips that include appended data (e.g., audio stream data or bitmap subtitle stream file data) reproduced in synchronization with the playback timing of the main image data (video stream data) included in the AV stream file referred to by the main reproduction path. The reproduction apparatus 20 selects the appended data to be reproduced, based on a user instruction, from among the appended data (e.g., audio stream file data) reproduced in synchronization with the video stream data included in the AV stream file referred to by the Main Path and the appended data (e.g., audio stream file data) included in the sub-clips referred to by the Sub-Paths. In the processing shown in Figure 29 or 30, when an instruction to change subtitles or audio is given, the playback apparatus 20 determines whether it has a function to reproduce the selected appended data (e.g., audio stream file data). That is, the controller 34 determines whether the reproduction apparatus 20 can reproduce the appended data by referring to stream_attribute in STN_table(). If it is determined that the reproduction apparatus 20 has a function to reproduce the selected appended data, and if the appended data is contained in a sub-clip referred to by a Sub-Path, the sub-clip referred to by the Sub-Path is read, combined with the main AV stream file (main clip) referred to by the Main Path, and reproduced.
For example, if the audio stream file data referred to by a Sub-Path is selected by the user as the appended data to be reproduced (when an instruction to change audio is given by the user), the playback apparatus 20 combines the audio stream file data referred to by the Sub-Path with the Main Clip AV stream file, that is, an MPEG2 video stream, a presentation graphics stream, or an interactive graphics stream, and reproduces the combined data. That is, the decoded audio stream file selected by the user is played as audio. As described above, since the PlayList includes the Main Path and the Sub-Paths, which refer to different clips, stream extensibility can be achieved. Since one Sub-Path can refer to a plurality of files (e.g., Figures 9 and 10), the user can select from among a plurality of different streams. In addition, in the PlayItem of the Main Path, STN_table() shown in Figure 15 is arranged as a table that defines the appended data multiplexed in (included in) the AV stream file referred to by the Main Path and the appended data referred to by the Sub-Paths. In this way, streams having greater extensibility can be implemented. The Sub-Paths can also be easily extended by being entered in STN_table(). The provision of stream_attribute() shown in Figure 17, which is the attribute information concerning the streams, in STN_table() makes it possible to determine whether a selected stream can be reproduced by the playback apparatus 20. Also, by referring to stream_attribute(), only streams that can be reproduced by the reproduction apparatus 20 can be selected and reproduced.
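The PlayList structure summarized above — one Main Path plus zero or more Sub-Paths, with STN_table() in the PlayItem defining the selectable streams and where each lives — can be sketched with illustrative field names (the actual binary syntax is what Figures 11 to 15 define; this is only a conceptual model):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StreamEntry:
    # One STN_table() entry: where the elementary stream lives and its
    # stream_attribute() information used for the capability check.
    stream_id: int
    location: str                      # "main_clip" or "sub_clip"
    attributes: dict = field(default_factory=dict)

@dataclass
class SubPath:
    subpath_type: str                  # e.g. "audio" or "text_subtitle"
    clip_information_file_name: str    # sub-clip referred to by this Sub-Path
    sub_playitem_in_time: int
    sub_playitem_out_time: int

@dataclass
class PlayItem:
    clip_information_file_name: str    # Main Clip referred to by the Main Path
    stn_table: List[StreamEntry] = field(default_factory=list)

@dataclass
class PlayList:
    main_path: List[PlayItem] = field(default_factory=list)
    sub_paths: List[SubPath] = field(default_factory=list)
```

Extending the selectable streams then amounts to appending a SubPath and registering its streams in the PlayItem's stn_table, which mirrors the extensibility argument made above.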
The Sub-Path includes SubPath_type, which indicates the type of the Sub-Path (such as audio or text subtitle), as shown in Figure 12; Clip_information_file_name, shown in Figure 13, which indicates the name of the sub-clip referred to by the Sub-Path; and SubPlayItem_IN_time and SubPlayItem_OUT_time, shown in Figure 13, which indicate the IN point and the OUT point, respectively, of the clip referred to by the Sub-Path. Therefore, the data referred to by the Sub-Path can be precisely specified. The Sub-Path also includes sync_PlayItem_id (for example, sync_PlayItem_id shown in Figure 7 or 9), which serves as specifying information to specify the AV stream file in the Main Path used to reproduce the Sub-Path simultaneously with the Main Path, and sync_start_PTS_of_PlayItem (for example, sync_start_PTS_of_PlayItem shown in Figure 7 or 9), which is the time on the Main Path at which the IN point of the data referred to by the Sub-Path starts in synchronization with the Main Path on the time axis of the Main Path. Therefore, the data (files) referred to by the Sub-Paths can be reproduced in synchronization with the main AV stream file referred to by the Main Path, as shown in Figure 7 or 9.
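The synchronization fields just described locate the Sub-Path's IN point on the Main Path's time axis. A hedged sketch of that mapping follows; the field names track Figures 7, 9, and 13, but the arithmetic is an assumption for illustration, not the normative timing model:

```python
def subpath_window_on_main_timeline(sync_start_pts_of_playitem,
                                    sub_playitem_in_time,
                                    sub_playitem_out_time):
    """Return (start, end) of the sub-clip data on the Main Path time
    axis: reproduction of the sub-clip's IN point begins at
    sync_start_PTS_of_PlayItem, and continues for the duration between
    SubPlayItem_IN_time and SubPlayItem_OUT_time."""
    duration = sub_playitem_out_time - sub_playitem_in_time
    start = sync_start_pts_of_playitem
    return start, start + duration
```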
The data read by the storage unit 31 shown in Figure 25 may be data recorded on a recording medium such as a DVD (Digital Versatile Disc), data recorded on a hard disk, data downloaded via a network (not shown), or a combination of such data. For example, the data can be reproduced based on a PlayList and a sub-clip recorded on a hard disk and a Main Clip AV stream file recorded on a DVD. Alternatively, if a PlayList that uses a Clip AV stream file recorded on a DVD as a sub-clip is recorded on a hard disk together with a Main Clip, the Main Clip and the Sub-Clip can be read and played from the hard disk and the DVD, respectively, based on the PlayList recorded on the hard disk. The above-described series of processing operations can be executed by hardware or software. In the latter case, the processing operations can be performed by a personal computer 500 shown in Figure 31. In Figure 31, in the personal computer 500, a CPU 501 (Central Processing Unit) executes various processing operations according to a program stored in a ROM 502 (Read Only Memory) or a program loaded into a RAM 503 (Random Access Memory) from a storage unit 508. The RAM 503 also stores data necessary for the CPU 501 to execute the various processing operations. The CPU 501, the ROM 502, and the RAM 503 are connected to one another via an internal bus 504. An input/output interface 505 is also connected to the internal bus 504. The input/output interface 505 is connected to an input unit 506, such as a keyboard and a mouse; an output unit 507, such as a display, e.g., a CRT or an LCD; the storage unit 508, such as a hard disk; and a communication unit 509, such as a modem or a terminal adapter. The communication unit 509 performs communication through various networks, including telephone lines or CATV. A drive 510 is connected to the input/output interface 505 as necessary.
A removable medium 521, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is loaded into the drive 510. A computer program read from the removable medium 521 is installed in the storage unit 508. If software is used to execute the series of processing operations, the corresponding software program is installed from a network or a recording medium. This recording medium may be formed from a package medium, such as the removable medium 521 recording the program therein, as shown in Figure 31, which is distributed to the user separately from the computer. Alternatively, the recording medium may be formed from the ROM 502 or a hard disk forming the storage unit 508 recording the program therein, which is distributed to the user while integrated in the computer. In this specification, the steps forming the computer program may be executed in the chronological order described in this specification.
Alternatively, they may be executed in parallel or individually. In this specification, the term "system" represents an overall apparatus including a plurality of devices.

Claims (12)

CLAIMS 1. A reproduction apparatus comprising: obtaining means for obtaining reproduction management information including first information having a main reproduction path indicating a position of an AV stream file recorded on a recording medium, and second information having a plurality of reproduction sub-paths indicating positions of sub-files that include appended data to be reproduced simultaneously with the reproduction of main image data included in the AV stream file; selection means for selecting appended data to be reproduced, based on a user's instruction, from among the appended data to be reproduced simultaneously with the main image data included in the AV stream file referred to by the main reproduction path and the appended data included in the sub-files referred to by the reproduction sub-paths; reading means for reading, if the appended data selected by the selection means is included in a sub-file referred to by a reproduction sub-path, the sub-file referred to by the reproduction sub-path together with the AV stream file referred to by the main reproduction path; and reproduction means for reproducing the main image data included in the AV stream file read by the reading means and the appended data included in the sub-file selected by the selection means and read by the reading means. 2. The reproduction apparatus according to claim 1, wherein the first information includes a table defining the appended data included in the AV stream file referred to by the main reproduction path and the appended data referred to by the reproduction sub-paths, and the selection means selects the appended data to be reproduced, based on the user's instruction, from among the appended data defined in the table. 3.
The reproduction apparatus according to claim 1, further comprising determining means for determining whether the reproduction apparatus has a function to reproduce the appended data selected by the selection means, wherein, if it is determined by the determining means that the reproduction apparatus has a function to reproduce the appended data, and if the appended data is included in a sub-file referred to by a reproduction sub-path, the reading means reads the sub-file referred to by the reproduction sub-path together with the AV stream file referred to by the main reproduction path, and the reproduction means reproduces the main image data included in the AV stream file read by the reading means and the appended data included in the sub-file selected by the selection means and read by the reading means. 4. The reproduction apparatus according to claim 2, further comprising determining means for determining whether the reproduction apparatus has a function to reproduce the appended data selected by the selection means, wherein, if it is determined by the determining means that the reproduction apparatus has a function to reproduce the appended data, and if the appended data is included in a sub-file referred to by a reproduction sub-path, the reading means reads the sub-file referred to by the reproduction sub-path together with the AV stream file referred to by the main reproduction path, and the reproduction means reproduces the main image data included in the AV stream file read by the reading means and the appended data included in the sub-file selected by the selection means and read by the reading means. 5.
The reproduction apparatus according to claim 4, wherein the table further defines attribute information with respect to the appended data, and the determining means determines whether the reproduction apparatus has a function to reproduce the appended data based on the attribute information with respect to the appended data defined in the table. 6. The reproduction apparatus according to claim 1, wherein the second information includes type information with respect to the types of the reproduction sub-paths, file names of the sub-files referred to by the reproduction sub-paths, and IN points and OUT points of the sub-files referred to by the reproduction sub-paths. 7. The reproduction apparatus according to claim 6, wherein the second information further includes specifying information to specify the AV stream file referred to by the main reproduction path used to reproduce the reproduction sub-paths simultaneously with the main reproduction path, and a time on the main reproduction path at which the IN points start in synchronization with the main reproduction path on the time axis of the main reproduction path. 8.
A reproduction method comprising: an obtaining step of obtaining reproduction management information including first information having a main reproduction path indicating a position of an AV stream file recorded on a recording medium, and second information having a plurality of reproduction sub-paths indicating positions of sub-files that include appended data to be reproduced simultaneously with the reproduction of the main image data included in the AV stream file; a selection step of selecting appended data to be reproduced, based on a user's instruction, from among the appended data to be reproduced simultaneously with the main image data included in the AV stream file referred to by the main reproduction path and the appended data included in the sub-files referred to by the reproduction sub-paths; a reading step of reading, if the appended data selected by the processing of the selection step is included in a sub-file referred to by a reproduction sub-path, the sub-file referred to by the reproduction sub-path together with the AV stream file referred to by the main reproduction path; and a reproduction step of reproducing the main image data included in the AV stream file read by the processing of the reading step and the appended data included in the sub-file selected by the processing of the selection step and read by the processing of the reading step. 9.
A program that allows a computer to execute processing comprising: an obtaining step of obtaining reproduction management information including first information having a main reproduction path indicating a position of an AV stream file recorded on a recording medium, and second information having a plurality of reproduction sub-paths indicating positions of sub-files that include appended data to be reproduced simultaneously with the reproduction of the main image data included in the AV stream file; a selection step of selecting appended data to be reproduced, based on a user's instruction, from among the appended data to be reproduced simultaneously with the main image data included in the AV stream file referred to by the main reproduction path and the appended data included in the sub-files referred to by the reproduction sub-paths; a reading step of reading, if the appended data selected by the processing of the selection step is included in a sub-file referred to by a reproduction sub-path, the sub-file referred to by the reproduction sub-path together with the AV stream file referred to by the main reproduction path; and a reproduction step of reproducing the main image data included in the AV stream file read by the processing of the reading step and the appended data included in the sub-file selected by the processing of the selection step and read by the processing of the reading step. 10.
A recording medium that records therein association data with respect to an AV stream file included in a clip and appended data to be reproduced simultaneously with the reproduction of the AV stream file, wherein the association data indicates whether the appended data is included in a clip used by a main reproduction path indicating a position of the AV stream file or in clips used by a plurality of reproduction sub-paths indicating positions of sub-files that include appended data reproduced simultaneously with the reproduction of the AV stream file, and, if the association data indicates that the appended data is included in the clips used by the plurality of reproduction sub-paths indicating the positions of the sub-files that include the appended data, the association data includes at least one ID selected from among an ID to specify the reproduction sub-paths to be reproduced, an ID to specify a clip used by a reproduction sub-path, and an ID to specify an elementary stream to be reproduced from the clip. 11. A recording medium that records therein data including a reproduction control file having a main reproduction path indicating a position of an AV stream file included in a clip, wherein the reproduction control file includes a reproduction sub-path indicating a position of a sub-file that includes appended data to be reproduced simultaneously with the reproduction of the main image data included in the AV stream file, the main reproduction path includes a table defining a list of elementary streams that can be selected while the main reproduction path is being played, and the table includes data indicating whether the elementary streams that can be selected are included in the AV stream file selected by the main reproduction path or in the sub-file selected by a reproduction sub-path. 12.
A data structure that includes a reproduction control file having a main reproduction path indicating a position of an AV stream file included in a clip, wherein the reproduction control file includes a reproduction sub-path indicating a position of a sub-file that includes appended data to be reproduced simultaneously with the reproduction of the main image data included in the AV stream file, the main reproduction path includes a table defining a list of elementary streams that can be selected while the main reproduction path is being played, and the table includes data indicating whether the elementary streams that can be selected are included in the AV stream file selected by the main reproduction path or in the sub-file selected by the reproduction sub-path. SUMMARY OF THE INVENTION There are provided a reproduction device, a reproduction method, a program, a recording medium, and a data structure that allow interactive operation when AV content is reproduced. A controller (34) acquires an order list of audio stream numbers in advance. When an audio change is requested by a user, the controller acquires the audio stream number subsequent to the audio stream number being played, checks which of the main clip and the sub-clip contains the stream judged to be reproducible by the playback device, and reads the clip in which the corresponding audio stream is multiplexed together with the main clip referred to by the main path. The audio stream file of the corresponding clip and the file contained in the main clip to be reproduced are selected by switches (57 to 59, 77), combined by a video data processing unit (96) and an audio data processing unit (97), and output. The present invention can be applied to a reproduction device.
MXPA/A/2006/007710A 2004-02-16 2006-07-05 Reproduction device, reproduction method, program, recording medium, and data structure MXPA06007710A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004-038574 2004-02-16
JP2004-108650 2004-04-01

Publications (1)

Publication Number Publication Date
MXPA06007710A true MXPA06007710A (en) 2006-12-13

Family

ID=

Similar Documents

Publication Publication Date Title
EP2562758B1 (en) Reproduction device, reproduction method, and program
US8145035B2 (en) Reproduction device, recording medium, program, and reproduction method
TWI420907B (en) Playback apparatus and method, program, recording medium, data structure, and manufacturing method for recording medium
US8437599B2 (en) Recording medium, method, and apparatus for reproducing text subtitle streams
US8351767B2 (en) Reproducing device and associated methodology for playing back streams
KR20080040617A (en) Reproduction device, reproduction method, program, program storage medium, data structure, and recording medium fabrication method
WO2005101988A2 (en) Recording medium, reproducing method thereof and reproducing apparatus thereof
US7616862B2 (en) Recording medium having data structure for managing video data and additional content data thereof and recording and reproducing methods and apparatuses
US8488946B2 (en) Reproducing apparatus, reproducing method and reproducing computer program product
US7583887B2 (en) Recording medium having data structure for managing main data additional content data thereof and recording and reproducing methods and apparatuses
JP2008199527A (en) Information processor, information processing method, program, and program storage medium
JP2008199528A (en) Information processor, information processing method, program, and program storage medium
MXPA06007710A (en) Reproduction device, reproduction method, program, recording medium, and data structure
AU2012213937B2 (en) Playback apparatus, playback method, program, recording medium, and data structure