CN101026725A - Reproducing apparatus, reproducing method, and manufacturing method of recording device and recording medium

Publication number: CN101026725A (granted publication CN101026725B)
Application number: CN200610168939.4A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 服部忍, 加藤元树
Assignee (original and current): Sony Corp
Priority claimed from: JP2006-147981 (JP4251298B2)
Legal status: Granted; currently active


Abstract

A reproducing apparatus includes a playback data acquisition unit for acquiring playback data containing encoded stream data, a decoding unit for decoding the stream data, a mixing unit for mixing data to be mixed, different from the stream data, with the stream data decoded by the decoding unit, a selecting unit for selecting between supplying the stream data to the decoding unit and outputting the stream data, and a control unit for controlling the selecting unit. The control unit acquires, from the playback data acquired by the playback data acquisition unit, determination information indicating whether the playback data contains the data to be mixed with the stream data, and controls the selecting unit to output the stream data if the determination information indicates that the playback data contains no data to be mixed and if the data processed by the playback data processing unit is output as encoded data.

Description

Reproducing apparatus, reproducing method, and manufacturing method of recording device and recording medium
Cross-Reference to Related Applications
The present invention contains subject matter related to Japanese Patent Application JP2005-206997 filed in the Japan Patent Office on July 15, 2005, and Japanese Patent Application JP2006-147981 filed in the Japan Patent Office on May 29, 2006, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to a reproducing apparatus, a reproducing method, a computer program, a program recording medium, a data structure, a recording medium, a recording device, and a method of manufacturing a recording medium. In particular, the present invention relates to a reproducing apparatus, reproducing method, computer program, program recording medium, data structure, recording medium, recording device, and recording-medium manufacturing method suitable for use in mixing audio playback data.
Background Art
Japanese Unexamined Patent Application Publication No. 2005-20242 discloses a technique for browsing different pieces of content at the same time. In this technique, the image data of a plurality of pieces of content are decoded and then mixed into uncompressed video data. The uncompressed video data is then digital-to-analog (D/A) converted, output to a video output terminal, and displayed on an external display device.
In the disclosed technique, to mix video data, each piece of compressed (encoded) video data must be decoded and the pieces must then be mixed as uncompressed video data. This applies not only to video data but also to audio data: to mix audio data, the audio data must be uncompressed.
The output data must be encoded in a form that depends on the capability of the output destination device or on the connection method used with the output destination device.
For example, main audio data may be mixed with other audio data, and the mixed audio data may be output as encoded data. This process is described below with reference to Fig. 1.
A first audio data acquisition unit 11 acquires first audio data in encoded form, read and supplied from an optical disc, and supplies the first audio data to a decoder 12. The first audio data is the content to be played back as the main output. The decoder 12 decodes the encoded (compressed) data supplied from the first audio data acquisition unit 11 and supplies the uncompressed first audio data to a mixer 14.
A second audio data acquisition unit 13 acquires uncompressed second audio data and supplies it to the mixer 14. Upon receiving the uncompressed second audio data from the second audio data acquisition unit 13, the mixer 14 mixes the second audio data with the uncompressed first audio data supplied from the decoder 12 and supplies the mixed audio data to an encoder 15.
The encoder 15 encodes the supplied audio data and supplies the encoded audio data to a digital interface 16. The digital interface 16 transmits the resulting data to another device over a predetermined network.
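A minimal C sketch of this prior-art signal chain may help; the type and function names below are illustrative stand-ins for units 11 to 16 and do not appear in the patent:
    #include <stddef.h>
    typedef struct { unsigned char *bytes; size_t size; } EncodedAudio;
    typedef struct { short *samples; size_t count; } PcmAudio;
    extern PcmAudio     decode(EncodedAudio in);          /* decoder 12 */
    extern int          get_second_audio(PcmAudio *out);  /* unit 13; returns 0 if absent */
    extern PcmAudio     mix(PcmAudio a, PcmAudio b);      /* mixer 14 */
    extern EncodedAudio encode(PcmAudio in);              /* encoder 15 */
    /* The first audio data is always decoded and re-encoded, even when
       there is no second audio data to mix in. */
    EncodedAudio legacy_audio_path(EncodedAudio first_audio)
    {
        PcmAudio pcm = decode(first_audio);
        PcmAudio second;
        if (get_second_audio(&second))
            pcm = mix(pcm, second);
        return encode(pcm);   /* sent out via the digital interface 16 */
    }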
Summary of the Invention
Depending on the content, the uncompressed second audio data may or may not be present. In particular, for some content the second audio data acquisition unit 13 acquires no uncompressed second audio data at all. Alternatively, second audio data may be prepared only for a predetermined portion of the content, and the second audio data acquisition unit 13 acquires second audio data only for that portion. When no uncompressed second audio data is supplied from the second audio data acquisition unit 13, the mixer 14 supplies the uncompressed first audio data from the decoder 12 to the encoder 15 as it is.
The apparatus of Fig. 1 can therefore output two kinds of content: content in which the first audio data serving as the main playback output is mixed with second audio data, and content that contains the first audio data but no second audio data. Alternatively, the apparatus of Fig. 1 may use second audio data only for a predetermined portion of the content. A conventional reproducing apparatus cannot determine whether second audio data is to be mixed (whether second audio data is present).
In the known reproducing apparatus, the content is decoded and re-encoded regardless of whether the first audio data is mixed with second audio data, even in portions where no second audio data is mixed. Because the first audio data is always decoded and then re-encoded, the sound quality deteriorates.
The above-described technique for mixing video data and outputting encoded data has the same problem. In particular, if a reproducing apparatus for mixing video data cannot detect the presence or absence of other data to be mixed with the main output video data, the main output video data is always decoded and re-encoded, and the video quality deteriorates.
It is therefore desirable to mix data only when required, so that degradation of data quality is avoided as far as possible.
According to an embodiment of the present invention, a reproducing apparatus includes a playback data acquisition unit for acquiring playback data containing encoded stream data, a decoding unit for decoding the stream data, a mixing unit for mixing data to be mixed, which is different from the stream data, with the stream data decoded by the decoding unit, a selecting unit for selecting between supplying the stream data to the decoding unit and outputting the stream data, and a control unit for controlling the selecting unit. The control unit acquires, from the playback data acquired by the playback data acquisition unit, determination information indicating whether the playback data contains data to be mixed with the stream data, and controls the selecting unit to output the stream data if the determination information indicates that the playback data contains no data to be mixed and if the data processed by the playback data processing unit is to be output as encoded data.
The playback data acquired by the playback data acquisition unit may include a predetermined file containing data corresponding to a title of the playback data, and the control unit may acquire the determination information from the predetermined file.
The playback data acquired by the playback data acquisition unit may include at least one predetermined file containing information indicating the playback order of the playback data, and the control unit may acquire the determination information from the predetermined file.
The playback data acquired by the playback data acquisition unit may include at least one unit of first data and at least one unit of second data associated with the first data, the first data being information indicating the playback order of the playback data and the second data being information indicating a playback period of the data reproduced in accordance with the playback order controlled by the first data, and the control unit may acquire the determination information from the second data.
According to embodiments of the present invention, a reproducing method for a reproducing apparatus that reproduces data and outputs the reproduced data, a program that causes a computer to execute playback processing, and a program stored on a program recording medium each include the steps of: acquiring, from playback data containing encoded stream data, determination information indicating whether the playback data contains data to be mixed with the stream data; determining, based on the acquired determination information, whether the playback data contains data to be mixed with the stream data; and outputting the stream data if the determination information indicates that the playback data contains no data to be mixed with the stream data and if the data output from the reproducing apparatus is encoded data.
According to an embodiment of the present invention, a data structure of data reproduced by a reproducing apparatus, or of data stored on a recording medium, includes first information for managing the playback order of stream data, wherein the first information includes second information which is different from the stream data and which relates to the presence or absence of data to be mixed with the stream data.
According to an embodiment of the present invention, a recording device for recording, on a recording medium, data to be played back on a reproducing apparatus includes an acquisition unit for acquiring data having a data structure that includes first information managing the playback order of stream data, the first information including second information which is different from the stream data and which relates to the presence or absence of data to be mixed with the stream data, and a recording unit for recording the data acquired by the acquisition unit on the recording medium.
According to an embodiment of the present invention, a method of manufacturing a recording medium on which data playable on a reproducing apparatus is recorded includes the steps of generating data having a data structure that includes first information for managing the playback order of stream data, the first information including second information which is different from the stream data and which relates to the presence or absence of data to be mixed with the stream data, and recording the generated data on the recording medium.
In these embodiments, determination information indicating whether the playback data contains data to be mixed with the stream data is acquired from the playback data containing the encoded stream data. The determination information is then used to determine whether the playback data contains data to be mixed with the stream data. If the playback data contains no data to be mixed with the stream data, and if the data output from the reproducing apparatus is encoded data, the stream data is output as it is.
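The condition just described can be summarized in a short sketch; the enum and function names are illustrative assumptions, not identifiers used in the patent:
    typedef enum { OUTPUT_UNCOMPRESSED, OUTPUT_ENCODED } OutputMode;
    /* Returns nonzero when the encoded stream data can be output as it is,
       i.e. the decode-mix-re-encode path can be skipped. */
    int can_pass_through(int determination_says_mix_data_present, OutputMode mode)
    {
        return !determination_says_mix_data_present && mode == OUTPUT_ENCODED;
    }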
The data having the above data structure, or the data stored on the recording medium, includes the first information managing the playback order of the stream data, and the first information includes the second information, which is different from the stream data and relates to the presence or absence of the data to be mixed.
The recording device acquires data including the first information for managing the playback order of the stream data, the first information including the second information, which is different from the stream data and relates to the presence or absence of the data to be mixed, and records the acquired data on the recording medium.
The method of manufacturing a recording medium on which data playable on a reproducing apparatus is recorded includes the steps of: generating data having a data structure that includes the first information for managing the playback order of the stream data, the first information including the second information, which is different from the stream data and relates to the presence or absence of data to be mixed with the stream data; and recording the generated data on the recording medium.
The term network refers to a mechanism that allows at least two devices to be connected to each other so that information is transmitted from one device to another. The devices communicating over the network may be independent devices or internal blocks forming part of a single device.
Communication may be wireless communication, wired communication, or a combination of the two. In the combined case, wireless communication may be performed in one area and wired communication in another. Furthermore, wired communication may be performed from a first device to a second device, and wireless communication from the second device to a third device.
According to a feature of the present invention, stream data is reproduced. In particular, if it is determined that the playback data contains no data to be mixed with the stream data, and if the output data is encoded data, the stream data is output without being decoded and re-encoded.
According to another feature of the present invention, the provided data has a data structure in which the first information managing the playback order of the stream data further includes the second information, which is different from the stream data and relates to the presence or absence of data to be mixed. A reproducing apparatus that has acquired data with this data structure can output the stream data without decoding and re-encoding it if the playback data contains no data to be mixed with the stream data and if the output data is encoded data.
Brief Description of the Drawings
Fig. 1 illustrates a known audio mixing process;
Fig. 2 illustrates a reproducing apparatus according to an embodiment of the present invention;
Fig. 3 illustrates the application format of a recording medium loaded on the reproducing apparatus of an embodiment of the present invention;
Fig. 4 illustrates an index table and navigation objects;
Fig. 5 illustrates the structure of a main path and sub paths;
Fig. 6 illustrates an example of a main path and a sub path;
Fig. 7 illustrates another example of a main path and a sub path;
Fig. 8 illustrates the file structure of data playable on the reproducing apparatus;
Fig. 9 illustrates the syntax of Index.bdmv;
Fig. 10 illustrates a first example of the syntax of Index();
Fig. 11 illustrates a second example of the syntax of Index();
Fig. 12 illustrates a third example of the syntax of Index();
Fig. 13 illustrates the data structure of a PlayList file;
Fig. 14 illustrates the syntax of AppInfoPlayList();
Fig. 15 illustrates the syntax of AppInfoPlayList();
Fig. 16 illustrates the syntax of PlayList();
Fig. 17 illustrates the syntax of PlayList();
Fig. 18 illustrates the syntax of PlayList();
Fig. 19 illustrates the syntax of SubPath();
Fig. 20 illustrates SubPath_type;
Fig. 21 illustrates the syntax of SubPlayItem(i);
Fig. 22 illustrates the syntax of PlayItem();
Fig. 23 illustrates the syntax of PlayItem();
Fig. 24 illustrates the syntax of PlayItem();
Fig. 25 illustrates the syntax of STN_table();
Fig. 26 illustrates the syntax of stream_entry();
Fig. 27 illustrates the syntax of stream_attribute();
Fig. 28 illustrates stream_coding_type;
Fig. 29 illustrates video_format;
Fig. 30 illustrates frame_rate;
Fig. 31 illustrates aspect_ratio;
Fig. 32 illustrates audio_presentation_type;
Fig. 33 illustrates sampling_frequency;
Fig. 34 illustrates character codes;
Fig. 35 illustrates an example of a stream number table showing the relationship between the audio signals and subtitle signals provided to the user;
Fig. 36 illustrates the syntax of sound.bdmv;
Fig. 37 is a block diagram of a first configuration of the reproducing apparatus according to an embodiment of the present invention;
Fig. 38 is a flowchart of playback process 1;
Fig. 39 is a flowchart of playback process 2;
Fig. 40 is a flowchart of playback process 3;
Fig. 41 is a block diagram of a second configuration of the reproducing apparatus according to an embodiment of the present invention;
Fig. 42 is a flowchart of playback process 4;
Fig. 43 is a flowchart of playback process 5;
Fig. 44 is a flowchart of playback process 6;
Fig. 45 is a block diagram of a third configuration of the reproducing apparatus according to an embodiment of the present invention;
Fig. 46 illustrates manufacture of a recording medium on which data playable on the reproducing apparatus is recorded;
Fig. 47 illustrates manufacture of a recording medium on which data playable on the reproducing apparatus is recorded;
Fig. 48 illustrates the structure of a personal computer.
Embodiment
Before the embodiments of the present invention are described, the correspondence between the features of the present invention and the specific elements disclosed in the embodiments is discussed below. This description is intended to assure that embodiments supporting the claimed invention are described in this specification. Thus, even if an element in the following embodiments is not described as relating to a certain feature of the present invention, that does not necessarily mean that the element does not relate to that feature. Conversely, even if an element is described here as relating to a certain feature of the present invention, that does not necessarily mean that the element does not relate to other features of the present invention.
A reproducing apparatus according to an embodiment of the present invention (for example, one of the reproducing apparatus 20-1 of Fig. 37, the reproducing apparatus 20-2 of Fig. 41, and the reproducing apparatus 20-3 of Fig. 45) includes a playback data acquisition unit (for example, the playback data acquisition unit 31 in each of Figs. 37, 41, and 45) for acquiring playback data containing encoded stream data (for example, audio stream #1, or the video stream decoded by the video decoder 72 of Fig. 45), a decoding unit (for example, the audio decoder 75 of Fig. 37, or the audio decoder 75-1 of Fig. 41 or Fig. 45) for decoding the stream data, a mixing unit (for example, the mixer 97 of Fig. 37, 41, or 45, or the mixer 102 of Fig. 41 or 45) for mixing data to be mixed, which is different from the stream data (for example, one of sound data, audio stream #2, another video stream to be mixed with the video stream decoded by the video decoder 72, and video data), with the stream data decoded by the decoding unit, a selecting unit (for example, the switch 61 of Figs. 37, 41, and 45) for selecting between supplying the stream data to the decoding unit and outputting the stream data, and a control unit (for example, one of the controller 34-1 of Fig. 37, the controller 34-2 of Fig. 41, and the controller 34-3 of Fig. 45) for controlling the selecting unit. The control unit acquires determination information (for example, is_MixApp, is_MixApp_1, or is_MixApp_2) from the playback data acquired by the playback data acquisition unit, the determination information indicating whether the playback data contains data to be mixed with the stream data, and controls the selecting unit to output the stream data if the determination information indicates that the playback data contains no data to be mixed and if the data processed by the playback data processing unit is output as encoded data.
In the reproducing apparatus, the playback data acquired by the playback data acquisition unit may include a predetermined file (for example, the index file) containing data corresponding to the title of the playback data, and the control unit acquires the determination information from the predetermined file.
In the reproducing apparatus, the playback data acquired by the playback data acquisition unit may include at least one predetermined file (for example, the PlayList file xxxxx.mpls of Fig. 8) containing information (for example, a PlayList) indicating the playback order of the playback data, and the control unit acquires the determination information from the predetermined file.
In the reproducing apparatus, the playback data acquired by the playback data acquisition unit may include at least one unit of first data (for example, a PlayList) and at least one unit of second data (for example, a PlayItem) associated with the first data, the first data being information indicating the playback order of the playback data and the second data being information indicating the playback period of the data reproduced in accordance with the playback order controlled by the first data, and the control unit acquires the determination information from the second data.
A reproducing method according to an embodiment of the present invention for a reproducing apparatus that reproduces data and outputs the reproduced data, a program for reproducing data, and a program stored on a program storage medium each include the steps of: acquiring (for example, one of step S2 of Fig. 38, step S64 of Fig. 39, step S135 of Fig. 40, step S202 of Fig. 42, and step S264 of Fig. 43) determination information (for example, is_MixApp, is_MixApp_1, or is_MixApp_2) from playback data containing encoded stream data (for example, audio stream #1, or the video stream decoded by the video decoder 72 of Fig. 45), the determination information indicating whether the playback data contains data to be mixed with the stream data (for example, one of sound data, audio stream #2, another video stream to be mixed with the video stream decoded by the video decoder 72, and video data); determining (for example, one of step S3 of Fig. 38, step S65 of Fig. 39, step S136 of Fig. 40, step S203 of Fig. 42, step S265 of Fig. 43, and step S336 of Fig. 44), based on the acquired determination information, whether the playback data contains data to be mixed with the stream data; and outputting the stream data (for example, one of step S11 of Fig. 38, step S71 of Fig. 39, step S141 of Fig. 40, step S211 of Fig. 42, and step S271 of Fig. 43) if the determination information indicates that the playback data contains no data to be mixed with the stream data and if the reproduced data output from the reproducing apparatus is encoded data.
A data structure according to an embodiment of the present invention, of data to be reproduced by a reproducing apparatus (for example, one of the reproducing apparatus 20-1 of Fig. 37, the reproducing apparatus 20-2 of Fig. 41, and the reproducing apparatus 20-3 of Fig. 45), includes first information (for example, the PlayList corresponding to xxxxx.mpls of Fig. 8) for managing the playback order of stream data, wherein the first information includes second information (for example, one of is_MixApp, is_MixApp_1, and is_MixApp_2), which is different from the stream data and which relates to the presence or absence of data to be mixed with the stream data (for example, one of sound data, audio stream #2, another video stream to be mixed with the video stream decoded by the video decoder 72, and video data).
Data recorded on a recording medium according to an embodiment of the present invention is reproduced on a reproducing apparatus (for example, one of the reproducing apparatus 20-1 of Fig. 37, the reproducing apparatus 20-2 of Fig. 41, and the reproducing apparatus 20-3 of Fig. 45) and includes first information (for example, a PlayList) for managing the playback order of stream data, wherein the first information includes second information (for example, one of is_MixApp, is_MixApp_1, and is_MixApp_2), which is different from the stream data and which relates to the presence or absence of data to be mixed with the stream data (for example, one of sound data, audio stream #2, another video stream to be mixed with the video stream decoded by the video decoder 72, and video data).
A recording device according to an embodiment of the present invention includes an acquisition unit (for example, one of the CPU 501, the communication unit 509, and the drive 510 of Fig. 48) for acquiring data having a data structure that includes first information (for example, a PlayList) for managing the playback order of stream data, the first information including second information (for example, one of is_MixApp, is_MixApp_1, and is_MixApp_2), which is different from the stream data and which relates to the presence or absence of data to be mixed with the stream data (for example, one of sound data, audio stream #2, another video stream to be mixed with the video stream decoded by the video decoder 72, and video data), and a recording unit (for example, the drive 510 of Fig. 48) for recording the data acquired by the acquisition unit on a recording medium.
A method according to an embodiment of the present invention of manufacturing a recording medium reproducible on a reproducing apparatus (for example, one of the reproducing apparatus 20-1 of Fig. 37, the reproducing apparatus 20-2 of Fig. 41, and the reproducing apparatus 20-3 of Fig. 45) includes the steps of generating data having a data structure that includes first information (for example, a PlayList) for managing the playback order of stream data, the first information including second information (for example, one of is_MixApp, is_MixApp_1, and is_MixApp_2), which is different from the stream data and which relates to the presence or absence of data to be mixed with the stream data (for example, one of sound data, audio stream #2, another video stream to be mixed with the video stream decoded by the video decoder 72, and video data), and recording the generated data on the recording medium.
Embodiments of the present invention are described below with reference to the drawings.
The reproducing apparatus 20 of an embodiment of the present invention is described below with reference to Fig. 2.
The reproducing apparatus 20 can reproduce information recorded on a recording medium 21 such as an optical disc, information supplied over a network 22, and information recorded on its own recording medium (such as a hard disk). The reproducing apparatus 20 supplies the reproduced data to a display/audio output device 23 connected by wire or wirelessly, so that images are displayed and sound is output on the display/audio output device 23. The reproducing apparatus 20 can also transmit reproduced data to another device over the network 22. The reproducing apparatus 20 can receive user operation input via an input device such as buttons provided on the apparatus itself or via a remote controller 24.
Data containing video and audio that can be played back on the reproducing apparatus 20 is recorded on the recording medium 21. The recording medium 21 may be an optical disc, a magnetic disk, or a semiconductor memory.
If the display/audio output device 23 is designed to receive uncompressed digital data, the reproducing apparatus 20 decodes the encoded data recorded on the recording medium 21 and supplies the uncompressed data to the display/audio output device 23. If the display/audio output device 23 has a decoding function and can receive compressed data, the reproducing apparatus 20 supplies the compressed data to the display/audio output device 23. If the display/audio output device 23 is designed to receive uncompressed analog data, the reproducing apparatus 20 decodes the encoded data recorded on the recording medium 21, generates an analog signal by D/A converting the uncompressed data, and supplies the analog signal to the display/audio output device 23. The reproducing apparatus 20 can also reproduce data recorded on the recording medium 21 and transmit the data to the network 22 in compressed form.
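As a rough illustration of these three cases (type and function names are assumed; only the branching follows the text):
    #include <stddef.h>
    typedef struct { unsigned char *bytes; size_t size; } EncodedData;
    typedef struct { short *samples; size_t count; } PcmData;
    typedef enum { SINK_UNCOMPRESSED_DIGITAL, SINK_COMPRESSED_DIGITAL, SINK_ANALOG } SinkType;
    extern PcmData decode(EncodedData in);
    extern void    send_digital(EncodedData d);        /* compressed; device 23 decodes */
    extern void    send_digital_pcm(PcmData d);        /* uncompressed digital          */
    extern void    send_analog_after_dac(PcmData d);   /* D/A converted                 */
    void output_to_device23(EncodedData recorded, SinkType sink)
    {
        switch (sink) {
        case SINK_COMPRESSED_DIGITAL:   send_digital(recorded);                  break;
        case SINK_UNCOMPRESSED_DIGITAL: send_digital_pcm(decode(recorded));      break;
        case SINK_ANALOG:               send_analog_after_dac(decode(recorded)); break;
        }
    }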
Fig. 3 illustrates the application format of the information playable on the reproducing apparatus 20 of Fig. 2, that is, of data recorded on the recording medium 21 loaded on the reproducing apparatus 20, data supplied over the network 22, or data recorded on the recording medium in the reproducing apparatus 20.
The application format has two layers, PlayList and Clip, for managing audio-visual (AV) streams. An AV stream and the clip information accompanying it are treated as a single object, referred to as a clip. An AV stream is also referred to as an AV stream file, and clip information is also referred to as a clip information file.
Files used on a computer are generally handled as byte sequences. The content of an AV stream file, however, is expanded along a time axis, and access points into a clip are mainly specified by time stamps in a PlayList. In other words, the PlayList and the clip are layers for managing AV streams.
When an access point in a clip is specified by a time stamp in a PlayList, the clip information file is used to find, from the time stamp, the address information at which decoding should start in the AV stream file.
A PlayList is a collection of playback periods of AV streams. One playback period in a given AV stream is called a PlayItem, which is represented by a pair of an IN point (playback start point) and an OUT point (playback end point) of the playback period along the time axis. A PlayList is therefore composed of at least one PlayItem, as shown in Fig. 3.
Referring to Fig. 3, the first PlayList from the left includes two PlayItems, which reference the first half and the second half of the AV stream contained in the clip on the left. The second PlayList from the left is composed of a single PlayItem, which references the whole AV stream contained in the clip on the right. The third PlayList includes two PlayItems, which reference a given portion of the AV stream contained in the clip on the left and a given portion of the AV stream contained in the clip on the right, respectively.
A navigation program has the function of controlling the playback order of PlayLists and the interactive playback of PlayLists. The navigation program also has the function of displaying a menu screen on which the user gives instructions for performing various playback operations. The navigation program is described in a programming language such as Java (registered trademark).
The navigation program of Fig. 3 can specify, as information indicating the playback position, for example the PlayItem on the left included in the first PlayList from the left. In that case, the first half of the AV stream contained in the clip on the left, referenced by that PlayItem, is reproduced. In this way, the PlayList is used as playback management information for managing the playback of AV stream files.
The navigation program is made up of an index table and the navigation objects read on the basis of the index table. The index table and the navigation objects are described below with reference to Fig. 4.
The index table defines the titles and menus of the content and stores the entry point of each title and each menu. FirstPlayback contains information on the navigation object (NavigationObject) that is read and automatically executed first when the recording medium 21 storing the data is loaded on the reproducing apparatus 20. TopMenu contains information on the navigation object called when the playback menu screen is displayed. The playback menu screen shows the user items for reproducing the entire content, reproducing only a particular chapter, repeatedly playing back a particular chapter, and displaying an initial menu. Each Title contains information on a navigation object that is assigned to the title identified by each title ID and that can be called. As shown in Fig. 4, one navigation command is shown for each title.
A navigation object is formed of executable navigation commands. The navigation commands include various commands for playing back PlayLists, calling other navigation objects, and so on. For example, navigation command #3 may contain a command statement for reproducing PlayList #1; when navigation command #3 is executed, PlayList #1 is reproduced.
The index file, that is, the data file containing the index table, is described below with reference to Figs. 9 to 11.
In the present embodiment, a playback path formed by a sequence of at least one PlayItem (played consecutively) in a PlayList is called a main path, and a playback path that is formed by at least one sub path (containing consecutive or non-consecutive SubPlayItems) and is arranged in the PlayList in parallel with the main path is called a sub path. The application format of the data playable on the reproducing apparatus 20 includes, in a PlayList, sub paths that are associated with the PlayList and reproduced in association with the main path.
Fig. 5 illustrates the structure of the main path and sub paths. A PlayList can contain one main path and at least one sub path. The main path is composed of a sequence of at least one PlayItem, and a sub path is composed of at least one SubPlayItem.
Referring to Fig. 5, the PlayList contains one main path, composed of a sequence of three PlayItems, and three sub paths. In particular, the main path is composed of the PlayItems with PlayItem_id=0, PlayItem_id=1, and PlayItem_id=2. The sub paths are labeled, in order from the top, Subpath_id=0, Subpath_id=1, and Subpath_id=2. The sub path with Subpath_id=0 contains a single SubPlayItem, the sub path with Subpath_id=1 contains two SubPlayItems, and the sub path with Subpath_id=2 contains a single SubPlayItem.
The stream referenced by the SubPlayItem contained in the sub path with Subpath_id=0 may be, for example, Japanese dubbed audio for a movie, reproduced in place of the audio stream of the AV stream file referenced by the main path. The stream referenced by the SubPlayItems contained in the sub path with Subpath_id=1 may be, for example, a director's cut of the movie, in which the movie director's comments are inserted in a predetermined portion of the AV stream file referenced by the main path.
A single clip AV stream file referenced by one PlayItem contains at least video stream data (main image data). The clip AV stream file may or may not contain one or more audio streams reproduced at the same time as (in synchronization with) the video stream contained in the clip AV stream file. The clip AV stream file may or may not contain one or more bitmap subtitle streams reproduced in synchronization with the video stream contained in the clip AV stream file. Likewise, the clip AV stream file may or may not contain one or more interactive graphics streams reproduced in synchronization with the video stream contained in the clip AV stream file. The video stream in the clip AV stream file is multiplexed with such audio streams, bitmap subtitle streams, and interactive graphics streams, each of which is reproduced in synchronization with the video stream. In other words, in a clip AV stream file referenced by one PlayItem, the video stream data is multiplexed with zero or more audio streams, zero or more bitmap subtitle streams, and zero or more interactive graphics streams, each of which is reproduced in synchronization with the video stream.
A clip AV stream file referenced by one PlayItem thus contains streams of a plurality of types, including a video stream, audio streams, bitmap subtitle stream files, and interactive graphics streams.
A SubPlayItem can reference audio stream data or subtitle data of a stream different from the clip AV stream file referenced by the PlayItem.
If a PlayList having only a main path is reproduced, the user can select audio and subtitles, in an audio switching operation or a subtitle switching operation, only from the audio streams and sub-picture (subtitle) streams multiplexed in the clip referenced by the main path. By contrast, if a PlayList having a main path and sub paths is reproduced, the user can also refer to the audio streams and sub-picture streams of the clips referenced by SubPlayItems, in addition to the audio streams and sub-picture streams multiplexed in the clip AV stream file referenced by the main path.
Since a plurality of sub paths can be included in a single PlayList and each sub path references its own SubPlayItems, AV streams with high extensibility and flexibility are obtained. In other words, SubPlayItems can be added later to the clip AV stream file referenced by the main path.
Fig. 6 illustrates an example of a main path and a sub path. As shown in Fig. 6, the sub path is used to represent the playback path of audio reproduced at the same time as (in synchronization with) the main path.
The PlayList of Fig. 6 contains, as the main path, a single PlayItem with PlayItem_id=0 and, as the sub path, a single SubPlayItem. The PlayItem() block with PlayItem_id=0 in the main path references the main AV stream of Fig. 6. The SubPlayItem() block contains the following data. SubPlayItem() contains Clip_Information_file_name, which specifies the clip referenced by the sub path in the PlayList. As shown in Fig. 6, the SubPlayItem references an auxiliary audio stream with SubClip_entry_id=0. SubPlayItem() contains SubPlayItem_IN_time and SubPlayItem_OUT_time, which specify the playback period of the sub path within the stream (here, the auxiliary audio stream) of the specified clip. SubPlayItem() further contains sync_PlayItem_id and sync_start_PTS_of_PlayItem, which specify the playback start time at which the sub path starts playback on the time axis of the main path. As shown in Fig. 6, sync_PlayItem_id=0 and sync_start_PTS_of_PlayItem=t1. In this way, the time t1 at which the sub path starts playback on the time axis of the PlayItem with PlayItem_id=0 in the main path is specified. In other words, the playback start time t1 of the main path and the playback start time t1 of the sub path coincide.
The audio clip AV stream file referenced by the sub path must not contain an STC discontinuity point (a discontinuity point of the system time base). The audio sample clock of the clip used in the sub path is locked to the audio sample clock of the clip used in the main path.
In other words, SubPlayItem() contains information specifying the clip referenced by the sub path, information specifying the playback period of the sub path, and information specifying the time at which the sub path starts playback on the time axis of the main path. Because the clip AV stream used in the sub path does not contain an STC discontinuity, the audio stream of a clip AV stream file different from the clip AV stream file (main AV stream) referenced by the main path can be referenced and reproduced based on the information contained in SubPlayItem() (that is, the information specifying the clip referenced by the sub path, the information specifying the playback period of the sub path, and the information specifying the time at which the sub path starts playback on the time axis of the main path).
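The SubPlayItem() fields named above can be pictured as a plain struct; the real on-disc syntax (Fig. 21) uses bit-level encodings, so the field widths below are only an approximation for illustration:
    #include <stdint.h>
    typedef struct {
        char     Clip_Information_file_name[6];  /* clip referenced by the sub path       */
        uint32_t SubPlayItem_IN_time;            /* start of the sub path playback period */
        uint32_t SubPlayItem_OUT_time;           /* end of the sub path playback period   */
        uint16_t sync_PlayItem_id;               /* main-path PlayItem to synchronize to  */
        uint32_t sync_start_PTS_of_PlayItem;     /* time t1 on that PlayItem's time axis  */
    } SubPlayItemInfo;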
The PlayItem and the SubPlayItem each manage their own clip AV stream file, and the clip AV stream file managed by the PlayItem (the main AV stream) and the clip AV stream file managed by the SubPlayItem are two different files.
In the same way as shown in Fig. 6, a subtitle stream playback path reproduced at the same time as the main path can also be used as a sub path.
Fig. 7 illustrates another example of a main path and a sub path. As shown in Fig. 7, the sub path is used to represent the playback path of audio reproduced at the same time as (in AV synchronization with) the main path. The clip AV stream file referenced by the PlayItems of the main path is the same as in Fig. 6, and its discussion is omitted here.
The clip AV stream file referenced by the main path may be a single piece of movie content (AV content), and the auxiliary audio stream referenced by the audio path of the sub path may be a commentary by the director on the movie. The audio stream of the clip AV stream file referenced by the main path may be mixed with (superimposed on) the auxiliary audio stream referenced by the audio path of the sub path during playback. The example of Fig. 7 applies to such a configuration: for example, while watching the movie, the user inputs a command to the reproducing apparatus (player) to listen to the director's commentary, and the audio of the clip AV stream file referenced by the main path is mixed, during playback, with the auxiliary audio stream referenced by the audio path of the sub path.
As shown in Fig. 7, three PlayItems, with PlayItem_id=0, 1, and 2, are arranged in the main path, and two SubPlayItems are arranged in the sub path (Subpath_id=0). The SubPlayItem (discussed below with reference to Fig. 21) called by the sub path with Subpath_id=0 (discussed below with reference to Fig. 19) contains SubPlayItem_IN_time and SubPlayItem_OUT_time for specifying the playback period of the sub path for the auxiliary audio streams (the clip of the English auxiliary audio stream with SubClip_entry_id=0 and the clip of the Japanese auxiliary audio stream with SubClip_entry_id=1).
Comparing Fig. 7 with Fig. 6, the auxiliary audio streams with SubClip_entry_id=0 and 1 (the English and Japanese audio streams) are both referenced by the SubPlayItem. In other words, a plurality of audio stream files are referenced by one SubPlayItem, and when the SubPlayItem is reproduced, one of the audio stream files is selected for playback. As shown in Fig. 7, one audio stream file is selected from the English audio stream file and the Japanese audio stream file. In particular, one of SubClip_entry_id=0 and 1 is selected (in response to an instruction from the user), and the auxiliary audio stream identified by that ID is reproduced. Furthermore, if mixed playback with the audio stream referenced by the main path is selected (if two audio streams are selected as the audio streams to be reproduced), the audio stream file referenced by the main path and the audio stream file referenced by the audio path of the sub path are mixed during playback.
Fig. 8 illustrates the file system of the data files playable on the reproducing apparatus 20. As shown in Fig. 8, the data files playable on the reproducing apparatus 20 are supplied on a recording medium 21 such as an optical disc, and the file system has a directory structure.
In the file system, a directory named "BDMV" is set under the "root" directory. A file named "Index.bdmv" and a file named "NavigationObject.bdmv" are set under the "BDMV" directory, and are hereinafter referred to as the index file and the navigation object file, respectively. Hereinafter, where appropriate, each file is referred to by its file name followed by the word "file", and each directory is referred to by its directory name followed by the word "directory".
The index file contains the index table described above and information on the menus used for reproducing the data files playable on the reproducing apparatus 20. For example, based on the index file, the reproducing apparatus 20 causes the display device to show a playback menu screen with items for reproducing all the content contained in the playable data files, for reproducing only a particular chapter, for repeatedly playing back a particular chapter, and for displaying an initial menu. A navigation object to be executed when each item is selected can be set in the index table of the index file. When the user selects an item on the playback menu screen, the reproducing apparatus 20 executes the commands described in the navigation object set in the index table of the index file.
The navigation object file contains the navigation objects. A navigation object contains commands for controlling playback of the PlayLists contained in the data files playable on the reproducing apparatus 20. For example, the reproducing apparatus 20 selects and executes one of the navigation objects contained in the file system, and thereby reproduces content.
Also set under the BDMV directory are a directory named "BACKUP" (the BACKUP directory), a directory named "PLAYLIST" (the PLAYLIST directory), a directory named "CLIPINF" (the CLIPINF directory), a directory named "STREAM" (the STREAM directory), and a directory named "AUXDATA" (the AUXDATA directory).
The BACKUP directory contains files and data for backing up the files and data playable on the reproducing apparatus 20.
The PLAYLIST directory contains the PlayList files. Each PlayList file has a name composed of a five-character file name followed by the extension ".mpls", as shown in Fig. 8.
The CLIPINF directory contains the clip information files. Each clip information file has a name composed of a five-character file name followed by the extension ".clpi", as shown in Fig. 8.
The STREAM directory contains the clip AV stream files and sub stream files. Each stream file has a name composed of a five-character file name followed by the extension ".m2ts", as shown in Fig. 8.
The AUXDATA directory does not contain clip AV stream files or sub stream files; it contains files of data referenced by the clip AV stream files and sub stream files, and files of data used independently of the clip AV stream files and sub stream files. As shown in Fig. 8, the AUXDATA directory contains a subtitle font file named "11111.otf" and sound data, such as effect sounds, named "sound.bdmv".
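Summarizing the directory layout of Fig. 8 described above (the file names are those given in the text; the remaining entries follow the naming rules just stated):
    root/
      BDMV/
        Index.bdmv
        NavigationObject.bdmv
        BACKUP/
        PLAYLIST/   (PlayList files: five-character name + ".mpls")
        CLIPINF/    (clip information files: five-character name + ".clpi")
        STREAM/     (clip AV stream and sub stream files: five-character name + ".m2ts")
        AUXDATA/    (11111.otf subtitle font, sound.bdmv sound data)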
When the data files playable on the reproducing apparatus 20 are distributed on an optical disc, the identifiers author_id and disc_id are recorded on the optical disc as secure electronic data that cannot be rewritten by the user, or in the form of physical pits. The identifier author_id identifies each title author, such as a content producer, a production company, a film distribution company, or another provider of the optical disc serving as the recording medium. The identifier disc_id identifies the type of optical disc produced by the title author specified by author_id.
The data files playable on the reproducing apparatus 20 may also be recorded on a removable recording medium other than an optical disc, or may be downloaded over a network. In that case, identification information corresponding to author_id and disc_id is assigned, using the same directory structure as in Fig. 8. Even when the playable data files are given identification information corresponding to author_id and disc_id, they contain a file named "Index.bdmv" and a file named "NavigationObject.bdmv" in the same way as in the directory structure of Fig. 8. The directory structure may further contain any suitable combination of a file group named "BACKUP", a file group named "PLAYLIST", a file group named "CLIPINF", a file group named "STREAM", and a file group named "AUXDATA".
The reproducing apparatus 20 can output audio data in the form of uncompressed data or in the form of encoded (compressed) data. Where necessary, the reproducing apparatus 20 can mix sound data into the audio data output as the main playback output. Furthermore, the reproducing apparatus 20 can mix not only sound effects but also supplementary sound (auxiliary audio) into the audio data output as the main playback output. In other words, whether the reproducing apparatus 20 mixes audio data depends on whether audio data to be mixed by the functions of the reproducing apparatus 20 is included.
Hereinafter, the audio data output as the main playback output is referred to as audio stream #1. The audio data described as sound.bdmv in the AUXDATA directory is referred to as sound data; sound data includes, for example, click sounds generated in response to user operation input and effect sounds, and can be mixed with audio stream #1. A stream containing supplementary sound, which is different from audio stream #1 and is to be mixed with it, is referred to as audio stream #2. A plurality of audio data streams other than audio stream #2 that are different from audio stream #1 and are to be mixed with it are also acceptable; such streams may be called audio stream #3, audio stream #4, and so on.
Mixing of audio stream #2 and sound data may be performed on an audio stream #1 that is output in the form of compressed (encoded) data; in that case, audio stream #1 is decoded, mixed, and then encoded again. Alternatively, no mixing of audio stream #2 or sound data may be performed on the audio stream #1 output in the form of compressed (encoded) data; in that case, if audio stream #1 is output without being decoded, the sound quality of audio stream #1 does not deteriorate.
To determine whether a decoding process needs to be performed on audio stream #1, the reproducing apparatus 20 needs to know the output format of audio stream #1 and the type of audio data contained in the data file (that is, whether the data file contains other audio data to be mixed with audio stream #1). For this purpose, the data files playable on the reproducing apparatus 20 contain, at predetermined positions, a flag indicating whether sound data is contained and a flag indicating whether audio stream #2 is contained.
Possible positions for the flags indicating whether other audio data to be mixed with audio stream #1 is contained are the index file, the PlayList file, and the PlayItem. If the flags are placed in the index file, they define whether other audio data to be mixed with audio stream #1 is contained anywhere in the data having the data structure of Fig. 8. If the flags are described in a PlayList, they define whether other audio data to be mixed with the audio stream #1 contained in the data reproduced according to that PlayList is included. If the flags are described in a PlayItem, they define whether other audio data to be mixed with audio stream #1 is contained in the clip corresponding to that PlayItem.
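One way to picture how the three possible flag positions combine is the lookup below; the structure and field names are assumed for illustration, and the patent itself only defines where a flag may be written, not this exact precedence:
    typedef struct { int has_flag; int mix_data_present; } MixInfo;
    /* Index-level information applies to all data of Fig. 8, PlayList-level
       information to one PlayList, and PlayItem-level information to the
       clip of one PlayItem. */
    int mix_data_may_be_present(const MixInfo *index_level,
                                const MixInfo *playlist_level,
                                const MixInfo *playitem_level)
    {
        if (index_level    && index_level->has_flag)    return index_level->mix_data_present;
        if (playlist_level && playlist_level->has_flag) return playlist_level->mix_data_present;
        if (playitem_level && playitem_level->has_flag) return playitem_level->mix_data_present;
        return 1;   /* no flag found: assume mixing may occur, so decode */
    }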
Fig. 9 illustrates the syntax of the index file (Index.bdmv).
The type_indicator field contains a value encoded as "INDEX" in accordance with the ISO 646 standard.
The version_number field is a string of four characters indicating the version number of Index.bdmv, and contains the value "0089" encoded in accordance with the ISO 646 standard.
The Index_start_address field contains the start address of the Index() block.
The AppInfoBDMV() block contains an identifier indicating the supplier of the data file that includes the index file.
The Index() block contains descriptions of links to the applications (navigation objects) that are executed for menu display operations, title search operations, and jump operations to predetermined titles, or that are executed automatically when the recording medium 21, such as an optical disc, on which the data file including the index file is recorded is loaded on the reproducing apparatus 20.
The syntax of Index() is described below with reference to Figs. 10 to 12.
The Padding_word field contains zero or a positive integer inserted in accordance with the syntax of Index.bdmv.
Fig. 10 illustrates a first example of the syntax of Index(), in which no flag indicating the presence of other audio data to be mixed with audio stream #1 in the data referenced by Index() is written in Index().
The length field indicates the number of bytes of the information written in Index().
The FirstPlayback block is a data block containing information on the navigation object executed first when the data of this file system is reproduced. In other words, this data block contains information on the navigation object executed automatically first when the recording medium 21, such as an optical disc, on which the data of this file system is recorded is loaded on the reproducing apparatus 20.
The FirstPlayback_mobj_id_ref field specifies the value of mobj_id of the navigation object executed first. mobj_id is an identifier that uniquely identifies a navigation object. If no navigation object is executed automatically at the start of playback, in other words, if the application is executed not at the start of playback but in response to a command from the user, this field contains "0xFFFF".
TopMenu is a data block containing information on the navigation object of the TopMenu called when the user causes the menu screen to be displayed.
The TopMenu_mobj_id_ref field specifies the value of mobj_id of the navigation object of the TopMenu. If no TopMenu is set, this field contains "0xFFFF".
The number_of_Titles field indicates the number of titles written in Index().
Title[title_id]() is a block containing information on each title uniquely identified by title_id. title_id is numbered starting from 0.
The Title_playback_type[title_id] field indicates the playback type of the title identified by title_id. For example, the playback types include a movie title, whose representative content is moving images and audio to be reproduced, and an interactive title, whose content can be modified interactively in response to operation input from the user. If the title is a movie title, playback processing is performed according to a PlayList.
The Title_access_type[title_id] field contains information indicating whether the title identified by title_id is allowed to be reproduced using Title_Search.
The reserved_for_future_use field is a 29-bit field in which no data is yet described and which is reserved for future extension.
The Title_mobj_id_ref[title_id] field specifies the value of mobj_id of the navigation object of the title specified by title_id.
Fig. 11 illustrates a second example of the syntax of Index(). As shown in Fig. 11, the data referenced by this Index() is allowed to contain only sound data (and no audio stream #2), and Index() contains a flag indicating whether the data referenced by Index() contains sound data to be mixed with audio stream #1.
The second example of the syntax of Index() in Fig. 11 has the same structure as the first example of the syntax of Index() in Fig. 10, except that a one-bit flag is_MixApp, indicating whether the data referenced by Index() contains sound data, is newly described, and the reserved_for_future_use field is changed from 29 bits to 28 bits. The flag is_MixApp may also be defined as a flag indicating whether the data referenced by Index() contains either sound data or audio stream #2 to be mixed with audio stream #1. By checking only this single flag, whether audio data is to be mixed, and hence whether decoding processing is necessary, can be determined quickly.
Fig. 12 illustrates a third example of the syntax of Index(). As shown in Fig. 12, Index() contains two flags: one flag indicating whether the data referenced by Index() contains sound data to be mixed with audio stream #1, and one flag indicating whether the data referenced by Index() contains audio stream #2 to be mixed with audio stream #1.
The third example of Index() in Fig. 12 is similar to the first example of the syntax of Index() in Fig. 10, except that a one-bit flag is_MixApp_1, indicating whether the data referenced by Index() contains audio stream #2, and a one-bit flag is_MixApp_2, indicating whether the data referenced by Index() contains sound data, are newly described, and the reserved_for_future_use field is changed from 29 bits to 27 bits.
Can define expression and whether carry out the sign of mixed process, rather than whether the definition expression comprises the sign of the data (one of audio stream #2 and voice data at least) of will mix with audio stream #1 in the data of being quoted by Index ().In this case, carry out playback time, the sign that is defined is represented whether mixed data are applied to playlist when index and playlist according to the management playback order.
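By way of illustration only, the following C sketch shows how a player might evaluate the two flags of the third example in order to decide whether the decode-and-mix path is needed at all; the structure and function names are hypothetical and are not part of the syntax described above.

    #include <stdbool.h>
    #include <stdio.h>

    struct index_flags {
        bool is_MixApp_1;   /* data referenced by Index() contains audio stream #2 */
        bool is_MixApp_2;   /* data referenced by Index() contains sound data      */
    };

    /* Audio stream #1 may be passed through in compressed form only when
     * nothing has to be mixed with it. */
    static bool can_output_compressed(const struct index_flags *f)
    {
        return !f->is_MixApp_1 && !f->is_MixApp_2;
    }

    int main(void)
    {
        struct index_flags f = { .is_MixApp_1 = false, .is_MixApp_2 = true };
        printf("pass through compressed: %s\n",
               can_output_compressed(&f) ? "yes" : "no (decode and mix)");
        return 0;
    }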
Figure 13 illustrates the data structure of a play list file. A playlist is a data file having the extension ".mpls", and is stored in the playlist directory of the recording medium loaded onto the reproducer 20 or of a local storage (such as a hard disk) in the reproducer 20.
The type_indicator field contains information indicating the file type. In particular, this field contains information indicating that the type of this file is a playlist (MoviePlayList) serving as playback management information. The playback management information is used for managing the playback of video.
The version_number field contains the four-character version number of xxxx.mpls (MoviePlayList).
The PlayList_start_address field contains the leading address of PlayList(), expressed in units equal to the number of bytes from the leading byte of the play list file.
The PlayListMark_start_address field contains the leading address of PlayListMark(), expressed in units equal to the number of bytes from the leading byte of the play list file.
The ExtensionData_start_address field contains the leading address of ExtensionData(), expressed in units equal to the number of bytes from the leading byte of the play list file.
The AppInfoPlayList() block stores parameters relating to the playback control of the playlist, such as restrictions on playback. The AppInfoPlayList() block is described in detail below with reference to Figures 14 and 15.
The PlayList() block stores parameters relating to the main path and the subpaths of the playlist. PlayList() is described in detail with reference to Figures 16 to 18.
The PlayListMark() block stores mark information, in particular, information relating to marks serving as jump points for commands such as user operations or chapter jumps.
The ExtensionData() block stores private data.
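As an illustration of the file layout just described, a minimal C sketch follows; the field widths are assumptions made for illustration, and only the field names follow the text above.

    #include <stdint.h>

    struct mpls_header {
        char     type_indicator[4];           /* indicates a MoviePlayList file (assumed 4 characters) */
        char     version_number[4];           /* four-character version number                         */
        uint32_t PlayList_start_address;      /* byte offset of PlayList() from the start of the file  */
        uint32_t PlayListMark_start_address;  /* byte offset of PlayListMark()                         */
        uint32_t ExtensionData_start_address; /* byte offset of ExtensionData()                        */
        /* AppInfoPlayList(), PlayList(), PlayListMark() and ExtensionData() follow in the file */
    };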
A first example of AppInfoPlayList() is described below with reference to Figure 14. The AppInfoPlayList() of Figure 14 is suitable when Index() is as described with reference to Figure 10, that is, when no flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described in Index().
The length field indicates the length of the syntax following the length field, that is, the number of bytes from the position immediately following the length field to the last position of AppInfoPlayList(), ending at reserved_for_future_use. An 8-bit reserved_for_future_use field is provided after the length field.
The PlayList_playback_type field contains information indicating the playback type performed for the playlist. Playback types include sequential playback, random-access playback and shuffle playback.
Playback_count contains information about the number of play items to be played back for a playlist for which random-access playback or shuffle playback is performed.
The UO_mask_table() block contains information relating to restrictions on user operations for trick play, including pause, chapter search, jump, fast forward and fast reverse playback.
The PlayList_random_access_flag field contains information used to control jump playback from another playlist. If PlayList_random_access_flag=1, jump playback from other playlists is prohibited.
The reproducer 20 permits extended user operations of its own, including a user operation command that jumps from the playback position currently referenced by the playlist to the playback position of a predetermined chapter of a play item referenced by another playlist. PlayList_random_access_flag is used to set whether such a user operation is restricted when a command issued by a user operation changes the playback position from the play position of a clip AV stream file referenced by another playlist to a clip AV stream referenced by this playlist.
If the jump command (change of playback position) is issued not by a user operation but by a navigation command, PlayList_random_access_flag is ignored (the command is executed, and the playback position is changed in response to the command).
The is_MixApp flag is used to determine whether audio or a sound effect is to be mixed into the stream reproduced by the playlist. In particular, the is_MixApp flag is defined as a flag indicating whether the data referenced by the playlist contains the sound data or the audio stream #2 to be mixed with audio stream #1. By checking only this flag, whether audio data is to be mixed, and thus whether the decoding process is necessary, can be determined quickly.
Lossless_may_bypass_mixer_flag relates to the playback of lossless sound. This flag is followed by a 13-bit reserved_for_future_use field.
If, as shown in Figure 10, no flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described, and is_MixApp is described in the AppInfoPlayList() block as described above with reference to Figure 14, then the flag indicates, for each playlist, whether sound data and sound effects are involved. This is_MixApp flag may also be described in one of PlayList() and PlayItem rather than in the AppInfoPlayList() block.
A second example of AppInfoPlayList() is described below with reference to Figure 15.
The second example of AppInfoPlayList() of Figure 15 is identical to the first example of AppInfoPlayList() discussed with reference to Figure 14, except that no is_MixApp flag indicating whether the playlist involves sound data and sound effects is described. In particular, in the second example of AppInfoPlayList() of Figure 15, a flag such as the is_MixApp flag is written into PlayList() or PlayItem, discussed later. The Index() block may then be the block discussed with reference to Figure 10; in that case, the flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described not in Index() but in PlayItem or PlayList(). Alternatively, the Index() block may be the block described with reference to Figure 11 or Figure 12, that is, the flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described in Index().
Figure 16 illustrates a first example of the syntax of PlayList(). In the syntax of Figure 16, no flag indicating whether other audio data to be mixed with audio stream #1 is contained in the data reproduced by reference to the playlist is described in PlayList().
The first example of the syntax of PlayList() of Figure 16 is suitable when the Index() block is the one described with reference to Figure 10, that is, when the flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described not in Index() but in a play item discussed later. The first example of the syntax of PlayList() of Figure 16 is also suitable when the Index() block is the one described with reference to Figure 11 or 12, that is, when the flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described in Index().
The length field contains a 32-bit unsigned integer indicating the number of bytes from the position immediately following the length field to the last position of PlayList(). The length field is followed by a 16-bit reserved_for_future_use field, in which no data is yet described and which is provided for future extension. The number_of_PlayItems field is a 16-bit field indicating the number of play items contained in the playlist. For example, the number of play items in Figure 5 is 3. PlayItem_id is numbered from 0 in the order in which PlayItem() appears in the PlayList. As shown in Figures 5 and 7, PlayItem_id=0, 1, 2, ....
The number_of_SubPaths field is a 16-bit field indicating the number (number of entries) of SubPaths in the playlist. For example, the number of SubPaths in Figure 5 is 3. SubPath_id is numbered from 0 in the order in which SubPath() appears in the playlist. As shown in Figure 5, SubPath_id=0, 1, 2, .... In the subsequent description, play items are referenced according to the number of play items, and SubPaths are referenced according to the number of SubPaths.
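The reading order implied by this syntax can be sketched as follows in C; the reader functions and the struct layout are hypothetical placeholders used only to illustrate how PlayItem_id and SubPath_id are assigned.

    #include <stdint.h>
    #include <stdio.h>

    struct playlist { uint16_t number_of_PlayItems, number_of_SubPaths; };

    static void read_PlayItem(int play_item_id) { printf("PlayItem_id=%d\n", play_item_id); }
    static void read_SubPath(int sub_path_id)   { printf("SubPath_id=%d\n", sub_path_id); }

    static void read_PlayList(const struct playlist *pl)
    {
        /* PlayItem_id and SubPath_id are assigned from 0 in order of appearance */
        for (int i = 0; i < pl->number_of_PlayItems; i++)
            read_PlayItem(i);
        for (int i = 0; i < pl->number_of_SubPaths; i++)
            read_SubPath(i);
    }

    int main(void)
    {
        struct playlist pl = { 3, 3 };  /* three play items and three SubPaths, as in Figure 5 */
        read_PlayList(&pl);
        return 0;
    }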
Figure 17 illustrates a second example of the syntax of PlayList(). As shown in Figure 17, the audio stream reproduced by reference to the playlist is audio stream #1 only (no audio stream #2 is contained). A flag indicating whether sound data to be mixed with audio stream #1 is contained in the data reproduced by reference to PlayList() is described in PlayList().
The second example of PlayList() of Figure 17 is suitable when the Index() block is the one discussed with reference to Figure 10, that is, when the flag indicating whether other audio data to be mixed with audio stream #1 is contained in the data referenced by Index() is described neither in Index() nor in the play item discussed below.
The second example of the syntax of PlayList() of Figure 17 is identical to the first example of the syntax of PlayList() discussed with reference to Figure 16, except that the is_MixApp flag indicating whether sound data is contained in the data referenced by PlayList() is newly described, and the reserved_for_future_use field is changed from 16 bits to 15 bits. The is_MixApp flag may also be defined as a flag indicating whether the data referenced by PlayList() contains either the sound data or the audio stream #2 to be mixed with audio stream #1. In that case, by checking only this one flag, whether audio data is to be mixed, and thus whether the decoding process is necessary, can be determined quickly.
Figure 18 illustrates a third example of the syntax of PlayList(). As shown in the figure, PlayList() contains a flag indicating whether the data referenced by PlayList() contains sound data to be mixed with audio stream #1 and a flag indicating whether the data referenced by PlayList() contains an audio stream #2 to be mixed with audio stream #1.
The third example of the syntax of PlayList() of Figure 18 is suitable when the Index() block is the one described with reference to Figure 10, that is, when the flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described neither in Index() nor in the play item described later.
The third example of the syntax of PlayList() of Figure 18 is identical to the first example of the syntax of PlayList() of Figure 16, except that two flags are newly described, namely the is_MixApp_1 flag indicating whether the data referenced by PlayList() contains audio stream #2 and the is_MixApp_2 flag indicating whether the data referenced by PlayList() contains sound data, and the reserved_for_future_use field is changed from 16 bits to 14 bits.
Figure 19 illustrates an example of the syntax of SubPath().
The length field contains a 32-bit unsigned integer indicating the number of bytes from the position immediately following the length field to the last position of SubPath(). The length field is followed by a 16-bit reserved_for_future_use field, in which no data is yet described and which is provided for future extension. The SubPath_type field is an 8-bit field indicating the application type of the SubPath; it indicates whether the SubPath is of the audio, bitmap subtitle or text subtitle type, for example. SubPath_type is described below with reference to Figure 20. The SubPath_type field is followed by a 15-bit reserved_for_future_use field. is_repeat_SubPath is a one-bit field specifying the playback method of the SubPath, indicating whether the SubPath is played back only once or repeatedly during the playback of the main path. is_repeat_SubPath is used, for example, when the main AV stream and the clip specified by the SubPath have different playback timings (for example, when the main path is used to display a slide show of still pictures and the subpath, serving as an audio path, is used as background music of the main path). The is_repeat_SubPath field is followed by an 8-bit reserved_for_future_use field. The number_of_SubPlayItems field is an 8-bit field indicating the number (number of entries) of sub play items in the single subpath. As for the value of the number_of_SubPlayItems field, the number of sub play items is one for SubPath_id=0 and two for SubPath_id=1 in Figure 5. In the subsequent for statement, sub play items are referenced as many times as the number of sub play items.
Figure 20 shows SubPath_type (the types of subpaths). The types of subpaths are defined as shown in Figure 20.
Referring to Figure 20, SubPath_type=0 and 1 are reserved. SubPath_type=2 is the audio presentation path of a browsable slide show. For example, SubPath_type=2 indicates that, in the playlist, the audio presentation path using the subpath and the main path using the play items are asynchronous with each other.
SubPath_type=3 is a subpath for an interactive graphics presentation menu. For example, SubPath_type=3 indicates that, in the playlist, the interactive graphics menu using the subpath and the main path using the play items are asynchronous with each other.
SubPath_type=4 is a subpath for a text subtitle presentation path. For example, SubPath_type=4 indicates that, in the playlist, the text subtitle presentation path using the subpath and the main path using the play items are asynchronous with each other.
SubPath_type=5 is a subpath for a second audio presentation path (that is, a path for referencing a second audio stream). In particular, SubPath_type=5 indicates that, in the playlist, the second audio presentation path using the subpath and the main path using the play items are synchronous with each other. For example, the second audio stream using this subpath is a commentary (voice) by the movie director. The subpath identified by SubPath_id in Figure 7 corresponds to SubPath_type=5 in the SubPath_type field of Figure 19.
SubPath_type=6 is a subpath for a second video presentation path (that is, a path for referencing a second video stream). In particular, SubPath_type=6 indicates that the second video presentation path using the subpath and the main path using the play items are synchronous with each other. For example, the second video stream using this subpath is a commentary (moving picture) by the movie director.
SubPath_type=7 to 255 are reserved.
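The following C enumeration summarizes the SubPath_type values listed above; the numeric values follow the text, while the enumerator names are illustrative and not part of the syntax.

    enum SubPath_type {
        /* 0 and 1 reserved */
        SUBPATH_AUDIO_BROWSABLE_SLIDESHOW  = 2,  /* audio presentation path of a browsable slide show */
        SUBPATH_IG_PRESENTATION_MENU       = 3,  /* interactive graphics presentation menu            */
        SUBPATH_TEXT_SUBTITLE_PRESENTATION = 4,  /* text subtitle presentation path                   */
        SUBPATH_SECOND_AUDIO_PRESENTATION  = 5,  /* path referencing the second audio stream          */
        SUBPATH_SECOND_VIDEO_PRESENTATION  = 6   /* path referencing the second video stream          */
        /* 7 to 255 reserved */
    };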
Figure 21 illustrates the syntax of SubPlayItem(i).
The length field contains a 16-bit unsigned integer indicating the number of bytes from the position immediately following the length field to the last position of SubPlayItem().
Figure 21 covers two cases, a first case in which the sub play item references a single clip and a second case in which the sub play item references a plurality of clips.
The first case, in which the sub play item references a single clip, is described first.
The sub play item contains Clip_Information_file_name[0], which specifies the clip. The sub play item further contains a Clip_codec_identifier[0] field specifying the decoding method of the clip, a reserved_for_future_use field, an is_multi_Clip_entries flag indicating the presence or absence of registration of multiple clips, and a ref_to_STC_id[0] field relating to STC discontinuity points (discontinuity points of the system time base). If the is_multi_Clip_entries flag is set, the syntax that allows the sub play item to reference a plurality of clips is used. The sub play item further contains a SubPlayItem_IN_time field and a SubPlayItem_OUT_time field specifying the playback section of the subpath within the clip, and a sync_PlayItem_id field and a sync_start_PTS_of_PlayItem field specifying, along the time axis, the start time of the SubPath. As described above, the sync_PlayItem_id field and the sync_start_PTS_of_PlayItem field are used in the cases of Figures 6 and 7 (when the main AV stream and the file indicated by the SubPath are reproduced simultaneously). The sync_PlayItem_id and sync_start_PTS_of_PlayItem fields are not used when the main AV stream and the file indicated by the SubPath are not reproduced simultaneously (for example, when the still pictures referenced by the main path and the audio referenced by the subpath are asynchronous with each other, as in the case of background music (BGM) of a slide show composed of still pictures). The SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id and sync_start_PTS_of_PlayItem fields are used in common for the clip referenced by the sub play item.
The sub play item may reference a plurality of clips (is_multi_Clip_entries=1b) as shown in Figure 7, in other words, multiple clips may be registered. This case is described below.
The num_of_Clip_entries field indicates the number of clips. The values of Clip_Information_file_name[SubClip_entry_id] specify the clips other than Clip_Information_file_name[0], in particular clips such as Clip_Information_file_name[1] and Clip_Information_file_name[2]. The sub play item further contains a Clip_codec_identifier[SubClip_entry_id] field specifying the decoding method of the clip, a ref_to_STC_id[SubClip_entry_id] field relating to STC discontinuity points (discontinuity points of the system time base), and a reserved_for_future_use field.
The SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id and sync_start_PTS_of_PlayItem fields are used in common among the plurality of clips. As shown in Figure 7, the SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id and sync_start_PTS_of_PlayItem fields are shared by SubClip_entry_id=0 and SubClip_entry_id=1. The text-based subtitle corresponding to the selected SubClip_entry_id is reproduced according to the SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id and sync_start_PTS_of_PlayItem fields.
SubClip_entry_id is numbered from 1 in the order in which Clip_Information_file_name[SubClip_entry_id] appears in the sub play item. The SubClip_entry_id of Clip_Information_file_name[0] is 0.
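A minimal C sketch of the multi-clip case follows: the IN/OUT times and synchronization fields are shared, while Clip_Information_file_name[] differs per SubClip_entry_id. The struct layout, field widths, function name and the clip file names used in main() are illustrative assumptions.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_SUBCLIPS 8

    struct sub_play_item {
        uint8_t  num_of_Clip_entries;
        char     Clip_Information_file_name[MAX_SUBCLIPS][6];
        uint32_t SubPlayItem_IN_time, SubPlayItem_OUT_time;  /* shared by all registered clips */
        uint16_t sync_PlayItem_id;
        uint32_t sync_start_PTS_of_PlayItem;
    };

    /* Returns the clip information file name of the selected entry,
     * e.g. the text subtitle clip matching the user's selection. */
    static const char *select_subclip(const struct sub_play_item *spi, uint8_t SubClip_entry_id)
    {
        if (SubClip_entry_id >= spi->num_of_Clip_entries)
            return NULL;
        return spi->Clip_Information_file_name[SubClip_entry_id];
    }

    int main(void)
    {
        struct sub_play_item spi = { .num_of_Clip_entries = 2,
            .Clip_Information_file_name = { "00003", "00004" } };
        printf("%s\n", select_subclip(&spi, 1));
        return 0;
    }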
Figure 22 illustrates a first example of the syntax of PlayItem(), in which no flag indicating whether other audio data is to be mixed with audio stream #1 is described.
The first example of the syntax of PlayItem() of Figure 22 is suitable when the Index() block is the one described with reference to Figure 11 or Figure 12, that is, when the flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described in Index(). The first example of the syntax of PlayItem() of Figure 22 is also suitable when the AppInfoPlayList() block is the one discussed with reference to Figure 14, that is, when the flag indicating whether the data referenced by the playlist contains other audio data to be mixed with audio stream #1 is described in the AppInfoPlayList() block. The first example of the syntax of PlayItem() of Figure 22 is also suitable when the PlayList() block is the one discussed with reference to Figure 17 or Figure 18, that is, when the flag indicating whether the data referenced by the playlist contains other audio data to be mixed with audio stream #1 is described in PlayList().
The length field contains a 16-bit unsigned integer indicating the number of bytes from the position immediately following the length field to the last position of PlayItem(). The Clip_Information_file_name[0] field specifies the clip referenced by the play item. Referring to Figure 6, the Clip_Information_file_name[0] field references the main AV stream. The PlayItem() block further contains a Clip_codec_identifier[0] field specifying the decoding method of the clip, an 11-bit reserved_for_future_use field in which no data is yet described and which is provided for future extension, and an is_multi_angle flag indicating whether the PlayItem() block supports multi-angle playback. The PlayItem() block further contains a connection_condition field and a ref_to_STC_id[0] field as information relating to STC discontinuity points (discontinuity points of the system time base). The PlayItem() block further contains an IN_time field and an OUT_time field specifying the playback section of the play item within the clip. As shown in Figure 6, the playback range of the main clip AV stream file is indicated by the IN_time and OUT_time fields. The PlayItem() block further contains a UO_mask_table() block, a PlayItem_random_access_mode field and a still_mode field. The case in which is_multi_angle indicates a plurality of angles is not discussed here, because it does not directly relate to the present invention.
At least one subpath to be reproduced in association with the play item of interest can be prepared. In this case, the STN_table() block in the play item provides a mechanism that, in response to a user operation for switching audio or subtitles, allows one clip to be selected from among the clip referenced by the play item and the clips referenced by the at least one SubPath. STN_table() also provides a mechanism that allows two selected audio streams to be mixed and played back.
Figure 23 illustrates a second example of the syntax of PlayItem(). As shown in the figure, the data corresponding to PlayItem() contains no audio stream #2. A flag indicating whether the data corresponding to PlayItem() contains sound data to be mixed with audio stream #1 is described in PlayItem().
The second example of the syntax of PlayItem() of Figure 23 is suitable when the Index() block is the one described with reference to Figure 10, that is, when no flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described in Index(), when AppInfoPlayList() is the one of Figure 15, that is, when no flag indicating whether the data referenced by the playlist contains other audio data to be mixed with audio stream #1 is described in AppInfoPlayList(), and when PlayList() is the one discussed with reference to Figure 16, that is, when no flag indicating whether the data referenced by the playlist contains other audio data to be mixed with audio stream #1 is described in PlayList().
The second example of the syntax of PlayItem() of Figure 23 is identical to the first example of the syntax of PlayItem() of Figure 22, except that a one-bit is_MixApp flag indicating whether the data referenced by PlayItem() contains sound data is newly described, and the reserved_for_future_use field is changed from 11 bits to 10 bits. The is_MixApp flag may also be defined as a flag indicating whether the data referenced by PlayItem() contains either the sound data or the audio stream #2. By checking only this one flag, whether audio data is to be mixed, and thus whether the decoding process is necessary, can be determined quickly.
Figure 24 illustrates a third example of the syntax of PlayItem(). The PlayItem() of Figure 24 contains a flag indicating whether the data corresponding to PlayItem() contains sound data to be mixed with audio stream #1 and a flag indicating whether the data corresponding to PlayItem() contains an audio stream #2 to be mixed with audio stream #1.
The third example of the syntax of PlayItem() of Figure 24 is suitable when the Index() block is the one described with reference to Figure 10, that is, when no flag indicating whether the data referenced by Index() contains other audio data to be mixed with audio stream #1 is described in Index(), when AppInfoPlayList() is the block of Figure 15, that is, when no flag indicating whether the data referenced by the playlist contains other audio data to be mixed with audio stream #1 is described in AppInfoPlayList(), and when PlayList() is the block described with reference to Figure 16, that is, when no flag indicating whether the data referenced by the playlist contains other data to be mixed with audio stream #1 is described in PlayList().
The third example of the syntax of PlayItem() of Figure 24 is identical to the first example of the syntax of PlayItem() of Figure 22, except that two flags are newly described, namely a one-bit is_MixApp_1 flag indicating whether the data corresponding to PlayItem() contains audio stream #2 and a one-bit is_MixApp_2 flag indicating whether the data corresponding to PlayItem() contains sound data, and the reserved_for_future_use field is changed from 11 bits to 9 bits.
Figure 25 illustrates the syntax of STN_table(). STN_table() is set as an attribute of a play item.
The length field is a 16-bit unsigned integer field indicating the number of bytes from the position immediately following the length field to the last position of STN_table(). The length field is followed by a 16-bit reserved_for_future_use field. The number_of_video_stream_entries field indicates the number of streams that are entered (registered) in STN_table() and given video_stream_id. video_stream_id is identification information identifying a video stream, and video_stream_number is the video stream number that is used for video switching and is visible to the user.
The number_of_audio_stream_entries field indicates the number of first audio streams that are entered in STN_table() and given audio_stream_id. audio_stream_id is identification information identifying an audio stream, and audio_stream_number is the audio stream number that is used for audio switching and is visible to the user. The number_of_audio_stream2_entries field indicates the number of second audio streams that are entered in STN_table() and given audio_stream_id2. audio_stream_id2 is identification information identifying an audio stream, and audio_stream_number is the audio stream number that is used for audio switching and is visible to the user. The audio streams of number_of_audio_stream_entries entered in STN_table() are the audio streams decoded by a first audio decoder 75-1 of a reproducer 20-2 of Figure 41 discussed below, and the audio streams of number_of_audio_stream2_entries entered in STN_table() are the audio streams decoded by a second audio decoder 75-2 of the reproducer 20-2 of Figure 41 discussed below. In the STN_table() of Figure 25, audio streams to be decoded by these two audio decoders can thus be entered.
In the following discussion, the audio streams in the number_of_audio_stream_entries field, decoded by the first audio decoder 75-1 of the reproducer 20-2 of Figure 41, are referred to as first audio streams #1, and the audio streams in the number_of_audio_stream2_entries field, decoded by the second audio decoder 75-2 of the reproducer 20-2 of Figure 41, are referred to as second audio streams #2. The first audio stream #1 has a higher priority than the second audio stream #2.
The number_of_PG_textST_stream_entries field indicates the number of streams that are entered in STN_table() and given PG_textST_stream_id. Streams in which bitmap subtitles, such as DVD sub-pictures, have been run-length coded (for example, presentation graphics streams) and text subtitle files (textST) are entered here. PG_textST_stream_id is identification information identifying a subtitle stream, and PG_textST_stream_number is the subtitle stream number (text subtitle stream number) that is used for subtitle switching and is visible to the user.
The number_of_IG_stream_entries field indicates the number of streams that are entered in STN_table() and given IG_stream_id. Interactive graphics streams are entered here. IG_stream_id is identification information identifying an interactive graphics stream, and IG_stream_number is the graphics stream number that is used for graphics switching and is visible to the user.
The syntax of the stream_entry() block is described below with reference to Figure 26.
The length field is an 8-bit unsigned integer indicating the number of bytes from the position immediately following the length field to the last position of stream_entry(). The type field is an 8-bit field indicating the type of information required to uniquely identify the stream given the above-described stream number.
With type=1, a 16-bit packet ID (PID) is specified to identify one elementary stream from among the plurality of elementary streams multiplexed in the clip (main clip) referenced by the play item. The ref_to_stream_PID_of_mainClip field indicates this PID. In other words, with type=1, the stream is determined simply by specifying the PID in the main clip AV stream file.
A subpath may reference a plurality of clips at a time, and a plurality of elementary streams may be multiplexed in each clip. With type=2, SubPath_id, Clip id and packet ID (PID) are specified to identify one elementary stream from among the plurality of elementary streams of the clip or clips referenced by the SubPath. The ref_to_SubPath_id field indicates the SubPath_id, the ref_to_SubClip_entry_id field indicates the Clip id, and the ref_to_stream_PID_of_SubClip field indicates the PID. These IDs are used when the sub play item references a plurality of clips and a plurality of elementary streams are multiplexed in each clip.
By preparing these two types (type=1 and type=2) for the case in which at least one subpath is reproduced in association with a play item, a single elementary stream can be identified from among the clip referenced by the play item and the clips referenced by the at least one subpath. Type=1 indicates a clip referenced by the main path (main clip), and type=2 indicates a clip referenced by a subpath (sub clip).
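A minimal C sketch of how a stream registered by stream_entry() could be located follows; the struct and function names other than the field names above are hypothetical.

    #include <stdint.h>

    struct stream_entry {
        uint8_t  type;                           /* 1 = main clip, 2 = sub path clip */
        uint16_t ref_to_stream_PID_of_mainClip;  /* valid when type == 1 */
        uint8_t  ref_to_SubPath_id;              /* valid when type == 2 */
        uint8_t  ref_to_SubClip_entry_id;        /* valid when type == 2 */
        uint16_t ref_to_stream_PID_of_SubClip;   /* valid when type == 2 */
    };

    struct stream_location { int from_sub_path; uint8_t sub_path_id, subclip_entry_id; uint16_t pid; };

    static struct stream_location locate_stream(const struct stream_entry *e)
    {
        struct stream_location loc = {0};
        if (e->type == 1) {            /* elementary stream multiplexed in the main clip */
            loc.pid = e->ref_to_stream_PID_of_mainClip;
        } else if (e->type == 2) {     /* elementary stream in a clip referenced by a sub path */
            loc.from_sub_path = 1;
            loc.sub_path_id = e->ref_to_SubPath_id;
            loc.subclip_entry_id = e->ref_to_SubClip_entry_id;
            loc.pid = e->ref_to_stream_PID_of_SubClip;
        }
        return loc;
    }

    int main(void)
    {
        struct stream_entry e = { .type = 1, .ref_to_stream_PID_of_mainClip = 0x1100 };
        return locate_stream(&e).pid ? 0 : 1;
    }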
Returning to the description of the STN_table() block of Figure 25, in the for loop of the video stream IDs (video_stream_id), video_stream_id is assigned, starting from 0, to each elementary stream identified by each stream_entry(). Video stream numbers (video_stream_number) may be used instead of video stream IDs (video_stream_id). In that case, video_stream_number is numbered from 1 rather than from 0. In other words, video_stream_number is obtained by adding 1 to video_stream_id. Because the video stream number is used for video switching and is visible to the user, it is numbered from 1.
Similarly, in the for loop of the audio stream IDs (audio_stream_id), audio_stream_id is assigned, starting from 0, to each audio elementary stream identified by each stream_entry(). As with the video streams, audio stream numbers (audio_stream_number) may be used instead of audio stream IDs (audio_stream_id). audio_stream_number is numbered from 1 rather than from 0. In other words, audio_stream_number is obtained by adding 1 to audio_stream_id. Because the audio stream number is used for audio switching and is visible to the user, it is numbered from 1.
Similarly, in the for loop of the audio stream IDs 2 (audio_stream_id2), audio_stream_id2 is assigned, starting from 0, to each audio elementary stream identified by each stream_entry(). As with the video streams, audio stream numbers 2 (audio_stream_number2) may be used instead of audio stream IDs 2 (audio_stream_id2). audio_stream_number2 is numbered from 1 rather than from 0. In other words, audio_stream_number2 is obtained by adding 1 to audio_stream_id2. Because the audio stream number 2 is used for audio switching and is visible to the user, it is numbered from 1.
In the STN_table() of Figure 25, the audio streams of number_of_audio_stream_entries (first audio streams #1) and the audio streams of number_of_audio_stream2_entries (second audio streams #2) are defined. In other words, because an audio stream #1 and a second audio stream #2 can be entered using STN_table(), the user can select two audio streams to be reproduced in synchronization with each other.
In the for loop of the subtitle stream IDs (PG_textST_stream_id), PG_textST_stream_id is assigned, starting from 0, to each bitmap subtitle elementary stream or text subtitle identified by each stream_entry(). As with the video streams, subtitle stream numbers (PG_textST_stream_number) may be used instead of subtitle stream IDs (PG_textST_stream_id). In that case, PG_textST_stream_number is numbered from 1 rather than from 0. In other words, PG_textST_stream_number is obtained by adding 1 to PG_textST_stream_id. Because the subtitle stream number (text subtitle stream number) is used for subtitle switching and is visible to the user, it is numbered from 1.
Similarly, in the for loop of the graphics stream IDs (IG_stream_id), IG_stream_id is assigned, starting from 0, to each interactive graphics elementary stream identified by each stream_entry(). As with the video streams, graphics stream numbers (IG_stream_number) may be used instead of graphics stream IDs (IG_stream_id). In that case, IG_stream_number is numbered from 1 rather than from 0. IG_stream_number is obtained by adding 1 to IG_stream_id. Because the graphics stream number is used for graphics switching and is visible to the user, it is numbered from 1.
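The relationship between the zero-based stream IDs and the user-visible stream numbers described above reduces to adding 1, as the following short C example illustrates.

    #include <stdio.h>

    static int stream_number_from_id(int stream_id) { return stream_id + 1; }

    int main(void)
    {
        /* audio_stream_id = 0 is presented to the user as audio stream number 1 */
        printf("audio_stream_id 0 -> audio_stream_number %d\n", stream_number_from_id(0));
        return 0;
    }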
The stream_attribute() block of the STN_table() of Figure 25 is described below.
In the for statement following the reserved_for_future_use field, the video stream portion is referenced for the video streams; for the audio streams, the audio stream portions of the main path and the subpath set by the data provider (for example, the producer of the recording medium 21) are referenced; the PG textST stream portion is referenced for the PG textST streams; and the IG stream portion is referenced for the IG streams.
The stream_attribute() in the for loop of the video stream IDs (video_stream_id) gives the stream attribute information of the one video elementary stream identified by each stream_entry(). That is, stream_attribute() describes the stream attribute information of the one video elementary stream identified by each stream_entry().
Similarly, the stream_attribute() in the for loop of the audio stream IDs (audio_stream_id) gives the stream attribute information of the one audio elementary stream identified by each stream_entry(). That is, stream_attribute() describes the stream attribute information of the one audio elementary stream identified by each stream_entry(). For example, the audio elementary stream identified by type=1 or type=2 of the stream_entry() of Figure 26 is a single stream, and stream_attribute() gives the stream attribute information of that one audio elementary stream.
Similarly, the stream_attribute() in the for loop of the audio stream IDs 2 (audio_stream_id2) gives the stream attribute information of the one audio elementary stream identified by each stream_entry(). In particular, stream_attribute() describes the stream attribute information of the one audio elementary stream identified by each stream_entry(). For example, the audio elementary stream identified by type=1 or type=2 of the stream_entry() of Figure 26 is a single stream, and stream_attribute() gives the stream attribute information of that one audio elementary stream.
Similarly, the stream_attribute() in the for loop of the subtitle stream IDs (PG_textST_stream_id) gives the stream attribute information of the one bitmap subtitle elementary stream or text subtitle elementary stream identified by each stream_entry(). In particular, stream_attribute() describes the stream attribute information of the one bitmap subtitle elementary stream or text subtitle elementary stream identified by each stream_entry().
Similarly, the stream_attribute() in the for loop of the graphics stream IDs (IG_stream_id) gives the stream attribute information of the one interactive graphics elementary stream identified by each stream_entry(). In particular, stream_attribute() describes the stream attribute information of the one interactive graphics elementary stream identified by each stream_entry().
The syntax of stream_attribute() is described below with reference to Figure 27.
The length field contains a 16-bit unsigned integer indicating the number of bytes from the position immediately following the length field to the last position of the stream_attribute() block.
The stream_coding_type field indicates the coding type of the elementary stream as shown in Figure 28. The coding types of elementary streams include MPEG-2 video stream, HDMV LPCM audio, Dolby AC-3 audio, dts audio, presentation graphics stream, interactive graphics stream and text subtitle stream.
The video_format field indicates the video format of a video elementary stream as shown in Figure 29. The video formats of video elementary streams include 480i, 576i, 480p, 1080i, 720p and 1080p.
The frame_rate field indicates the frame rate of a video elementary stream as shown in Figure 30. The frame rates of video elementary streams include 24000/1001, 24, 25, 30000/1001, 50 and 60000/1001.
The aspect_ratio field indicates the aspect ratio information of a video elementary stream as shown in Figure 31. The aspect ratio information described for a video elementary stream is a 4:3 display aspect ratio or a 16:9 display aspect ratio.
The audio_presentation_type field indicates the presentation type information of an audio elementary stream as shown in Figure 32. The presentation types described for an audio elementary stream are single mono channel, dual mono channel, stereo (2 channels) and multi-channel.
The sampling_frequency field indicates the sampling frequency of an audio elementary stream as shown in Figure 33. The sampling frequencies described for an audio elementary stream are 48 kHz and 96 kHz.
The audio_language_code field contains the language code (such as Japanese, Korean or Chinese) of an audio elementary stream.
The PG_language_code field contains the language code (Japanese, Korean, Chinese, etc.) of a bitmap subtitle elementary stream.
The IG_language_code field contains the language code (Japanese, Korean, Chinese, etc.) of an interactive graphics elementary stream.
The textST_language_code field contains the language code (Japanese, Korean, Chinese, etc.) of a text subtitle elementary stream.
The character_code field contains the character code of a text subtitle elementary stream as shown in Figure 34. The character codes of text subtitle elementary streams include Unicode V1.1 (ISO 10646-1), Shift JIS (Japanese), KSC 5601-1987 including KSC 5653 for Roman characters (Korean), GB 18030-2000 (Chinese), GB2312 (Chinese) and BIG5 (Chinese).
The syntax of the stream_attribute() of Figure 27 is described below with reference to Figures 27 and 28 to 34.
If the coding type (stream_coding_type of Figure 27) of the elementary stream is MPEG-2 video stream (Figure 28), stream_attribute() contains the video format (Figure 29), the frame rate (Figure 30) and the aspect ratio information (Figure 31) of the elementary stream.
If the coding type (stream_coding_type of Figure 27) of the elementary stream is one of HDMV LPCM audio, Dolby AC-3 audio and dts audio (Figure 28), stream_attribute() contains the presentation type information (Figure 32), the sampling frequency (Figure 33) and the language code of the audio elementary stream.
If the coding type (stream_coding_type of Figure 27) of the elementary stream is presentation graphics stream (Figure 28), stream_attribute() contains the language code of the bitmap subtitle elementary stream.
If the coding type (stream_coding_type of Figure 27) of the elementary stream is interactive graphics stream (Figure 28), stream_attribute() contains the language code of the interactive graphics elementary stream.
If the coding type (stream_coding_type of Figure 27) of the elementary stream is text subtitle stream (Figure 28), stream_attribute() contains the character code (Figure 34) and the language code of the text subtitle elementary stream.
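The case distinction made by stream_attribute() can be sketched as follows in C; the enumerator names and the helper function are illustrative assumptions, while the attribute field names follow the text above.

    #include <stdio.h>

    enum coding_type { MPEG2_VIDEO, HDMV_LPCM, DOLBY_AC3, DTS_AUDIO,
                       PRESENTATION_GRAPHICS, INTERACTIVE_GRAPHICS, TEXT_SUBTITLE };

    /* Returns the attribute fields carried by stream_attribute() for the given coding type. */
    static const char *attributes_for(enum coding_type t)
    {
        switch (t) {
        case MPEG2_VIDEO:           return "video_format, frame_rate, aspect_ratio";
        case HDMV_LPCM:
        case DOLBY_AC3:
        case DTS_AUDIO:             return "audio_presentation_type, sampling_frequency, audio_language_code";
        case PRESENTATION_GRAPHICS: return "PG_language_code";
        case INTERACTIVE_GRAPHICS:  return "IG_language_code";
        case TEXT_SUBTITLE:         return "character_code, textST_language_code";
        }
        return "";
    }

    int main(void)
    {
        printf("%s\n", attributes_for(DOLBY_AC3));
        return 0;
    }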
The attribute information is not limited to these items of information.
When a play item and at least one SubPath to be reproduced in association with the play item are prepared, the attribute information of the one elementary stream identified by stream_entry() is known from stream_attribute(), for the clip referenced by the play item and the clips referenced by the at least one SubPath.
By checking the attribute information (stream_attribute()), the reproducer can determine whether the device itself has the function of reproducing the elementary stream. Also by checking the attribute information, the reproducer can select elementary streams that match the initial language setting information.
For example, the reproducer may have the function of reproducing bitmap subtitle elementary streams but not the function of reproducing text subtitle elementary streams. If the user instructs the reproducer to switch subtitles, the reproducer successively selects only bitmap subtitle elementary streams from the for loop of the subtitle stream IDs (PG_textST_stream_id) for playback.
The initial language setting information on the reproducer may be Japanese. If the user instructs the reproducer to switch audio, the reproducer successively selects only elementary streams having the Japanese language code from the for loop of the audio stream IDs (audio_stream_id) for playback.
An AV stream (movie) composed of a video stream and an audio stream referenced by the main path may be reproduced. When the user instructs the reproducer to switch audio and specifies (selects) audio stream #1 (the audio output of the standard movie) and audio stream #2 (a commentary provided by the director or a performer), the reproducer mixes (superimposes) audio stream #1 and audio stream #2 and reproduces them together with the video stream.
As understood from the STN_table() of Figures 25 and 26, both audio stream #1 and audio stream #2 may be audio streams contained in the clip referenced by the main path. Alternatively, one of audio stream #1 and audio stream #2 may be an audio stream in the clip referenced by the main path, and the other may be an audio stream in a clip referenced by a subpath. In this manner, two streams can be selected from among the plurality of audio streams superimposed on the main AV stream referenced by the main path and mixed for playback.
When a play item and at least one subpath to be reproduced in association with the play item are prepared, the STN_table() in PlayItem() provides a mechanism that allows the user to switch audio or subtitles by selecting from among the clip referenced by the play item and the clips referenced by the at least one subpath. The reproducer thus allows the user to operate interactively on streams and files different from the main AV stream being played back.
Because a plurality of subpaths, each referencing a sub play item, can be used in a single playlist, the resulting AV stream has high expandability and flexibility. In particular, a sub play item can be added later. For example, there may be a clip AV stream file referenced by the main path and a playlist associated with it. If the playlist is rewritten as a result of adding a new subpath, a clip AV stream file different from the clip AV stream file referenced by the main path can be referenced, together with that clip AV stream file, in accordance with the new playlist for playback. The reproducer thus provides a high degree of expandability.
The STN_table() in PlayItem() provides a mechanism that allows the audio stream #1 to be decoded by the first audio decoder 75-1 and the audio stream #2 to be decoded by the second audio decoder 75-2 of the reproducer 20-2 of Figure 41 to be mixed and then reproduced. For example, when a PlayItem() and at least one subpath to be reproduced in association with it are prepared, a mechanism is provided in which the audio stream of the clip referenced by the play item is treated as audio stream #1, the audio stream of the clip referenced by the SubPath is treated as audio stream #2, and the two are mixed and reproduced. A mechanism is also provided that allows two audio streams (audio stream #1 and audio stream #2) contained in the clip referenced by the play item (the main clip) to be mixed and then reproduced. In this manner, an audio stream different from the recorded main audio stream (such as a director's commentary stream) can be superimposed and reproduced. The two superimposed audio streams, audio stream #1 and audio stream #2, can be mixed and reproduced.
A specific example is described below with reference to Figure 35. Figure 35 illustrates a stream number table showing the relationship between the audio signals and the subtitle signals provided to the user.
Referring to Figure 35, the audio numbers are referred to as A_SN (audio stream number) and A_SN2, and the subtitle numbers are referred to as S_SN (sub-picture stream numbers). Each audio stream #1 entered in the STN_table() of the play items forming the main path of the playlist (the audio streams entered with audio_stream_id) is assigned an A_SN. Each audio stream #2 entered in the STN_table() of the play items forming the main path of the playlist (the audio streams entered with audio_stream_id2) is assigned an A_SN2.
In particular, A_SN=1 is assigned to audio 2, A_SN=2 is assigned to audio 1, and A_SN=3 is assigned to audio 3. Further, A_SN2=1 is assigned to audio 4, and A_SN2=2 is assigned to audio 5. The user selects the audio stream #1 to be reproduced from among the audio streams assigned A_SN, and selects the audio stream #2 to be mixed with audio stream #1 from among the audio streams assigned A_SN2. For example, the user selects audio 1 of A_SN=2 and audio 5 of A_SN2=2 as the audio streams to be reproduced.
If the user issues an instruction to switch the audio while audio 2 of A_SN=1 is selected, the audio is switched to audio 1 of A_SN=2. If the user issues another instruction to switch the audio, the audio is switched to audio 3 of A_SN=3. If the user issues a further instruction to switch the audio, the audio is switched back to audio 2 of A_SN=1. If the user issues an instruction to switch the audio while audio 4 of A_SN2=1 is selected, the audio is switched to audio 5 of A_SN2=2. If the user issues another instruction to switch the audio, the audio is switched back to audio 4 of A_SN2=1. The A_SN used to select audio stream #1 and the A_SN2 used to switch audio stream #2 are switched independently of each other. In particular, the user selects one audio stream from A_SN=1 to A_SN=3, and selects one audio stream from A_SN2=1 to A_SN2=2.
The smaller the A_SN or A_SN2 number, the higher the priority of the audio signal provided to the user. A stream provided by A_SN has a higher priority than a stream provided by A_SN2. The stream of A_SN=1 is the audio stream reproduced in the default setting.
The sound reproduced according to the initial language setting information on the reproducer corresponds to audio 2 of A_SN=1 (Figure 35). The sound reproduced after the audio is switched corresponds to audio 1 of A_SN=2 (Figure 35).
The STN_table() contained in the PlayItem() referenced by PlayList() provides this stream number table (Figure 25). For entering audio streams #1, audio 2 is provided by audio_stream_id=0 (A_SN=1), audio 1 is provided by audio_stream_id=1 (A_SN=2), and audio 3 is provided by audio_stream_id=2 (A_SN=3). Next, for entering audio streams #2 in STN_table() (Figure 25), audio 4 is provided by audio_stream_id2=0 (A_SN2=1), and audio 5 is provided by audio_stream_id2=1 (A_SN2=2).
By defining the two audio streams to be played back (audio stream #1 and audio stream #2) separately, the user can select any two audio streams from the defined audio streams. The user can freely select the two audio streams to be played back (from the audio streams defined as audio stream #1 and audio stream #2), and therefore enjoys a high degree of freedom in selecting the combination of audio streams. For example, the user may select the combination of audio 2 and audio 4 (the combination of A_SN=1 and A_SN2=1) or the combination of audio 2 and audio 5 (the combination of A_SN=1 and A_SN2=2).
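The two independent selections described above can be illustrated with the following short C example, which uses the assignments of Figure 35; the variable names are illustrative only.

    #include <stdio.h>

    int main(void)
    {
        const char *a_sn[]  = { "audio 2", "audio 1", "audio 3" };  /* A_SN  = 1..3 */
        const char *a_sn2[] = { "audio 4", "audio 5" };             /* A_SN2 = 1..2 */

        int selected_a_sn  = 2;   /* user selects A_SN  = 2 -> audio 1 */
        int selected_a_sn2 = 2;   /* user selects A_SN2 = 2 -> audio 5 */

        printf("mix %s (audio stream #1) with %s (audio stream #2)\n",
               a_sn[selected_a_sn - 1], a_sn2[selected_a_sn2 - 1]);
        return 0;
    }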
Because two audio streams are entered in the stream_entry() (for example, Figure 26) of the STN_table() (Figure 25) of PlayItem(), the two audio streams can be mixed for playback. In other words, two streams of the same type (audio streams in this example) can be selected from among the streams of a plurality of types and mixed (superimposed) for simultaneous playback. The user can instruct that two desired streams of the same type be mixed for playback.
In the above discussion, the user is allowed to separately select the audio stream #1 to be decoded by the first audio decoder 75-1 and the audio stream #2 to be decoded by the second audio decoder 75-2 of the reproducer 20-2 of Figure 41. Combinations of audio stream #1 and audio stream #2 may also be defined, so that the user selects a combination of streams for mixed playback.
The syntax of the sound.bdmv file in the AUXDATA directory is described below with reference to Figure 36.
The sound.bdmv file contains at least one effect sound stream for interactive graphics.
The SoundData_start_address field is a 32-bit field containing the leading address of the SoundData() block. The SoundIndex() block contains information (such as the number of channels and the frequency) about the attributes of the effect sounds whose real data is SoundData().
SoundData(), the real data of the effect sounds, is uncompressed audio data. SoundData() contains contents prepared separately from the sound streams and designed to output effect sounds at predetermined addresses, for example contents designed to modify the playback data in response to operations input by the user, contents that operate in response to operations input by the user, and data prepared as click sounds in interactive content. Depending on the content, or on the portion of the content specified by a playlist or a play item, SoundData() may not be included. When a command to reproduce an effect sound is issued or an operation is received from a user input, SoundData() can be mixed with the audio stream.
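An illustrative C sketch of this file layout follows; the field widths and the per-sound attribute struct are assumptions, with only the block and field names taken from the description above.

    #include <stdint.h>

    struct sound_index_entry {
        uint8_t  number_of_channels;   /* attribute of one effect sound */
        uint32_t sampling_frequency;
    };

    struct sound_bdmv {
        uint32_t SoundData_start_address;  /* leading address of SoundData() */
        /* SoundIndex(): attributes of each effect sound (see sound_index_entry) */
        /* SoundData():  uncompressed audio data of each effect sound, e.g. click sounds */
    };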
The configuration and the process of the above-mentioned data of the output reproducer 20 that is used to reset are described below.
Figure 37 is the configuration that illustrates as the reproducer 20-1 of one embodiment of the invention.Reproducer 20-1 reproduces the playlist with above-mentioned main path and subpath.Whether the audio stream #2 no matter audio_stream_id2 provides exists, and reproducer 20-1 only can reproduce audio stream #1 and can not reproduce audio stream #2.Reproducer 20-1 can reproduce the audio stream #1 with the voice data that mixes with it.
Reproducer 20-1 comprises replay data acquiring unit 31, switch 32, AV decoder 33-1, controller 34-1, audio coder 41, video encoder 42, D/A (numeral is to simulation) transducer 43, D/A converter 44, not compressing audio signal interface 81, compressing audio signal interface 82, uncompressed video signal interface 83, compressed video signal interface 84, simulated audio signal interface 85 and analog video signal interface 86.
As shown in figure 37, controller 34-1 reads the Index file by replay data acquiring unit 31, described replay data acquiring unit 31 such as from the memory drives of recording medium 21 reading of data that load, from reproducer 20-1 the recording medium reading of data data driver or obtain the network interface of data by network 22.In response to the order that produces, reproducer 20-1 reads play list file, read from the information of play list file and play, detect corresponding to a segment of playing, and read corresponding AV stream and AV data according to ClipInfo then.The user interface of use such as remote controller 24, the user to controller 34-1 input command with switch audio or captions.The information of the opriginal language setting of reproducer 20-1 is provided to controller 34-1 from the memory (not shown).
Controller 34-1 is according to the value control switch 61 of is_MixApp sign and is_MixApp_2 sign, and described two signs all are described in one of Index file and play list file (one of AppInfoPlayList (), PlayList () and PlayItem) and the existence or the shortage of the voice data that expression will mix with audio stream.
Play list file also comprises STN_table () except the information of main path and subpath.Controller 34-1 reads the disconnected AV stream file of main leaf that main leaf that the playlist that comprises the play list file quotes comprises in disconnected by replay data acquiring unit 31 from recording medium 21, is included in son in the sub-segment and plays sub-segment AV stream file and the son that item quotes and play the text subtitle data that item is quoted.Can be kept in the different recording mediums playing a main leaf of quoting sub-segment disconnected and that son broadcast item is quoted.For example, main leaf is disconnected can be recorded on the recording medium 21, and corresponding sub-segment is provided and is kept on the hard disk (not shown) of hard disk drive among the reproducer 20-1 (HDD) by network 22.Controller 34-1 carries out control procedure, selects also to reproduce the basic stream corresponding to the playback of self device (reproducer 20-1), or selects and reproduce the basic stream of the information that the opriginal language corresponding to reproducer 20-1 is provided with.
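One possible way for the controller 34-1 to use such a flag to control the switch 61 is sketched below in C; the routing shown (bypassing the decoder when nothing is to be mixed and compressed output is requested) is an assumption for illustration, and the function and enumerator names are hypothetical.

    #include <stdbool.h>
    #include <stdio.h>

    enum route { ROUTE_TO_DECODER, ROUTE_TO_COMPRESSED_OUTPUT };

    /* is_MixApp: flag read from the Index file or play list file;
     * compressed_output_requested: compressed (encoded) audio output has been selected. */
    static enum route control_switch_61(bool is_MixApp, bool compressed_output_requested)
    {
        if (!is_MixApp && compressed_output_requested)
            return ROUTE_TO_COMPRESSED_OUTPUT;  /* nothing to mix: the audio stream may bypass the decoder */
        return ROUTE_TO_DECODER;                /* decode so that sound data can be mixed in */
    }

    int main(void)
    {
        printf("route=%d\n", control_switch_61(false, true));
        return 0;
    }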
The AV decoder 33-1 includes buffers 51 to 54, a PID filter 55, a PID filter 56, switches 57 to 59, a switch 61, a background decoder 71, a video decoder 72, a presentation graphics decoder 73, an interactive graphics decoder 74, an audio decoder 75, a Text-ST composition 76, a switch 77, a background plane generator 91, a video plane generator 92, a presentation graphics plane generator 93, an interactive graphics plane generator 94, a buffer 95, a video data processor 96 and a mixer 97. Because the audio decoder 75 is the only audio decoder here, the reproducer 20-1 can decode audio stream #1 but cannot decode audio stream #2. In particular, the reproducer 20-1 can decode the audio streams identified by audio_stream_id in the STN_table() of Figure 25 but cannot decode the audio streams identified by audio_stream_id2. The video decoder 72 may support a plurality of decoding methods, such as MPEG2, MPEG4 and H.264/AVC, depending on the stream to be reproduced.
Decode by ECC decoder (not shown) by the file data that controller 34-1 reads, then the multiplexed stream through decoding is carried out correction process.Under the control of controller 34-1, switch 32 offers stream each buffer 51-54 then from through flowing according to type selecting the data of decoding and error correction.In particular, under the control of controller 34-1, switch 32 provides background image data, the main leaf data that disconnected AV flows is provided, puies forward the data that sub-segment AV of generation flows to buffer 53 to buffer 52 to buffer 51, and the data of Text-ST are provided to buffer 54.Buffer 51 buffering background image datas, the data of the disconnected AV stream of buffer 52 buffering main leaves, the data of buffer 53 buffer sublayer segment AV stream and buffer 54 buffering Text-ST data.
The main Clip AV stream is a stream (for example, a transport stream) in which video and at least one of audio, bitmap subtitles (a presentation graphics stream) and interactive graphics are multiplexed. The sub Clip AV stream is a stream in which at least one of audio, bitmap subtitles (a presentation graphics stream) and interactive graphics is multiplexed. The text subtitle data file (Text_ST) may or may not take the form of a multiplexed stream such as a transport stream.
Replay data acquiring unit 31 may read the files of the main Clip AV stream, the sub Clip AV stream and the text subtitle data in a time-division manner. Alternatively, replay data acquiring unit 31 may read the sub Clip AV stream and the text subtitle data in advance, before reading the main Clip AV stream, and preload them into the buffers (buffer 53 and buffer 54).
The stream data read from buffer 52, the main Clip AV stream read buffer, is output to the subsequent PID (packet ID) filter 55 at a predetermined timing. PID filter 55 sorts the input main Clip AV stream by PID (packet ID) and outputs the sorted streams to the decoders of the respective elementary streams. In particular, PID filter 55 supplies the presentation graphics stream to switch 57, which serves as the supply source to presentation graphics decoder 73, supplies the interactive graphics stream to switch 58, which serves as the supply source to interactive graphics decoder 74, and supplies the audio stream to switch 59, which serves as the supply source to audio decoder 75.
For example, the presentation graphics stream carries bitmap subtitle data, while the text subtitle data is text-based subtitle data.
The stream data read from buffer 53, the sub Clip AV stream read buffer, is output to the subsequent PID (packet ID) filter 56. PID filter 56 sorts the input sub Clip AV stream by PID and outputs the sorted streams to the decoders of the respective elementary streams. In particular, PID filter 56 supplies the presentation graphics stream to switch 57, which serves as the supply source to presentation graphics decoder 73, supplies the interactive graphics stream to switch 58, which serves as the supply source to interactive graphics decoder 74, and supplies the audio stream to switch 59, which serves as the supply source to audio decoder 75.
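The sorting performed by PID filters 55 and 56 can be pictured as a table lookup followed by a forward to the selector in front of the matching decoder. A minimal C++ sketch follows; the PID values in the table are hypothetical placeholders, since the actual assignment comes from the multiplexed Clip AV stream rather than from fixed constants.

```cpp
#include <cstdint>
#include <map>

enum class StreamType { Video, PresentationGraphics, InteractiveGraphics, Audio };

// Hypothetical PID-to-stream-type table; the real mapping is taken from the
// multiplexed Clip AV stream, not hard-coded.
const std::map<uint16_t, StreamType> kPidTable = {
    {0x1011, StreamType::Video},
    {0x1100, StreamType::Audio},                // audio stream #1
    {0x1200, StreamType::PresentationGraphics},
    {0x1400, StreamType::InteractiveGraphics},
};

// Forward one elementary-stream packet to the selector in front of its decoder.
void routePacket(uint16_t pid) {
    auto it = kPidTable.find(pid);
    if (it == kPidTable.end()) return;            // unknown PID: discard
    switch (it->second) {
    case StreamType::Video:                break; // to video decoder 72
    case StreamType::PresentationGraphics: break; // to switch 57, then presentation graphics decoder 73
    case StreamType::InteractiveGraphics:  break; // to switch 58, then interactive graphics decoder 74
    case StreamType::Audio:                break; // to switch 59, then switch 61
    }
}
```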
The data read from buffer 51, which buffers the background image data, is supplied to background decoder 71 at a predetermined timing. Background decoder 71 decodes the background image data and supplies the decoded background image data to background plane generator 91.
The video stream sorted by PID filter 55 is supplied to the subsequent video decoder 72. Video decoder 72 decodes the video stream and outputs the decoded video stream to video plane generator 92.
Switch 57 selects one of the presentation graphics stream contained in the main Clip supplied from PID filter 55 and the presentation graphics stream contained in the sub Clip, and supplies the selected presentation graphics stream to the subsequent presentation graphics decoder 73. Presentation graphics decoder 73 decodes the presentation graphics stream and supplies the decoded data of the presentation graphics stream to switch 77, which serves as the supply source to presentation graphics plane generator 93.
Switch 58 selects one of the interactive graphics stream contained in the main Clip supplied from PID filter 55 and the interactive graphics stream contained in the sub Clip, and supplies the selected interactive graphics stream to the subsequent interactive graphics decoder 74. The interactive graphics stream supplied to interactive graphics decoder 74 is a stream separated from the main Clip AV stream or the sub Clip AV stream. Interactive graphics decoder 74 decodes the interactive graphics stream and supplies the decoded interactive graphics stream to interactive graphics plane generator 94.
Switch 59 selects one of the audio stream contained in the main Clip supplied from PID filter 55 and the audio stream contained in the sub Clip, and supplies the selected audio stream to the subsequent switch 61.
Under the control of controller 34-1, switch 61 supplies the received audio data to either audio decoder 75 or compressed audio signal interface 82.
Audio decoder 75 decodes the audio stream and supplies the decoded audio stream to blender 97.
The sound data selected by switch 32 is supplied to buffer 95 for buffering. Buffer 95 supplies the sound data to blender 97 at a predetermined timing. In this case, the sound data is sound-effect data for menu selection and the like. Blender 97 mixes (superimposes) the audio data supplied from audio decoder 75 with the sound data supplied from buffer 95, and outputs the mixed data as an audio signal.
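The mixing (superimposing) performed by blender 97 amounts to sample-wise addition of the decoded audio stream and the buffered sound-effect data. The sketch below is a simplified illustration assuming 16-bit PCM samples and saturation on overflow; those details are assumptions, not taken from the embodiment.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Superimpose the sound-effect data on the decoded audio stream, sample by sample.
std::vector<int16_t> mix(const std::vector<int16_t>& decodedAudio,
                         const std::vector<int16_t>& effectSound) {
    std::vector<int16_t> out(decodedAudio.size());
    for (std::size_t i = 0; i < decodedAudio.size(); ++i) {
        int32_t sum = decodedAudio[i];
        if (i < effectSound.size())
            sum += effectSound[i];                     // superimpose only where effect data exists
        out[i] = static_cast<int16_t>(std::clamp<int32_t>(sum, -32768, 32767));
    }
    return out;
}
```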
The data read from buffer 54, the text subtitle read buffer, is output to the subsequent text subtitle composition decoder 76 at a predetermined timing. Text subtitle composition decoder 76 decodes the Text-ST data and supplies the decoded Text-ST data to switch 77.
Switch 77 selects between the presentation graphics stream decoded by presentation graphics decoder 73 and the text subtitle data. The subtitle graphics supplied to presentation graphics plane generator 93 is either the output of presentation graphics decoder 73 or the output of text subtitle composition decoder 76. The presentation graphics stream input to presentation graphics decoder 73 is a stream separated from the main Clip AV stream or the sub Clip AV stream (as selected by switch 57). The subtitle graphics output to presentation graphics plane generator 93 is therefore the decoded output of the presentation graphics stream from the main Clip AV stream, the presentation graphics stream from the sub Clip AV stream, or the text subtitle data.
In response to the background image data supplied from background decoder 71, background plane generator 91 generates a background plane, which serves, for example, as wallpaper displayed when the video image is scaled down, and supplies the background plane to video data processor 96. In response to the video data supplied from video decoder 72, video plane generator 92 generates a video plane and supplies the generated video plane to video data processor 96. In response to the data selected and supplied by switch 77 (the presentation graphics stream or the text subtitle data), presentation graphics plane generator 93 generates a presentation graphics plane as a rendered image and supplies the generated presentation graphics plane to video data processor 96. In response to the interactive graphics stream supplied from interactive graphics decoder 74, interactive graphics plane generator 94 generates an interactive graphics plane and supplies the generated interactive graphics plane to video data processor 96.
Video data processor 96 mixes the background plane from background plane generator 91, the video plane from video plane generator 92, the presentation graphics plane from presentation graphics plane generator 93 and the interactive graphics plane from interactive graphics plane generator 94, and outputs the mixed planes as a video signal.
Switches 57-59 and 77 are switched in response to a user operation input through the user interface, or according to which file contains the data to be processed. For example, if the audio stream is contained only in the sub Clip AV stream, switch 59 is switched to the sub Clip side.
Audio encoder 41 encodes the uncompressed audio data supplied from blender 97 and supplies the encoded audio data to compressed audio signal interface 82. Video encoder 42 encodes the uncompressed video signal supplied from video data processor 96 and supplies the encoded signal to compressed video signal interface 84. D/A converter 43 converts the digital signal that is the uncompressed audio data from blender 97 into an analog signal and supplies the resulting analog signal to analog audio signal interface 85. D/A converter 44 converts the digital signal that is the uncompressed digital video signal from video data processor 96 into an analog signal and supplies the resulting analog signal to analog video signal interface 86. Uncompressed audio signal interface 81 outputs the uncompressed audio data supplied from blender 97 to the outside of the device. Compressed audio signal interface 82 outputs the compressed audio signal supplied from either audio encoder 41 or switch 61 to the outside of the device. Uncompressed video signal interface 83 outputs the uncompressed video signal supplied from video data processor 96 to the outside of the device. Compressed video signal interface 84 outputs the compressed video signal supplied from video encoder 42 to the outside of the device. Analog audio signal interface 85 outputs the analog audio signal supplied from D/A converter 43 to the outside of the device. Analog video signal interface 86 outputs the analog video signal supplied from D/A converter 44 to the outside of the device.
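The paragraph above describes three parallel audio output paths. The short sketch below merely restates which post-processing each output terminal implies; the enum and function names are illustrative and not part of the embodiment.

```cpp
#include <string>

enum class AudioOutput { Uncompressed81, Compressed82, Analog85 };

// What happens to the audio signal after blender 97, depending on the terminal used.
std::string audioPostProcessing(AudioOutput out) {
    switch (out) {
    case AudioOutput::Uncompressed81:
        return "output the PCM data from blender 97 as-is";
    case AudioOutput::Compressed82:
        return "re-encode with audio encoder 41, or pass the still-encoded stream from switch 61";
    case AudioOutput::Analog85:
        return "convert with D/A converter 43";
    }
    return "";
}
```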
When a flag indicating the presence of other sound data to be mixed with audio stream #1 is described in Index() of the Index file, reproducer 20-1 performs playback process 1 to reproduce data. Playback process 1 is described below with reference to the flowchart of Figure 38.
In step S1, controller 34-1 determines whether the output from reproducer 20-1 is an encoded stream. If it is determined in step S1 that the output from reproducer 20-1 is not an encoded stream, processing proceeds to step S24.
If it is determined in step S1 that the output from reproducer 20-1 is an encoded stream, controller 34-1 reads the Index file supplied from switch 32 in step S2.
In step S3, controller 34-1 determines whether the is_MixApp flag or the is_MixApp_2 flag, both of which are described in Index() and indicate the presence of sound data, is 1.
In content that contains, in addition to the audio stream #1 identified by audio_stream_id, an audio stream #2 identified by audio_stream_id2, the is_MixApp_1 flag indicating the presence of audio stream #2 is described in addition to the is_MixApp_2 flag indicating the presence of sound data. However, reproducer 20-1 does not have the function of reproducing the audio stream #2 identified by audio_stream_id2. In step S3, reproducer 20-1 therefore checks only the values of the is_MixApp flag and the is_MixApp_2 flag, and does not refer to the is_MixApp_1 flag even when the is_MixApp_1 flag is written in the read data. In other words, the value of the is_MixApp_1 flag does not affect the control of switch 61 in reproducer 20-1.
If it is determined in step S3 that one of the is_MixApp flag and the is_MixApp_2 flag is 1, processing proceeds to step S14, discussed later.
If it is determined in step S3 that neither the is_MixApp flag nor the is_MixApp_2 flag is 1, the data relating to the Index file does not include sound data to be mixed with audio stream #1 in reproducer 20-1. In step S4, controller 34-1 controls switch 61 so that the audio stream relating to the Index file is supplied to compressed audio signal interface 82, which serves as the output terminal for the compressed (encoded) audio signal.
In step S5, controller 34-1 determines whether to have sent the order that produces in response to the operation from user's input to read play list file (such as the xxxxx.mpls of Figure 13).If determine also not send the reading order that reads play list file in step S5, then repeating step S5 is up to determining to have sent the reading order that reads play list file.
If in step S5, determine to have sent the reading order that reads play list file, read the play list file that provides from switch 32 at step S6 middle controller 34-1 so.
In step S7, controller 34-1 reads the broadcast item by the playlist of describing (PlayList () of Figure 13) appointment in play list file, read the disconnected AV stream of corresponding main leaf, sub-segment AV stream and text subtitle data and the data that read are offered switch 32.
In step S8, controller 34-1 offers corresponding buffer with the data that read and is used for buffering.In particular, controller 34-1 control switch 32 is to offer background image data in buffer 51, the disconnected AV stream of main leaf is offered buffer 52, sub-segment AV stream is offered buffer 53 and the text subtitle data is offered buffer 54.The buffer 51-54 buffering data that provide wherein.In particular, buffer 51 buffering background image datas, the disconnected AV stream of buffer 52 buffering main leaves, buffer 53 buffer sublayer segment AV stream, and buffer 54 buffering text subtitle data.
In step S9, controller 34-1 control PID filter 55 and 56 and switch 57-59 utilizes the data of predetermined decoding device decoding based on video thus, and utilizes video data processor 96 to handle decoded datas.
In step S10, the treated video data that video encoder 42 codings provide from video data processor 96.
In step S11, compressed audio signal interface 82, serving as the output terminal for the compressed (encoded) audio signal, outputs audio stream #1, which is the encoded audio data output from switch 61, to the outside of the device. The encoded video data is output to the outside of the device through compressed video signal interface 84, which serves as the output terminal for the compressed (encoded) video signal.
The compressed (encoded) audio data output from compressed audio signal interface 82 in step S11 suffers no degradation in sound quality, because AV decoder 33-1 of reproducer 20-1 performs no decoding processing on it.
In step S12, controller 34-1 determines whether to exist with reference to playlist the reproduced next one is play item.If in step S12, define the next item of playing, handle and proceed to step S7 so that repeating step S7 and step subsequently.
If in step S12, determine not have the next item of playing, determine to reset to handle at step S13 middle controller 34-1 and whether will finish.Handle not end if determining resets in step S13, handle and turn back to step S5 with repeating step S5 and step subsequently.Handle and will finish if determining in step S13 resets, processing finishes.
If it is determined in step S3 that one of the is_MixApp flag and the is_MixApp_2 flag is 1, the data relating to the Index file includes sound data to be mixed with audio stream #1 by reproducer 20-1. In step S14, controller 34-1 controls switch 61 so that the audio stream #1 relating to the Index file is supplied to audio decoder 75.
Step S15 to S19 is similar to step S5 to S9 respectively substantially.In particular, controller 34-1 determines whether to have sent the order that produces in response to the operation of importing from the user and reads play list file.If determine to have issued the reading order that reads play list file, read playlist, and read the broadcast item of playlist appointment.To offer corresponding buffer corresponding to the segment data (AV stream) that reads the broadcast item and be used for buffering.Control PID filter and switch, therefore buffering based on video data by corresponding decoder decode, handled by video data processor 96 then.
In step S20, controller 34-1 controls the PID filters and switches so that the audio stream #1 buffered in buffer 53 is supplied through switch 61 to audio decoder 75 for decoding, and, where necessary, controls blender 97 to perform mixing processing so that the decoded data is mixed with the sound data buffered in buffer 95.
Because switch 61 is controlled so that audio stream #1 is decoded by audio decoder 75, audio stream #1 and the sound data can be mixed.
In step S21, the not audio compressed data of 41 pairs of mixing of audio coder is encoded, and the voice data of will encode (compression) offers compressing audio signal interface 82 so that output to the outside of device.42 pairs of uncompressed video data of being handled by video data processor 96 of video encoder are encoded, and the video data of will encode (compression) offers compressed video signal interface 84 so that output to the outside of device.
In step S22, controller 34-1 plays item to determine whether to exist with the reproduced next one with reference to playlist.If in step S22, determine to exist the next item of playing, handle and turn back to step S17 with repeating step S17 and step subsequently.
If determine not have a next existence of playing in step S22, controller 34-1 determines whether to finish to reset in step S23 to handle.Do not handle if determining in step S23 does not finish to reset, handle turning back to step S15 so with repeating step S15 and step subsequently.Handle if determining in step S23 finishes to reset, processing finishes so.
If it is determined in step S1 that the output from reproducer 20-1 is not an encoded stream, controller 34-1 controls switch 61 in step S24 so that the supplied audio stream #1 is supplied to audio decoder 75.
If the output is not an encoded stream, audio stream #1 is decoded regardless of whether sound data to be mixed with audio stream #1 exists.
Step S25 to S30 is substantially the same with step S15 to S20 respectively.In particular, controller 34-1 determines whether to have sent in response to the order that produces from the operation of user's input to read play list file.If determine to have sent the reading order that reads play list file, read playlist, and read broadcast item by the playlist appointment.Be provided for corresponding buffer corresponding to the segment data (AV stream) of the broadcast item that reads and be used for buffering.Control PID filter and switch make the data based on video of buffering also be handled by video data processor 96 then by the corresponding decoder decoding.And PID filter and switch are controlled, and therefore the data of buffering are provided for audio decoder 75 so that decoding there on buffer 53.If voice data is provided to and is present on the buffer 95, blender 97 is carried out mixed processing as required.
In step S31, compressing audio signal interface 81 will not output to the outside of device from the treated not audio compressed data that blender 97 provides.Uncompressed video signal interface 83 will be provided to the outside of device from the uncompressed video data that video data processor 96 provides.The unpressed voice data of D/A converter 43 digital-to-analogue conversions.Simulated audio signal interface 85 outputs to analog signal the outside of device.D/A converter 44 digital-to-analogue conversion uncompressed video data.Analog video signal interface 86 outputs to analog signal the outside of device.
In step S32, controller 34-1 plays item to determine whether to exist with the reproduced next one with reference to playlist.If in step S32, determine a next existence of playing, handle and turn back to step S27 with repeating step S27 and step subsequently.
If in step S32, determine not have a next existence of playing, controller 34-1 determines whether to finish to reset in step S33 to handle.Do not handle if determining in step S33 does not finish to reset, handle and turn back to step S25 with repeating step S25 and step subsequently.Handle if determining in step S33 finishes to reset, processing finishes.
In the above processing, reproducer 20-1 refers to the flag described in Index(). If the data referred to by Index() does not include sound data, audio stream #1 is output to the outside of the device as compressed (encoded) data without being decoded. The sound quality of audio stream #1 therefore does not degrade.
Also in the above processing, controller 34-1 determines at the initial stage of the processing whether the output is compressed or uncompressed data, and then refers to the flag to determine whether mixing processing is to be performed. Alternatively, the flag may be referred to first to determine whether mixing processing is to be performed, and it may then be determined whether the output signal is compressed or uncompressed data. The two configurations differ only in the order of the processing, not in its substance.
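That the two orders of evaluation are interchangeable follows from the decode decision being a simple disjunction. A minimal sketch, with hypothetical function names, makes the equivalence explicit:

```cpp
// Order used in the flowcharts: check the output format first, then the flag.
bool decodeCheckingOutputFirst(bool encodedOutput, bool mixFlagSet) {
    if (!encodedOutput) return true;   // uncompressed or analog output always requires decoding
    return mixFlagSet;                 // encoded output requires decoding only when mixing is needed
}

// Alternative order: check the flag first, then the output format.
bool decodeCheckingFlagFirst(bool encodedOutput, bool mixFlagSet) {
    if (mixFlagSet) return true;       // mixing always requires decoding
    return !encodedOutput;             // otherwise decode only for uncompressed or analog output
}
// Both functions return the same value for every combination of inputs.
```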
When a flag indicating the presence of other sound data to be mixed with audio stream #1 is described in the playlist, reproducer 20-1 performs playback process 2 to reproduce data. Playback process 2 is described below with reference to the flowchart of Figure 39.
In step S61, controller 34-1 determines whether the output from reproducer 20-1 is encoding stream.If determine that in step S61 the output from reproducer 20-1 is not encoding stream, handle to proceed to step S82 described below.
If determine that in step S61 the output from reproducer 20-1 is encoding stream, the Index file that provides from switch 32 is provided in step S62 controller 34-1.
In step S63, controller 34-1 has determined whether to send order that response produces from the operation of user's input to read play list file (for example xxxxx.mpls of Figure 13).If determine also not send the reading order that reads play list file in step S63, repeating step S63 is up to determining to have sent the reading order that reads play list file.
If determine to have sent the reading order that reads play list file in step S63, the play list file that provides from switch 32 is provided in step S64 controller 34-1.
In step S65, controller 34-1 determines whether one of is_MixApp sign and is_MixApp_2 sign are 1, and described two signs are described among AppInfoPlayList () or the PlayList () and represent the existence of voice data.
In content that contains, in addition to the audio stream #1 identified by audio_stream_id, an audio stream #2 identified by audio_stream_id2, the is_MixApp_1 flag indicating the presence of audio stream #2 is described in addition to the is_MixApp_2 flag indicating the presence of sound data. However, reproducer 20-1 does not have the function of reproducing the audio stream #2 identified by audio_stream_id2. Reproducer 20-1 therefore checks only the values of the is_MixApp flag and the is_MixApp_2 flag, and does not refer to the value of the is_MixApp_1 flag even when the is_MixApp_1 flag is written in the read data.
If determine that in step S65 one of is_MixApp sign and is_MixApp_2 sign are 1, handle proceeding to the step S74 that discusses after a while.
If it is determined in step S65 that neither the is_MixApp flag nor the is_MixApp_2 flag is 1, the data relating to the play list file does not include sound data to be mixed with audio stream #1 in reproducer 20-1. In step S66, controller 34-1 controls switch 61 so that the audio stream relating to the play list file is supplied to compressed audio signal interface 82, which serves as the output terminal for the compressed (encoded) audio signal.
Step S67 to S71 is substantially the same with the step S7 to S11 of Figure 38.
In particular, the play items specified by the playlist are read. The data of the corresponding Clips (AV streams) is read and supplied to switch 32, and the read data is then supplied to the corresponding buffers for buffering. PID filters 55 and 56 and switches 57-59 are controlled so that the video-based data is decoded by the corresponding decoders and then processed by video data processor 96. The processed data is then encoded. The encoded audio data output from switch 61 is output to the outside of the device through compressed audio signal interface 82, and the encoded video data is output to the outside of the device through compressed video signal interface 84.
Because the compressed (encoded) audio data output from compressed audio signal interface 82 in step S71 has not been decoded by AV decoder 33-1 in reproducer 20-1, the sound quality does not degrade.
In step S72, controller 34-1 with reference to playlist to determine whether to exist the next reproduced broadcast item of wanting.If in step S72, define the next item of playing, handle and turn back to step S67 with repeating step S67 and step subsequently.
If determine not have the next item of playing in step S72, controller 34-1 determines whether to finish to reset in step S73 to handle.Do not handle if determining in step S73 does not finish to reset, handle and turn back to step S63 with repeating step S63 and step subsequently.The value that depends on the sign of describing in the playlist that next will read, the control of switch 61 can change.Handle if determining in step S73 finishes to reset, processing finishes.
If it is determined in step S65 that one of the is_MixApp flag and the is_MixApp_2 flag is 1, the data relating to the play list file includes sound data to be mixed with audio stream #1 by reproducer 20-1. In step S74, controller 34-1 controls switch 61 so that the audio stream #1 relating to the play list file is supplied to audio decoder 75. Audio decoder 75 decodes the audio data relating to the play list file.
Step S75 to S79 is substantially the same with the step S17 to S21 of Figure 38 respectively.In particular, read broadcast item by the playlist appointment.Read the data of corresponding segment (AV stream), provide it to corresponding buffer then and be used for buffering.Control PID filter and switch therefore by the data based on video of corresponding decoder decode buffering, and are handled the data of decoding by video data processor 96.
The PID filters and switches are controlled so that the audio stream #1 buffered in buffer 53 is supplied through switch 61 to audio decoder 75 for decoding. Blender 97 mixes the decoded data with the sound data buffered in buffer 95. Because switch 61 is controlled so that audio stream #1 is decoded by audio decoder 75, audio stream #1 and the sound data can be mixed.
The mixed uncompressed audio data is encoded and then output to the outside of the device through compressed audio signal interface 82. The uncompressed video data processed by video data processor 96 is encoded and then output to the outside of the device through compressed video signal interface 84.
In step S80, controller 34-1 plays item with reference to playlist to have determined whether the next one.If in step S80, define the next item of playing, handle turning back to step S75 to repeat S75 and step subsequently.
If determine not have the next item of playing in step S80, controller 34-1 determines to reset to handle whether will finish in step S81.Do not handle if determining in step S81 does not finish to reset, handle and turn back to step S63 with repeating step S63 and step subsequently.The control of switch 61 can change according to the value of the sign of describing in the playlist that next will read.If determine that in step S81 the playback processing finishes, processing finishes.
If determine that in step S61 the output from reproducer 20-1 is not encoding stream, execution in step S82-S91, these steps are substantially the same with the step S24-S33 of Figure 38.
In particular, if the output is not an encoded stream, audio stream #1 is decoded regardless of whether there is sound data to be mixed with audio stream #1, and switch 61 is therefore controlled so that audio stream #1 is supplied to audio decoder 75. Controller 34-1 determines whether a command to read the playlist has been issued. If controller 34-1 determines that the read command has been issued, the playlist is read, the play items specified by the playlist are read, and the data of the corresponding Clips is then read. The read data is supplied to the corresponding buffers for buffering. The PID filters and switches are controlled so that the buffered video-based data is decoded by the corresponding decoders and the decoded data is then processed by video data processor 96. The PID filters and switches are also controlled so that the audio stream #1 buffered in buffer 53 is supplied through switch 61 to audio decoder 75 for decoding. If sound data is supplied to and present in buffer 95, blender 97 performs mixing processing on the decoded data and the sound data.
Compressing audio signal interface 81 will not output to the outside of device from the treated not audio compressed data that blender 97 provides.Uncompressed video signal interface 83 will output to the outside of device from the uncompressed video data that video data processor 96 provides.The unpressed voice data of D/A converter 43 digital-to-analogue conversions.Simulated audio signal interface 85 outputs to analog signal the outside of device.D/A converter 44 digital-to-analogue conversion uncompressed video data.Analog video signal interface 86 outputs to analog signal the outside of device.
Determine whether the reproduced next one is play item.If define the next item of playing, handle and turn back to step S85 with repeating step S85 and step subsequently.If determine not have the next item of playing, whether controller 34-1 determines to reset to handle finishes.Do not handle if determining does not finish to reset, handle turning back to step S83.Finish if determine the playback processing, processing finishes.
In the above processing, reproducer 20-1 refers to the flag described in the playlist. If the data referred to by the playlist does not include sound data, reproducer 20-1 outputs audio stream #1 to the outside of the device as compressed (encoded) data without decoding it. The sound quality therefore does not degrade. Since the flag is described in AppInfoPlayList() or PlayList() of the play list file, whether sound data is included can be set on a playlist-by-playlist basis. This increases the freedom of authoring.
Also in the above processing, controller 34-1 determines at an early stage of the processing whether the output is compressed or uncompressed data, and then refers to the flag to determine whether mixing processing is to be performed. Alternatively, the flag may be referred to first to determine whether mixing processing is to be performed, and it may then be determined whether the output signal is compressed or uncompressed data. The two configurations differ only in the order of the processing, not in its substance.
When a flag indicating the presence of other sound data to be mixed with audio stream #1 is described in a play item, reproducer 20-1 performs playback process 3 to reproduce data. Playback process 3 is described below with reference to the flowchart of Figure 40.
In step S131, controller 34-1 determines whether the output from reproducer 20-1 is encoding stream.If determine that in step S131 the output from reproducer 20-1 is not encoding stream, handle proceeding to after a while with the step S151 that discusses.
If determine that in step S131 the output from reproducer 20-1 is encoding stream, the Index file that provides from switch 32 is provided in step S132 controller 34-1.
In step S133, controller 34-1 determines whether to have sent in response to the order that produces from the operation of user's input to read play list file (for example xxxxx.mpls of Figure 13).If determine also not send the reading order that reads play list file in step S133, repeating step S133 is up to determining to have sent the reading order that reads play list file.
If determine to have sent the reading order that reads play list file in step S133, the play list file that provides from switch 32 is provided in step S134 controller 34-1.
In step S135, controller 34-1 reads in the broadcast item of playlist (PlayList () of Figure 13) appointment of describing in the play list file.According to playing item, controller 34-1 control replay data acquiring unit 31 reads the disconnected AV stream of corresponding main leaf thus, and sub-segment AV flows and the text subtitle data, and provides these data to switch 32.
In step S136, controller 34-1 determines whether the is_MixApp flag or the is_MixApp_2 flag, both of which are described in the play item and indicate the presence of sound data, is 1.
In content that contains, in addition to the audio stream #1 identified by audio_stream_id, an audio stream #2 identified by audio_stream_id2, the is_MixApp_1 flag indicating the presence of audio stream #2 is described in addition to the is_MixApp_2 flag indicating the presence of sound data. However, reproducer 20-1 does not have the function of reproducing the audio stream #2 identified by audio_stream_id2. In step S136, reproducer 20-1 therefore checks only the values of the is_MixApp flag and the is_MixApp_2 flag, and does not refer to the value of the is_MixApp_1 flag even when the is_MixApp_1 flag is written in the read data.
If it is determined in step S136 that one of the is_MixApp flag and the is_MixApp_2 flag is 1, processing proceeds to step S144, discussed later.
If it is determined in step S136 that neither the is_MixApp flag nor the is_MixApp_2 flag is 1, the data relating to the play item does not include sound data to be mixed with audio stream #1 in reproducer 20-1. In step S137, controller 34-1 controls switch 61 so that the audio stream #1 relating to the play item is supplied to compressed audio signal interface 82, which serves as the output terminal for the compressed (encoded) audio signal.
Steps S138 to S141 are substantially the same as steps S8 to S11 of Figure 38, respectively.
In particular, the read data is supplied to the corresponding buffers for buffering. PID filters 55 and 56 and switches 57-59 are controlled so that the video-based data is decoded by the corresponding decoders and then processed by video data processor 96. The processed data is then encoded. Audio stream #1, which is the encoded audio data output from switch 61, is output to the outside of the device through compressed audio signal interface 82. The encoded video data is output to the outside of the device through compressed video signal interface 84.
Because the compressed (encoded) audio data output from compressed audio signal interface 82 in step S141 is not decoded by AV decoder 33-1 in reproducer 20-1, the sound quality does not degrade.
In step S142, controller 34-1 plays item to determine whether to exist with the reproduced next one with reference to playlist.If in step S142, define the next item of playing, handle and turn back to step S135 with repeating step S135 and step subsequently.The control of switch 61 can change according to the value of statistical indicant of describing in the broadcast item that reads subsequently.
If determine not have the next item of playing in step S142, controller 34-1 determines to reset to handle whether finish in step S143 so.Do not handle if determining in step S143 does not finish to reset, handle and turn back to step S133 with repeating step S133 and step subsequently.Handle and will finish if determining in step S143 resets, processing finishes.
If it is determined in step S136 that one of the is_MixApp flag and the is_MixApp_2 flag is 1, the data relating to the play item includes sound data to be mixed with audio stream #1 by reproducer 20-1. In step S144, controller 34-1 controls switch 61 so that the audio stream #1 relating to the play item is supplied to audio decoder 75.
Step S145 to S148 is substantially the same with the step S18 to S21 of Figure 38.In particular, reading of data is offered corresponding buffer and be used for buffering.PID filter and switch are controlled, thereby by the data based on video of corresponding decoder decode buffering, and handle the data of decoding by video data processor 96.
The PID filters and switches are controlled so that the audio stream #1 buffered in buffer 53 is supplied through switch 61 to audio decoder 75 for decoding. Blender 97 mixes the decoded data with the sound data buffered in buffer 95. Because switch 61 is controlled so that audio stream #1 is decoded by audio decoder 75, audio stream #1 and the sound data can be mixed.
The mixed uncompressed audio data is encoded and then output to the outside of the device through compressed audio signal interface 82. The uncompressed video data processed by video data processor 96 is encoded and then output to the outside of the device through compressed video signal interface 84.
In step S149, controller 34-1 with reference to playlist to determine whether to exist next play.If in step S149, define the next item of playing, handle and turn back to step S135 with repeating step S135 and step subsequently.The control of switch 61 can change according to the value of statistical indicant of describing in the broadcast item that reads subsequently.
If determine not have the next item of playing in step S149, controller 34-1 determines to reset to handle whether finish in step S150.Do not handle if determining in step S150 does not finish to reset, handle and turn back to step S133 with repeating step S133 and step subsequently.Handle and will finish if determining in step S150 resets, processing finishes.
If determine that in step S131 the output from reproducer 20-1 is not encoding stream, execution in step S151-S160 so, these steps are substantially the same with the step S24-S33 of Figure 38 respectively.
In particular, if output is not encoding stream, whether the voice data that no matter will mix with audio stream #1 exists decoded audio stream #1.Control switch 61 is so that offer audio decoder 75 with audio stream #1 like this.Controller 34-1 determines whether to send the reading order that reads playlist.If controller 34-1 determines to have sent reading order, read playlist.Read broadcast item, read the data of corresponding segment (AV stream) then by the playlist appointment.The data that read are offered corresponding buffer be used for buffering.Control PID filter and switch make by the data based on video of corresponding decoder decode buffering, are handled the data of decoding then by video data processor 96.And control PID filter and switch are so that offer audio decoder 75 so that decode by the audio stream #1 that switch 61 will be cushioned on buffer 53.If voice data is provided for and is present on the buffer 95,97 pairs of decoded datas of blender and voice data are carried out mixed processing.
Compressing audio signal interface 81 will not output to the outside of device from the treated not audio compressed data that blender 97 provides.Uncompressed video signal interface 83 will output to the outside of device from the uncompressed video data that video data processor 96 provides.The unpressed voice data of D/A converter 43 digital-to-analogue conversions.Simulated audio signal interface 85 outputs to analog signal the outside of device.D/A converter 44 digital-to-analogue conversion uncompressed video data.Analog video signal interface 86 outputs to analog signal the outside of device.
In step S159, determine whether next with reproduced broadcast item.If in step S159, define the next item of playing, handle and return step S154 with repeating step S154 and step subsequently.If determine not have the next item of playing in step S159, controller 34-1 determines to reset to handle whether finish in step S160 so.Do not handle if determining in step S160 does not finish to reset, handle turning back to step S152.If determine that in step S160 the playback processing finishes, processing finishes.
In the above processing, reproducer 20-1 refers to the flag described in the play item. If the data referred to by the play item does not include sound data, reproducer 20-1 outputs audio stream #1 to the outside of the device as compressed (encoded) data without decoding it, so the sound quality does not degrade. Since the flag is described in the play item, whether sound data is included can be set on a play-item-by-play-item basis. This further increases the freedom of authoring.
Also in the above processing, controller 34-1 determines at an early stage of the processing whether the output is compressed or uncompressed data, and then refers to the flag to determine whether mixing processing is to be performed. Alternatively, the flag may be referred to first to determine whether mixing processing is to be performed, and it may then be determined whether the output signal is compressed or uncompressed data. The two configurations differ only in the order of the processing, not in its substance.
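Playback processes 1 to 3 differ only in where the flag lives and therefore in how often it is re-evaluated. The sketch below is an illustrative summary of that trade-off; the enum and function names are not from the embodiment.

```cpp
enum class FlagScope { IndexFile, PlayListFile, PlayItem };

// Finer scopes give the author more freedom to mix or not mix per title or per
// item, at the cost of re-checking the flag more often during playback.
const char* reevaluationPoint(FlagScope scope) {
    switch (scope) {
    case FlagScope::IndexFile:    return "once, when the Index file is read (playback process 1)";
    case FlagScope::PlayListFile: return "each time a play list file is read (playback process 2)";
    case FlagScope::PlayItem:     return "each time a play item is read (playback process 3)";
    }
    return "";
}
```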
Reproducer 20-2, according to a second embodiment of the invention, can reproduce both audio stream #1 and audio stream #2. Figure 41 is a block diagram of reproducer 20-2. Reproducer 20-2 reproduces a playlist having a main path and a sub path, and can mix the audio stream #2 identified by audio_stream_id2 and sound data with the audio stream #1 identified by audio_stream_id.
Elements identical to those of reproducer 20-1 discussed with reference to Figure 37 are designated by the same reference numerals, and their discussion is omitted here.
Like reproducer 20-1, reproducer 20-2 includes replay data acquiring unit 31, switch 32, audio encoder 41, video encoder 42, D/A converter 43, D/A converter 44, uncompressed audio signal interface 81, compressed audio signal interface 82, uncompressed video signal interface 83, compressed video signal interface 84, analog audio signal interface 85 and analog video signal interface 86. Instead of AV decoder 33-1 and controller 34-1 of reproducer 20-1, reproducer 20-2 includes AV decoder 33-2 and controller 34-2.
In the same way as discussed above with reference to Figure 37, controller 34-2 of Figure 41 reads the Index file through replay data acquiring unit 31 and, in response to a generated command, reads the play list file and reads the play items according to the information in the play list file. Controller 34-2 obtains the corresponding Clips (AV streams or AV data). Using the user interface, the user issues commands to switch audio or subtitles. Information about the initial language setting of reproducer 20-2 is supplied to controller 34-2 from a memory (not shown).
Controller 34-2 controls switch 61 according to the values of the is_MixApp flag (or the is_MixApp_2 flag) and the is_MixApp_1 flag. The is_MixApp, is_MixApp_1 and is_MixApp_2 flags are described in the Index file or the play list file (in one of AppInfoPlayList(), PlayList() and PlayItem). The is_MixApp flag and the is_MixApp_2 flag indicate whether sound data to be mixed with audio stream #1 exists, and the is_MixApp_1 flag indicates whether an audio stream #2 to be mixed with audio stream #1 exists.
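Because reproducer 20-2 can decode audio stream #2, its routing decision for switch 61 also takes the is_MixApp_1 flag into account. The following C++ sketch extends the earlier one; the names are again illustrative assumptions.

```cpp
// Hypothetical flag set as read by controller 34-2.
struct MixFlags2 {
    bool is_MixApp   = false;  // data to be mixed with audio stream #1 exists
    bool is_MixApp_1 = false;  // audio stream #2 to be mixed exists
    bool is_MixApp_2 = false;  // sound data to be mixed exists
};

enum class Switch61Route { ToFirstAudioDecoder75_1, ToCompressedAudioInterface82 };

// Routing rule sketched from the description of controller 34-2.
Switch61Route routeFor20_2(const MixFlags2& f, bool outputIsEncodedStream) {
    if (!outputIsEncodedStream)
        return Switch61Route::ToFirstAudioDecoder75_1;    // non-encoded output: always decode
    if (f.is_MixApp || f.is_MixApp_1 || f.is_MixApp_2)
        return Switch61Route::ToFirstAudioDecoder75_1;    // something must be mixed, so decode first
    return Switch61Route::ToCompressedAudioInterface82;   // nothing to mix: keep the stream encoded
}
```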
Like AV decoder 33-1, AV decoder 33-2 includes buffers 51-54, PID filter 55, PID filter 56, switches 57 and 58, switch 61, switch 77, background decoder 71, video decoder 72, presentation graphics decoder 73, interactive graphics decoder 74, text subtitle composition decoder 76, background plane generator 91, video plane generator 92, presentation graphics plane generator 93, interactive graphics plane generator 94, buffer 95, video data processor 96 and blender 97. AV decoder 33-2 further includes switch 101 in place of switch 59 and first audio decoder 75-1 in place of audio decoder 75, and additionally includes second audio decoder 75-2 and blender 102.
First audio decoder 75-1 decodes audio stream #1, and second audio decoder 75-2 decodes audio stream #2. In particular, first audio decoder 75-1 decodes the audio stream identified by audio_stream_id in STN_table() of Figure 25, and second audio decoder 75-2 decodes the audio stream identified by audio_stream_id2 in STN_table() of Figure 25.
Reproducer 20-2 therefore includes two audio decoders (first audio decoder 75-1 and second audio decoder 75-2) for decoding the two audio streams.
Decode by ECC decoder (not shown) by the file data that controller 34-2 reads, then the multiplexed stream through decoding is carried out correction process.Under the control of controller 34-2, switch 32 flows according to type selecting from the data of decoding and error correction, then stream is offered each buffer 51-54.
The formats of the AV streams included in the main Clip and the sub Clip may be the same as previously described. The method by which replay data acquiring unit 31 reads the data may also be the same as discussed above (that is, the data may be read in a time-division manner or preloaded).
The stream data read from buffer 52, the main Clip AV stream read buffer, is output to the subsequent PID (packet ID) filter 55 at a predetermined timing. PID filter 55 sorts the input main Clip AV stream by PID (packet ID) and outputs the sorted streams to the decoders of the respective elementary streams. In particular, PID filter 55 supplies the video stream to video decoder 72, supplies the presentation graphics stream to switch 57, which serves as the supply source to presentation graphics decoder 73, supplies the interactive graphics stream to switch 58, which serves as the supply source to interactive graphics decoder 74, and supplies the audio stream to switch 101, which serves as the supply source to switch 61 and second audio decoder 75-2.
The audio stream supplied to switch 61, and then either output through compressed audio signal interface 82 or input to first audio decoder 75-1, is a stream separated from the main Clip or the sub Clip. Similarly, the audio stream input to second audio decoder 75-2 is a stream separated from the main Clip or the sub Clip. For example, if the main Clip contains audio stream #1 and audio stream #2, PID filter 55 filters audio stream #1 and audio stream #2 according to the PIDs of the audio streams and supplies the resulting streams to switch 101.
Reproducer 20-2 performs playback processing on video data, bitmap data and interactive graphics data in the same way as reproducer 20-1.
Switch 101 switches between the audio stream contained in the main Clip supplied from PID filter 55 and the audio stream contained in the sub Clip, and supplies the selected audio stream to either the subsequent switch 61 or the subsequent second audio decoder 75-2.
For example, switch 101 selects audio stream #1 from PID filter 55 so as to supply audio stream #1 to switch 61, and selects audio stream #2 from PID filter 55 so as to supply audio stream #2 to second audio decoder 75-2.
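Switch 101 simply steers each of the two audio streams toward its own consumer. A trivial sketch, with illustrative names:

```cpp
enum class AudioStreamId { Stream1, Stream2 };
enum class Switch101Out { ToSwitch61, ToSecondAudioDecoder75_2 };

// Audio stream #1 goes toward switch 61 (and from there to first audio decoder
// 75-1 or compressed audio signal interface 82); audio stream #2 goes to the
// second audio decoder 75-2.
Switch101Out selectSwitch101(AudioStreamId id) {
    return id == AudioStreamId::Stream1 ? Switch101Out::ToSwitch61
                                        : Switch101Out::ToSecondAudioDecoder75_2;
}
```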
Under the control of controller 34-2, switch 61 offers one of the first audio decoder 75-1 and compressing audio signal interface 82 with the voice data that provides.
First audio decoder 75-1 decodes the audio stream and then supplies the decoded audio stream data to blender 102. Second audio decoder 75-2 decodes the audio stream and then supplies the decoded audio stream to blender 102.
If audio stream #1 and audio stream #2 are to be mixed and reproduced (that is, if the user selects both audio streams for playback), the audio stream #1 decoded by first audio decoder 75-1 and the audio stream #2 decoded by second audio decoder 75-2 are supplied to blender 102.
Blender 102 mixes (superimposes) the audio data from first audio decoder 75-1 with the audio data from second audio decoder 75-2, and then outputs the mixed audio data to blender 97. In this specification, mixing (superimposing) the audio data output from first audio decoder 75-1 with the audio data output from second audio decoder 75-2 is also referred to as compositing; in other words, compositing means mixing two pieces of audio data.
The sound data selected by switch 32 is supplied to buffer 95 for buffering. Buffer 95 supplies the sound data to blender 97 at a predetermined timing. The sound data is sound-effect data for menu selection and the like, and is independent of the streams. Blender 97 mixes (superimposes) the sound data supplied from buffer 95 with the audio data mixed by blender 102 (that is, the mixture of the audio data output from first audio decoder 75-1 and the audio data output from second audio decoder 75-2), and outputs the resulting data as an audio signal.
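The two mixing stages in reproducer 20-2 can be written as two successive sample-wise additions: blender 102 combines the two decoded audio streams, and blender 97 then adds the sound-effect data. The sketch below assumes simple integer PCM buffers; that representation is an assumption made only for illustration.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

using Samples = std::vector<int32_t>;

// Add two sample buffers; the shorter one is treated as silence past its end.
static Samples add(const Samples& a, const Samples& b) {
    Samples out(std::max(a.size(), b.size()), 0);
    for (std::size_t i = 0; i < out.size(); ++i) {
        if (i < a.size()) out[i] += a[i];
        if (i < b.size()) out[i] += b[i];
    }
    return out;
}

// Stage 1 (blender 102): audio stream #1 + audio stream #2.
// Stage 2 (blender 97): the result + the buffered sound-effect data.
Samples mixForOutput(const Samples& audio1, const Samples& audio2, const Samples& effect) {
    return add(add(audio1, audio2), effect);
}
```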
When a flag indicating the presence of other sound data to be mixed with audio stream #1 is described in Index() of the Index file, reproducer 20-2 performs playback process 4 to reproduce data. Playback process 4 is described below with reference to the flowchart of Figure 42.
In step S201, controller 34-2 determines whether the output from reproducer 20-2 is encoding stream.If determine that in step S201 the output from reproducer 20-2 is not encoding stream, handle to proceed to step S224.
If determine that in step S201 the output from reproducer 20-2 is encoding stream, the Index file that provides from switch 32 is provided in step S202 controller 34-2.
In step S203, controller 34-2 determines in the Index of Index file () to describe and whether one of the sign of the existence of the data that expression will mix with the audio stream #1 as main audio stream is 1.In particular, controller 34-2 determines whether is_MixApp_1 that is_MixApp sign that the expression voice data exists and is_MixApp_2 sign and expression audio stream #2 exist one of indicates is 1.
Except audio stream #1, have in the content of the audio stream #2 that discerns by audio_stream_id2 by audio_stream_id identification, except the is_MixApp_2 sign that the expression voice data exists, the is_MixApp_1 sign that expression audio stream #2 exists is described.Have the reproducer 20-2 of reproduction, in step S203, not only indicate and the value of is_MixApp_2 sign but also the value that indicates with reference to is_MixApp_1 with reference to is_MixApp by the function of the audio stream #2 of audio_stream_id2 identification.
If determine that in step S203 one of sign is 1, handle proceeding to the step S214 that discusses after a while.
If it is determined in step S203 that none of the flags is 1, the data relating to the Index file includes neither sound data nor an audio stream #2 to be mixed with audio stream #1 in reproducer 20-2. In step S204, controller 34-2 controls switch 61 so that the audio data relating to the Index file is supplied to compressed audio signal interface 82, which serves as the output terminal for the compressed (encoded) audio signal.
In step S205, controller 34-2 determines whether to have sent in response to the order that produces from the operation of user's input to read play list file (for example xxxxx.mpls of Figure 13).If determine also not send the reading order that reads play list file in step S205, repeating step S205 is up to determining to have sent the reading order that reads play list file.
If determine to have sent the reading order that reads play list file in step S205, the play list file that provides from switch 32 is provided in step S206 controller 34-2.
In step S207, controller 34-2 reads in the broadcast item of playlist (PlayList () of Figure 13) appointment of describing in the play list file, read the disconnected AV stream of corresponding main leaf, sub-segment AV flows and the text subtitle data, and the data that read are offered switch 32.
In step S208, controller 34-2 offers corresponding buffer with the data that read and is used for buffering.In particular, controller 34-2 control switch 32 provides background image data, the disconnected AV stream of main leaf is provided, provides sub-segment AV stream to buffer 53 to buffer 52 to buffer 51, and the data of text subtitle are provided to buffer 54.The data that buffer 51-54 buffering wherein provides.In particular, buffer 51 buffering background image datas, the disconnected AV stream of buffer 52 buffering main leaves, buffer 53 buffer sublayer segment AV stream and buffer 54 buffering text subtitle data.
In step S209, controller 34-2 controls PID filters 55 and 56 and switches 57-59 so that the video-based data is decoded by the predetermined decoders and the decoded data is processed by video data processor 96.
In step S210, the treated video data that video encoder 42 codings provide from video data processor 96.
In step S211, compressed audio signal interface 82, serving as the output terminal for the compressed (encoded) audio signal, outputs audio stream #1, which is the encoded audio data output from switch 61, to the outside of the device. The encoded video data is output to the outside of the device through compressed video signal interface 84, which serves as the output terminal for the compressed (encoded) video signal.
The compressed (encoded) audio data output from compressed audio signal interface 82 in step S211 suffers no degradation in sound quality, because AV decoder 33-2 of reproducer 20-2 performs no decoding processing on it.
In step S212, controller 34-2 plays item to determine whether to exist with the reproduced next one with reference to playlist.If in step S212, define the next item of playing, handle and proceed to step S207 with repeating step S207 and step subsequently.
If in step S212, determine not have the next item of playing, determine to reset to handle at step S213 middle controller 34-2 and whether finish.Do not handle if determining in step S213 does not finish to reset, handle and turn back to step S205 with repeating step S205 and step subsequently.Handle and will finish if determining in step S213 resets, processing finishes.
If it is determined in step S203 that one of the flags is 1, the data relating to the Index file includes at least one of sound data and an audio stream #2 to be mixed with audio stream #1 by reproducer 20-2. In step S214, controller 34-2 controls switch 61 so that the audio stream #1 relating to the Index file is supplied to first audio decoder 75-1.
Step S215 to S219 is substantially the same with step S205 to S209.In particular, controller 34-2 determines whether to have sent in response to the order that produces from the operation of user's input to read play list file.If determine to have issued the reading order that reads play list file, read playlist, and read the broadcast item of playlist appointment.To offer corresponding buffer corresponding to the segment data (AV stream) that reads the broadcast item and be used for buffering.Control PID filter and switch make the data based on video of buffering also be handled by video data processor 96 then by corresponding decoder decode.
In step S220, controller 34-2 controls the PID filters and switches so that the audio stream #1 buffered in buffer 53 is supplied through switch 101 and switch 61 to first audio decoder 75-1 for decoding, and audio stream #2 is supplied through switch 101 to second audio decoder 75-2 for decoding. Controller 34-2 controls blender 102 so as to appropriately mix the audio streams decoded by first audio decoder 75-1 and second audio decoder 75-2, and controls blender 97 so as to perform mixing processing on the audio stream mixed by blender 102 and the sound data buffered in buffer 95.
Because switch 61 is controlled so that audio stream #1 is decoded by first audio decoder 75-1, audio stream #2 and the sound data can be mixed with audio stream #1.
In step S221, the not audio compressed data that audio coder 41 codings mix, and the voice data of will encode (compression) offers compressing audio signal interface 82 so that output to the outside of device.The uncompressed video data that video encoder 42 codings are handled by video data processor 96, and (compression) video data of will encoding offers compressed video signal interface 84 so that output to the outside of device.
In step S222, controller 34-2 plays item to determine whether to exist with the reproduced next one with reference to playlist.If in step S222, determine to exist the next item of playing, handle and turn back to step S217 with repeating step S217 and step subsequently.
If determine not have a next existence of playing in step S222, controller 34-2 determines whether to finish to reset in step S223 to handle.Do not handle if determining in step S223 does not finish to reset, handle and turn back to step S215 with repeating step S215 and step subsequently.Handle if determining in step S223 finishes to reset, processing finishes.
If it is determined in step S201 that the output from reproducer 20-2 is not an encoded stream, controller 34-2 controls switch 61 in step S224 so that the supplied audio stream #1 is supplied to first audio decoder 75-1.
If the output is not an encoded stream, audio stream #1 is decoded regardless of whether there is sound data or an audio stream #2 to be mixed with audio stream #1.
Step S225 to S230 is substantially the same with step S215 to S220 respectively.In particular, controller 34-2 determines whether to have sent in response to the order that produces from the operation of user's input to read play list file.If determine to have sent the reading order that reads play list file, read playlist, and read broadcast item by the playlist appointment in the play list file.Be provided for corresponding buffer corresponding to the segment data (AV stream) of the broadcast item that reads and be used for buffering.Control PID filter and switch make the data based on video of buffering also be handled by video data processor 96 then by the corresponding decoder decoding.And, PID filter and switch are controlled, be used for decoding offering audio decoder 75-1 by switch 61 and switch 101, by switch 101 audio stream #2 offered the second audio decoder 75-2 and be used for decoding by the audio stream #1 of buffer 53 bufferings.Controller 34-2 control blender 102 is so that suitably mix the audio stream of being decoded by the first audio decoder 75-1 and the second audio decoder 75-2.Controller 34-2 control blender 97 is so that carry out mixed processing to audio stream that is mixed by blender 102 and the voice data that cushions on buffer 95.
In step S231, compressing audio signal interface 81 will not output to the outside of device from the treated not audio compressed data that blender 97 provides.Uncompressed video signal interface 83 will output to the outside of device from the uncompressed video data that video data processor 96 provides.The unpressed voice data of D/A converter 43 digital-to-analogue conversions.Simulated audio signal interface 85 outputs to analog signal the outside of device.D/A converter 44 digital-to-analogue conversion uncompressed video data.Analog video signal interface 86 outputs to analog signal the outside of device.
In step S232, controller 34-2 plays item to determine whether to exist with the reproduced next one with reference to playlist.If in step S232, determine a next existence of playing, handle and turn back to step S227 with repeating step S227 and step subsequently.
If determine not have a next existence of playing in step S232, controller 34-2 determines whether to finish to reset in step S233 to handle.Do not handle if determining in step S233 does not finish to reset, handle and turn back to step S225 with repeating step S225 and step subsequently.Handle if determining in step S233 finishes to reset, processing finishes.
In the above processing, the reproducer 20-2 references the flags described in Index(). If the data referenced by Index() contains neither sound data nor an audio stream #2, the audio stream #1 is output to the outside of the apparatus as compressed (encoded) data without being decoded. Degradation of the sound quality of the audio stream #1 is thereby avoided.
Also in the above processing, the controller 34-2 determines at the initial stage of the processing whether the output is compressed or uncompressed data, and then references the flags to determine whether the mixing process is to be performed. Alternatively, the flags may be referenced first to determine whether the mixing process is to be performed, and it may then be determined whether the output signal is compressed or uncompressed data. This alternative changes the processing only in its order, not in its substance.
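The decision just described can be summarized as a routing rule for the switch 61. The sketch below is a minimal illustration only, not the patent's implementation; the function name, the flags dictionary, and the string destinations are hypothetical stand-ins for the is_MixApp family of flags, the compressed audio signal interface 82, and the first audio decoder 75-1.

```python
def route_audio_stream_1(output_is_encoded_stream: bool, flags: dict) -> str:
    """Decide where switch 61 should send audio stream #1."""
    mixing_data_present = any(
        flags.get(name, 0) == 1
        for name in ("is_MixApp", "is_MixApp_1", "is_MixApp_2")
    )
    if output_is_encoded_stream and not mixing_data_present:
        # Nothing will be mixed, so the encoded stream may bypass decoding
        # and go straight to the compressed audio signal interface 82.
        return "compressed_audio_interface_82"
    # Either uncompressed output is required or mixing data exists,
    # so the stream must be decoded by the first audio decoder 75-1.
    return "audio_decoder_75_1"

print(route_audio_stream_1(True, {"is_MixApp": 0}))    # compressed_audio_interface_82
print(route_audio_stream_1(True, {"is_MixApp_2": 1}))  # audio_decoder_75_1
```

As noted above, evaluating the two conditions in the opposite order yields the same routing result.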
When a flag indicating the presence of other audio data to be mixed with the audio stream #1 is described in the play list file, the reproducer 20-2 performs playback process 5 to reproduce data. Playback process 5 is described below with reference to the flowchart of Figure 43.
In step S261, the controller 34-2 determines whether the output from the reproducer 20-2 is an encoded stream. If it is determined in step S261 that the output from the reproducer 20-2 is not an encoded stream, processing proceeds to step S282, described below.
If it is determined in step S261 that the output from the reproducer 20-2 is an encoded stream, the controller 34-2 reads the Index file supplied from the switch 32 in step S262.
In step S263, the controller 34-2 determines whether a command to read the play list file (for example, xxxxx.mpls of Figure 13) has been issued in response to an operation input by the user. If it is determined in step S263 that the read command has not yet been issued, step S263 is repeated until it is determined that the read command has been issued.
If it is determined in step S263 that the read command has been issued, the controller 34-2 reads the play list file supplied from the switch 32 in step S264.
In step S265, the controller 34-2 determines whether one of the flags described in one of AppInfoPlayList() and PlayList() of the play list file, each indicating the presence of data to be mixed with the audio stream #1 as the main audio stream, is 1. In particular, the controller 34-2 determines whether any of the is_MixApp flag indicating the presence of sound data (or an audio stream #2), the is_MixApp_2 flag indicating the presence of sound data, and the is_MixApp_1 flag indicating the presence of an audio stream #2 is 1.
In content that contains, in addition to the audio stream #1 identified by audio_stream_id, an audio stream #2 identified by audio_stream_id2, the is_MixApp_1 flag indicating the presence of the audio stream #2 is described in addition to the is_MixApp_2 flag indicating the presence of sound data. A reproducer 20-2 having the capability to reproduce the audio stream #2 identified by audio_stream_id2 references, in step S265, not only the value of the is_MixApp_2 flag but also the value of the is_MixApp_1 flag.
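Which flags a player consults in step S265 therefore depends on its capability. The following sketch is illustrative only; the PlaylistFlags container is a hypothetical helper, not the recorded data format, and its field names simply mirror the flags named in the text.

```python
from dataclasses import dataclass

@dataclass
class PlaylistFlags:
    is_MixApp: int = 0    # sound data and/or audio stream #2 may be present
    is_MixApp_1: int = 0  # audio stream #2 may be present
    is_MixApp_2: int = 0  # sound data may be present

def mixing_expected(flags: PlaylistFlags, can_decode_audio_stream_2: bool) -> bool:
    if can_decode_audio_stream_2:
        # A player able to reproduce audio stream #2 also honours is_MixApp_1.
        return 1 in (flags.is_MixApp, flags.is_MixApp_1, flags.is_MixApp_2)
    # A player without that capability only needs the sound-data flags.
    return 1 in (flags.is_MixApp, flags.is_MixApp_2)
```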
If it is determined in step S265 that one of the flags is 1, processing proceeds to step S274, discussed later.
If it is determined in step S265 that none of the flags is 1, the data referenced by the play list file contains neither sound data nor an audio stream #2 to be mixed with the audio stream #1 in the reproducer 20-2. In step S266, the controller 34-2 controls the switch 61 so that the audio data referenced by the play list file is supplied to the compressed audio signal interface 82 serving as the output terminal for the compressed (encoded) audio signal.
Steps S267 through S271 are substantially the same as steps S207 through S211 of Figure 42, respectively. The play items specified by the playlist are read. The data contained in the corresponding main Clip and sub Clip and the text subtitle data are read and supplied to the switch 32, and the read data is then supplied to the corresponding buffers for buffering. The PID filters 55 and 56 and the switches 57 to 59 are controlled so that the video-based data is decoded by the corresponding decoders and then processed by the video data processor 96. The processed video data is then encoded.
The audio stream #1, which is the encoded audio data output from the switch 61, is output to the outside of the apparatus via the compressed audio signal interface 82 serving as the output terminal for the compressed (encoded) audio signal. The encoded video data is output to the outside of the apparatus via the compressed video signal interface 84 serving as the terminal for the compressed (encoded) video signal.
The compressed (encoded) audio data output from the compressed audio signal interface 82 in step S271 suffers no degradation in sound quality, because the AV decoder 33-2 of the reproducer 20-2 performs no decoding process.
In step S272, the controller 34-2 references the playlist to determine whether a next play item exists. If it is determined in step S272 that a next play item exists, processing returns to step S267 to repeat step S267 and subsequent steps.
If it is determined in step S272 that no next play item exists, the controller 34-2 determines in step S273 whether the playback process is to end. If it is determined in step S273 that the playback process is not to end, processing returns to step S263 to repeat step S263 and subsequent steps. The control of the switch 61 may change depending on the values of the flags described in the play list file read next. If it is determined in step S273 that the playback process is to end, processing ends.
If it is determined in step S265 that one of the flags is 1, the data referenced by the play list file contains at least one of sound data and an audio stream #2 to be mixed with the audio stream #1 in the reproducer 20-2. In step S274, the controller 34-2 controls the switch 61 so that the audio stream #1 referenced by the play list file is supplied to the audio decoder 75-1.
Steps S275 through S279 are substantially the same as steps S215 through S221 of Figure 42, respectively. In particular, the play items specified by the playlist are read, and the data contained in the corresponding clips is supplied to the corresponding buffers according to the read play items. The PID filters and switches are controlled so that the buffered video-based data is decoded by the corresponding decoders, and the decoded data is processed by the video data processor 96.
Furthermore, the PID filters and switches are controlled so that the audio stream #1 buffered on the buffer 53 is supplied to the audio decoder 75-1 via the switch 61 and the switch 101 for decoding, and the audio stream #2 buffered on the buffer 53 is supplied to the second audio decoder 75-2 via the switch 101 for decoding. The controller 34-2 controls the mixer 102 so that the audio streams decoded by the first audio decoder 75-1 and the second audio decoder 75-2 are mixed as appropriate. The controller 34-2 controls the mixer 97 so that the audio stream mixed by the mixer 102 and the sound data buffered on the buffer 95 undergo the mixing process.
Because the switch 61 is controlled so that the audio decoder 75-1 decodes the audio stream #1, the audio stream #2 and the sound data are mixed with the audio stream #1.
The audio encoder 41 encodes the mixed uncompressed audio data and supplies the encoded (compressed) audio data to the compressed audio signal interface 82 for output to the outside of the apparatus. The video encoder 42 encodes the uncompressed video data processed by the video data processor 96 and supplies the encoded (compressed) video data to the compressed video signal interface 84 for output to the outside of the apparatus.
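The two-stage mixing performed in this branch can be pictured numerically as below. This is only a sketch under the assumption that decoded audio is held as equal-length PCM sample lists; the mix() helper and the 0.5 gains are illustrative, since the text does not specify mixing ratios.

```python
def mix(a, b, gain_a=0.5, gain_b=0.5):
    # Sample-by-sample weighted sum of two equal-length PCM buffers.
    return [gain_a * x + gain_b * y for x, y in zip(a, b)]

audio_1 = [0.2, 0.4, 0.6]     # decoded by the first audio decoder 75-1
audio_2 = [0.1, 0.1, 0.1]     # decoded by the second audio decoder 75-2
sound_data = [0.0, 0.3, 0.0]  # e.g. a button click buffered on the buffer 95

stage_1 = mix(audio_1, audio_2)     # mixer 102
pcm_out = mix(stage_1, sound_data)  # mixer 97
# pcm_out is re-encoded by the audio encoder 41 when compressed output is required,
# or digital-to-analog converted for the analog interfaces otherwise.
```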
In step S280, the controller 34-2 references the playlist to determine whether a next play item to be reproduced exists. If it is determined in step S280 that a next play item exists, processing returns to step S275 to repeat step S275 and subsequent steps.
If it is determined in step S280 that no next play item exists, the controller 34-2 determines in step S281 whether the playback process is to end. If it is determined in step S281 that the playback process is not to end, processing returns to step S263 to repeat step S263 and subsequent steps. The control of the switch 61 may change depending on the values of the flags described in the playlist read next. If it is determined in step S281 that the playback process is to end, processing ends.
If it is determined in step S261 that the output from the reproducer 20-2 is not an encoded stream, steps S282 through S291 are performed; these steps are substantially the same as steps S224 through S233 of Figure 42.
In particular, if the output is not an encoded stream, the audio stream #1 is decoded regardless of whether sound data to be mixed with the audio stream #1 exists.
The controller 34-2 determines whether a command to read the playlist has been issued. If it is determined that the read command has been issued, the playlist is read, and the play items specified by the playlist in the play list file are read. The clip data (AV streams) corresponding to the read play items are supplied to the corresponding buffers for buffering. The PID filters and switches are controlled so that the buffered video-based data is decoded by the corresponding decoders, and the decoded data is then processed by the video data processor 96. Furthermore, the PID filters and switches are controlled so that the audio stream #1 buffered on the buffer 53 is supplied to the audio decoder 75-1 via the switch 61 and the switch 101, and the audio stream #2 is supplied to the second audio decoder 75-2 via the switch 101. The controller 34-2 controls the mixer 102 so that the audio streams decoded by the first audio decoder 75-1 and the second audio decoder 75-2 are mixed as appropriate. The controller 34-2 controls the mixer 97 so that the audio stream mixed by the mixer 102 and the sound data buffered on the buffer 95 undergo the mixing process. The processed uncompressed audio data supplied from the mixer 97 is output to the outside of the apparatus with or without digital-to-analog conversion. The uncompressed video data supplied from the video data processor 96 is output to the outside of the apparatus with or without digital-to-analog conversion.
In step S290, the controller 34-2 references the playlist to determine whether a next play item to be reproduced exists. If it is determined in step S290 that a next play item exists, processing proceeds to step S285 to repeat step S285 and subsequent steps. If it is determined in step S290 that no next play item exists, the controller 34-2 determines in step S291 whether the playback process is to end. If it is determined in step S291 that the playback process is not to end, processing returns to step S283 to repeat step S283 and subsequent steps. If it is determined in step S291 that the playback process is to end, processing ends.
In this manner, the reproducer 20-2 references the flags described in the play list file, and if neither sound data nor an audio stream #2 is included in the data referenced by the play list file, the audio stream #1 is output to the outside of the apparatus without being decoded. Excessive degradation of sound quality is thereby prevented.
A button click sound is an effect sound, provided as sound data, that is generated in response to a user operation; whether the click sound will actually be mixed with the audio stream #1 is not known in advance. In the playback processes described above, the is_MixApp flag, the is_MixApp_1 flag, and the is_MixApp_2 flag, which indicate the presence of sound data or an audio stream #2, are described. Any flag may be used as long as it indicates whether the data designated by the flag is to be decoded for mixing. In particular, even if the flag is 1, the mixing process does not necessarily take place, and the apparatus may omit the mixing process; if the mixing process is not to take place, the flag is set to 0. Because Index() can designate a plurality of play list files, describing the flag in each playlist allows whether sound data is included to be set on a per-playlist basis. Compared with describing the flag information in Index(), this provides more flexible control and increases the freedom of authoring.
In the above processing, the controller 34-2 determines at the initial stage of the processing whether the output is compressed or uncompressed data, and then references the flags to determine whether the mixing process is to be performed. Alternatively, the flags may be referenced first to determine whether the mixing process is to be performed, and it may then be determined whether the output signal is compressed or uncompressed data. This alternative changes the processing only in its order, not in its substance.
When a flag indicating the presence of other audio data to be mixed with the audio stream #1 is described in a play item, the reproducer 20-2 performs playback process 6 to reproduce data. Playback process 6 is described below with reference to the flowchart of Figure 44.
In step S331, the controller 34-2 determines whether the output from the reproducer 20-2 is an encoded stream. If it is determined in step S331 that the output from the reproducer 20-2 is not an encoded stream, processing proceeds to step S351, described below.
If it is determined in step S331 that the output from the reproducer 20-2 is an encoded stream, the controller 34-2 reads the Index file supplied from the switch 32 in step S332.
In step S333, the controller 34-2 determines whether a command to read the play list file (for example, xxxxx.mpls of Figure 13) has been issued in response to an operation input by the user. If it is determined in step S333 that the read command has not yet been issued, step S333 is repeated until it is determined that the read command has been issued.
If it is determined in step S333 that the read command has been issued, the controller 34-2 reads the play list file supplied from the switch 32 in step S334.
In step S335, the controller 34-2 reads a play item specified by the playlist (PlayList() of Figure 13) described in the play list file, reads the corresponding main Clip AV stream, sub Clip AV stream, and text subtitle data, and outputs these data to the switch 32.
In step S336, the controller 34-2 determines whether one of the flags described in the play item, each indicating the presence of data to be mixed with the audio stream #1 as the main audio stream, is 1. In particular, the controller 34-2 determines whether any of the is_MixApp flag and the is_MixApp_2 flag, which indicate the presence of sound data, and the is_MixApp_1 flag, which indicates the presence of an audio stream #2, is 1.
In content that contains, in addition to the audio stream #1 identified by audio_stream_id, an audio stream #2 identified by audio_stream_id2, the is_MixApp_1 flag indicating the presence of the audio stream #2 is described in addition to the is_MixApp_2 flag indicating the presence of sound data. A reproducer 20-2 having the capability to reproduce the audio stream #2 identified by audio_stream_id2 references, in step S336, not only the value of the is_MixApp flag or the is_MixApp_2 flag but also the value of the is_MixApp_1 flag.
If it is determined in step S336 that one of the flags is 1, processing proceeds to step S344, discussed later.
If it is determined in step S336 that none of the flags is 1, the data referenced by the play item contains neither sound data nor an audio stream #2 to be mixed with the audio stream #1 in the reproducer 20-2. In step S337, the controller 34-2 controls the switch 61 so that the audio data referenced by the play item is supplied to the compressed audio signal interface 82 serving as the output terminal for the compressed (encoded) audio signal.
Steps S338 through S341 are substantially the same as steps S208 through S211 of Figure 42, respectively. The read data is supplied to the corresponding buffers for buffering. The PID filters 55 and 56 and the switches 57 to 59 are controlled so that the video-based data is decoded by the corresponding decoders and then processed by the video data processor 96. The processed video data is then encoded.
The audio stream #1, which is the encoded audio data output from the switch 61, is output to the outside of the apparatus via the compressed audio signal interface 82 serving as the output terminal for the compressed (encoded) audio signal. The encoded video data is output to the outside of the apparatus via the compressed video signal interface 84 serving as the terminal for the compressed (encoded) video signal.
Because the AV decoder 33-2 of the reproducer 20-2 performs no decoding process, the compressed (encoded) audio data output from the compressed audio signal interface 82 in step S341 suffers no degradation in sound quality.
In step S342, the controller 34-2 references the playlist to determine whether a next play item to be reproduced exists. If it is determined in step S342 that a next play item exists, processing proceeds to step S335 to repeat step S335 and subsequent steps. The control of the switch 61 may change depending on the values of the flags described in the play item read next.
If it is determined in step S342 that no next play item exists, the controller 34-2 determines in step S343 whether the playback process is to end. If it is determined in step S343 that the playback process is not to end, processing returns to step S333 to repeat step S333 and subsequent steps. If it is determined in step S343 that the playback process is to end, processing ends.
If it is determined in step S336 that one of the flags is 1, the data referenced by the play item contains at least one of sound data and an audio stream #2 to be mixed with the audio stream #1 by the reproducer 20-2. In step S344, the controller 34-2 controls the switch 61 so that the audio stream #1 referenced by the play item is supplied to the audio decoder 75-1.
Steps S345 through S348 are substantially the same as steps S218 through S221 of Figure 42, respectively. The data of the clips corresponding to the play item are supplied to the corresponding buffers for buffering. The PID filters and switches are controlled so that the buffered video-based data is decoded by the corresponding decoders, and the decoded data is processed by the video data processor 96.
Furthermore, the PID filters and switches are controlled so that the audio stream #1 buffered on the buffer 53 is supplied to the audio decoder 75-1 via the switch 61 and the switch 101 for decoding, and the audio stream #2 buffered on the buffer 53 is supplied to the second audio decoder 75-2 via the switch 101 for decoding. The controller 34-2 controls the mixer 102 so that the audio streams decoded by the first audio decoder 75-1 and the second audio decoder 75-2 are mixed as appropriate. The controller 34-2 controls the mixer 97 so that the audio stream mixed by the mixer 102 and the sound data buffered on the buffer 95 undergo the mixing process.
Because the switch 61 is controlled so that the audio decoder 75-1 decodes the audio stream #1, each of the audio stream #2 and the sound data is mixed with the audio stream #1.
The audio encoder 41 encodes the mixed uncompressed audio data and supplies the encoded (compressed) audio data to the compressed audio signal interface 82 for output to the outside of the apparatus. The video encoder 42 encodes the uncompressed video data processed by the video data processor 96 and supplies the encoded (compressed) video data to the compressed video signal interface 84 for output to the outside of the apparatus.
In step S349, the controller 34-2 references the playlist to determine whether a next play item to be reproduced exists. If it is determined in step S349 that a next play item exists, processing returns to step S335 to repeat step S335 and subsequent steps. The control of the switch 61 may change depending on the values of the flags described in the play item read next.
If it is determined in step S349 that no next play item exists, the controller 34-2 determines in step S350 whether the playback process is to end. If it is determined in step S350 that the playback process is not to end, processing returns to step S333 to repeat step S333 and subsequent steps. If it is determined in step S350 that the playback process is to end, processing ends.
If it is determined in step S331 that the output from the reproducer 20-2 is not an encoded stream, steps S351 through S360 are performed; these steps are substantially the same as steps S224 through S233 of Figure 42, respectively.
If the output is not an encoded stream, the audio stream #1 is decoded regardless of whether sound data to be mixed with the audio stream #1 exists.
In particular, the controller 34-2 determines whether a command to read the play list file has been issued. If it is determined that the read command has been issued, the play list file is read, and the play items specified by the playlist in the play list file are read. The clip data (AV streams) corresponding to the read play items are supplied to the corresponding buffers for buffering. The PID filters and switches are controlled so that the buffered video-based data is decoded by the corresponding decoders and then processed by the video data processor 96. Furthermore, the PID filters and switches are controlled so that the audio stream buffered on the buffer 53 is supplied to the audio decoder 75-1 via the switch 61 and the switch 101 for decoding, and the audio stream #2 is supplied to the second audio decoder 75-2 via the switch 101 for decoding. The controller 34-2 controls the mixer 102 so that the audio streams decoded by the first audio decoder 75-1 and the second audio decoder 75-2 are mixed as appropriate. The controller 34-2 controls the mixer 97 so that the audio stream mixed by the mixer 102 and the sound data buffered on the buffer 95 undergo the mixing process. The processed uncompressed audio data supplied from the mixer 97 is output to the outside of the apparatus with or without digital-to-analog conversion. The uncompressed video data supplied from the video data processor 96 is output to the outside of the apparatus with or without digital-to-analog conversion.
In step S359, the controller 34-2 references the playlist to determine whether a next play item to be reproduced exists. If it is determined in step S359 that a next play item exists, processing proceeds to step S354 to repeat step S354 and subsequent steps. If it is determined in step S359 that no next play item exists, the controller 34-2 determines in step S360 whether the playback process is to end. If it is determined in step S360 that the playback process is not to end, processing returns to step S352 to repeat step S352 and subsequent steps. If it is determined in step S360 that the playback process is to end, processing ends.
In this manner, the reproducer 20-2 references the flags described in the play item, and if neither sound data nor an audio stream #2 is included in the data referenced by the play item, the audio stream #1 is output to the outside of the apparatus without being decoded. Excessive degradation of sound quality is thereby prevented. With the flag information described in the play item, whether sound data is included can be set on a per-play-item basis, which further increases the freedom of authoring.
Using the flag information described in the play item allows more flexible control. However, if re-encoding is switched on and off within a single playlist while seamless playback must be guaranteed, the software implementation of the reproducer becomes very difficult. Depending on the requirements of the software implementation, recording the flag information on a per-play-list-file basis may therefore be preferable, as in the sketch below.
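The trade-off between per-play-item and per-playlist flags can be seen in the re-evaluation loop sketched here; the PlayItem container and the function are hypothetical stand-ins, not the recorded format. When seamless playback must be guaranteed, evaluating the flags once per play list file keeps the switch 61 (and thus re-encoding) from toggling between consecutive play items.

```python
from dataclasses import dataclass

@dataclass
class PlayItem:
    is_MixApp: int = 0
    is_MixApp_1: int = 0
    is_MixApp_2: int = 0

def switch_61_position(item: PlayItem, output_is_encoded_stream: bool) -> str:
    if output_is_encoded_stream and not any(
            (item.is_MixApp, item.is_MixApp_1, item.is_MixApp_2)):
        return "compressed_audio_interface_82"
    return "audio_decoder_75_1"

playlist = [PlayItem(), PlayItem(is_MixApp_2=1)]
# With per-play-item flags, the routing may change at a play item boundary.
print([switch_61_position(item, True) for item in playlist])
# ['compressed_audio_interface_82', 'audio_decoder_75_1']
```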
In the above processing, the controller 34-2 determines at an early stage of the processing whether the output is compressed or uncompressed data, and then references the flags to determine whether the mixing process is to be performed. Alternatively, the flags may be referenced first to determine whether the mixing process is to be performed, and it may then be determined whether the output signal is compressed or uncompressed data. This alternative changes the processing only in its order, not in its substance.
As described above, it is determined whether the mixing process is to be performed on the audio data. Based on this determination, the processing to be performed is selected from the processing used when the main audio data is decoded and the processing used when the main audio data is not decoded. These processes are applicable not only to the mixing of audio data but also to the mixing of video data.
As described above, a flag indicating the presence of video data to be mixed with the main video data to be decoded by the video decoder 72, namely data to be presented in the background plane, the presentation graphics plane, or the interactive graphics plane, may be described in one of the Index, the playlist, and the play item. The AV decoder 33-3 of the reproducer 20-3 of Figure 45 includes a switch 151. Under the control of the AV decoder 33-3, the switch 151 supplies the main video data from the PID filter 55 to one of the video decoder 72 and the compressed video signal interface 84. In this manner, the main video data can be output as compressed (encoded) data, and if no background plane, presentation graphics plane, or interactive graphics plane is to be mixed with the main video data, the main video data is output without being decoded. Unnecessary image degradation is thereby avoided.
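The video-side decision around the switch 151 mirrors the audio-side decision. The sketch below is illustrative only; the boolean arguments stand in for whatever flag indicates that a background, presentation graphics, or interactive graphics plane is to be mixed.

```python
def route_main_video(output_is_encoded_stream: bool,
                     background_plane: bool,
                     presentation_graphics_plane: bool,
                     interactive_graphics_plane: bool) -> str:
    planes_to_mix = (background_plane
                     or presentation_graphics_plane
                     or interactive_graphics_plane)
    if output_is_encoded_stream and not planes_to_mix:
        # No plane will be composited, so the encoded video bypasses the
        # video decoder 72 and goes to the compressed video signal interface 84.
        return "compressed_video_interface_84"
    return "video_decoder_72"
```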
A method of manufacturing the recording medium 21, which stores data that can be played back on the reproducer 20, is described below with reference to Figures 46 and 47. Here, the recording medium is a disc recording medium.
As shown in Figure 46, a master made of glass is prepared. A recording material made of photoresist or the like is applied to the master. A recording master is thereby produced.
As shown in Figure 47, in a software production section, video data encoded by a video encoder in a format that can be played back on the reproducer 20 is temporarily stored on a buffer. Audio data encoded by an audio encoder is temporarily stored on a buffer. Data other than streams (for example, Indexes, playlists, play items, and so on) encoded by a data encoder is also temporarily stored on a buffer. In synchronization with a synchronizing signal, the video data, the audio data, and the data other than streams stored on the respective buffers are multiplexed by a multiplexer (MPX), and an error-correction coding (ECC) circuit appends an error-correcting code to the multiplexed signal. The resulting signal is modulated by a modulation (MOD) circuit and then stored on a tape in accordance with a predetermined format. A software program to be recorded on the recording medium 21 that can be played back on the reproducer 20 is thus produced.
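The data flow of the software production section (multiplexing, error-correction coding, modulation) can be sketched as below. The byte-level operations are placeholders only and do not represent the actual multiplexing format, ECC, or modulation scheme used for optical discs.

```python
def multiplex(video: bytes, audio: bytes, database: bytes) -> bytes:
    # MPX: interleave the inputs in small fixed-size chunks (simplified).
    length = max(len(video), len(audio), len(database))
    return b"".join(video[i:i + 2] + audio[i:i + 2] + database[i:i + 2]
                    for i in range(0, length, 2))

def add_ecc(payload: bytes) -> bytes:
    # ECC circuit: append a stand-in checksum in place of a real error-correcting code.
    return payload + bytes([sum(payload) % 256])

def modulate(payload: bytes) -> bytes:
    # MOD circuit: placeholder for channel modulation before writing to the master tape.
    return bytes(b ^ 0x55 for b in payload)

master_signal = modulate(add_ecc(multiplex(b"VID", b"AUD", b"DBS")))
```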
The software program is edited (premastered) as required, and a signal having the format to be recorded on an optical disc is thus generated. A laser beam is modulated in accordance with this recording signal and directed at the photoresist on the master as shown in Figure 46. The photoresist on the master is thereby exposed to the laser beam modulated with the recording signal.
The master is then developed, and pits appear on the master. The master prepared in this manner is subjected to electroforming to produce a metal master onto which the pits of the glass master are transferred. A metal stamper is produced from the metal master and used as a mold.
A material such as PMMA (acrylic) or PC (polycarbonate) is injected into the mold and cured. Alternatively, 2P (an ultraviolet-curable resin) is applied onto the metal stamper and then cured by irradiating it with ultraviolet light. In this manner, the pits on the metal stamper are transferred to a replica formed of the resin.
A reflective film is formed on the replica thus produced by vapor deposition or sputtering. Alternatively, a reflective film is formed on the replica by spin coating.
The inner and outer circumferential edges of the shaped disc are then finished, and necessary processing such as laminating two discs together is performed. A label is attached to the disc, a hub is attached, and the resulting disc is inserted into a cartridge. The recording medium 21 storing data that can be played back on the reproducer 20 is thus completed.
The above-described series of processing steps can be executed by hardware or by software. The above-described processing can also be performed by the personal computer 500 of Figure 48.
Referring to Figure 48, a central processing unit (CPU) 501 executes various processes in accordance with a program stored on a read-only memory (ROM) 502 or a program loaded into a random access memory (RAM) 503. The RAM 503 also stores data needed by the CPU 501 to execute the various processes.
The CPU 501, the ROM 502, and the RAM 503 are connected to one another via an internal bus 504. The internal bus 504 is also connected to an input/output interface 505.
The input/output interface 505 is connected to an input unit 506 including a keyboard, a mouse, and the like, an output unit 507 including a display device such as a cathode-ray tube (CRT) or a liquid crystal display (LCD) and a loudspeaker, a storage unit 508 including a hard disk, and a communication unit 509 including a modem, a terminal adapter, and the like. The communication unit 509 performs communication processing over networks including telephone lines and cable television (CATV) lines.
The input/output interface 505 is also connected to a drive 510 as required. A removable medium 521 such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory is loaded onto the drive 510, and a computer program read from the removable medium 521 is installed onto the storage unit 508 as necessary.
When the series of processing steps is executed by software, a program forming the software is installed onto the computer over a network or from a program recording medium.
As shown in Figure 48, the program recording medium may be a package medium such as the removable medium 521, which stores the computer program and is distributed separately from the computer to provide the computer program to the user. The program recording medium may also be the ROM 502, which stores the computer program and is provided to the user as built into the apparatus, or the hard disk included in the storage unit 508.
The processing steps of the computer program stored on the recording medium may be performed in the time series described above, or alternatively in parallel or independently of one another.
The drive 510 not only reads data recorded on the loaded removable medium 521 but also writes data onto the loaded removable medium 521. The personal computer 500 has the same functions as the software production section discussed with reference to Figure 47 (the personal computer 500 uses the CPU 501 to execute a program for performing the same functions as the software production section).
Through the processing of the CPU 501, the personal computer 500 can generate data identical to the data generated by the software production section of Figure 47. The personal computer 500 can also obtain, via the communication unit 509 or from the removable medium 521 loaded on the drive 510, data that an external device has generated and that is similar to the data generated by the software production section of Figure 47. The personal computer 500 thus provides the function of a recording device that records the generated or obtained data, similar to the data generated by the software production section of Figure 47, on the removable medium 521 loaded on the drive 510.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.

Claims (13)

1. A reproducing apparatus comprising:
playback data acquisition means for acquiring playback data containing encoded stream data;
decoding means for decoding the stream data;
mixing means for mixing data to be mixed, which is different from the stream data, with the stream data decoded by the decoding means;
selecting means for selecting between supplying the stream data to the decoding means and outputting the stream data; and
control means for controlling the selecting means,
wherein the control means acquires, from the playback data acquired by the playback data acquisition means, determination information indicating whether the playback data contains the data to be mixed with the stream data, and controls the selecting means to output the stream data if the determination information indicates that the playback data contains no data to be mixed and if the data processed by the playback data processing means is to be output as encoded data.
2. The reproducing apparatus according to claim 1, wherein the playback data acquired by the playback data acquisition means includes a predetermined file containing data corresponding to a title of the playback data, and
wherein the control means acquires the determination information from the predetermined file.
3. The reproducing apparatus according to claim 1, wherein the playback data acquired by the playback data acquisition means includes at least one predetermined file containing information indicating a playback order of the playback data, and
wherein the control means acquires the determination information from the predetermined file.
4. The reproducing apparatus according to claim 1, wherein the playback data acquired by the playback data acquisition means includes at least one unit of first data and at least one unit of second data associated with the first data, the first data being information indicating a playback order of the playback data, and the second data being information indicating a playback period of data reproduced in accordance with the playback order controlled by the first data, and
wherein the control means acquires the determination information from the second data.
5. A reproducing method for a reproducing apparatus that reproduces data and outputs the reproduced data, comprising the steps of:
acquiring, from playback data containing encoded stream data, determination information indicating whether the playback data contains data to be mixed with the stream data;
determining, based on the acquired determination information, whether the playback data contains the data to be mixed with the stream data; and
outputting the stream data if the determination information indicates that the playback data contains no data to be mixed with the stream data and if the reproduced data output from the reproducing apparatus is encoded data.
6. A program for causing a computer to perform a playback process of reproducing data and outputting the reproduced data, comprising the steps of:
acquiring, from playback data containing encoded stream data, determination information indicating whether the playback data contains data to be mixed with the stream data;
determining, based on the acquired determination information, whether the playback data contains the data to be mixed with the stream data; and
outputting the stream data if the determination information indicates that the playback data contains no data to be mixed with the stream data and if the reproduced data output from the reproducing apparatus is encoded data.
7. A program recording medium storing the program according to claim 6.
8. A data structure of data to be reproduced by a reproducing apparatus, comprising:
first information for managing a playback order of stream data,
wherein the first information includes second information relating to the presence or absence of data that is different from the stream data and is to be mixed with the stream data.
9. A recording medium storing data to be reproduced by a reproducing apparatus, the data recorded on the recording medium comprising:
first information for managing a playback order of stream data,
wherein the first information includes second information relating to the presence or absence of data that is different from the stream data and is to be mixed with the stream data.
10. A recording device for recording, on a recording medium, data that can be played back on a reproducing apparatus, comprising:
acquisition means for acquiring data having a data structure including first information for managing a playback order of stream data, the first information including second information relating to the presence or absence of data that is different from the stream data and is to be mixed with the stream data; and
recording means for recording the data acquired by the acquisition means on the recording medium.
11. A method of manufacturing a recording medium for recording data that can be played back on a reproducing apparatus, comprising the steps of:
generating data having a data structure including first information for managing a playback order of stream data, the first information including second information relating to the presence or absence of data that is different from the stream data and is to be mixed with the stream data; and
recording the generated data on the recording medium.
12. A reproducing apparatus comprising:
a playback data acquisition unit that acquires playback data containing encoded stream data;
a decoding unit that decodes the stream data;
a mixing unit that mixes data to be mixed, which is different from the stream data, with the stream data decoded by the decoding unit;
a selecting unit that selects between supplying the stream data to the decoding unit and outputting the stream data; and
a control unit that controls the selecting unit,
wherein the control unit acquires, from the playback data acquired by the playback data acquisition unit, determination information indicating whether the playback data contains the data to be mixed with the stream data, and controls the selecting unit to output the stream data if the determination information indicates that the playback data contains no data to be mixed and if the data processed by the playback data processing unit is to be output as encoded data.
13. A recording device for recording, on a recording medium, data that can be played back on a reproducing apparatus, comprising:
an acquisition unit that acquires data having a data structure including first information for managing a playback order of stream data, the first information including second information relating to the presence or absence of data that is different from the stream data and is to be mixed with the stream data; and
a recording unit that records the data acquired by the acquisition unit on the recording medium.
CN2006101689394A 2005-07-15 2006-07-14 Reproducing apparatus, reproducing method Active CN101026725B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP2005206997 2005-07-15
JP2005206997 2005-07-15
JP2005-206997 2005-07-15
JP2006147981 2006-05-29
JP2006-147981 2006-05-29
JP2006147981A JP4251298B2 (en) 2005-07-15 2006-05-29 REPRODUCTION DEVICE AND REPRODUCTION METHOD, PROGRAM, PROGRAM STORAGE MEDIUM, DATA, RECORDING MEDIUM, RECORDING DEVICE, AND RECORDING MEDIUM MANUFACTURING METHOD

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN2010101180535A Division CN101789257B (en) 2005-07-15 2006-07-14 Reproducing apparatus, reproducing method, recording device, and manufacturing method of recording medium

Publications (2)

Publication Number Publication Date
CN101026725A true CN101026725A (en) 2007-08-29
CN101026725B CN101026725B (en) 2010-09-29

Family

ID=38744570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2006101689394A Active CN101026725B (en) 2005-07-15 2006-07-14 Reproducing apparatus, reproducing method

Country Status (2)

Country Link
JP (2) JP4674618B2 (en)
CN (1) CN101026725B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102047674A (en) * 2009-04-08 2011-05-04 索尼公司 Recording device, recording method, reproduction device, reproduction method, program, and recording medium
CN108352165A (en) * 2015-11-09 2018-07-31 索尼公司 Decoding apparatus, coding/decoding method and program

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4251298B2 (en) 2005-07-15 2009-04-08 ソニー株式会社 REPRODUCTION DEVICE AND REPRODUCTION METHOD, PROGRAM, PROGRAM STORAGE MEDIUM, DATA, RECORDING MEDIUM, RECORDING DEVICE, AND RECORDING MEDIUM MANUFACTURING METHOD
CN101026725B (en) * 2005-07-15 2010-09-29 索尼株式会社 Reproducing apparatus, reproducing method
JP5552928B2 (en) * 2010-07-08 2014-07-16 ソニー株式会社 Information processing apparatus, information processing method, and program
JP6174326B2 (en) * 2013-01-23 2017-08-02 日本放送協会 Acoustic signal generating device and acoustic signal reproducing device
JP6228388B2 (en) * 2013-05-14 2017-11-08 日本放送協会 Acoustic signal reproduction device

Family Cites Families (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06253331A (en) * 1993-03-01 1994-09-09 Toshiba Corp Editing device corresponding to variable-length encoded signal
US5390177A (en) * 1993-03-24 1995-02-14 At&T Corp. Conferencing arrangement for compressed information signals
JPH0855427A (en) * 1994-08-11 1996-02-27 Sony Corp Multichannel audio recording and reproducing device
JP3268630B2 (en) * 1996-10-17 2002-03-25 株式会社ケンウッド How to edit recorded data on a disc
JP3416034B2 (en) * 1997-09-17 2003-06-16 沖電気工業株式会社 Coded signal processing device
JPH11213558A (en) * 1998-01-27 1999-08-06 Toshiba Corp Voice data processing device, computer system, and voice data processing method
JP2000165802A (en) * 1998-11-25 2000-06-16 Matsushita Electric Ind Co Ltd Stream edit system and edit method
US6621866B1 (en) * 2000-01-28 2003-09-16 Thomson Licensing S.A. Method for inserting a visual element into an MPEG bit stream
CN1136721C (en) * 2000-02-28 2004-01-28 松下电器产业株式会社 Method and device for flow editing
MXPA02012909A (en) * 2000-07-24 2004-05-05 Boehringer Ingelheim Pharma Improved oral dosage formulations of 1-(5-tert-butyl -2-p-tolyl-2h-pyrazol -3-yl)-3 -[4-(2-morpholin-4 -yl-ethoxy) -naphthalen -1-yl] -urea.
JP2002152051A (en) * 2000-11-13 2002-05-24 Nippon Telegr & Teleph Corp <Ntt> Compression code editor and compression code edit method
JP2002162995A (en) * 2000-11-22 2002-06-07 Sanyo Electric Co Ltd Sound reproducing device
US6925501B2 (en) * 2001-04-17 2005-08-02 General Instrument Corporation Multi-rate transcoder for digital streams
KR100457512B1 (en) * 2001-11-29 2004-11-17 삼성전자주식회사 Optical recording medium, apparatus and method for playing the optical recoding medium
US8150237B2 (en) * 2002-11-28 2012-04-03 Sony Corporation Reproducing apparatus, reproducing method, reproducing program, and recording medium
JP3859168B2 (en) * 2003-01-20 2006-12-20 パイオニア株式会社 Information recording medium, information recording apparatus and method, information reproducing apparatus and method, information recording / reproducing apparatus and method, computer program for recording or reproduction control, and data structure including control signal
WO2004068854A1 (en) * 2003-01-31 2004-08-12 Matsushita Electric Industrial Co., Ltd. Recording medium, reproduction device, recording method, program, and reproduction method
JP4228767B2 (en) * 2003-04-25 2009-02-25 ソニー株式会社 REPRODUCTION DEVICE, REPRODUCTION METHOD, REPRODUCTION PROGRAM, AND RECORDING MEDIUM
JP2005079945A (en) * 2003-09-01 2005-03-24 Alpine Electronics Inc Video reproducer and video reproducing method
JP2005114813A (en) * 2003-10-03 2005-04-28 Matsushita Electric Ind Co Ltd Audio signal reproducing device and reproducing method
JP4664346B2 (en) * 2004-12-01 2011-04-06 パナソニック株式会社 Recording medium, playback device, program, and playback method
JP4012559B2 (en) * 2004-12-01 2007-11-21 松下電器産業株式会社 Recording medium, playback device, program, playback method, integrated circuit
US8280233B2 (en) * 2005-01-28 2012-10-02 Panasonic Corporation Reproduction device, program, reproduction method
JP4251298B2 (en) * 2005-07-15 2009-04-08 ソニー株式会社 REPRODUCTION DEVICE AND REPRODUCTION METHOD, PROGRAM, PROGRAM STORAGE MEDIUM, DATA, RECORDING MEDIUM, RECORDING DEVICE, AND RECORDING MEDIUM MANUFACTURING METHOD
CN101026725B (en) * 2005-07-15 2010-09-29 索尼株式会社 Reproducing apparatus, reproducing method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102047674A (en) * 2009-04-08 2011-05-04 索尼公司 Recording device, recording method, reproduction device, reproduction method, program, and recording medium
CN108352165A (en) * 2015-11-09 2018-07-31 索尼公司 Decoding apparatus, coding/decoding method and program
CN108352165B (en) * 2015-11-09 2023-02-03 索尼公司 Decoding device, decoding method, and computer-readable storage medium

Also Published As

Publication number Publication date
JP4822081B2 (en) 2011-11-24
JP2008287869A (en) 2008-11-27
JP2009093789A (en) 2009-04-30
JP4674618B2 (en) 2011-04-20
CN101026725B (en) 2010-09-29

Similar Documents

Publication Publication Date Title
CN101789257B (en) Reproducing apparatus, reproducing method, recording device, and manufacturing method of recording medium
CN102572454B (en) Playback device and playback method
JP4770601B2 (en) Information processing apparatus, information processing method, program, and program storage medium
CN100394791C (en) Information processing method and apparatus, program and recording medium
CN100498959C (en) Information reproducing device of reproducing information recording medium and information recording device
CN101902655B (en) Data transmitting device and method
CN100539674C (en) Regenerating unit and renovation process
CN1906694B (en) Reproduction device, reproduction method, program, recording medium, and data structure
JP4968506B2 (en) REPRODUCTION DEVICE, REPRODUCTION METHOD, AND PROGRAM
CN101026725B (en) Reproducing apparatus, reproducing method
JP2006221795A (en) Information recording medium, information recording apparatus and method, information reproduction apparatus and method, information recording and reproduction apparatus and method, computer program for controlling recording or reproduction, and data structure comprising control signal
EP2003891A1 (en) Recording device, recording method, and recording program
US7848214B2 (en) Information recording medium, information recording device and method, information reproduction device and method, information recording/reproduction device and method, computer program for controlling recording or reproduction, and data structure including control signal
US8577206B2 (en) Information record medium, information record device and method, information reproduction device and method, information recording/reproduction device and method, recording or reproduction control computer program, and data structure containing control signal
CN102067181A (en) Synthesis device and synthesis method
JP2008193604A (en) Reproducing apparatus and method, and program
JP4720676B2 (en) Information processing apparatus and information processing method, data structure, recording medium manufacturing method, program, and program storage medium
JP4968561B2 (en) REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING METHOD
JP5218872B2 (en) REPRODUCTION DEVICE, RECORDING MEDIUM, AND MANUFACTURING METHOD THEREOF
JP4821456B2 (en) Information processing apparatus, information processing method, program, data structure, and recording medium
JP2008052836A (en) Information processing apparatus, information processing method, program, and program storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee
CP01 Change in the name or title of a patent holder

Address after: Tokyo, Japan

Patentee after: Sony Corp

Address before: Tokyo, Japan

Patentee before: Sony Corporation

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20070829

Assignee: Guangzhou Panyu Juda Car Audio Equipment Co., Ltd.

Assignor: Blue light United Co., Ltd.

Contract record no.: 2014990000233

Denomination of invention: Information storing disk, reproduction apparatus, and reproduction method

Granted publication date: 20100929

License type: Common License

Record date: 20140422

Application publication date: 20070829

Assignee: TCL Kone Electronics (Huizhou) Ltd.

Assignor: Blue light United Co., Ltd.

Contract record no.: 2014990000240

Denomination of invention: Information storing disk, reproduction apparatus, and reproduction method

Granted publication date: 20100929

License type: Common License

Record date: 20140423

Application publication date: 20070829

Assignee: Guangdong OPPO Mobile Communications Co., Ltd.

Assignor: Blue light United Co., Ltd.

Contract record no.: 2014990000237

Denomination of invention: Information storing disk, reproduction apparatus, and reproduction method

Granted publication date: 20100929

License type: Common License

Record date: 20140423

Application publication date: 20070829

Assignee: China Hualu Group Ltd.

Assignor: Blue light United Co., Ltd.

Contract record no.: 2014990000238

Denomination of invention: Information storing disk, reproduction apparatus, and reproduction method

Granted publication date: 20100929

License type: Common License

Record date: 20140423

Application publication date: 20070829

Assignee: Shenzhen Maxmade Technology Co.,Ltd.

Assignor: Blue light United Co., Ltd.

Contract record no.: 2014990000239

Denomination of invention: Information storing disk, reproduction apparatus, and reproduction method

Granted publication date: 20100929

License type: Common License

Record date: 20140423

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20070829

Assignee: Dongguan de video technology Co. Ltd. Kit

Assignor: Blue light United Co., Ltd.

Contract record no.: 2016990000233

Denomination of invention: Information storing disk, reproduction apparatus, and reproduction method

Granted publication date: 20100929

License type: Common License

Record date: 20160614

LICC Enforcement, change and cancellation of record of contracts on the licence for exploitation of a patent or utility model