MXPA06013259A - Information storage medium, information reproducing apparatus, information reproducing method, and network communication system. - Google Patents

Information storage medium, information reproducing apparatus, information reproducing method, and network communication system.

Info

Publication number
MXPA06013259A
Authority
MX
Mexico
Prior art keywords
video
information
sub
advanced
audio
Prior art date
Application number
MXPA06013259A
Other languages
Spanish (es)
Inventor
Kazuhiko Taira
Hideki Mimura
Toshimitsu Kaneko
Yasufumi Tsumagari
Yoichiro Yamagata
Takero Kobayashi
Yasuhiro Ishibashi
Seiichi Nakamura
Eita Shuto
Tooru Kamibayashi
Haruhiko Toyama
Original Assignee
Toshiba Kk
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2005072136A external-priority patent/JP2006260611A/en
Application filed by Toshiba Kk filed Critical Toshiba Kk
Publication of MXPA06013259A publication Critical patent/MXPA06013259A/en

Landscapes

  • Signal Processing For Digital Recording And Reproducing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Management Or Editing Of Information On Record Carriers (AREA)

Abstract

An information storage medium according to one embodiment of the present invention comprises a management area in which management information to manage content is recorded, and a content area in which content managed on the basis of the management information is recorded. The content area includes an object area in which a plurality of objects are recorded, and a time map area in which a time map for reproducing these objects in a specified period on a timeline is recorded. The management area includes a play list area in which a play list for controlling the reproduction of a menu and a title, each composed of the objects on the basis of the time map, is recorded.

Description

INFORMATION STORAGE MEDIUM, INFORMATION REPRODUCING APPARATUS, INFORMATION REPRODUCING METHOD, AND NETWORK COMMUNICATION SYSTEM

TECHNICAL FIELD
One embodiment of the invention relates to an information storage medium such as an optical disc, an information reproducing apparatus and an information reproducing method that reproduce information from the storage medium, and a network communication system composed of servers and players.

BACKGROUND ART
In recent years, DVD-Video discs featuring high-quality, high-performance video, and video players that reproduce DVD-Video discs, have come into wide use, and peripheral devices that reproduce multichannel audio have expanded the range of choices for the consumer. In addition, a home theater can be set up easily, creating an environment that enables the user to freely watch movies, animations and the like at home with high-quality images and high-quality sound. Japanese Patent Application KOKAI Publication No. 10-50036 discloses a reproducing apparatus capable of displaying several menus in an overlapping manner by changing the character colors of the images reproduced from the disc. As image compression technology has improved in the past few years, both users and content providers have sought much higher-quality images. Beyond producing much higher-quality images, content providers have sought an environment that offers more attractive content to users through richer menu content and improved interactivity covering the main story of the title, menu screens and bonus images. Users, for their part, have increasingly sought to enjoy content freely, specifying the playback position, playback area, or playback time of image data in still images captured by the user, subtitle text obtained through an Internet connection, and the like.
DESCRIPTION OF THE INVENTION
An object of one embodiment of the present invention is to provide an information storage medium capable of reproduction that is more attractive to viewers. Another object of the embodiment of the present invention is to provide an information reproducing apparatus, an information reproducing method and a networked reproduction system that are capable of reproduction more attractive to viewers. An information storage medium according to an embodiment of the invention comprises: a management area in which management information (Advanced Navigation) that manages content (Advanced Content) is recorded; and a content area in which the content managed on the basis of the management information is recorded, wherein the content area includes an object area in which a plurality of objects are recorded, and a time map area in which a time map (TMAP) for reproducing these objects in a specified period on a timeline is recorded, and the management area includes a playlist area in which a playlist for controlling the reproduction of a menu and a title, each composed of the objects on the basis of the time map, is recorded, which enables the menu to be played dynamically on the basis of the playlist. An information reproducing apparatus according to another embodiment of the invention, which reproduces the information storage medium, comprises: a reading unit configured to read the playlist recorded on the information storage medium; and a reproduction unit configured to play the menu on the basis of the playlist read by the reading unit. An information reproducing method for reproducing the information storage medium according to yet another embodiment of the invention comprises: reading the playlist recorded on the information storage medium; and playing the menu on the basis of the playlist.
A network communication system according to yet another embodiment of the invention comprises: a player which reads information from a storage medium, requests playback information from a server via a network, downloads the playback information from the server, and reproduces both the information read from the information storage medium and the playback information downloaded from the server; and a server that provides the player with the playback information in response to the request made by the player. Further objects and advantages of the invention will be set forth in the description that follows, and in part will be obvious from the description, or may be learned by practicing the invention. The objects and advantages of the invention may be realized and obtained by means of the instrumentalities and combinations particularly pointed out hereinafter.
BRIEF DESCRIPTION OF THE DRAWINGS
A general architecture that implements the various features of the invention will now be described with reference to the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the invention and do not limit the scope of the invention.
Figures 1A and 1B are explanatory diagrams showing the configuration of standard content and that of advanced content according to one embodiment of the invention, respectively; Figures 2A to 2C are explanatory diagrams of discs of category 1, category 2 and category 3 according to the embodiment of the invention, respectively; Figure 3 is an explanatory diagram of a reference example for enhanced video objects (EVOB) according to the time map information (TMAPI) in the embodiment of the invention; Figure 4 is an explanatory diagram showing an example of the transition of the playback state of a disc in the embodiment of the invention; Figure 5 is a diagram to help explain an example of the volume space of a disc in the embodiment of the invention; Figure 6 is an explanatory diagram showing an example of the directories and files of a disc in the embodiment of the invention; Figure 7 is an explanatory diagram showing the configuration of the video manager (VMG) and that of the video title set (VTS) in the embodiment of the invention; Figure 8 is a diagram to help explain the start-up sequence of a playback model in the embodiment of the invention; Figure 9 is a diagram to help explain a configuration showing a state where the main EVOB-TY2 packets are mixed in the embodiment of the invention; Figure 10 shows an example of an expanded system target decoder of the playback model in the embodiment of the invention; Figure 11 is a timing chart to help explain an example of the operation of the player shown in Figure 10 in the embodiment of the invention; Figure 12 is an explanatory diagram showing the peripheral environment of an advanced content player in the embodiment of the invention; Figure 13 is an explanatory diagram showing a model of the advanced content player of Figure 12 in the embodiment of the invention; Figure 14 is an explanatory diagram showing the concept of information recorded on a disc in the embodiment of the invention; Figure 15 is
an explanatory diagram showing an example of the configuration of a directory and that of a file in the embodiment of the invention; Figure 16 is an explanatory diagram showing a more detailed model of the advanced content player in the embodiment of the invention; Figure 17 is an explanatory diagram showing an example of the data access manager of Figure 16 in the embodiment of the invention; Figure 18 is an explanatory diagram showing an example of the data cache of Figure 16 in the embodiment of the invention; Figure 19 is an explanatory diagram showing an example of the navigation manager of Figure 16 in the embodiment of the invention; Figure 20 is an explanatory diagram showing an example of the presentation engine of Figure 16 in the embodiment of the invention; Figure 21 is an explanatory diagram showing an example of the advanced element presentation engine of Figure 16 in the embodiment of the invention; Figure 22 is an explanatory diagram showing an example of the advanced subtitle player of Figure 16 in the embodiment of the invention; Figure 23 is an explanatory diagram showing an example of the presentation system of Figure 16 in the embodiment of the invention; Figure 24 is an explanatory diagram showing an example of the secondary video player of Figure 16 in the embodiment of the invention; Figure 25 is an explanatory diagram showing an example of the primary video player of Figure 16 in the embodiment of the invention; Figure 26 is an explanatory diagram showing an example of the decoder engine of Figure 16 in the embodiment of the invention; Figure 27 is an explanatory diagram showing an example of the AV renderer of Figure
16 in the embodiment of the invention; Figure 28 is an explanatory diagram showing an example of the video mixing model of Figure 16 in the embodiment of the invention; Figure 29 is an explanatory diagram that helps explain a graphics hierarchy according to the embodiment of the invention; Figure 30 is an explanatory diagram showing an audio mixing model according to the embodiment of the invention; Figure 31 is an explanatory diagram showing an interface manager according to the embodiment of the invention; Figure 32 is an explanatory diagram showing a disc data delivery model according to the embodiment of the invention; Figure 33 is an explanatory diagram showing a persistent storage data and network delivery model according to the embodiment of the invention; Figure 34 is an explanatory diagram showing a data storage model according to the embodiment of the invention; Figure 35 is an explanatory diagram showing a user input management model according to the embodiment of the invention; Figures 36A and 36B are diagrams that help explain the operation when the apparatus of the invention subjects a graphics model to an aspect-ratio process in the embodiment of the invention; Figure 37 is a diagram that helps explain the function of a playlist in the embodiment of the invention; Figure 38 is a diagram that helps explain a state where the objects are assigned on a timeline according to the playlist in the embodiment of the invention; Figure
39 is an explanatory diagram showing the cross-reference of the playlist to other objects in the embodiment of the invention; Figure 40 is an explanatory diagram showing a playback sequence related to the apparatus of the invention in the embodiment of the invention; Figure 41 is an explanatory diagram showing an example of trick play related to the apparatus of the invention in the embodiment of the invention; Figure 42 is an explanatory diagram that helps explain the assignment of objects on a timeline executed by the apparatus of the invention in a 60 Hz region in the embodiment of the invention; Figure 43 is an explanatory diagram that helps explain the assignment of objects on a timeline executed by the apparatus of the invention in a 50 Hz region in the embodiment of the invention; Figure 44 is an explanatory diagram showing an example of the advanced application contents in the embodiment of the invention; Figure 45 is a diagram that helps explain a model related to unsynchronized composition page switching in the embodiment of the invention; Figure 46 is a diagram that helps explain a model related to progressively (softly) synchronized composition page switching in the embodiment of the invention; Figure 47 is a diagram that helps explain a model related to composition page switching synchronized in a predetermined (hard) manner in the embodiment of the invention; Figure 48 is a diagram that helps explain an example of the timing of generating graphics frames in the embodiment of the invention; Figure
49 is a diagram that helps explain a pattern of frame-dropping timing in the embodiment of the invention; Figure 50 is a diagram that helps explain an advanced content start-up sequence in the embodiment of the invention; Figure 51 is a diagram that helps explain an update sequence of advanced content playback in the embodiment of the invention; Figure 52 is a diagram that helps explain a sequence of the transition from advanced VTS to standard VTS, or vice versa, in the embodiment of the invention; Figure 53 is a diagram that helps explain a resume process in the embodiment of the invention; Figure 54 is a diagram that helps explain an example of language codes for selecting a language unit in the VMG menu and in each of the VTS menus in the embodiment of the invention; Figure 55 shows an example of the validity of the HLI in each PGC in the embodiment of the invention; Figure 56 shows the structure of the navigation data in the standard content in the embodiment of the invention; Figures 57 and 58 show the structure of the video manager information (VMGI) in the embodiment of the invention; Figure 59 shows the structure of the program chain information table of the video title set (VTS_PGCIT) in the embodiment of the invention; Figure 60 shows the structure of the program chain information (PGCI) in the embodiment of the invention; Figures 61A and 61B show the structure of a program chain command table (PGC_CMDT) and that of a cell playback information table (C_PBIT) in the embodiment of the invention, respectively; Figures 62A and 62B show the structure of a set of enhanced video objects (EVOBS) and that of a navigation pack (NV_PCK) in the embodiment of the invention, respectively; Figures 63A and 63B show the structure of the general control information (GCI) and the location of the highlight information in the embodiment of the invention,
respectively; Figure 64 shows the relationship between the sub-pictures and HLI in the embodiment of the invention; Figures 65A and 65B show a button color information table (BTN_COLIT) and an example of the button information in each button group in the embodiment of the invention, respectively; Figures 66A and 66B show the structure of a highlight information pack (HLI_PCK) and the relationship between the video data and the video packs in an EVOBU in the embodiment of the invention, respectively; Figure 67 shows restrictions on MPEG-4 AVC video in the embodiment of the invention; Figure 68 shows the structure of the video data in each EVOBU in the embodiment of the invention; Figures 69A and 69B show the structure of a sub-picture unit (SPU) and the relationship between the SPU and the sub-picture packs (SP_PCK) in the embodiment of the invention, respectively; Figures 70A and 70B show the update timing of the sub-pictures in the embodiment of the invention; Figure 71 is a diagram that helps explain the contents of the information recorded on a disc-like information storage medium according to the embodiment of the invention; Figures 72A and 72B are diagrams that help explain an example of the advanced content configuration in the embodiment of the invention; Figure 73 is a diagram that helps explain an example of the configuration of the video title set information (VTSI) in the embodiment of the invention; Figure
74 is a diagram that helps explain an example of the configuration of the time map information (TMAPI), which includes the entry information (EVOBU_ENTI#i) for the enhanced video object unit(s), in the embodiment of the invention; Figure 75 is a diagram that helps explain an example of the configuration of the interleaved unit information (ILVUI), which exists when the time map is for an interleaved block, in the embodiment of the invention; Figure 76 shows an example of a contiguous-block TMAP in the embodiment of the invention; Figure 77 shows an example of an interleaved-block TMAP in the embodiment of the invention; Figure 78 is a diagram that helps explain an example of the configuration of a primary enhanced video object (P_EVOB) in the embodiment of the invention; Figure 79 is a diagram that helps explain an example of the configuration of VM_PCK and VS_PCK in the primary enhanced video object (P_EVOB) in the embodiment of the invention; Figure 80 is a diagram that helps explain an example of the configuration of AS_PCK and AM_PCK in the primary enhanced video object (P_EVOB) in the embodiment of the invention; Figures 81A and 81B are diagrams that help explain an example of the configuration of an advanced pack (ADV_PCK) and that of the start packet in a video object unit/time unit (VOBU/TU) in the embodiment of the invention; Figure 82 is a diagram that helps explain an example of the configuration of a time map (TMAP) of the secondary video set in the embodiment of the invention; Figure 83 is a diagram that helps explain an example of the configuration of a secondary enhanced video object (S-EVOB) in the embodiment of the invention; Figure 84 is a diagram that helps explain another example (another example of Figure
83) of the secondary enhanced video object (S-EVOB) in the embodiment of the invention; Figure 85 is a diagram that helps explain an example of the configuration of a playlist in the embodiment of the invention; Figure 86 is a diagram that helps explain the placement of presentation objects on a timeline in the embodiment of the invention; Figure 87 is a diagram that helps explain a case where trick play (such as a chapter jump) of presentation objects is carried out on a timeline in the embodiment of the invention; Figure 88 is a diagram that helps explain an example of the configuration of a playlist when an object includes angle information in the embodiment of the invention; Figure 89 is a diagram that helps explain an example of the configuration of a playlist when an object includes a multiple story in the embodiment of the invention; Figure 90 is a diagram that helps explain an example of the description of the object mapping information in a playlist (when an object includes angle information) in the embodiment of the invention; Figure 91 is a diagram that helps explain an example of the description of the object mapping information in a playlist (when an object includes a multiple story) in the embodiment of the invention; Figure 92 is a diagram that helps explain an example of the type of advanced object (here, example 4) in the embodiment of the invention; Figure 93 is a diagram that helps explain an example of a playlist in the case of a synchronized advanced object in the embodiment of the invention; Figure 94 is a diagram that helps explain an example of the description of a playlist in the case of a synchronized advanced object in the embodiment of the invention; Figure 95 shows an example of a network system model according to the embodiment of the invention; Figure
96 is a diagram that helps explain an example of disc authentication in the embodiment of the invention; Figure 97 is a diagram that helps explain a network data-flow model according to the embodiment of the invention; Figure 98 is a diagram that helps explain a complete-download buffer model (file cache) according to the embodiment of the invention; Figure 99 is a diagram that helps explain a streaming buffer model according to the embodiment of the invention; and Figure 100 is a diagram that helps explain an example of download scheduling in the embodiment of the invention.
BEST MODE FOR CARRYING OUT THE INVENTION
1. Structure
Various embodiments according to the invention will be described hereinafter with reference to the accompanying drawings. In general, an information storage medium according to an embodiment of the invention comprises: a management area in which the management information that manages the content is recorded; and a content area in which the content managed on the basis of the management information is recorded, wherein the content area includes an object area in which a plurality of objects are recorded, and a time map area in which a time map for reproducing these objects in a specified period on a timeline is recorded, and the management area includes a playlist area in which a playlist for controlling the reproduction of a menu and a title, each composed of the objects on the basis of the time map, is recorded.
2. Outline
In an information recording medium, an information transmitting medium, an information processing apparatus, an information processing method, an information reproducing method, an information reproducing apparatus, an information recording method and an information recording apparatus according to one embodiment of the invention, effective new improvements have been made in the data format and in the method of handling the data format.
Therefore, data such as video, audio and other programs can be reused from particular sources. In addition, the freedom to change the combination of sources is improved. This will be explained below.
3. Introduction
3.1 Content Type
This specification defines two types of content: one is Standard Content and the other is Advanced Content. Standard Content consists of navigation data and video object data on a disc, which are pure extensions of those in the DVD-Video Ver. 1.1 specification.
On the other hand, Advanced Content consists of Advanced Navigation, such as playlist, Manifest, Composition and command files, and Advanced Data, such as the primary/secondary video sets and Advanced Elements (image, audio, text, and so on). At least one playlist file and one primary video set must be located on a disc, and the other data may be on a disc or delivered from a server.
3.1.1 Standard Content
Standard Content is simply an extension of the content defined in the DVD-Video Ver. 1.1 specification, especially for high-resolution video, high-quality audio and some new functions. Standard content basically consists of one VMG space and one or more VTS spaces (which are called "Standard VTS" or just "VTS"), as shown in Figure 1A. For more details, see 5. Standard Content.
3.1.2 Advanced Content
Advanced content provides more interactivity in addition to the audio and video extension achieved by the standard content. As described above, advanced content consists of Advanced Navigation, such as the playlist, Manifest, Composition and command files, and Advanced Data, such as the primary/secondary video set and Advanced Elements (image, audio, text, and so on), and Advanced Navigation manages the playback of Advanced Data. See Figure 1B. A playlist file, described in XML, is located on a disc, and a player plays this file first if the disc has advanced content. This file gives the following information:
• Object mapping information: information, within a Title, for the presentation objects mapped on the timeline of the Title
• Playback sequence: playback information for each Title, described by the timeline of the Title
• Configuration information: system configuration, for example, data buffer alignment
According to the description of the playlist, the initial application is executed, referring to the primary/secondary video set and so on, if they exist.
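The three kinds of playlist information listed above can be sketched as a small parser. The XML element and attribute names used here are illustrative assumptions only (the actual schema is defined elsewhere in the specification), and the TMAP file path is hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical playlist: all element/attribute names below are assumptions
# made for illustration, not the normative schema.
PLAYLIST_XML = """\
<Playlist>
  <Configuration>
    <StreamingBuffer size="65536"/>
  </Configuration>
  <TitleSet>
    <Title id="Title1" titleDuration="00:10:00">
      <PrimaryVideoTrack titleTimeBegin="00:00:00" titleTimeEnd="00:10:00"
                         src="HVDVD_TS/MOVIE.MAP"/>
    </Title>
  </TitleSet>
</Playlist>
"""

def read_playlist(xml_text):
    """Collect, per Title, the presentation objects mapped on its timeline."""
    root = ET.fromstring(xml_text)
    titles = {}
    for title in root.iter("Title"):
        # Each child of a Title is one presentation object with its
        # begin/end times on the Title timeline.
        objs = [(obj.tag, obj.get("src"),
                 obj.get("titleTimeBegin"), obj.get("titleTimeEnd"))
                for obj in title]
        titles[title.get("id")] = objs
    return titles

titles = read_playlist(PLAYLIST_XML)
```

A real player would interpret this file first, as the text describes, before executing the initial application.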
An application consists of a Manifest, a Composition (which includes content/style/timing information), commands and advanced data. An initial Composition file, command file(s) and the other resources that make up the application are referred to in a Manifest file. The Composition initiates the playback of Advanced Data such as the Main/Secondary Video Set and the Advanced Elements. The Primary Video Set (Main Video Set) has the structure of a VTS space specialized for this content. That is, this VTS has no navigation commands and no layered structure, but does have TMAP information and so on. Also, this VTS can have one main video stream, one sub video stream, 8 main audio streams and 8 sub audio streams. This VTS is called the "Advanced VTS". The Secondary Video Set is used for additional video/audio data for the Main Video Set, and is also used for additional audio data only. However, this data can only be played back when the sub video/audio stream in the Main Video Set is not playing, and vice versa. The Secondary Video Set is recorded on a disc or delivered from a server as one or more files. If the data is recorded on a disc and needs to be played simultaneously with the Main Video Set, this file will be stored in the file cache before it is played. On the other hand, if the Secondary Video Set is located on a network site, either all of this data must be stored in the file cache and then played ("Downloading"), or some of this data must be stored in the streaming buffer sequentially, and the data stored in the buffer is played back simultaneously while data is being downloaded from the server ("Streaming"). For more details, see 6. Advanced Content.
3.1.2.1 Advanced VTS
The Advanced VTS (which is also called the Main Video Set) is the Video Title Set used for Advanced Navigation. That is, the following are the definitions that correspond to the Standard VTS.
1) Further enhancement of the EVOB
- 1 main video stream, 1 sub video stream
- 8 main audio streams, 8 sub audio streams
- 32 sub-picture streams
- 1 advanced stream
2) Integration of the Enhanced VOB Set (EVOBS)
- Integration of both Menu EVOBS and Title EVOBS
3) Elimination of the layered structure
- No Title, no PGC, no PTT and no Cell
- Cancellation of Navigation Commands and UOP control
4) Introduction of new Time Map Information (TMAP)
- A TMAPI corresponds to an EVOB and is stored as a file.
- Some information in the NV_PCK is simplified.
For more details, see 6.3 Main Video Set.
3.1.2.2 Interoperable VTS
The interoperable VTS is the Video Title Set supported in the HD DVD-VR specifications. In this specification, HD DVD-Video, interoperable VTS specifications are not supported; that is, a content author cannot make a disc that contains an interoperable VTS. However, an HD DVD-Video player will support interoperable VTS playback.
3.2 Disc Type
This specification allows 3 types of discs (Category 1 disc / Category 2 disc / Category 3 disc), as defined below.
3.2.1 Category 1 Disc
This disc contains only Standard Content, consisting of a VMG and one or more Standard VTSs. That is, this disc contains no Advanced VTS and no Advanced Content. For an example of the structure, see Figure 2A.
3.2.2 Category 2 Disc
This disc contains only Advanced Content, consisting of Advanced Navigation, a Main Video Set (Advanced VTS), Secondary Video Sets and Advanced Elements. That is, this disc contains no Standard Content such as a VMG or Standard VTS. For an example of the structure, see Figure 2B.
3.2.3 Category 3 Disc
This disc contains both Advanced Content, consisting of Advanced Navigation, a Main Video Set (Advanced VTS), Secondary Video Sets and Advanced Elements, and Standard Content, consisting of a VMG and one or more Standard VTSs. However, neither FP_DOM nor VMGM_DOM exists in this VMG. For an example of the structure, see Figure 2C.
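The three category definitions above can be summarized as a minimal classification sketch. It assumes a player has already determined which kinds of content are present on the disc; a real player would do this by inspecting the actual on-disc structures rather than boolean flags.

```python
def disc_category(has_standard_content, has_advanced_content):
    """Classify a disc per the category definitions (illustrative sketch)."""
    if has_advanced_content and has_standard_content:
        return 3  # Advanced Content plus Standard Content
    if has_advanced_content:
        return 2  # Advanced Content only (no VMG, no Standard VTS)
    if has_standard_content:
        return 1  # Standard Content only (VMG plus Standard VTS)
    raise ValueError("not a valid disc under this specification")
```

A category 3 disc, classified here by the presence of both content types, additionally follows the category 2 rules as the following text explains.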
Although this disc contains Standard Content, it basically follows the rules for the category 2 disc, and in addition the disc supports the transition from the Advanced Content Playback State to the Standard Content Playback State, and vice versa.
3.2.3.1 Use of Standard Content by Advanced Content
Standard content can be used by Advanced Content. The VTSI of the Advanced VTS can refer to EVOBSs that are also referred to by the VTSI of the Standard VTS, through the use of the TMAP (see Figure 3). However, an EVOB may contain HLI, PCI and so on, which are not supported in Advanced Content. In the playback of such EVOBs, the HLI and PCI, for example, will be ignored in Advanced Content.
3.2.3.2 Transition between the Standard/Advanced Content Playback States
On the category 3 disc, the Advanced Content and the Standard Content are played back independently. Figure 4 shows a state diagram for the playback of this disc. First, the Advanced Navigation (that is, the playlist file) is interpreted in the "Initial State", and according to that file the initial application of the Advanced Content is executed in the "Advanced Content Playback State". This procedure is the same as on the category 2 disc. During playback of Advanced Content, a player can play the Standard Content by executing specific commands, such as CallStandardContentPlayer with arguments to specify the playback position (transition to the "Standard Content Playback State"). During playback of Standard Content, a player can return to the "Advanced Content Playback State" by executing specific Navigation Commands, such as CallAdvancedContentPlayer.
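The state transitions described above can be sketched as a small lookup table. The state and command names follow the text; the table-driven event handling is an illustrative assumption, not the player's actual mechanism.

```python
# Playback-state transitions for a category 3 disc (sketch).
TRANSITIONS = {
    ("Initial State",
     "interpret playlist"): "Advanced Content Playback State",
    ("Advanced Content Playback State",
     "CallStandardContentPlayer"): "Standard Content Playback State",
    ("Standard Content Playback State",
     "CallAdvancedContentPlayer"): "Advanced Content Playback State",
}

def next_state(state, event):
    # Unknown events leave the playback state unchanged.
    return TRANSITIONS.get((state, event), state)
```

Across these transitions the system parameters (SPRMs) are maintained, as the following text describes, so the audio selection made in one state carries over into the other.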
In the Standard Content Playback State, the Advanced Content can read/set the system parameters (SPRM(1) to SPRM(10)) for the Standard Content. During transitions, the SPRM values are maintained continuously. For example, in the Advanced Content Playback State, the Advanced Content sets the SPRM for the audio stream according to the current audio playback state, so that the appropriate audio stream is played in the Standard Content Playback State after the transition. Even if the audio stream is changed by a user in the Standard Content Playback State, after the transition back the Advanced Content reads the SPRM for the audio stream and changes the audio playback state accordingly.
3.3 Logical Data Structure
A disc has the logical structure of a Volume Space, a Video Manager (VMG), a Video Title Set (VTS), an Enhanced Video Object Set (EVOBS) and Advanced Content, as described here.
3.3.1 Structure of the Volume Space
As shown in Figure 5, the Volume Space of an HD DVD-Video disc consists of:
1) The Volume and File structure, which shall be assigned to the UDF structure.
2) The single "DVD-Video zone", which may be assigned to the data structure of the DVD-Video format.
3) The single "HD DVD-Video zone", which shall be assigned to the data structure of the HD DVD-Video format. This zone consists of the "Standard Content zone" and the "Advanced Content zone".
4) "Other DVD zones", which may be used for applications other than the DVD-Video and HD DVD-Video applications.
The following rules apply to the HD DVD-Video zone:
1) The "HD DVD-Video zone" shall consist of a "Standard Content zone" on a Category 1 disc. The "HD DVD-Video zone" shall consist of an "Advanced Content zone" on a Category 2 disc. The "HD DVD-Video zone" shall consist of both a "Standard Content zone" and an "Advanced Content zone" on a Category 3 disc.
2) The "Standard Content zone" shall consist of a single Video Manager (VMG) and at least 1 and at most 510 Video Title Sets on a category 1 disc. The "Standard Content zone" shall not exist on a category 2 disc, and shall consist of at least 1 and at most 510 VTSs on a category 3 disc.
3) The VMG, if it exists, shall be located at the leading part of the "HD DVD-Video zone", which is the case for a category 1 disc.
4) The VMG shall be composed of at least 2 and at most 102 files.
5) Each VTS (except the Advanced VTS) shall be composed of at least 3 and at most 200 files.
6) The "Advanced Content zone" shall consist of the files supported in Advanced Content, with an Advanced VTS. The maximum number of files for the "Advanced Content zone" (under the ADV_OBJ directory) is 512 x 2047.
7) The Advanced VTS shall be composed of at least 5 and at most 200 files.
Note: Regarding the DVD-Video zone, refer to Part 3 (Video Specifications) of Ver. 1.0.

3.3.2 Directory and File Rules
The requirements for the files and directories associated with an HD DVD-Video disc are described here.

Directory HVDVD_TS
The "HVDVD_TS" directory shall exist directly under the root directory. All files related to a VMG, Standard Video Title Set(s) and an Advanced VTS (Primary Video Set) shall reside under this directory.

Video Manager (VMG)
A Video Manager Information (VMGI), an Enhanced Video Object for the First Play Program Chain Menu (FP_PGCM_EVOB) and a backup of the Video Manager Information (VMGI_BUP) shall each be recorded as a component file under the HVDVD_TS directory. An Enhanced Video Object Set for the Video Manager Menu (VMGM_EVOBS) with a size of 1 GB (= 2^30 bytes) or more shall be divided into files, up to a maximum of 98 files, under the HVDVD_TS directory. These files of a VMGM_EVOBS shall be allocated contiguously.
Standard Video Title Set (Standard VTS)
A Video Title Set Information (VTSI) and a backup of the Video Title Set Information (VTSI_BUP) shall each be recorded as a component file under the HVDVD_TS directory. An Enhanced Video Object Set for the Video Title Set Menu (VTSM_EVOBS) and an Enhanced Video Object Set for Titles (VTSTT_EVOBS) with a size of 1 GB (= 2^30 bytes) or more shall be divided into files, up to a maximum of 99 each, so that the size of each file is less than 1 GB. These files shall be component files under the HVDVD_TS directory. The files of a VTSM_EVOBS and of a VTSTT_EVOBS shall each be allocated contiguously.

Advanced Video Title Set (Advanced VTS)
A Video Title Set Information (VTSI) and a backup of the Video Title Set Information (VTSI_BUP) may each be recorded as a component file under the HVDVD_TS directory. A Video Title Set Time Map Information (VTS_TMAP) and a backup of the Video Title Set Time Map Information (VTS_TMAP_BUP) may each be composed of up to 99 files under the HVDVD_TS directory. An Enhanced Video Object Set for Titles (VTSTT_EVOBS) with a size of 1 GB (= 2^30 bytes) or more shall be divided into files, up to a maximum of 99, so that the size of each file is less than 1 GB. These files shall be component files under the HVDVD_TS directory. The files of a VTSTT_EVOBS shall be allocated contiguously.

The file names and the directory name under the "HVDVD_TS" directory shall follow these rules:
1) Directory name: the fixed directory name for HD DVD-Video shall be "HVDVD_TS".
2) File names for the Video Manager (VMG): the fixed file name for the Video Manager Information shall be "HVI00001.IFO".
The fixed file name for the Enhanced Video Object for the FP_PGC Menu shall be "HVM00001.EVO". The file name for the Enhanced Video Object Set for the VMG Menu shall be "HVM000%%.EVO". The fixed file name for the backup of the Video Manager Information shall be "HVI00001.BUP". "%%" shall be assigned consecutively in ascending order from "02" to "99" for each file of the Enhanced Video Object Set for the VMG Menu.
3) File names for the Standard Video Title Set (Standard VTS): the file name for the Video Title Set Information shall be "HVI@@@01.IFO". The file name for the Enhanced Video Object Set for the VTS Menu shall be "HVM@@@##.EVO". The file name for the Enhanced Video Object Set for Titles shall be "HVT@@@##.EVO". The file name for the backup of the Video Title Set Information shall be "HVI@@@01.BUP". "@@@" shall be three characters from "001" to "511", assigned according to the Video Title Set number. "##" shall be assigned consecutively in ascending order from "01" to "99" for each file of an Enhanced Video Object Set for the VTS Menu or for Titles.
4) File names for the Advanced Video Title Set (Advanced VTS): the file name for the Video Title Set Information shall be "AVI00001.IFO". The file name for the Enhanced Video Object Set for the Title shall be "AVT000&&.EVO". The file name for the Time Map Information shall be "AVMAP0$$.IFO". The file name for the backup of the Video Title Set Information shall be "AVI00001.BUP". The file name for the backup of the Time Map Information shall be "AVMAP0$$.BUP". "&&" shall be assigned consecutively in ascending order from "01" to "99" for the files of the Enhanced Video Object Set for the Title. "$$" shall be assigned consecutively in ascending order from "01" to "99" for the Time Map Information files.

The ADV_OBJ directory
The "ADV_OBJ" directory shall exist directly under the root directory.
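The placeholder-based naming rules above lend themselves to simple pattern checks. The sketch below validates two of the patterns; it is illustrative only, and the range limits (001 to 511, 02 to 99) are applied numerically since a pure regular expression would be clumsy.

```python
import re

# Hedged sketch: regular expressions approximating two of the HVDVD_TS
# file-naming rules quoted above. Function names are ours, not the spec's.

def is_standard_vtsi_name(name):
    """HVI@@@01.IFO, with @@@ a Video Title Set number from 001 to 511."""
    m = re.fullmatch(r"HVI(\d{3})01\.IFO", name)
    return bool(m) and 1 <= int(m.group(1)) <= 511

def is_vmgm_evobs_name(name):
    """HVM000%%.EVO, with %% from 02 to 99 (HVM00001.EVO is the FP_PGC menu EVOB)."""
    m = re.fullmatch(r"HVM000(\d{2})\.EVO", name)
    return bool(m) and 2 <= int(m.group(1)) <= 99
```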
All playlist files shall reside directly under this directory. Advanced Navigation, Advanced Element and Secondary Video Set files may also reside under this directory.

Playlist
Each playlist file shall reside directly under the "ADV_OBJ" directory with the file name "PLAYLIST%%.XML". "%%" shall be assigned consecutively in ascending order from "00" to "99". The playlist file that has the maximum number is interpreted initially (when a disc is loaded).

Directories for Advanced Content
The directories for Advanced Content may exist only under the "ADV_OBJ" directory. Advanced Navigation, Advanced Element and Secondary Video Set files may reside in these directories. The name of such a directory shall consist of d-characters and d1-characters. The total number of sub-directories of "ADV_OBJ" (excluding the "ADV_OBJ" directory itself) shall be less than 512. The directory depth shall be equal to or less than 8.

Files for Advanced Content
The total number of files under the "ADV_OBJ" directory shall be limited to 512 x 2047, and the total number of files in each directory shall be less than 2048. A file name shall consist of d-characters or d1-characters, and shall consist of a body, "." (period) and an extension. An example of the directory / file structure is shown in Figure 6.

3.3.3 Structure of the Video Manager (VMG)
The VMG is the table of contents for all the Video Title Sets that exist in the "HD DVD-Video zone".
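The numeric limits on the ADV_OBJ tree can be collected into one check. The sketch below is illustrative only: it takes the directory layout as a simple mapping from directory path to file count, which is an assumption of this example rather than anything in the specification.

```python
# Illustrative check of the ADV_OBJ limits quoted above: fewer than 512
# sub-directories, depth <= 8, fewer than 2048 files per directory, and
# at most 512 x 2047 files in total.

MAX_DEPTH = 8
MAX_FILES_PER_DIR = 2047        # "less than 2048"
MAX_TOTAL_FILES = 512 * 2047
MAX_SUBDIRS = 511               # "less than 512", excluding ADV_OBJ itself

def check_adv_obj(dirs):
    """dirs maps a path like 'ADV_OBJ/MENU' to its file count."""
    subdirs = [d for d in dirs if d != "ADV_OBJ"]
    if len(subdirs) > MAX_SUBDIRS:
        return False
    if any(d.count("/") + 1 > MAX_DEPTH for d in dirs):
        return False
    if any(n > MAX_FILES_PER_DIR for n in dirs.values()):
        return False
    return sum(dirs.values()) <= MAX_TOTAL_FILES
```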
As shown in Figure 7, a VMG is composed of control data referred to as VMGI (Video Manager Information), an Enhanced Video Object for the First Play PGC Menu (FP_PGCM_EVOB), an Enhanced Video Object Set for the VMG Menu (VMGM_EVOBS) and a backup of the control data (VMGI_BUP). The control data is static information necessary for reproducing titles, and provides information to support user operations. The FP_PGCM_EVOB is an Enhanced Video Object (EVOB) used for the selection of the menu language. The VMGM_EVOBS is a collection of Enhanced Video Objects (EVOBs) used for menus that support volume access.
The following rules shall apply to the Video Manager (VMG):
1) Each of the control data (VMGI) and the backup of the control data (VMGI_BUP) shall be a single file of less than 1 GB.
2) The EVOB for the FP_PGC Menu (FP_PGCM_EVOB) shall be a single file of less than 1 GB. The EVOBS for the VMG Menu (VMGM_EVOBS) shall be divided into files, up to a maximum of 98, each smaller than 1 GB.
3) The VMGI, FP_PGCM_EVOB (if present), VMGM_EVOBS (if present) and VMGI_BUP shall be allocated in this order.
4) The VMGI and VMGI_BUP shall not be recorded in the same ECC block.
5) The files comprising the VMGM_EVOBS shall be allocated contiguously.
6) The contents of the VMGI_BUP shall be exactly the same as those of the VMGI. Therefore, when relative address information in the VMGI_BUP refers outside the VMGI_BUP, the relative address shall be taken as a relative address of the VMGI.
7) A gap may exist at the boundaries between VMGI, FP_PGCM_EVOB (if present), VMGM_EVOBS (if present) and VMGI_BUP.
8) In the VMGM_EVOBS (if present), each EVOB shall be allocated contiguously.
9) The VMGI and VMGI_BUP shall each be recorded in a logically contiguous area composed of consecutive LSNs.
Note: These specifications can be applied to DVD-R for General / DVD-RAM / DVD-RW in addition to DVD-ROM, but shall obey the data allocation rules described in Part 2 (File System Specifications) of each medium.

3.3.4 Structure of the Standard Video Title Set (Standard VTS)
A VTS is a collection of titles. As shown in Figure 7, each VTS is composed of control data referred to as VTSI (Video Title Set Information), an Enhanced Video Object Set for the VTS Menu (VTSM_EVOBS), an Enhanced Video Object Set for Titles in a VTS (VTSTT_EVOBS), and a backup of the control data (VTSI_BUP).
The following rules apply to the Video Title Set (VTS):
1) Each of the control data (VTSI) and the backup of the control data (VTSI_BUP) shall be a single file of less than 1 GB.
2) Each of the EVOBS for the VTS Menu (VTSM_EVOBS) and the EVOBS for Titles in a VTS (VTSTT_EVOBS) shall be divided into files, up to a maximum of 99 respectively, each less than 1 GB.
3) The VTSI, VTSM_EVOBS (if present), VTSTT_EVOBS and VTSI_BUP shall be allocated in this order.
4) The VTSI and VTSI_BUP shall not be recorded in the same ECC block.
5) The files comprising the VTSM_EVOBS shall be allocated contiguously. The files comprising the VTSTT_EVOBS shall also be allocated contiguously.
6) The contents of the VTSI_BUP shall be exactly the same as those of the VTSI. Therefore, when relative address information in the VTSI_BUP refers outside the VTSI_BUP, the relative address shall be taken as a relative address of the VTSI.
7) VTS numbers are consecutive numbers assigned to the VTSs in the Volume. VTS numbers range from "1" to "511" and are assigned in the order in which the VTSs are stored on the disc (from the smallest LBN at the start of the VTSI of each VTS).
8) In each VTS, a gap may exist at the boundaries between VTSI, VTSM_EVOBS (if present), VTSTT_EVOBS and VTSI_BUP.
9) In each VTSM_EVOBS (if present), each EVOB shall be allocated contiguously.
10) In each VTSTT_EVOBS, each EVOB shall be allocated contiguously.
11) The VTSI and VTSI_BUP shall each be recorded in a logically contiguous area composed of consecutive LSNs.
Note: These specifications can be applied to DVD-R for General / DVD-RAM / DVD-RW in addition to DVD-ROM, but shall obey the data allocation rules described in Part 2 (File System Specifications) of each medium. For allocation details, refer to Part 2 (File System Specifications) of each medium.

3.3.5 Structure of the Advanced Video Title Set (Advanced VTS)
This VTS consists of only one title. As shown in Figure 7, this VTS is composed of control data referred to as VTSI (see 6.3.1 Video Title Set Information), an Enhanced Video Object Set for Titles in a VTS (VTSTT_EVOBS), Video Title Set Time Map Information (VTS_TMAP), a backup of the control data (VTSI_BUP) and a backup of the Video Title Set Time Map Information (VTS_TMAP_BUP).
The following rules apply to this Video Title Set (VTS):
1) Each of the control data (VTSI) and the backup of the control data (VTSI_BUP) (if it exists) shall be a single file of less than 1 GB.
2) The EVOBS for Titles in a VTS (VTSTT_EVOBS) shall be divided into files, up to a maximum of 99, each less than 1 GB.
3) Each of the Video Title Set Time Map Information (VTS_TMAP) and its backup (VTS_TMAP_BUP) (if it exists) shall consist of files, up to a maximum of 99, each less than 1 GB.
4) The VTSI and VTSI_BUP (if they exist) shall not be recorded in the same ECC block.
5) The VTS_TMAP and VTS_TMAP_BUP (if they exist) shall not be recorded in the same ECC block.
6) The files comprising the VTSTT_EVOBS shall be allocated contiguously.
7) The contents of the VTSI_BUP (if it exists) shall be exactly the same as those of the VTSI.
Therefore, when relative address information in the VTSI_BUP refers outside the VTSI_BUP, the relative address shall be taken as a relative address of the VTSI.
8) In each VTSTT_EVOBS, each EVOB shall be allocated contiguously.
Note: These specifications can be applied to DVD-R for General / DVD-RAM / DVD-RW in addition to DVD-ROM, but shall obey the data allocation rules described in Part 2 (File System Specifications) of each medium. For allocation details, refer to Part 2 (File System Specifications) of each medium.

3.3.6 Structure of the Enhanced Video Object Set (EVOBS)
The EVOBS is a collection of Enhanced Video Objects (refer to 5. Enhanced Video Objects), which is composed of video, audio, sub-picture data and the like (see Figure 7).
The following rules apply to the EVOBS:
1) In an EVOBS, the EVOBs are recorded in Contiguous Blocks and Interleaved Blocks. Refer to 3.3.12.1 Allocation of the Presentation Data for Contiguous Blocks and Interleaved Blocks.
In the case of the VMG and the Standard VTS:
2) An EVOBS is composed of one or more EVOBs. EVOB_ID numbers are assigned, starting from 1, in ascending order from the EVOB with the smallest LSN in the EVOBS.
3) An EVOB is composed of one or more Cells. C_ID numbers are assigned, starting from 1, in ascending order from the Cell with the smallest LSN in an EVOB.
4) A Cell in the EVOBS can be identified by its EVOB_ID number and C_ID number.

3.3.7 Relationship between the Logical Structure and the Physical Structure
The following rule shall apply to Cells for the VMG and the Standard VTS:
1) A Cell shall be allocated in the same layer.

3.3.8 MIME Type
The extension name and the MIME type for each resource in this specification are defined in Table 1.
Table 1 File Extension and MIME Type

4. System Model
4.1 Overview of the System Model
4.1.1 Overall Start-up Sequence
Figure 8 is a flow chart of the HD DVD player's start-up sequence. After a disc is inserted, the player confirms whether a playlist file ("PLAYLIST%%.XML", Tentative) exists in the "ADV_OBJ" directory under the root directory. If such a playlist file exists, the HD DVD player decides that the disc is category 2 or 3.
If there is no playlist file, the HD DVD player checks the value of VMG_ID in the VMGI on the disc. If the disc is category 1, the value shall be "HDDVD-VMG200", and the [b?-b15] of VMG_CAT shall indicate Standard Content only. If the disc does not belong to any HD DVD category, the behavior depends on each player. For details about the VMGI, see [5.2.1 Video Manager Information (VMGI)]. The reproduction procedures for Advanced Content and Standard Content are different. For Advanced Content, see the System Model for Advanced Content. For details of Standard Content, see the Common System Model.

4.1.2 Information Data to be Handled by the Player
Some necessary information data stored in the P-EVOB (Primary Enhanced Video Object) is to be handled by the player, depending on the content (Standard Content, Advanced Content or Interoperable Content).
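The start-up decision above (playlist check first, then the VMG_ID check, including the rule that the highest-numbered playlist is interpreted first) can be sketched as follows. This is a reading of the flow chart's logic, not an implementation of it; the return strings are our own labels.

```python
# Hedged sketch of the Figure 8 start-up sequence. The disc-reading step is
# abstracted away: we receive the ADV_OBJ file listing and the VMG_ID value.

def choose_startup(adv_obj_files, vmg_id):
    playlists = [f for f in adv_obj_files
                 if f.startswith("PLAYLIST") and f.endswith(".XML")]
    if playlists:
        # Category 2 or 3: interpret the playlist with the maximum number.
        # Lexicographic max works because the numbers are zero-padded.
        return "interpret:" + max(playlists)
    if vmg_id == "HDDVD-VMG200":
        return "play_standard_content"   # category 1 disc
    return "player_dependent"            # not an HD DVD category disc
```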
Such information data comprises the GCI (General Control Information), the PCI (Presentation Control Information) and the DSI (Data Search Information), which are stored in the Navigation pack (NV_PCK), and the HLI (Highlight Information), which is stored in plural HLI packs. A player shall handle the necessary information data in the content as shown in Table 2.
Table 2 Information Data to be Handled by the Player
NA: Not Applicable
Note: The RDI (Real Time Data Information) is defined in "DVD Specifications for High Density Rewritable Disc / Part 3: Video Recording Specifications (Tentative)".

4.3 System Model for Advanced Content
This section describes the system model for the playback of Advanced Content.
4.3.1 Types of Advanced Content Data
4.3.1.1 Advanced Navigation
Advanced Navigation is the navigation data for Advanced Content, consisting of the following types of files. For details on Advanced Navigation, see [6.2 Advanced Navigation].
* Playlist
* Loading Information
* Composition
* Content
* Style
* Timing
* Commands
4.3.1.2 Advanced Data
Advanced Data is the presentation data for Advanced Content. Advanced Data can be categorized into four types:
* Primary Video Set
* Secondary Video Set
* Advanced Element
* Others
4.3.1.2.1 Primary Video Set
The Primary Video Set is a group of data for the Primary Video. The data structure of the Primary Video Set conforms to the Advanced VTS, which consists of Navigation Data (e.g. VTSI and TMAPs) and Presentation Data (e.g. P-EVOB-TY2). The Primary Video Set shall be stored on the disc. The Primary Video Set can include several presentation streams. The possible presentation stream types are Main Video, Main Audio, Sub Video, Sub Audio and Sub-picture. The HD DVD player can play Sub Video and Sub Audio simultaneously, in addition to the primary video and audio.
While the Sub Video and Sub Audio of the Primary Video Set are playing, the Sub Video and Sub Audio of the Secondary Video Set cannot be played. For details of the Primary Video Set, see [6.3 Primary Video Set].
4.3.1.2.2 Secondary Video Set
The Secondary Video Set is a group of data for pre-downloaded content and network streaming in the File Cache. The data structure of the Secondary Video Set is a simplified structure of the Advanced VTS, which consists of TMAP and Presentation Data (S-EVOB). The Secondary Video Set can include Sub Video, Sub Audio, Complementary Audio and Complementary Subtitles. Complementary Audio is an alternative audio stream which replaces the Main Audio of the Primary Video Set. Complementary Subtitles are an alternative subtitle stream which replaces the Sub-picture of the Primary Video Set. The data format of Complementary Subtitles is the Advanced Subtitle. For details of the Advanced Subtitle, see [6.5.4 Advanced Subtitle]. The possible combinations of Presentation Data in the Secondary Video Set are described in Table 3. For details of the Secondary Video Set, see [6.4 Secondary Video Set].
Table 3 Possible Presentation Data Streams in the Secondary Video Set (Tentative)
4.3.1.2.3 Advanced Element
An Advanced Element is presentation material used to generate the Graphic Plane or sound effects: any type of file that is generated by Advanced Navigation or by the Presentation Engine, or received from a Data Source. The following data formats are available. For details of the Advanced Element, see [6.5 Advanced Element].
• Image / Animation: PNG, JPEG, MNG
• Audio: WAV
• Text / Font: UNICODE (UTF-8 or UTF-16 format), OpenType
4.3.1.3 Others
Advanced Content playback can generate data files whose format is not specified in this specification. They may be, for example, text files generated by scripts in the Advanced Navigation, or cookies received when the Advanced Content starts to access the specified Network Server. Some types of these data files can be treated as Advanced Elements, such as an image file captured by the Primary Video Player as instructed by the Advanced Navigation.
4.3.2 Type 2 Primary Enhanced Video Object (P-EVOB-TY2)
The Type 2 Primary Enhanced Video Object (P-EVOB-TY2) is the data stream which carries the Presentation Data of the Primary Video Set. The Type 2 Primary Enhanced Video Object complies with the program stream prescribed in "The MPEG-2 system standard (ISO/IEC 13818-1)". The types of Presentation Data of the Primary Video Set are Main Video, Main Audio, Sub Video, Sub Audio and Sub-picture. The Advanced Stream is also multiplexed into the P-EVOB-TY2. See Figure 9. The possible pack types in the P-EVOB-TY2 are as follows:
• Navigation Pack (N_PCK)
• Main Video Pack (VM_PCK)
• Main Audio Pack (AM_PCK)
• Sub Video Pack (VS_PCK)
• Sub Audio Pack (AS_PCK)
• Sub-picture Pack (SP_PCK)
• Advanced Stream Pack (ADV_PCK)
For details, see [6.3.3 Primary EVOB (P-EVOB)]. The Time Map (TMAP) for the Type 2 Primary Enhanced Video Object has entry points for each Primary Enhanced Video Object Unit (P-EVOBU).
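The pack types listed above are dispatched to different consumers during demultiplexing; in particular, the text later notes that Advanced Stream packs (ADV_PCK) are routed to the File Cache Manager rather than to a decoder. A minimal dispatch table can sketch this; the destination names are hypothetical labels, not identifiers from the specification.

```python
# Illustrative demultiplexer dispatch for the P-EVOB-TY2 pack types:
# Advanced Stream packs go to the File Cache, everything else to the
# corresponding decoder or navigation buffer.

DISPATCH = {
    "N_PCK": "navigation_buffer",
    "VM_PCK": "main_video_decoder",
    "AM_PCK": "main_audio_decoder",
    "VS_PCK": "sub_video_decoder",
    "AS_PCK": "sub_audio_decoder",
    "SP_PCK": "sub_picture_decoder",
    "ADV_PCK": "file_cache",  # routed to the File Cache Manager
}

def route_pack(pack_type):
    """Return the (hypothetical) destination module for a pack type."""
    return DISPATCH[pack_type]
```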
For details of the Time Map, see [6.3.2 Time Map (TMAP)].
The access unit of the Primary Video Set is based on the access unit of the Main Video, as in the structure of the traditional Video Object (VOB). The offset information for the Sub Audio and Sub Video comes from the synchronization information (SYNCI), as does that for the Main Audio and Sub-picture. For details of the synchronization information, see [5.2.7 Synchronization Information (SYNCI)]. The Advanced Stream is used to supply various kinds of Advanced Content files to the File Cache without any interruption of the playback of the Primary Video Set. The demultiplexer module in the Primary Video Player distributes the Advanced Stream Packs (ADV_PCK) to the File Cache Manager in the Navigation Engine. For details of the File Cache Manager, see [4.3.15.2 File Cache Manager].
4.3.3 Input Buffer Model for the Type 2 Primary Enhanced Video Object (P-EVOB-TY2)
4.3.4 Decoding Model for the Type 2 Primary Enhanced Video Object (P-EVOB-TY2)
4.3.4.1 Extended System Target Decoder (E-STD) Model for the Type 2 Primary Enhanced Video Object
Figure 10 shows the configuration of the E-STD model for the Type 2 Primary Enhanced Video Object. The figure indicates the P-STD (prescribed in the MPEG-2 system standard) and the extended functionality of the E-STD for the Type 2 Primary Enhanced Video Object.
a) The System Time Clock (STC) is explicitly included as an element.
b) The STC offset is the offset value which is used to change an STC value when P-EVOB-TY2s are connected to each other and presented seamlessly.
c) SW1 to SW7 allow switching between the STC value and the (STC minus the STC offset) value at the boundary of P-EVOB-TY2s.
d) Due to the differences among the presentation durations of the Main Video Access Unit, the Sub Video Access Unit, the Main Audio Access Unit and the Sub Audio Access Unit, there may be a discontinuity between adjacent access units in the time stamps of some audio streams. Wherever a Main or Sub Audio Decoder encounters a discontinuity, that Audio Decoder shall pause temporarily before restarting. For this purpose, the Main Audio Decoder Pause Information (M-ADPI) and the Sub Audio Decoder Pause Information (S-ADPI) shall be given externally and independently, and may be derived from the Seamless Playback Information (SML_PBI) stored in the DSI.
4.3.4.2 E-STD Operations for the Type 2 Primary Enhanced Video Object
(1) Operations as a P-STD
The E-STD model can work the same as the P-STD. In that case it behaves as follows:
(a) SW1 to SW7 are always set to the STC; thus, the STC offset is not used.
(b) Since the continuous presentation of an audio stream is guaranteed, M-ADPI and S-ADPI are not sent to the Main and Sub Audio Decoders.
Some P-EVOBs can guarantee Seamless Playback when the angle presentation path is changed.
At such switchable locations, where the head of an Interleaved Unit (ILVU) is located, the P-EVOB-TY2 before the change and the P-EVOB-TY2 after the change shall behave under the conditions defined for the P-STD.
(2) Operations as an E-STD
The following describes the behavior of the E-STD when P-EVOB-TY2s enter the E-STD continuously. Refer to Figure 11.
<Input timing to the E-STD for the P-EVOB-TY2 (T1)>
As soon as the last pack of the preceding P-EVOB-TY2 has entered the E-STD for the P-EVOB-TY2 (timing T1 in Figure 11), the STC offset is set and SW1 is switched to (STC minus the STC offset). Then, the input timing to the E-STD is determined by the System Clock Reference (SCR) of the subsequent P-EVOB-TY2. The STC offset is set based on the following rules:
a) The STC offset shall be set assuming the continuity of the video streams contained in the preceding P-EVOB-TY2 and the subsequent P-EVOB-TY2. That is, the sum of the presentation time (Tp) of the last Main Video Access Unit displayed in the preceding P-EVOB-TY2 and the video presentation duration (Td) of that Main Video Access Unit shall be equal to the sum of the presentation time (Tf) of the first Main Video Access Unit displayed in the subsequent P-EVOB-TY2 and the STC offset:
Tp + Td = Tf + STC offset
It should be noted that the STC offset itself is not encoded in the data structure. Instead, the video end presentation time (End Video PTM) of a P-EVOB-TY2 and the video start presentation time (Start Video PTM) of a P-EVOB-TY2 shall be described in the NV_PCK. The STC offset is calculated as follows:
STC offset = End Video PTM of the preceding P-EVOB-TY2 - Start Video PTM of the subsequent P-EVOB-TY2
b) While SW1 is set to (STC minus the STC offset) and the (STC minus the STC offset) value is negative, input to the E-STD shall be prohibited until the value becomes 0 or positive.
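The STC offset rule above can be checked with a small worked example. This sketch assumes nothing beyond the two formulas just given; the 90 kHz-style tick values are arbitrary illustration.

```python
# Worked sketch of the STC offset rules: the offset aligns the first video
# PTM of the subsequent P-EVOB-TY2 with the end of the preceding one, and
# input to the E-STD is gated while (STC - STC offset) is negative.

def stc_offset(end_video_ptm_prev, start_video_ptm_next):
    # STC offset = End Video PTM (preceding) - Start Video PTM (subsequent)
    return end_video_ptm_prev - start_video_ptm_next

def may_enter_estd(stc, offset):
    # Input is prohibited until (STC - STC offset) becomes 0 or positive.
    return stc - offset >= 0
```

For instance, if the last access unit of the preceding stream ends at PTM 4003 (Tp + Td) and the subsequent stream's first access unit carries Tf = 500 in its own time base, the offset is 3503, which makes Tf + offset equal to 4003 as the continuity rule requires.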
<Timing of the Main Audio presentation (T2)>
Let T2 be the sum of the time when the last Main Audio Access Unit contained in the preceding P-EVOB-TY2 is presented and the presentation duration of that Main Audio Access Unit. At T2, SW2 is switched to (STC minus the STC offset). Then, the presentation is activated by the Presentation Time Stamp (PTS) of the Main Audio Pack contained in the subsequent P-EVOB-TY2. The time T2 itself does not appear in the data structure. The Main Audio Access Unit shall continue to be decoded at T2.
<Timing of the Sub Audio presentation (T3)>
Let T3 be the sum of the time when the last Sub Audio Access Unit contained in the preceding P-EVOB-TY2 is presented and the presentation duration of that Sub Audio Access Unit. At T3, SW5 is switched to (STC minus the STC offset). Then, the presentation is activated by the PTS of the Sub Audio Pack contained in the subsequent P-EVOB-TY2. The time T3 itself does not appear in the data structure. The Sub Audio Access Unit shall continue to be decoded at T3.
<Timing of the Main Video decoding (T4)>
Let T4 be the sum of the time when the last Main Video Access Unit contained in the preceding P-EVOB-TY2 is decoded and the decoding duration of that Main Video Access Unit. At T4, SW3 is switched to (STC minus the STC offset). Then, the decoding is activated by the Decoding Time Stamp (DTS) of the Main Video Pack contained in the subsequent P-EVOB-TY2. The time T4 itself does not appear in the data structure.
<Timing of the Sub Video decoding (T5)>
Let T5 be the sum of the time when the last Sub Video Access Unit contained in the preceding P-EVOB-TY2 is decoded and the decoding duration of that Sub Video Access Unit.
At T5, SW6 is switched to (STC minus the STC offset). Then, the decoding is activated by the DTS of the Sub Video Pack contained in the subsequent P-EVOB-TY2. The time T5 itself does not appear in the data structure.
<Timing of the PCI / Sub-picture / Main Video presentation (T6)>
Let T6 be the sum of the time when the last Main Video Access Unit contained in the preceding program stream is presented and the presentation duration of that Main Video Access Unit. At T6, SW4 is switched to (STC minus the STC offset). Then, the presentation is activated by the PTS of the Main Video Pack contained in the subsequent P-EVOB-TY2. After T6, the presentation timing of the Sub-picture and PCI is also determined by (STC minus the STC offset).
<Timing of the Sub Video presentation (T7)>
Let T7 be the sum of the time when the last Sub Video Access Unit contained in the preceding program stream is presented and the presentation duration of that Sub Video Access Unit. At T7, SW7 is switched to (STC minus the STC offset). Then, the presentation is activated by the PTS of the Sub Video Pack contained in the subsequent P-EVOB-TY2.
(The seamless playback restrictions for the Sub Video are tentative.) In case T7 is (approximately) equal to T6, the seamless presentation of the Sub Video is guaranteed. In case T7 is earlier than T6, the presentation of the Sub Video causes some gap. T7 shall not exceed T6.
<Resetting the STC>
As soon as SW1 to SW7 have been switched to (STC minus the STC offset), the STC is reset according to the value of (STC minus the STC offset), and SW1 to SW7 are switched back to the STC.
<M-ADPI: Main Audio Decoder Pause Information for a Main Audio discontinuity>
The M-ADPI comprises the STC value at which the pause state starts (the Main Audio Stop Presentation Time in the P-EVOB-TY2) and the duration of the pause (the Main Audio Gap Length in the P-EVOB-TY2). If an M-ADPI with a non-zero pause duration is given, the Main Audio Decoder does not decode a Main Audio Access Unit during the pause duration. A Main Audio discontinuity shall be allowed only in a P-EVOB-TY2 which is located in an Interleaved Block.
In addition, at most two discontinuities are allowed in one P-EVOB-TY2.
<S-ADPI: Sub Audio Decoder Pause Information for a Sub Audio discontinuity>
The S-ADPI comprises the STC value at which the pause state starts (the Sub Audio Stop Presentation Time in the P-EVOB-TY2) and the duration of the pause (the Sub Audio Gap Length in the P-EVOB-TY2). If an S-ADPI with a non-zero pause duration is given, the Sub Audio Decoder does not decode a Sub Audio Access Unit during the pause duration. A Sub Audio discontinuity shall be allowed only in a P-EVOB-TY2 that is located in an Interleaved Block. In addition, at most two discontinuities are allowed in one P-EVOB-TY2.
4.3.5 Secondary Enhanced Video Object (S-EVOB)
Depending on the application, content such as graphic video or animation, for example, can be processed.
4.3.6 Input Buffer Model for the Secondary Enhanced Video Object (S-EVOB)
For the Secondary Enhanced Video Object, a medium similar to that for the main video can be used as the input buffer. Alternatively, another medium can be used as the source.
4.3.7 Environment for Advanced Content Playback
Figure 12 shows the environment of the Advanced Content Player. The Advanced Content Player is a logical player for Advanced Content. The data sources for Advanced Content are the disc, the Network Server and Persistent Storage. For Advanced Content playback, a category 2 or 3 disc is necessary. Any type of Advanced Content data can be stored on the disc. Any type of Advanced Content except the Primary Video Set can be stored in Persistent Storage and on the Network Server. Regarding the details of the Advanced Content, see [6. Advanced Content]. User input events originate from the user input devices, such as the remote control or the front panel of the HD DVD player. The Advanced Content Player is responsible for routing the user input events to the Advanced Content and for generating the appropriate responses. (Refer to the details of the user input model.)
The audio and video outputs are presented on the speakers and display devices, respectively. The video output model is described in [4.3.17.1 Video Mixing Model]. The audio output model is described in [4.3.17.2 Audio Mixing Model].
4.3.8 Overall System Model
The Advanced Content Player is a player for Advanced Content. A simplified Advanced Content Player is described in Figure 13. It consists of six logical functional modules: Data Access Manager, Data Cache, Navigation Manager, User Interface Manager, Presentation Engine and AV Renderer. The Data Access Manager is responsible for exchanging various types of data between the data sources and the internal modules of the Advanced Content Player.
The Data Cache is temporary data storage for the playback of Advanced Content. The Navigation Manager is responsible for controlling all the functional modules of the Advanced Content Player according to the descriptions in the Advanced Navigation. The User Interface Manager is responsible for controlling the user interface devices, such as the remote control or front panel of the HD DVD player, and for notifying the user input events to the Navigation Manager.
The Presentation Engine is responsible for reproducing the presentation materials, such as the Advanced Element, the Primary Video Set, and the Secondary Video Set. The AV Representative is responsible for mixing the video/audio inputs of the other modules and for providing output to external devices such as the speakers and the display. 4.3.9 Data Source This section shows what types of data sources are possible for the playback of Advanced Content. 4.3.9.1 Disc The disc is a mandatory data source for the playback of Advanced Content. The HD DVD player will have the HD DVD drive. Advanced Content must be authored so that it can be played back even if the only available data sources are the disc and the mandatory Persistent Storage. 4.3.9.2 Network Server The Network Server is an optional data source for the playback of Advanced Content, but the HD DVD player must have network access capability. The Network Server is usually operated by the content provider of the current disc. The Network Server is usually located on the Internet. 4.3.9.3 Persistent Storage There are two categories of Persistent Storage. One is called "Fixed Persistent Storage". This is a mandatory Persistent Storage device attached to the HD DVD player. FLASH memory is the typical device for this. The minimum capacity of Fixed Persistent Storage is 64 MB. The others are optional and are called "Additional Persistent Storage". They can be removable storage devices, such as USB memory, HDD, or a memory card. NAS is one of the possible Additional Persistent Storage devices. This specification does not specify the actual implementation of the device. These devices must follow the API model for Persistent Storage; see the details of the API model for Persistent Storage. 4.3.10 Disc Data Structure 4.3.10.1 Types of Data on the Disc The types of data that should/can be stored on the HD DVD disc are shown in Figure 14. The disc can store Advanced Content and Standard Content.
The possible data types of the Advanced Content are Advanced Navigation, Advanced Element, Primary Video Set, Secondary Video Set, and others. For details of the Standard Content, see [5. Standard Content]. Advanced Flow is a data format in which any type of Advanced Content file except for the Primary Video Set is archived. The format of the Advanced Flow is T.B.D., with no compression. For details on archiving, see [6.6 Archiving]. The Advanced Flow is multiplexed into the Primary Enhanced Video Object type 2 (P-EVOBS-TY2) and is delivered along with the P-EVOBS-TY2 data supplied to the Primary Video Player. With regard to the details of P-EVOBS-TY2, see [4.3.2 Primary Enhanced Video Objects Type 2 (P-EVOB-TY2)]. The same files that are archived in the Advanced Flow and are mandatory for the reproduction of the Advanced Content should also be stored as files. These duplicate copies are necessary to guarantee the reproduction of the Advanced Content, because the supply of the Advanced Flow may not be complete when playback of the Primary Video Set jumps. In this case, the necessary files are read directly from the disc and stored in the Data Cache before playback is reinitialized from the specified jump position. Advanced Navigation: Advanced Navigation files should be located as files. The Advanced Navigation files are read during the start sequence and interpreted for the playback of the Advanced Content. The Advanced Navigation files for startup must be located in the "ADV_OBJ" directory. Advanced Element: Advanced Element files can be located as files and can also be archived in the Advanced Flow, which is multiplexed in P-EVOB-TY2. Primary Video Set: There is only one Primary Video Set on the disc. Secondary Video Set: Secondary Video Set files can be located as files and can also be archived in the Advanced Flow, which is multiplexed in P-EVOB-TY2. Other Files: Other files may exist depending on the Advanced Content.
4.3.10.1.1 File Directory Settings In terms of the file system, files for Advanced Content should be located in directories as shown in Figure 15. "HDDVD_TS" directory: The "HDDVD_TS" directory must exist directly under the root directory. All files of an advanced VTS for the Primary Video Set and of one or more Standard Video Sets must reside in this directory. "ADV_OBJ" directory: The "ADV_OBJ" directory must exist directly under the root directory. All the startup files that belong to the Advanced Navigation must reside in this directory. Any Advanced Navigation, Advanced Element, and Secondary Video Set file may reside in this directory. Other directories for Advanced Content: "Other directories for Advanced Content" may exist only under the "ADV_OBJ" directory. Any Advanced Navigation, Advanced Element, and Secondary Video Set file may reside in these directories. The name of such a directory should consist of d-characters and d1-characters. The total number of sub-directories under "ADV_OBJ" (excluding the "ADV_OBJ" directory itself) must be less than 512. The depth of the directory tree must be equal to or less than 8. Files for Advanced Content: The total number of files under the "ADV_OBJ" directory must be limited to 512 X 2047, and the total number of files in each directory must be less than 2048. The name of each file must consist of d-characters or d1-characters, and the file name consists of the body, "." (period), and the extension. 4.3.11 Types of Data on the Network Server and in Persistent Storage Any Advanced Content file except for the Primary Video Set may exist on the Network Server and in Persistent Storage. The Advanced Navigation can copy any file from the Network Server or Persistent Storage to the File Cache by using the appropriate API(s). The Secondary Video Player can read the Secondary Video Set from the disc, the Network Server, or Persistent Storage into the Flow Buffer. For details of the network architecture, see [9. Network].
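The "ADV_OBJ" directory-tree constraints listed above can be sketched as a simple checker. The limits (fewer than 512 sub-directories, depth at most 8, fewer than 2048 files per directory, and the overall file-count limit) are the ones stated in the text; the checker itself and its path convention (a path like "ADV_OBJ/menu/logo.png") are illustrative.

```python
from collections import Counter

def check_adv_obj_tree(file_paths: list) -> list:
    """Return a list of constraint violations for file paths under ADV_OBJ."""
    problems = []
    dirs = set()
    per_dir = Counter()
    for path in file_paths:
        parts = path.split("/")
        # Record every directory on the path (the last component is the file).
        for i in range(1, len(parts)):
            dirs.add("/".join(parts[:i]))
        per_dir["/".join(parts[:-1])] += 1
        if len(parts) - 1 > 8:            # directory depth must be <= 8
            problems.append("depth > 8: " + path)
    subdirs = {d for d in dirs if d != "ADV_OBJ"}
    if len(subdirs) >= 512:               # sub-directory count must be < 512
        problems.append("too many sub-directories")
    for d, n in per_dir.items():
        if n >= 2048:                     # files per directory must be < 2048
            problems.append("too many files in " + d)
    if len(file_paths) > 512 * 2047:      # overall file-count limit
        problems.append("too many files in total")
    return problems
```

An authoring tool could run such a check before mastering; the d-character/d1-character naming rule is omitted here because the exact character classes are not given in this excerpt.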
Any Advanced Content file except for the Primary Video Set can be stored in Persistent Storage. 4.3.12 Advanced Content Player Model Figure 16 shows the detail of the Advanced Content Player system model. There are six main modules: the Data Access Manager, Data Cache, Navigation Manager, Presentation Engine, User Interface Manager, and AV Representative. With regard to the details of each functional module, see the following sections. • Data Access Manager - [4.3.13 Data Access Manager] • Data Cache - [4.3.14 Data Cache] • Navigation Manager - [4.3.15 Navigation Manager] • Presentation Engine - [4.3.16 Presentation Engine] • AV Representative - [4.3.17 AV Representative] • User Interface Manager - [User Interface Manager] 4.3.13 Data Access Manager The Data Access Manager consists of the Disk Manager, the Network Manager, and the Persistent Storage Manager (see Figure 17). Persistent Storage Manager: The Persistent Storage Manager controls the exchange of data between the Persistent Storage devices and the internal modules of the Advanced Content Player. The Persistent Storage Manager is responsible for providing the file access API set for the Persistent Storage devices. Persistent Storage devices can support file read/write functions. Network Manager: The Network Manager controls the exchange of data between the Network Server and the internal modules of the Advanced Content Player. The Network Manager is responsible for providing the file access API set for the Network Server. The Network Server usually supports the downloading of files, and some Network Servers can also support the uploading of files. The Navigation Manager invokes the download/upload of files between the Network Server and the File Cache according to the Advanced Navigation. The Network Manager also provides protocol-level access functions to the Presentation Engine.
The Secondary Video Player in the Presentation Engine can use this API set for the transmission of streams from the Network Server. With respect to the details of the network access capability, see [9. Network]. 4.3.14 Data Cache The Data Cache can be divided into two types of temporary data storage. One is the File Cache, which is a temporary buffer for file data. The other is the Flow Buffer, which is a temporary buffer for Secondary Video Set data. The Data Cache quota for the Flow Buffer is described in "Playlist00.xml", and the Data Cache is divided accordingly during the start sequence of the Advanced Content playback. The minimum size of the Data Cache is 64 MB. The maximum size of the Data Cache is T.B.D. (see Figure 18). 4.3.14.1 Initialization of the Data Cache The configuration of the Data Cache is changed during the start sequence of the Advanced Content playback. "Playlist00.xml" can include the size of the Flow Buffer. If no Flow Buffer size is present, this indicates a Flow Buffer size equal to zero. The byte size of the Flow Buffer is calculated as follows: < bufferFlow size = "1024" / > Size of the Flow Buffer = 1024 X 2 (KByte) = 2048 (KByte). The minimum size of the Flow Buffer is 0 bytes. The maximum size of the Flow Buffer is T.B.D. With respect to the details of the start sequence, see [4.3.28.2 Advanced Content Start Sequence]. 4.3.14.2 File Cache The File Cache is used as a temporary file cache between the Data Sources, the Navigation Engine, and the Presentation Engine. Advanced Content files, such as graphic images, sound effects, text, and typefaces, must be stored in the File Cache before they are accessed by the Navigation Manager or the Advanced Presentation Engine. 4.3.14.3 Flow Buffer The Flow Buffer is used as a temporary data buffer for Secondary Video Set data by the Secondary Video Presentation Engine in the Secondary Video Player.
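The Flow Buffer sizing rule given in 4.3.14.1 above can be written out directly: the size attribute from "Playlist00.xml" is multiplied by 2 and interpreted in kilobytes, and a missing element means a zero-byte Flow Buffer. The element and attribute names in the text are garbled in this copy, so the function below only models the arithmetic.

```python
from typing import Optional

def flow_buffer_bytes(size_attr: Optional[int]) -> int:
    """Compute the Flow Buffer size in bytes from the playlist size attribute."""
    if size_attr is None:
        return 0                    # no element present: Flow Buffer size is zero
    return size_attr * 2 * 1024     # size X 2 (KByte), converted to bytes
```

For the example in the text, a size attribute of 1024 yields 2048 KByte.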
The Secondary Video Player requests the Network Manager to fetch a part of the S-EVOB of the Secondary Video Set into the Flow Buffer. The Secondary Video Player then reads the S-EVOB data from the Flow Buffer and feeds it to the demultiplexer module in the Secondary Video Player. With respect to the details of the Secondary Video Player, see [4.3.16.4 Secondary Video Player]. 4.3.15 Navigation Manager The Navigation Manager consists of two main functional modules, the Advanced Navigation Engine and the File Cache Manager (see Figure 19). 4.3.15.1 Advanced Navigation Engine The Advanced Navigation Engine controls all the playback behavior of the Advanced Content and also controls the Advanced Presentation Engine according to the Advanced Navigation. The Advanced Navigation Engine consists of the Syntactic Analysis, the Declarative Engine, and the Programming Engine. See Figure 19. 4.3.15.1.1 Syntactic Analysis The Syntactic Analysis reads the Advanced Navigation files and then parses them. The results of the parsing are sent to the appropriate modules, the Declarative Engine and the Programming Engine. 4.3.15.1.2 Declarative Engine The Declarative Engine handles and controls the declarative behavior of the Advanced Content according to the Advanced Navigation. The Declarative Engine has the following responsibilities: • Control of the Advanced Presentation Engine • Layout of graphic objects and advanced text • Style of graphic objects and advanced text • Timing control of the scheduled graphic plane behavior and of sound effect playback • Control of the Primary Video Player • Configuration of the Primary Video Set, including the registration of the title playback sequence (Title Timeline)
• High-level player control • Control of the Secondary Video Player • Configuration of the Secondary Video Set • High-level player control 4.3.15.1.3 Programming Engine The Programming Engine handles event-driven behaviors, API set calls, and any other kind of Advanced Content control. User interface events are typically handled by the Programming Engine, and they can change the behavior of the Advanced Navigation that is defined in the Declarative Engine. 4.3.15.2 File Cache Manager The File Cache Manager is responsible for • supplying files archived in the Advanced Flow in the P-EVOBS from the demux module in the Primary Video Player • supplying files archived in the Advanced Flow on the Network Server or in Persistent Storage • managing the lifetime of the files in the File Cache • retrieving files when a file required by the Advanced Navigation or by the Presentation Engine is not stored in the File Cache. The File Cache Manager consists of the ADV_PCK Buffer and the File Extractor. 4.3.15.2.1 ADV_PCK Buffer The File Cache Manager receives Advanced Flow PCKs archived in P-EVOBS-TY2 from the demultiplexer module in the Primary Video Player. The PS header of each Advanced Flow PCK is removed, and the elementary data is then stored in the ADV_PCK Buffer. The File Cache Manager also takes Advanced Flow files from the Network Server or from Persistent Storage. 4.3.15.2.2 File Extractor The File Extractor extracts the files archived in the Advanced Flow from the ADV_PCK Buffer. The extracted files are stored in the File Cache. 4.3.16 Presentation Engine The Presentation Engine is responsible for decoding the presentation data and for providing output to the AV Representative in response to the navigation commands from the Navigation Engine. It consists of four main modules: the Advanced Element Presentation Engine, the Secondary Video Player, the Primary Video Player, and the Decoder Engine. See Figure 20.
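The File Cache Manager path in 4.3.15.2 above can be sketched end to end: the PS header of each Advanced Flow PCK is stripped, the elementary data accumulates in the ADV_PCK Buffer, and the File Extractor then unpacks the archived files into the File Cache. The header length and the (offset, length) archive index below are invented purely for illustration; the real archive format is T.B.D. in the text.

```python
PS_HEADER_SIZE = 4  # illustrative value; not the real pack-header length

class FileCacheManager:
    def __init__(self):
        self.adv_pck_buffer = bytearray()
        self.file_cache = {}

    def receive_pck(self, pck: bytes) -> None:
        # Strip the PS header and keep only the elementary data.
        self.adv_pck_buffer.extend(pck[PS_HEADER_SIZE:])

    def extract(self, archive_index: dict) -> None:
        # File Extractor: unpack (offset, length) entries into the File Cache.
        data = bytes(self.adv_pck_buffer)
        for name, (off, length) in archive_index.items():
            self.file_cache[name] = data[off:off + length]
```

The same object could also accept whole Advanced Flow files fetched from the Network Server or Persistent Storage, which is the second supply path the text describes.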
4.3.16.1 Advanced Element Presentation Engine The Advanced Element Presentation Engine (Figure 21) provides two presentation streams to the AV Representative as output. One is the raster image for the Graphic Plane. The other is the sound effect stream. The Advanced Element Presentation Engine consists of the Sound Decoder, the Graphic Decoder, the Text Line/Typeface Maker, and the Design Manager. Sound Decoder: The Sound Decoder reads the WAV file from the File Cache and continuously provides LPCM data to the AV Representative, as directed by the Navigation Engine. Graphic Decoder: The Graphic Decoder retrieves graphic data, such as PNG or JPEG images, from the File Cache.
These image files are decoded and sent to the Design Manager in response to the requests of the Design Manager. Text Line/Typeface Maker: The Text Line/Typeface Maker retrieves font data from the File Cache to generate text images. It receives the text data from the Navigation Manager or the File Cache. The text images are generated and sent to the Design Manager in response to the requests of the Design Manager. Design Manager: The Design Manager has the responsibility of generating the raster image of the Graphic Plane for the AV Representative. The design information comes from the Navigation Manager when the raster image is to be changed. The Design Manager invokes the Graphic Decoder to decode the specific graphic objects that are to be located in the raster image. The Design Manager also invokes the Text Line/Typeface Maker to generate the text images that are likewise located in the raster image. The Design Manager places the graphic images at the appropriate positions over the lower layer and calculates the pixel values when an object has an alpha channel/value. Finally, it sends the raster image to the AV Representative. 4.3.16.2 Advanced Subtitle Player (Figure 22) 4.3.16.3 Typeface Rendering System (Figure 23) 4.3.16.4 Secondary Video Player The Secondary Video Player is responsible for playing additional video content, Complementary Audio, and Complementary Subtitles. These additional presentation contents can be stored on the disc, the Network Server, and Persistent Storage. When the content is on the disc, it needs to be stored in the File Cache before it can be accessed by the Secondary Video Player. Contents from the Network Server must be stored in the Flow Buffer before being fed to the demultiplexer/decoder, to avoid data shortage caused by fluctuation of the bit rate of the network transport path.
Contents of relatively short length can be stored in the File Cache before being read by the Secondary Video Player. The Secondary Video Player consists of the Secondary Video Playback Engine and the Demultiplexer. The Secondary Video Player connects the appropriate decoders in the Decoder Engine according to the stream types in the Secondary Video Set (see Figure 24). The Secondary Video Set cannot contain two audio streams at the same time, so only one audio decoder is ever connected to the Secondary Video Player. Secondary Video Playback Engine: The Secondary Video Playback Engine is responsible for controlling all the functional modules in the Secondary Video Player in response to the requests of the Navigation Manager. The Secondary Video Playback Engine reads and analyzes the TMAP file to find the appropriate reading position in the S-EVOB. Demultiplexer (Demux): The Demultiplexer reads and distributes the S-EVOB stream to the appropriate decoders that are connected to the Secondary Video Player. The Demultiplexer also has the responsibility of outputting each PCK in the S-EVOB at the exact SCR timing. When the S-EVOB consists of only one stream of video, audio, or Advanced Subtitle, the Demultiplexer simply supplies it to the decoder at the exact SCR timing. 4.3.16.5 Primary Video Player The Primary Video Player is responsible for playing the Primary Video Set. The Primary Video Set must be stored on the disc. The Primary Video Player consists of the DVD Playback Engine and the Demultiplexer. The Primary Video Player connects the appropriate decoders in the Decoder Engine according to the stream types in the Primary Video Set (see Figure 25).
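The pack dispatch performed by these demultiplexers can be sketched as a routing table: each pack (PCK) is sent to the decoder connected for its stream type, and packs with no connected decoder are dropped. The pack-type names follow the text (A_PCK, SP_PCK, N_PCK); the dispatch table itself is illustrative, and the SCR-accurate timing of real output is not modeled.

```python
def demux(pcks, decoders):
    """Route (pck_type, payload) pairs to the connected decoders.

    `decoders` maps a pack type such as "A_PCK" or "SP_PCK" to a callable;
    packs with no connected decoder are simply discarded.
    """
    for pck_type, payload in pcks:
        decoder = decoders.get(pck_type)
        if decoder is not None:
            decoder(payload)
```

For example, with only an audio decoder and a sub-picture decoder connected, navigation packs (N_PCK) pass through without being delivered to any decoder.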
DVD Playback Engine: The DVD Playback Engine is responsible for controlling all the functional modules in the Primary Video Player in response to the requests of the Navigation Manager. The DVD Playback Engine reads and analyzes the IFO and TMAP files to find the appropriate reading position in the P-EVOBS-TY2, and it controls the special playback features of the Primary Video Set, such as multi-angle, audio/sub-picture selection, and sub-video/audio playback. Demultiplexer (Demux): The Demultiplexer reads the P-EVOBS-TY2 under the control of the DVD Playback Engine and distributes it to the appropriate decoders that are connected to the Primary Video Player. The Demultiplexer also has the responsibility of providing each PCK in the P-EVOB-TY2 to each decoder at the exact SCR timing.
For a multi-angle stream, it reads the appropriate interleaved block of the P-EVOB-TY2 on the disc according to the location information in the navigation pack (N_PCK) or the TMAP(s). The Demultiplexer is responsible for providing the appropriate number of audio packs (A_PCK) to the Main Audio Decoder or the Sub Audio Decoder, and the appropriate number of sub-picture packs (SP_PCK) to the SP Decoder. 4.3.16.6 Decoder Engine The Decoder Engine is an aggregation of six types of decoders: the Timed Text Decoder, Sub Image Decoder, Sub Audio Decoder, Sub Video Decoder, Main Audio Decoder, and Main Video Decoder. Each decoder is controlled by the playback engine of the connected player. See Figure 26. Timed Text Decoder: The Timed Text Decoder can only be connected to the demultiplexer module of the Secondary Video Player. It is responsible for decoding Advanced Subtitles, whose format is based on Timed Text, in response to the requests of the DVD Playback Engine. Only one of the Timed Text Decoder and the Sub Image Decoder can be active at a time. The output graphic plane is called the Sub-Image Plane and is shared by the outputs of the Timed Text Decoder and the Sub Image Decoder. Sub Image Decoder: The Sub Image Decoder can be connected to the demultiplexer module of the Primary Video Player. It is responsible for decoding the sub-image data in response to the requests of the DVD Playback Engine. Only one of the Timed Text Decoder and the Sub Image Decoder can be active at a time. The output graphic plane is called the Sub-Image Plane and is shared by the outputs of the Timed Text Decoder and the Sub Image Decoder. Sub Audio Decoder: The Sub Audio Decoder can be connected to the demultiplexer modules of the Primary Video Player and the Secondary Video Player. The Sub Audio Decoder can support up to 2 ch of audio and up to a 48 kHz sampling rate, which is called Sub Audio.
The Sub Audio can be supplied as the sub-audio stream of the Primary Video Set, the audio-only stream of the Secondary Video Set, or the multiplexed audio/video stream of the Secondary Video Set. The output audio stream of the Sub Audio Decoder is called the Sub Audio Flow. Sub Video Decoder: The Sub Video Decoder can be connected to the demultiplexer modules of the Primary Video Player and the Secondary Video Player. The Sub Video Decoder can support an SD-resolution video stream (the maximum supported resolution is preliminary), which is called Sub Video. The Sub Video can be supplied as the video stream of the Secondary Video Set or the sub-video stream of the Primary Video Set. The output video plane of the Sub Video Decoder is called the Sub Video Plane. Main Audio Decoder: The Main Audio Decoder can be connected to the demultiplexer modules of the Primary Video Player and the Secondary Video Player. The Main Audio Decoder can support up to 7.1 ch of multi-channel audio and up to a 96 kHz sampling rate, which is called Main Audio. The Main Audio can be supplied as the primary audio stream of the Primary Video Set or the audio-only stream of the Secondary Video Set. The output audio stream of the Main Audio Decoder is called the Main Audio Stream. Main Video Decoder: The Main Video Decoder is only connected to the demultiplexer module of the Primary Video Player. The Main Video Decoder can support an HD-resolution video stream, which is called Main Video. The Main Video is only supported in the Primary Video Set. The output video plane of the Main Video Decoder is called the Main Video Plane. 4.3.17 AV Representative The AV Representative has two responsibilities. One is to collect the graphic planes from the Presentation Engine and the User Interface Manager and to output the mixed video signal. The other is to collect the PCM streams from the Presentation Engine and to output the mixed audio signal.
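The video half of this mixing can be sketched as bottom-to-top alpha compositing of the graphic planes (Main Video at the bottom, Cursor on top, per the plane hierarchy of Figure 29). Real mixing works on full frames with per-pixel alpha; single RGB values with one alpha per plane are used here only to show the ordering, so the function is a simplification, not the specified algorithm.

```python
def composite(planes):
    """Alpha-composite a bottom-to-top list of ((r, g, b), alpha) planes."""
    out = (0.0, 0.0, 0.0)
    for color, alpha in planes:
        # Standard "over" blend: new plane weighted by its alpha,
        # everything already composited weighted by (1 - alpha).
        out = tuple(alpha * c + (1.0 - alpha) * o
                    for c, o in zip(color, out))
    return out
```

The bottom-to-top order implied by the model is: Main Video Plane, Sub-Video Plane, Sub-Image Plane, Graphic Plane, Cursor Plane.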
The AV Representative consists of the Graphic Representation Engine and the Audio Mixing Engine (see Figure 27). Graphic Representation Engine: The Graphic Representation Engine can receive four graphic planes from the Presentation Engine and one graphic plane from the User Interface Manager. The Graphic Representation Engine mixes these five planes according to the control information from the Navigation Manager, and then provides the mixed video signal as output. For details on video mixing, see [4.3.17.1 Video Mixing Model]. Audio Mixing Engine: The Audio Mixing Engine can receive three LPCM streams from the Presentation Engine. The Audio Mixing Engine mixes these three LPCM streams according to the mixing-level information from the Navigation Manager, and then outputs the mixed audio signal. 4.3.17.1 Video Mixing Model The Video Mixing Model in this specification is shown in Figure 28. There are five graphic inputs in this model. These are the Cursor Plane, the Graphic Plane, the Sub-Image Plane, the Sub-Video Plane, and the Main Video Plane. 4.3.17.1.1 Cursor Plane The Cursor Plane is the topmost plane of the five graphic inputs to the Graphic Representation Engine in this model. The Cursor Plane is generated by the Cursor Manager in the User Interface Manager. The cursor image can be replaced by the Navigation Manager according to the Advanced Navigation. The Cursor Manager is responsible for moving the cursor shape to the appropriate position in the Cursor Plane and for updating it to the Graphic Representation Engine. The Graphic Representation Engine receives the Cursor Plane and alpha-mixes it onto the lower planes according to the alpha information from the Navigation Engine. 4.3.17.1.2 Graphic Plane The Graphic Plane is the second plane of the five graphic inputs to the Graphic Representation Engine in this model. The Graphic Plane is generated by the Advanced Element Presentation Engine according to the Navigation Engine.
The Design Manager is responsible for generating the Graphic Plane using the Graphic Decoder and the Text Line/Typeface Maker. The size and the output frame rate must be identical to the video output of this model. An animation effect can be realized through a series of graphic images (cell animation). No alpha information for this plane is supplied from the Navigation Manager to the Superposition Controller; these values are supplied in the alpha channel of the Graphic Plane itself. 4.3.17.1.3 Sub-Image Plane The Sub-Image Plane is the third plane of the five graphic inputs to the Graphic Representation Engine in this model. The Sub-Image Plane is generated by the Timed Text Decoder or by the Sub Image Decoder in the Decoder Engine. The Primary Video Set may include an appropriate set of sub-image images for the output frame size. If an appropriately sized set of SP images exists, the SP Decoder sends the generated raster image directly to the Graphic Representation Engine. If no appropriately sized set of SP images exists, the scaler that follows the SP Decoder scales the raster image to the appropriate size and position and then sends it to the Graphic Representation Engine. Regarding the details of the combination of the video output and the Sub-Image Plane, see [5.2.4 Video Composition Model] and [5.2.5 Video Output Model]. The Secondary Video Set can include Advanced Subtitles for the Timed Text Decoder. (The scaling rules and procedures are T.B.D.) The output data of the Sub Image Decoder has alpha channel information in it. (The alpha channel control for Advanced Subtitles is T.B.D.) 4.3.17.1.4 Sub-Video Plane The Sub-Video Plane is the fourth plane of the five graphic inputs to the Graphic Representation Engine in this model. The Sub-Video Plane is generated by the Sub Video Decoder in the Decoder Engine. The Sub-Video Plane is scaled by the scaler in the Decoder Engine according to the information from the Navigation Manager.
The output frame rate must be identical to that of the final video output. If there is information for cutting out the shape of an object in the Sub-Video Plane, this is done by the Chrominance Effect module in the Graphic Representation Engine. The chrominance information (or range) is supplied from the Navigation Manager according to the Advanced Navigation. The output plane of the Chrominance Effect module has two alpha values: one is 100% visible and the other is 100% transparent. The intermediate alpha value for overlaying onto the Main Video Plane is supplied from the Navigation Manager and applied by the Superposition Controller module in the Graphic Representation Engine. 4.3.17.1.5 Main Video Plane The Main Video Plane is the bottom plane of the five graphic inputs to the Graphic Representation Engine in this model. The Main Video Plane is generated by the Main Video Decoder in the Decoder Engine. The Main Video Plane is scaled by the scaler in the Decoder Engine according to the information from the Navigation Manager. The output frame rate must be identical to that of the final video output. The outer frame color of the Main Video Plane can be set by the Navigation Manager when the plane is scaled, according to the Advanced Navigation. The default value of the outer frame color is "0, 0, 0" (= black). Figure 29 shows the hierarchy of the Graphic Planes. 4.3.17.2 Audio Mixing Model The Audio Mixing Model in this specification is shown in Figure 30. There are three audio stream inputs in this model. These are the Sound Effect, the Sub Audio Flow, and the Main Audio Flow. The supported audio types are described in Table 4. The Sampling Rate Converter adjusts the audio sampling rate of the output of each sound/audio decoder to the sampling rate of the final audio output. The static mixing levels between the three audio streams are handled by the Sound Mixer in the Audio Mixing Engine according to the mixing-level information from the Navigation Engine.
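The Sound Mixer step just described can be sketched as a weighted sum of the three LPCM inputs. Sample-rate conversion is assumed to have been done already by the Sampling Rate Converter, and clipping to the [-1.0, 1.0] range stands in for whatever the real player does on overflow; the level keys are illustrative names, not values from the specification.

```python
def mix_audio(effect, sub, main, levels):
    """Mix three equal-length LPCM sample lists with per-stream mixing levels."""
    mixed = []
    for e, s, m in zip(effect, sub, main):
        v = e * levels["effect"] + s * levels["sub"] + m * levels["main"]
        mixed.append(max(-1.0, min(1.0, v)))  # clip to the LPCM range
    return mixed
```

In the real player the three inputs are the Sound Effect, the Sub Audio Flow, and the Main Audio Flow, and the level values come from the Navigation Manager.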
The final audio output signal depends on the HD DVD player. Table 4: Supported Audio Types (Preliminary) Sound Effect: The Sound Effect is typically used when a graphical button is pressed. Single-channel (mono) and stereo WAV formats are supported. The Sound Decoder reads the WAV file from the File Cache and sends the LPCM stream to the Audio Mixing Engine in response to the requests of the Navigation Engine. Sub Audio Flow: There are two types of Sub Audio Flow. The first is the Sub Audio Flow in the Secondary Video Set. If there is a Sub Video Flow in the Secondary Video Set, the Secondary Audio must be synchronized with the Secondary Video. If there is no Secondary Video Flow in the Secondary Video Set, the Secondary Audio may or may not be synchronized with the Primary Video Set. The other is the Sub Audio Flow in the Primary Video Set. This must be synchronized with the Primary Video. The control of the metadata in the elementary stream of the Sub Audio Flow is handled by the Sub Audio Decoder in the Decoder Engine. Main Audio Flow: The Main Audio Flow is an audio stream for the Primary Video Set. With regard to the details, see. The control of the metadata in the elementary stream of the Main Audio Flow is handled by the Main Audio Decoder in the Decoder Engine. 4.3.18 User Interface Manager The User Interface Manager includes several user interface device controllers, such as those for the Front Panel, Remote Control, Keyboard, Mouse, and game pad, as well as the Cursor Manager. Each controller detects the availability of its device and observes the user's operation events. Each event is defined in this specification. For the details of the user input events, see [4.3.22 User Input Model]. The user input events are reported to the event handler in the Navigation Manager. The Cursor Manager controls the shape and position of the cursor. It updates the Cursor Plane according to movement events from the related devices, such as the Mouse, the game pad, and so on.
See Figure 31. 4.3.19 Disc Data Supply Model Figure 32 shows the data supply model for Advanced Content on the disc. The Disk Manager provides low-level disc access functions and file access functions. The Navigation Manager uses the file access functions to read the Advanced Navigation in the start sequence. The Primary Video Player can use both kinds of functions to read the IFO and TMAP files. The Primary Video Player usually requests a specific portion of the P-EVOBS using the low-level disc access functions. The Secondary Video Player does not directly access the data on the disc. The files are stored in the File Cache and read from there by the Secondary Video Player. When the demultiplexer (demux) module in the Primary Video Player demultiplexes the P-EVOB-TY2, there may be Advanced Flow packs (ADV_PCK) in it. The Advanced Flow packs are sent to the File Cache Manager. The File Cache Manager extracts the files archived in the Advanced Flow and stores them in the File Cache. 4.3.20 Network and Persistent Storage Data Supply Model Figure 33 shows the data supply model for Advanced Content from the Network Server and Persistent Storage. The Network Server and Persistent Storage can store any Advanced Content file except for the Primary Video Set. The Network Manager and the Persistent Storage Manager provide file access functions. The Network Manager also provides access functions at the protocol level.
The File Cache Manager in the Navigation Manager can take Advanced Stream files directly from the Network Server and Persistent Storage through the Network Manager and the Persistent Storage Manager. The Advanced Navigation Engine cannot directly access the Network Server or Persistent Storage. Files must be stored in the File Cache before being read by the Advanced Navigation Engine. The Advanced Element Presentation Engine can handle files that are located on the Network Server or in Persistent Storage. The Advanced Element Presentation Engine invokes the File Cache Manager to obtain files that are not located in the File Cache. The File Cache Manager checks the File Cache Table to determine whether the required file is cached in the File Cache. If the file exists in the File Cache, the File Cache Manager passes the file data directly to the Advanced Presentation Engine. If the file does not exist in the File Cache, the File Cache Manager takes the file from its original location into the File Cache, and then passes the file data to the Advanced Presentation Engine. The Secondary Video Player can take Secondary Video Set files, such as TMAP and S-EVOB, directly from the Network Server and Persistent Storage through the Network Manager and the Persistent Storage Manager, as well as from the File Cache. Typically, the Secondary Video Playback Engine uses the Stream Buffer to take an S-EVOB from the Network Server. It stores part of the S-EVOB data in the Stream Buffer and feeds it to the demultiplexer module in the Secondary Video Player. 4.3.21 Data Storage Model Figure 34 describes the Data Storage Model in this specification. There are two types of data storage devices, Persistent Storage and the Network Server. (The details of data handling between the Data Sources are T.B.D.) There are two types of files that are generated during Advanced Content Playback.
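The lookup-then-fetch behavior of the File Cache Manager described above can be sketched as follows. The class, its methods and the `fetch` callback are hypothetical; the specification defines the behavior (check the File Cache Table, fetch from the original location on a miss, then pass the data on), not this API.

```python
import os

class FileCacheManager:
    """Illustrative sketch of the File Cache lookup described above."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        self.table = {}  # File Cache Table: file name -> cached path

    def get(self, name, fetch):
        # Compare against the File Cache Table first.
        if name in self.table:
            return self.table[name]  # cache hit: pass the data directly
        # Cache miss: take the file from its original location (disc,
        # Network Server or Persistent Storage) into the File Cache.
        path = os.path.join(self.cache_dir, os.path.basename(name))
        with open(path, "wb") as out:
            out.write(fetch(name))
        self.table[name] = path
        return path
```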
One is the property-type file that is generated by the Programming Engine in the Navigation Manager. Its format depends on the descriptions of the Programming Engine. The other is the image file that is captured by the Presentation Engine. 4.3.22 User Input Model (Figure 35) All user input events must be handled by the Programming Engine. User operations via user interface devices, such as the remote control or front panel, are first input to the User Interface Manager. The User Interface Manager translates the player-dependent input signals into defined events, such as the "UIEvent" of the "RemoteController Interface Event". The translated user input events are transmitted to the Programming Engine. The Programming Engine has an ECMA Script Processor, which is responsible for executing programmable behaviors. Programmable behaviors are defined by ECMA Script descriptions provided by the script file(s) in the Advanced Navigation. The user event handler code(s) defined in the script file(s) are registered in the Programming Engine. When the ECMA Script Processor receives a user input event, it searches the registered content handler code(s) for a handler code corresponding to the current event. If it exists, the ECMA Script Processor executes it. If it does not exist, the ECMA Script Processor searches the default handler codes. If a corresponding default handler code exists there, the ECMA Script Processor executes it. If it does not exist, the ECMA Script Processor discards the event or outputs a warning signal. 4.3.23 Video Output Timing 4.3.24 SD Conversion of the Graphics Plane The Graphics Plane is generated by the Layout Manager in the Advanced Element Presentation Engine.
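The handler search described above (registered handler code first, then the default handler codes, otherwise discard) can be sketched as a small dispatcher. All names here are illustrative; the specification defines the search order, not this API.

```python
def dispatch_user_event(event, registered_handlers, default_handlers):
    """Search order sketched from the text: registered handler code
    first, then the default handler codes, otherwise discard."""
    handler = registered_handlers.get(event)
    if handler is None:
        handler = default_handlers.get(event)
    if handler is None:
        return "discarded"  # no handler: drop the event (or emit a warning)
    return handler()
```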
If the generated frame resolution does not match the resolution of the final video output of the HD DVD player, the graphics frame is scaled by the scaler function in the Layout Manager according to the current output mode, such as SD Pan-scan or SD Letterbox. The scaling for SD Pan-scan is shown in Figure 36A. The scaling for SD Letterbox is shown in Figure 36B. 4.3.25 Network For details, see chapter 9. 4.3.26 Presentation Timing Model The presentation of Advanced Content is managed according to a master time, which defines the presentation plan and the synchronization relationships among the presentation objects. The master time is called the Title Timeline. A Title Timeline is defined for each logical playback period, which is called a Title. The timing unit of the Title Timeline is 90 kHz. There are five types of presentation objects: the Primary Video Set (PVS), the Secondary Video Set (SVS), the Complementary Audio, the Complementary Subtitle and the Advanced Application (ADV_APP). 4.3.26.1 Presentation Object There are the following five types of presentation objects. • Primary Video Set (PVS) • Secondary Video Set (SVS) • Sub Video / Sub Audio • Sub Video • Sub Audio • Complementary Audio (for the Primary Video Set) • Complementary Subtitle (for the Primary Video Set) • Advanced Application (ADV_APP) 4.3.26.2 Attributes of the Presentation Object There are two types of attributes for a Presentation Object. The first is "scheduled"; the other is "synchronized". 4.3.26.2.1 Scheduled and Synchronized Presentation Object The start and end times of this type of object are pre-assigned in the Playlist file. Its presentation timing is synchronized with the time on the Title Timeline. The Primary Video Set, Complementary Audio and Complementary Subtitle are this type of object. The Secondary Video Set and the Advanced Application can be treated as this type of object.
For the detailed behavior of the Scheduled and Synchronized Presentation Object, see [4.3.26.4 Trick Play]. 4.3.26.2.2 Scheduled and Non-Synchronized Presentation Object The start and end times of this type of object are pre-assigned in the Playlist file. Its presentation timing follows its own time base. The Secondary Video Set and the Advanced Application can be treated as this type of object. For the detailed behavior of the Scheduled and Non-Synchronized Presentation Object, see [4.3.26.4 Trick Play]. 4.3.26.2.3 Un-Scheduled and Synchronized Presentation Object This type of object is not described in the Playlist file. The object is activated by user events handled by the Advanced Application. Its presentation timing is synchronized with the Title Timeline. 4.3.26.2.4 Un-Scheduled and Non-Synchronized Presentation Object This type of object is not described in the Playlist file. The object is activated by user events handled by the Advanced Application. Its presentation timing follows its own time base. 4.3.26.3 Playlist File The Playlist File is used for two purposes in Advanced Content Playback. The first is the initial configuration of the HD DVD player system. The other is the definition of how to play the various types of Advanced Content Presentation Objects. The Playlist File consists of the following configuration information for Advanced Content Playback. • Object Assignment Information for each Title • Playback Sequence for each Title • System Configuration for Advanced Content Playback Figure 37 shows the overall view of the Playlist except for the System Configuration. 4.3.26.3.1 Object Assignment Information The Title Timeline defines the default playback sequence and the timing relationships among the Presentation Objects for each Title.
For a Scheduled Presentation Object, such as the Advanced Application, the Primary Video Set or the Secondary Video Set, its life period (start time and end time) is pre-assigned on the Title Timeline (see Figure 38). As the time of the Title Timeline progresses, each Presentation Object begins and ends its presentation. If the Presentation Object is synchronized with the Title Timeline, the life period pre-assigned on the Title Timeline is identical to its presentation period. Ex.) TT2 - TT1 = PT1_1 - PT1_0, where PT1_0 is the presentation start time of P-EVOB-TY2#1 and PT1_1 is the presentation end time of P-EVOB-TY2#1. The following description is an example of the Object Assignment Information:

<Title id="MainTitle">
  <MainVideoTrack id="PVS MainTitle">
    <Clip id="P-EVOB-TY2-0" src="file:///HDDVD_TS/AVMAP001.IFO" titleTimeHome="1000000" titleTimeTermination="2000000" clipTimeHome="0"/>
    <Clip id="P-EVOB-TY2-1" src="file:///HDDVD_TS/AVMAP002.IFO" titleTimeHome="2000000" titleTimeTermination="3000000" clipTimeHome="0"/>
    <Clip id="P-EVOB-TY2-2" src="file:///HDDVD_TS/AVMAP003.IFO" titleTimeHome="3000000" titleTimeTermination="4500000" clipTimeHome="0"/>
    <Clip id="P-EVOB-TY2-3" src="file:///HDDVD_TS/AVMAP005.IFO" titleTimeHome="5000000" titleTimeTermination="6500000" clipTimeHome="0"/>
  </MainVideoTrack>
  <SecondaryVideoTrack id="SVS Comment">
    <Clip id="S-EVOB-0" src="http://dvdforum.com/commentary/AVMAP001.TMAP" titleTimeHome="5000000" titleTimeTermination="6500000" clipTimeHome="0"/>
  </SecondaryVideoTrack>
  <Application id="App0" loadingInformation="file:///ADV_OBJ/App0/loadinginformation.xml"/>
  <Application id="App1" loadingInformation="file:///ADV_OBJ/App1/loadinginformation.xml"/>
</Title>

There is a restriction on the Object Assignment among the Secondary Video Set, the Complementary Audio and the Complementary Subtitle. These three Presentation Objects are played by the Secondary Video Player; it is therefore forbidden to assign two or more of these Presentation Objects to the Title Timeline simultaneously. For details on playback behavior, see [4.3.26.4 Trick Play]. The pre-assignment of a Presentation Object to the Title Timeline in the Playlist refers to the index information file of each Presentation Object. For the Primary Video Set and the Secondary Video Set, the TMAP file is referenced in the Playlist. For the Advanced Application, the loading information file is referenced in the Playlist. See Figure 39. 4.3.26.3.2 Playback Sequence The Playback Sequence defines the start position of each chapter as a time value on the Title Timeline. The end position of a chapter is given by the start position of the next chapter, or by the end of the Title Timeline for the last chapter (see Figure 40).
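The chapter end-position rule above (a chapter ends where the next one begins; the last chapter ends at the end of the Title Timeline) can be expressed compactly. A sketch with hypothetical names; times are Title Timeline values.

```python
def chapter_intervals(chapter_starts, title_end):
    """Turn chapter start times into (start, end) pairs per the rule above."""
    return [
        (start, chapter_starts[i + 1] if i + 1 < len(chapter_starts) else title_end)
        for i, start in enumerate(chapter_starts)
    ]
```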
The following description is an example of the Playback Sequence:

<ChapterList>
  <Chapter titleTimeHome="0"/>
  <Chapter titleTimeHome="10000000"/>
  <Chapter titleTimeHome="20000000"/>
  <Chapter titleTimeHome="25500000"/>
  <Chapter titleTimeHome="30000000"/>
  <Chapter titleTimeHome="45555000"/>
</ChapterList>

4.3.26.3.3 System Configuration For the use of the System Configuration, see [4.3.28.2 Advanced Content Start Sequence]. 4.3.26.4 Trick Play Figure 41 shows the relationship between the object assignment information on the Title Timeline and the actual presentation.
There are two Presentation Objects. The first is the Primary Video, which is a Synchronized Presentation Object. The other is the Advanced Application for the menu, which is a Non-Synchronized Object. It is assumed that the menu provides the playback control menu for the Primary Video, and that it includes several menu buttons to be pressed by the user's operation. The menu buttons have graphic effects whose effect duration is "T_BTN". <Real Time Progress (t0)> At time "t0" in Real Time Progress, the presentation of the Advanced Content begins. Along with the time progress of the Title Timeline, the Primary Video is played. The menu application also begins its presentation at "t0", but its presentation does not depend on the time progress of the Title Timeline. <Real Time Progress (t1)> At time "t1" in Real Time Progress, the user presses the "pause" button which is presented by the menu application. At that moment, the script that is related to the "pause" button holds the time progress of the Title Timeline at TT1. Holding the Title Timeline also holds the Video presentation at VT1. Meanwhile, the menu button effect that is related to the "pause" button starts from "t1". <Real Time Progress (t2)> At time "t2" in Real Time Progress, the effect of the menu button ends. The period "t2" - "t1" equals the duration of the button effect, "T_BTN". <Real Time Progress (t3)> At time "t3" in Real Time Progress, the user presses the "play" button which is presented by the menu application. At that moment, the script that is related to the "play" button restarts the time progress of the Title Timeline from TT1. By restarting the Title Timeline, the Video presentation is also restarted from VT1.
The effect of the menu button, which is related to the "play" button, starts from "t3". <Real Time Progress (t4)> At time "t4" in Real Time Progress, the effect of the menu button ends, the period "t4" - "t3" equaling the duration of the button effect, "T_BTN". <Real Time Progress (t5)> At time "t5" in Real Time Progress, the user presses the "jump" button which is presented by the menu application. At that moment, the script that is related to the "jump" button takes the time of the Title Timeline to a certain jump destination time, TT3. However, the jump operation for the Video presentation needs a certain period of time, so the time on the Title Timeline is held at that moment. On the other hand, the menu application keeps running no matter what the progress of the Title Timeline is; the effect of the menu button, which is related to the "jump" button, starts from "t5". <Real Time Progress (t6)> At time "t6" in Real Time Progress, the Video presentation is ready to start from VT3. At this time the Title Timeline starts from TT3. By restarting the Title Timeline, the Video presentation also starts from VT3. <Real Time Progress (t7)> At time "t7" in Real Time Progress, the effect of the menu button ends. The period "t7" - "t5" equals the duration of the button effect, "T_BTN". <Real Time Progress (t8)> At time "t8" in Real Time Progress, the Title Timeline reaches its end time, TTe. The Video presentation also reaches its end time, VTe, so the presentation ends.
As for the menu application, its life period is assigned up to TTe on the Title Timeline, so the presentation of the menu application also ends at TTe. 4.3.26.5 Object Assignment Position Figure 42 and Figure 43 show the possible pre-assignment positions for the Presentation Objects on the Title Timeline. For a Visual Presentation Object, such as the Advanced Application, a Secondary Video Set including the Sub Video stream, or the Primary Video Set, there are restrictions on the possible entry time positions on the Title Timeline; this is to align the timing of the entire visual display with the current Video output signal. In the case of TV systems with 525/60 (60Hz region), the possible entry positions are restricted to the following two cases: 3003 x n + 1501 or 3003 x n (where "n" is an integer starting from 0). In the case of TV systems with 625/50 (50Hz region), the possible entry positions are restricted to the following case: 1800 x m (where "m" is an integer starting from 0). For an Audio Presentation Object, such as the Complementary Audio or a Secondary Video Set including only Sub Audio, there is no restriction on the possible entry time position on the Title Timeline. 4.3.26.6 Advanced Application The Advanced Application (ADV_APP) consists of Composition Page files that may have one-way or bi-directional links between them, script files that share a namespace belonging to the Advanced Application, and the Advanced Element files that are used by the Composition Page(s) and the script file(s). During the presentation of an Advanced Application, there is always exactly one active Composition Page, and the active Composition Page jumps from one page to another. 4.3.26.7 Composition Page Jump There are the following three Composition Page Jump models: • Non-Synchronized Jump • Soft-Synchronized Jump • Hard-Synchronized Jump 4.3.26.7.1 Non-Synchronized Jump (Figure 45) The Non-Synchronized Jump model is a Composition Page Jump model for an Advanced Application that is a Non-Synchronized Presentation Object.
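The entry-position restriction above can be checked mechanically on 90 kHz Title Timeline ticks. A sketch; the function name and system labels are ours.

```python
def valid_entry_position(ticks: int, tv_system: str) -> bool:
    """Check a visual Presentation Object's entry time against the
    restrictions above (ticks are 90 kHz Title Timeline units)."""
    if tv_system == "525/60":      # 60 Hz region
        return ticks % 3003 in (0, 1501)   # 3003 x n or 3003 x n + 1501
    if tv_system == "625/50":      # 50 Hz region
        return ticks % 1800 == 0           # 1800 x m
    raise ValueError("unknown TV system: " + tv_system)
```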
This model consumes a certain period of time in preparation for beginning the presentation of the succeeding Composition Page. During this preparation period, the Advanced Navigation Engine loads the succeeding Composition Page to analyze it and, if needed, reconfigures the presentation modules in the Presentation Engine. The Title Timeline keeps running during this preparation period. 4.3.26.7.2 Soft-Synchronized Jump (Figure 46) The Soft-Synchronized Jump model is a Composition Page Jump model for an Advanced Application that is a Synchronized Presentation Object. In this model, the preparation period for the presentation of the succeeding Composition Page is included in the presentation period of the succeeding Composition Page. The time progress of the succeeding Composition Page starts only after the end time of the presentation of the preceding Composition Page. During the preparation period, the actual presentation of the succeeding Composition Page cannot be displayed; after the preparation ends, the actual presentation starts. 4.3.26.7.3 Hard-Synchronized Jump (Figure 47) The Hard-Synchronized Jump model is a Composition Page Jump model for an Advanced Application that is a Synchronized Presentation Object. In this model, during the preparation period for the presentation of the succeeding Composition Page, the Title Timeline is held. Consequently, the other Presentation Objects that are synchronized with the Title Timeline are also paused. After the preparation for the presentation of the succeeding Composition Page finishes, the Title Timeline resumes running, and then all the Presentation Objects resume playback. Soft- or Hard-Synchronized Jump can be set as the default for the initial Composition Page of an Advanced Application. 4.3.26.8 Timing of Graphics Frame Generation 4.3.26.8.1 Basic Graphics Frame Generation Model Figure 48 shows the timing of basic graphics frame generation.
4.3.26.8.2 Frame Dropping Model Figure 49 shows the frame dropping timing model. 4.3.27 Playback without Joining of Advanced Content 4.3.28 Advanced Content Playback Sequence 4.3.28.1 Extension This section describes the playback sequences of Advanced Content. 4.3.28.2 Advanced Content Start Sequence Figure 50 shows a start sequence flow chart for Advanced Content on disc. Reading the initial playlist file: After detecting whether the inserted HD DVD is of category 2 or category 3 type, the Advanced Content Player reads the initial playlist file, which includes the Object Assignment Information, the playback sequence and the system configuration. (The definition of the initial playlist file is T.B.D.)
Changing System Settings: The player changes the system resource settings of the Advanced Content Player. The size of the Stream Buffer is changed during this phase according to the Stream Buffer size described in the playlist file. All files and stream data in the File Cache and Stream Buffer are removed. Initialization of the Playback Sequence and Title Timeline Assignment: The Navigation Manager calculates where the Presentation Object(s) are presented on the Title Timeline of the first Title and where the chapter entry point(s) are. Preparation for the First Title Playback: The Navigation Manager reads and stores all the files that need to be stored in the File Cache prior to starting the first Title playback. They can be Advanced Element files for the Element Presentation Engine or TMAP/S-EVOB file(s) for the Secondary Video Player. The Navigation Manager initializes the presentation modules, such as the Advanced Element Presentation Engine, the Advanced Video Player and the Primary Video Player, in this phase. If there is a presentation of the Primary Video Set in the first Title, the Navigation Manager informs the presentation assignment information of the Primary Video Set on the Title Timeline of the first Title, in addition to specifying the navigation files for the Primary Video Set, such as the IFO and TMAP(s). The Primary Video Player reads the IFO and TMAPs from the disc, and then prepares the internal parameters for playback control of the Primary Video Set according to the informed presentation assignment information, in addition to establishing the connections between the Primary Video Player and the required decoder modules in the Decoder Engine. The same applies if a presentation object played by the Secondary Video Player, such as the Secondary Video Set, Complementary Audio or Complementary Subtitle, exists in the first Title.
The Navigation Manager informs the presentation assignment information of the presentation object on the Title Timeline of the first Title, in addition to specifying the navigation files for the presentation object, such as the TMAP. The Secondary Video Player reads the TMAP from the data source, and then prepares the internal parameters for playback control of the presentation object according to the informed presentation assignment information, in addition to establishing the connections between the Secondary Video Player and the required decoder modules in the Decoder Engine. Beginning of the First Title Playback: After the preparation for the first Title playback, the Advanced Content Player starts the Title Timeline. The presentation objects assigned on the Timeline start their presentation according to their presentation plans. 4.3.28.3 Advanced Content Playback Update Sequence Figure 51 shows a flow chart of the Advanced Content playback update sequence. The steps from "Reading the initial playlist file" to "Preparation for the First Title Playback" are the same as in the previous section, [4.3.28.2 Advanced Content Start Sequence]. Playing the Title: The Title is played by the Advanced Content Player. Is a new playlist file available? To update the Advanced Content playback, the Advanced Application is required to execute the update procedures. If the Advanced Application intends to update its presentation, the Advanced Application on the disc must carry a script that searches for and performs the update. The programming commands check the specified data source(s), typically a network server, for whether a new playlist file is available. Registering the playlist file: If a new playlist file is available, the commands executed by the Programming Engine download it to the File Cache and register it with the Advanced Content Player. The API details and definitions are T.B.D.
Progressive Restart: After the new playlist file is registered, Advanced Navigation issues the progressive restart API to rerun the start sequence. The progressive restart API resets all current parameters and playback settings, then restarts the start procedures from the step just after "Reading the initial playlist file". "Changing System Settings" and the subsequent procedures are executed on the basis of the new playlist file. 4.3.28.4 Transition Sequence between the Advanced VTS and the Standard VTS Playback of a category 3 type disc requires transition of playback between the Advanced VTS and the Standard VTS. Figure 52 shows a flow chart of this sequence. Advanced Content Playback: Playback of a category 3 type disc starts from Advanced Content Playback. During this phase, user input events are handled by the Navigation Manager. If a user event occurs that must be handled by the Primary Video Player, the Navigation Manager guarantees its transfer to the Primary Video Player. Encountering the Standard VTS Playback Command: The Advanced Content explicitly specifies the transition from Advanced Content Playback to Standard Content Playback by means of the CallStandardContentPlayer command of Advanced Navigation. CallStandardContentPlayer may have an argument that specifies the playback start position. When the Navigation Manager encounters the CallStandardContentPlayer command, it requests the Primary Video Player to suspend playback of the Advanced VTS, and issues the CallStandardContentPlayer command. Standard VTS Playback: When the Navigation Manager issues the CallStandardContentPlayer API, the Primary Video Player jumps to start the Standard VTS from the specified position. During this phase, the Navigation Manager is suspended, so user events are input directly to the Primary Video Player.
During this phase, the Primary Video Player is responsible for all playback transitions among the Standard VTSs based on navigation commands. Encountering the Advanced VTS Playback Command: The Standard Content explicitly specifies the transition from Standard Content Playback to Advanced Content Playback by means of the CallAdvancedContentPlayer navigation command. When the Primary Video Player encounters the CallAdvancedContentPlayer command, it stops playing the Standard VTS, then resumes the Navigation Manager from the execution point just after the call to the CallStandardContentPlayer command. 5.1.3.2.1.1 Resume sequence When the resume presentation is executed through the Resume() user operation or the RSM instruction of the navigation commands, the Player verifies the existence of Resume commands (RSM_CMDs) in the PGC specified by the RSM Information before starting the PGC playback. 1) When RSM_CMDs exist in the PGC, the RSM_CMDs are executed first. - If a Break instruction is executed in the RSM_CMDs, the execution of the RSM_CMDs is terminated and then the resume presentation is restarted. However, some information in the RSM Information, such as SPRM(8), may have been changed by the RSM_CMDs. - If a Branch instruction is executed in the RSM_CMDs, the resume presentation is terminated and playback starts from the new position specified by the branch instruction. 2) When no RSM_CMDs exist in the PGC, the resume presentation is executed as it is. 5.1.3.2.1.2 Resume Information The Player has only one set of RSM Information. The RSM Information is updated and maintained as follows: The RSM Information is maintained until it is updated by the CallSS instruction or the Menu_Call() operation.
- When the call process from TT_DOM to the Menu space is executed by the CallSS instruction or the Menu_Call() operation, the Player checks the "RSM_permission" flag in the current TT_PGC. 1) If the flag is permitted, the current RSM Information is updated to new RSM Information and then a menu is presented. 2) If the flag is prohibited, the current RSM Information is kept (not updated) and then a menu is presented.
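The RSM_permission rule above reduces to one conditional at the moment CallSS or Menu_Call() is executed. A sketch with illustrative names; the specification defines the RSM Information and the flag, not this function.

```python
def rsm_info_on_menu_call(current_rsm_info, new_rsm_info, rsm_permission: bool):
    """Return the RSM Information the Player holds after a menu call:
    updated only when the TT_PGC's RSM_permission flag permits it.
    In both cases a menu is presented afterwards."""
    if rsm_permission:
        return new_rsm_info    # 1) flag permitted: RSM Information updated
    return current_rsm_info    # 2) flag prohibited: kept (not updated)
```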
An example of the Resume Process is shown in Figure 53. In the figure, the Resume Process is basically executed in the following steps. (1) Execute either the CallSS instruction or the Menu_Call() operation (in a PGC whose "RSM_permission" flag is permitted) - The RSM Information is updated and a Menu is presented. (2) Execute the JumpTT instruction (jump to a PGC whose "RSM_permission" flag is prohibited) - A PGC is presented. (3) Execute either the CallSS instruction or the Menu_Call() operation (in a PGC whose "RSM_permission" flag is prohibited) - The RSM Information is not updated and a Menu is presented. (4) Execute the RSM instruction - The RSM_CMDs are executed using the RSM Information and the PGC is resumed from the suspended position or from the position specified by the RSM_CMDs. 5.1.4.2.4 Structure of the Menu PGC <About the Language Unit> 1) Each System Menu can be registered for one or more Menu Description Language(s). The Menu described by a specific Menu Description Language can be selected by the user. 2) Each Menu PGC consists of independent PGCs for each Menu Description Language. <Language Menu in FP_DOM> 1) The FP_PGC can have a Language Menu (FP_PGCM_EVOB) to be used for Language selection only. 2) Once the Language (code) is decided through this Language Menu, the Language (code) is used to select the Language Unit in the VMG Menu and in each VTS Menu. An example is shown in Figure 54. 5.1.4.3 HLI Availability in each PGC To use the same EVOB for both main contents, such as a movie title, and additional bonus contents, such as a game title with user input, an "HLI availability flag" is introduced for each PGC. An example of HLI availability in each PGC is shown in Figure 55. In this figure, there are two kinds of sub-picture streams in an EVOB: one for the subtitle, the other for the button. In addition, there is an HLI stream in the EVOB. PGC#1 is for the main content and its "HLI availability flag" is not available.
When PGC#1 is played, neither the HLI nor the sub-picture for the button is displayed. However, the sub-picture for the subtitle can be displayed. On the other hand, PGC#2 is for the game content and its "HLI availability flag" is available. When PGC#2 is played, both the HLI and the sub-picture for the button are displayed with the forced display command. However, the sub-picture for the subtitle is not displayed. This function saves disc space. 5.2 Navigation for Standard Content The Navigation Data for Standard Content is the attribute information and playback control information for the Presentation Data. There are a total of six types, namely Video Manager Information (VMGI), Video Title Set Information (VTSI), General Control Information (GCI), Presentation Control Information (PCI), Data Search Information (DSI), and Highlight Information (HLI). The VMGI is described at the beginning and at the end of the Video Manager (VMG), and the VTSI at the beginning and end of the Video Title Set. The GCI, PCI, DSI, and HLI are dispersed in the Enhanced Video Object Set (EVOBS) together with the Presentation Data. The contents and structure of each type of Navigation Data are defined below. In particular, the Program Chain Information (PGCI) described in the VMGI and VTSI is defined in 5.2.3 Program Chain Information. The navigation commands and parameters described in the PGCI and HLI are defined in 5.2.8 Navigation Commands and Navigation Parameters. Figure 56 shows the Navigation Data image mapping. 5.2.1 Video Manager Information (VMGI) The VMGI describes the information under the related HVDVD_TS directory, such as the information to search for a Title and the information to present the FP_PGC and VMGM, in addition to the Parental Management Information and each VTS_ATR and TXTDT.
The VMGI starts with the Video Manager Information Management Table (VMGI_MAT), followed by the Title Search Pointer Table (TT_SRPT), the Video Manager Menu PGCI Unit Table (VMGM_PGCI_UT), the Parental Management Information Table (PTL_MAIT), the Video Title Set Attribute Table (VTS_ATRT), the Text Data Manager (TXTDT_MG), the FP_PGC Menu Cell Address Table (FP_PGCM_C_ADT), the FP_PGC Menu Enhanced Video Object Unit Address Map (FP_PGCM_EVOBU_ADMAP), the Video Manager Menu Cell Address Table (VMGM_C_ADT), and the Video Manager Menu Enhanced Video Object Unit Address Map (VMGM_EVOBU_ADMAP), as shown in Figure 57. Each table is aligned on a boundary between Logical Blocks. For this purpose each table may be followed by up to 2047 padding bytes (containing 00h). 5.2.1.1 Video Manager Information Management Table (VMGI_MAT) A table describing the sizes of the VMG and VMGI, the start address of each piece of information in the VMG, the attribute information of the Enhanced Video Object Set for the Video Manager Menu (VMGM_EVOBS) and the like, as shown in Tables 5 to 9.
TABLE 5 Table 6 Table 7 (RBP 32 to 33) VERN Describes the version number of this Part 3: Video Specifications. b15 to b8: reserved b7 to b0: Book Part Version Book Part Version ... 0010 0000b: Version 2.0 Others: reserved Table 8 (RBP 34 to 37) VMG_CAT Describes the region management of each EVOBS in the VMG and the VTS(s) that are under the HVDVD_TS directory. b31 to b24: reserved b23 to b16: RMA #8, RMA #7, RMA #6, RMA #5, RMA #4, RMA #3, RMA #2, RMA #1 b15 to b8: reserved b7 to b4: reserved b3 to b0: VTS status RMA #n ... 0b: This volume may be played in region #n (n = 1 to 8) 1b: This volume shall not be played in region #n (n = 1 to 8) VTS status ... 0000b: There is no Advanced VTS 0001b: An Advanced VTS exists Others: reserved (RBP 254 to 257) VMGM_V_ATR Describes the Video attributes of VMGM_EVOBS. The value of each field shall be consistent with the information in the Video stream of VMGM_EVOBS. If there is no VMGM_EVOBS, enter '0b' in each bit. Table 9 (RBP 254 to 257) VMGM_V_ATR Fields: Video compression mode, TV system, Aspect ratio, Display mode, CC1, CC2, Source picture resolution, Source picture letterboxed, Source picture progressive mode; remaining bits reserved. Video compression mode ... 01b: Compliant with MPEG-2 10b: Compliant with MPEG-4 AVC 11b: Compliant with SMPTE VC-1 Others: reserved TV system ... 00b: 525/60 01b: 625/50 10b: High Definition (HD)/60* 11b: High Definition (HD)/50* *: HD/60 is used when down-converting to 525/60, and HD/50 is used when down-converting to 625/50. Aspect ratio ... 00b: 4:3 11b: 16:9 Others: reserved Display mode ... Describes the display modes allowed on the 4:3 monitor. When the "Aspect ratio" is '00b' (4:3), enter '11b'. When the "Aspect ratio" is '11b' (16:9), enter '00b', '01b' or '10b'. 
00b: Both Pan-scan* and Letterbox 01b: Pan-scan only* 10b: Letterbox only 11b: Not specified *: Pan-scan means the window with aspect ratio 4:3 taken from the decoded picture. CC1 ... 1b: Closed Caption data for Field 1 are recorded in the Video stream. 0b: Closed Caption data for Field 1 are not recorded in the Video stream. CC2 ... 1b: Closed Caption data for Field 2 are recorded in the Video stream. 0b: Closed Caption data for Field 2 are not recorded in the Video stream. Source picture resolution ... 0000b: 352x240 (525/60 system), 352x288 (625/50 system) 0001b: 352x480 (525/60 system), 352x576 (625/50 system) 0010b: 480x480 (525/60 system), 480x576 (625/50 system) 0011b: 544x480 (525/60 system), 544x576 (625/50 system) 0100b: 704x480 (525/60 system), 704x576 (625/50 system) 0101b: 720x480 (525/60 system), 720x576 (625/50 system) 0110b to 0111b: reserved 1000b: 1280x720 (HD/60 or HD/50 system) 1001b: 960x1080 (HD/60 or HD/50 system) 1010b: 1280x1080 (HD/60 or HD/50 system) 1011b: 1440x1080 (HD/60 or HD/50 system) 1100b: 1920x1080 (HD/60 or HD/50 system) 1101b to 1111b: reserved Source picture letterboxed ... Describes whether the video output (after the Video and the Sub-picture are mixed; refer to [Figure 4.2.2.1-2]) is letterboxed or not. When the "Aspect ratio" is '11b' (16:9), enter '0b'. When the "Aspect ratio" is '00b' (4:3), enter '0b' or '1b'. 0b: Not letterboxed 1b: Letterboxed (the Video source picture is letterboxed and the Sub-pictures (if any) are displayed only in the active picture area of the letterbox). Source picture progressive mode ... Describes whether the source picture is the interlaced picture or the progressive picture. 00b: Interlaced picture 01b: Progressive picture 10b: Not specified (RBP 342 to 533) VMGM_SPST_ATRT Describes each Sub-picture stream attribute (VMGM_SPST_ATR) for VMGM_EVOBS (Table 10). One VMGM_SPST_ATR is described for each existing Sub-picture stream. The stream numbers are assigned from '0' according to the order in which the VMGM_SPST_ATRs are described. 
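The enumerated video-attribute values listed above (compression mode, TV system, source picture resolution) can be tabulated for decoding. This is an illustrative sketch only; the lookup tables reproduce values stated in the text, but how the fields are packed into the 4-byte attribute word is not modeled here, and the names are invented:

```python
# Illustrative lookup tables for the enumerated attribute values above.
# Only the value-to-meaning mappings come from the text; packing of the
# fields into VMGM_V_ATR is not reproduced.

VIDEO_COMPRESSION = {0b01: "MPEG-2", 0b10: "MPEG-4 AVC", 0b11: "SMPTE VC-1"}
TV_SYSTEM = {0b00: "525/60", 0b01: "625/50", 0b10: "HD/60", 0b11: "HD/50"}
SOURCE_RESOLUTION = {
    # code: (525/60-family resolution, 625/50-family resolution)
    0b0000: ("352x240", "352x288"),
    0b0101: ("720x480", "720x576"),
    0b1100: ("1920x1080", "1920x1080"),  # HD/60 or HD/50 system
}

def describe_video(compression: int, tv_system: int, resolution: int):
    """Map raw field values to human-readable attribute strings."""
    return (VIDEO_COMPRESSION.get(compression, "reserved"),
            TV_SYSTEM[tv_system],
            SOURCE_RESOLUTION.get(resolution, ("reserved", "reserved")))

print(describe_video(0b01, 0b00, 0b0101))
```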
When the number of Sub-picture streams is less than '32', enter '0b' in each bit of VMGM_SPST_ATR for the unused streams. Table 10 VMGM_SPST_ATRT (Order of description) RBP (342 + 6n) to (347 + 6n): VMGM_SPST_ATR of Sub-picture stream #n (n = 0 to 31), 6 bytes each. Total: 192 bytes. 
The content of a VMGM_SPST_ATR is as follows: Table 11 VMGM_SPST_ATR Fields: Sub-picture coding mode, HD, SD-Wide, SD-PS, SD-LB; remaining bits reserved. Sub-picture coding mode ... 000b: Run-length for 2 bits/pixel defined in 5.5.3 Sub-picture Unit. (The value of PRE_HEAD is other than (0000h)) 001b: Run-length for 2 bits/pixel defined in 5.5.3 Sub-picture Unit. (The value of PRE_HEAD is (0000h)) 100b: Run-length for 8 bits/pixel defined in 5.5.4 Sub-picture Unit, for the 8-bit pixel depth Others: reserved HD ... When the "Sub-picture coding mode" is '001b' or '100b', this flag specifies whether the HD stream exists or not.
0b: The stream does not exist 1b: The stream exists SD-Wide ... When the "Sub-picture coding mode" is '001b' or '100b', this flag specifies whether the SD Wide stream exists or not. 0b: The stream does not exist 1b: The stream exists SD-PS ... When the "Sub-picture coding mode" is '001b' or '100b', this flag specifies whether the SD Pan-scan (4:3) stream exists or not. 0b: The stream does not exist 1b: The stream exists SD-LB ... When the "Sub-picture coding mode" is '001b' or '100b', this flag specifies whether the SD Letterbox (4:3) stream exists or not. 0b: The stream does not exist 1b: The stream exists Table 12 (RBP 1016 to 1023) FP_PGC_CAT Describes the category of FP_PGC. All bits are reserved except the Entry type. Entry type ... 1b: Entry PGC 5.2.2 Video Title Set Information (VTSI) VTSI describes the information for one or more Video Titles and for the Video Title Set Menu. VTSI describes the management information of these Titles, such as the information to search for a Part_of_Title (PTT), the information to play back the Enhanced Video Object Set and the Video Title Set Menu (VTSM), as well as the EVOBS attribute information. The VTSI starts with the Video Title Set Information Management Table (VTSI_MAT), followed by the Video Title Set Part-of-Title Search Pointer Table (VTS_PTT_SRPT), followed by the Program Chain Information Table 
of the Video Title Set (VTS_PGCIT), followed by the Video Title Set Menu PGCI Unit Table (VTSM_PGCI_UT), followed by the Video Title Set Time Map Table (VTS_TMAPT), followed by the Video Title Set Menu Cell Address Table (VTSM_C_ADT), followed by the Video Title Set Menu Enhanced Video Object Unit Address Map (VTSM_EVOBU_ADMAP), followed by the Video Title Set Cell Address Table (VTS_C_ADT), followed by the Video Title Set Enhanced Video Object Unit Address Map (VTS_EVOBU_ADMAP), as shown in FIG. 58. Each table shall be aligned on a boundary between Logical Blocks. For this purpose, each table may be followed by up to 2047 padding bytes (containing (00h)). 5.2.2.1 Video Title Set Information Management Table (VTSI_MAT) A table describing the size of VTS and VTSI, the start address of each information item in the VTSI and the attributes of the EVOBS in the VTS is shown in Table 13.
Table 13 (RBP 0 to 11) VTS_ID Describes "STANDARD-VTS" to identify the VTSI file, with the character set code of ISO 646 (a-characters). (RBP 12 to 15) VTS_EA Describes the end address of this VTS with RLBN from the first LB of this VTS. (RBP 28 to 31) VTSI_EA Describes the end address of this VTSI with RLBN from the first LB of this VTSI. (RBP 32 to 33) VERN Describes the version number of this Part 3: Video Specifications (Table 14). Table 14 (RBP 32 to 33) VERN b15 to b8: reserved b7 to b0: Book Part Version Book Part Version ... 0001 0000b: Version 1.0 Others: reserved (RBP 34 to 37) VTS_CAT Describes the application type of this VTS (Table 15). Table 15 (RBP 34 to 37) VTS_CAT Describes the application type of this VTS. b31 to b4: reserved b3 to b0: Application type Application type ... 0000b: Not specified 0001b: Karaoke Others: reserved (RBP 532 to 535) VTS_V_ATR Describes the Video attributes of VTSTT_EVOBS in this VTS (Table 16). The value of each field shall be consistent with the information in the Video stream of VTSTT_EVOBS. Table 16 (RBP 532 to 535) VTS_V_ATR Describes the Video attributes of VTSTT_EVOBS in this VTS. The value of each field shall be consistent with the information in the Video stream of VTSTT_EVOBS. Video compression mode ... 01b: Compliant with MPEG-2 10b: Compliant with MPEG-4 AVC 11b: Compliant with SMPTE VC-1 Others: reserved TV system ... 00b: 525/60 01b: 625/50 10b: High Definition (HD)/60* 11b: High Definition (HD)/50* *: HD/60 is used when down-converting to 525/60, and HD/50 is used when down-converting to 625/50. Aspect ratio ... 00b: 4:3 11b: 16:9 Others: reserved Display mode ... 
Describes the display modes allowed on the 4:3 monitor. When the "Aspect ratio" is '00b' (4:3), enter '11b'. When the "Aspect ratio" is '11b' (16:9), enter '00b', '01b' or '10b'. 00b: Both Pan-scan* and Letterbox 01b: Pan-scan only* 10b: Letterbox only 11b: Not specified *: Pan-scan means the window with aspect ratio 4:3 taken from the decoded picture. CC1 ... 1b: Closed Caption data for Field 1 are recorded in the Video stream. 0b: Closed Caption data for Field 1 are not recorded in the Video stream. CC2 ... 1b: Closed Caption data for Field 2 are recorded in the Video stream. 0b: Closed Caption data for Field 2 are not recorded in the Video stream. Source picture resolution ... 0000b: 352x240 (525/60 system), 352x288 (625/50 system) 0001b: 352x480 (525/60 system), 352x576 (625/50 system) 0010b: 480x480 (525/60 system), 480x576 (625/50 system) 0011b: 544x480 (525/60 system), 544x576 (625/50 system) 0100b: 704x480 (525/60 system), 704x576 (625/50 system) 0101b: 720x480 (525/60 system), 720x576 (625/50 system) 0110b to 0111b: reserved 1000b: 1280x720 (HD/60 or HD/50 system) 1001b: 960x1080 (HD/60 or HD/50 system) 1010b: 1280x1080 (HD/60 or HD/50 system) 1011b: 1440x1080 (HD/60 or HD/50 system) 1100b: 1920x1080 (HD/60 or HD/50 system) 1101b to 1111b: reserved Source picture letterboxed ... Describes whether the video output (after the Video and the Sub-picture are mixed; refer to [Figure 4.2.2.1-2]) is letterboxed or not. When the "Aspect ratio" is '11b' (16:9), enter '0b'.
When the "Aspect ratio" is '00b' (4:3), enter '0b' or '1b'. 0b: Not letterboxed 1b: Letterboxed (the Video source picture is letterboxed and the Sub-pictures (if any) are displayed only in the active picture area of the letterbox). Source picture progressive mode ... Describes whether the source picture is the interlaced picture or the progressive picture. 00b: Interlaced picture 01b: Progressive picture 10b: Not specified Film camera mode ... Describes the source picture mode for the 625/50 system. When the "TV system" is '00b' (525/60), enter '0b'. When the "TV system" is '01b' (625/50), enter '0b' or '1b'. When the "TV system" is '10b' (HD/60), enter '0b'.
When the "TV system" is '11b' (HD/50) and it is used to down-convert to 625/50, enter '0b' or '1b'. 0b: camera mode 1b: film mode As for the definition of camera mode and film mode, refer to ETS 300 294, Edition 2: 1995-12. (RBP 536 to 537) VTS_AST_Ns Describes the number of Audio streams of VTSTT_EVOBS in this VTS (Table 17). Table 17 (RBP 536 to 537) VTS_AST_Ns Describes the number of Audio streams of VTSTT_EVOBS in this VTS. b15 to b4: reserved b3 to b0: Number of Audio streams Number of Audio streams ... Describes a number between '0' and '8' Others: reserved (RBP 538 to 601) VTS_AST_ATRT Describes the attributes of each Audio stream of VTSTT_EVOBS in this VTS (Table 18). The value of each field shall be consistent with the information in the Audio stream of VTSTT_EVOBS. One VTS_AST_ATR is described for each Audio stream. An area for eight VTS_AST_ATRs is constantly reserved. The stream numbers are assigned from '0' according to the order in which the VTS_AST_ATRs are described. When the number of Audio streams is less than '8', enter '0b' in each bit of VTS_AST_ATR for the unused streams. 
The content of a VTS_AST_ATR is as follows: Table 19 VTS_AST_ATR Fields: Audio coding mode, Multichannel extension, Audio type, Audio application mode, Quantization/DRC, fs, Number of Audio channels, Language code (upper and lower bits), reserved (for the language code), Language code extension, Application information; remaining bits reserved. Audio coding mode ... 000b: reserved (for Dolby AC-3) 001b: MLP audio 010b: MPEG-1 or MPEG-2 without extension bitstream 011b: MPEG-2 with extension bitstream 100b: reserved 101b: Linear PCM audio with 1/1200 second sample data 110b: DTS-HD 111b: DD+ Note: For details of the requirements on "Audio coding mode", refer to 5.5.2 Audio and Annex N. Multichannel extension ... 0b: The relevant VTS_MU_AST_ATR is not effective 1b: Linked to the relevant VTS_MU_AST_ATR Note: This flag shall be set to '1b' when the Audio application mode is "Karaoke mode" or "Surround mode". Audio type ... 00b: Not specified 01b: Language included Others: reserved Audio application mode ... 00b: Not specified 01b: Karaoke mode 10b: Surround mode 11b: reserved Note: When the application type of VTS_CAT is set to '0001b' (Karaoke), this flag shall be set to '01b' in one or more VTS_AST_ATRs in the VTS. Quantization/DRC ... When the "Audio coding mode" is '110b' or '111b', enter '11b'. When the "Audio coding mode" is '010b' or '011b', Quantization/DRC is defined as: 00b: Dynamic range control data do not exist in the MPEG audio stream 01b: Dynamic range control data exist in the MPEG audio stream 10b: reserved 11b: reserved When the "Audio coding mode" is '001b' or '101b', Quantization/DRC is defined as: 00b: 16 bits 01b: 20 bits 10b: 24 bits 11b: reserved fs ... 
00b: 48 kHz 01b: 96 kHz Others: reserved Number of Audio channels ... 000b: 1ch (mono) 001b: 2ch (stereo) 010b: 3ch 011b: 4ch 100b: 5ch (multichannel) 101b: 6ch 110b: 7ch 111b: 8ch Note 1: The "0.1ch" is counted as "1ch" (for example, in the case of 5.1ch, enter '101b' (6ch)). Language code ... Refer to Annex B. Application information ... reserved (RBP 602 to 603) VTS_SPST_Ns Describes the number of Sub-picture streams for VTSTT_EVOBS in the VTS (Table 20). Table 20 (RBP 602 to 603) VTS_SPST_Ns Describes the number of Sub-picture streams for VTSTT_EVOBS in the VTS. b15 to b6: reserved b5 to b0: Number of Sub-picture streams (RBP 604 to 795) VTS_SPST_ATRT Describes each Sub-picture stream attribute (VTS_SPST_ATR) for VTSTT_EVOBS in this VTS (Table 21).
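The channel-count rule above, including the note that the ".1" low-frequency channel is counted as a full channel, can be sketched as follows. This is an illustrative sketch; the names are invented:

```python
# Sketch of the "Number of Audio channels" mapping above, including the
# rule that the ".1" LFE channel counts as a full channel
# (5.1ch -> 6ch -> '101b').

AUDIO_CHANNELS = {0b000: 1, 0b001: 2, 0b010: 3, 0b011: 4,
                  0b100: 5, 0b101: 6, 0b110: 7, 0b111: 8}

def channel_code(total_channels: int) -> int:
    """Inverse mapping: channel count (with LFE counted) to the 3-bit code."""
    for code, n in AUDIO_CHANNELS.items():
        if n == total_channels:
            return code
    raise ValueError("unsupported channel count")

# 5.1ch: 5 main channels + 1 LFE channel are counted as 6 channels.
print(bin(channel_code(5 + 1)))  # 0b101
```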
Table 21 VTS_SPST_ATRT (Order of description) RBP (604 + 6n) to (609 + 6n): VTS_SPST_ATR of Sub-picture stream #n (n = 0 to 31), 6 bytes each. Total: 192 bytes. One VTS_SPST_ATR is described for each existing Sub-picture stream. 
The stream numbers are assigned from '0' according to the order in which the VTS_SPST_ATRs are described. When the number of Sub-picture streams is less than '32', enter '0b' in each bit of VTS_SPST_ATR for the unused streams. The content of a VTS_SPST_ATR is as follows: Table 22 VTS_SPST_ATR Fields: Sub-picture coding mode, Sub-picture type, HD, SD-Wide, SD-PS, SD-LB, Language code (upper and lower bits), reserved (for the language code), Language code extension; remaining bits reserved. Sub-picture coding mode ... 000b: Run-length for 2 bits/pixel defined in 5.5.3 Sub-picture Unit. (The value of PRE_HEAD is other than (0000h)) 001b: Run-length for 2 bits/pixel defined in 5.5.3 Sub-picture Unit. (The value of PRE_HEAD is (0000h)) 100b: Run-length for 8 bits/pixel defined in 5.5.4 Sub-picture Unit, for the 8-bit pixel depth. Others: reserved Sub-picture type ... 00b: Not specified 01b: Language Others: reserved Language code ... Refer to Annex B. Language code extension ... Refer to Annex B. Note 1: In a Title, there shall not be more than one Sub-picture stream which has the language code extension (see Annex B) of forced subtitles (09h) among the Sub-picture streams which have the same language code. Note 2: Sub-picture streams that have the language code extension of forced subtitles (09h) shall have Sub-picture stream numbers larger than all other Sub-picture streams (which do not have the language code extension of forced subtitles (09h)). HD ... When the "Sub-picture coding mode" is '001b' or '100b', this flag specifies whether the HD stream exists or not. 0b: The stream does not exist 1b: The stream exists SD-Wide ... When the "Sub-picture coding mode" is '001b' or '100b', this flag specifies whether the SD Wide (16:9) stream exists or not. 
0b: The stream does not exist 1b: The stream exists SD-PS ... When the "Sub-picture coding mode" is '001b' or '100b', this flag specifies whether the SD Pan-scan (4:3) stream exists or not. 0b: The stream does not exist 1b: The stream exists SD-LB ... When the "Sub-picture coding mode" is '001b' or '100b', this flag specifies whether the SD Letterbox (4:3) stream exists or not. 0b: The stream does not exist 1b: The stream exists (RBP 798 to 861) VTS_MU_AST_ATRT Describes each Audio attribute for multichannel use (Table 23). There is one type of Audio attribute, which is VTS_MU_AST_ATR. The description area for eight Audio streams, starting from stream number '0' and followed by the consecutive numbers up to '7', is constantly reserved. In the area of an Audio stream whose "Multichannel extension" in VTS_AST_ATR is '0b', enter '0b' in each bit.
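The HD / SD-Wide / SD-PS / SD-LB existence flags described above apply only for coding modes '001b' and '100b'. A minimal sketch of that interpretation, with invented names:

```python
# Sketch: interpreting the HD / SD-Wide / SD-PS / SD-LB existence flags
# of a Sub-picture stream attribute. Names are illustrative.

def existing_variants(coding_mode, hd, sd_wide, sd_ps, sd_lb):
    # Per the text, the existence flags are only specified for
    # coding modes 001b and 100b.
    if coding_mode not in (0b001, 0b100):
        return []
    flags = {"HD": hd, "SD-Wide": sd_wide, "SD-PS": sd_ps, "SD-LB": sd_lb}
    return [name for name, present in flags.items() if present]

print(existing_variants(0b001, hd=1, sd_wide=0, sd_ps=1, sd_lb=1))
# ['HD', 'SD-PS', 'SD-LB']
```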
Table 23 VTS_MU_AST_ATRT (Order of description) Table 24 shows VTS_MU_AST_ATR. Table 24 VTS_MU_AST_ATR For each of the audio channels ACH0 to ACH7: Audio mixing phase, Audio mixing flag, Mixing mode, and Audio channel contents. Audio channel contents ... reserved Audio mixing phase ... reserved Audio mixing flag ... reserved Mixing mode for ACH0 to ACH7 ... reserved 5.2.2.3 Video Title Set Program Chain Information Table (VTS_PGCIT) A table that describes the VTS Program Chain Information (VTS_PGCI). The VTS_PGCIT starts with the VTS_PGCIT Information (VTS_PGCITI), followed by the VTS_PGCI Search Pointers (VTS_PGCI_SRPs), followed by one or more VTS_PGCIs, as shown in FIG. 59. The VTS_PGC numbers are assigned from '1' in the described order of the VTS_PGCI_SRPs. The PGCIs that form a block shall be described continuously. One or more VTS Title numbers (VTS_TTNs) are assigned from '1', in ascending order of the VTS_PGCI_SRP for each Entry PGC. A group of more than one PGC that constitutes a block is called a PGC Block. In each PGC Block, the VTS_PGCI_SRPs shall be described continuously. A VTS_TT is defined as a group of PGCs which have the same VTS_TTN in a VTS. The contents of VTS_PGCITI and of a VTS_PGCI_SRP are shown in Table 25 and Table 26 respectively. 
For the description of VTS_PGCI, refer to 5.2.3 Program Chain Information. Note: The order of the VTS_PGCIs is not related to the order of the VTS_PGCI Search Pointers. Therefore it is possible for more than one Search Pointer to point to the same VTS_PGCI.
Table 25 VTS_PGCI_SRP_Ns (Order of description) Table 26 VTS_PGCI_SRP (Order of description) Table 27 (1) VTS_PGC_CAT Describes the category of this PGC. Fields: Entry type, RSM permission, Block mode, Block type, HLI availability, VTS_TTN; remaining bits reserved. Entry type ... 0b: Not an Entry PGC 1b: Entry PGC RSM permission ... Describes whether resuming playback by the RSM instruction or the Resume() function is permitted or not in this PGC. 0b: permitted (RSM Information is updated) 1b: prohibited (RSM Information is not updated) Block mode ... When the "Block type" is '00b', enter '00b'. When the "Block type" is '01b', enter '01b', '10b' or '11b'. 00b: Not a PGC in a block 01b: The first PGC in a block 10b: A PGC in a block (other than the first and the last PGC) 11b: The last PGC in a block Block type ... When PTL_MAIT does not exist, enter '00b'. 00b: Not part of a block 01b: Parental Block Others: reserved HLI availability ... Describes whether the HLI stored in the EVOB is available or not. When HLI does not exist in the EVOB, enter '1b'.
0b: HLI is available in this PGC 1b: HLI is not available in this PGC, that is, the HLI and the related Sub-picture for the button shall be ignored by the player. VTS_TTN ... '1' to '511': value of the VTS Title number Others: reserved 5.2.3 Program Chain Information (PGCI) PGCI is the navigation data to control the presentation of a PGC. A PGC is basically composed of PGCI and Enhanced Video Objects (EVOBs); however, there may also be a PGC without any EVOB, with a PGCI only. A PGC with PGCI only is used, for example, to decide the presentation conditions and to transfer the presentation to another PGC. The PGCI numbers are assigned from '1' in the order described for the PGCI Search Pointers in VMGM_LU, VTSM_LU and VTS_PGCIT. The PGC numbers (PGCNs) have the same values as the PGCI numbers. Even when a PGC takes a block structure, the PGCN in the block matches the consecutive number of the PGCI Search Pointers. The PGCs are divided into four types according to the Domain and the purpose, as shown in Table 28. A structure with PGCI only, as well as with PGCI and EVOB, is possible for the First Play PGC (FP_PGC), the Video Manager Menu PGC (VMGM_PGC), the Video Title Set Menu PGC (VTSM_PGC) and the Title PGC (TT_PGC).
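The "Block mode" values of VTS_PGC_CAT (first, middle, last PGC of a block) imply that consecutive PGC numbers can be grouped into PGC Blocks mechanically. A minimal illustrative sketch, with invented function names; only the mode values come from the text:

```python
# Sketch: grouping consecutive PGC numbers (1-based) into PGC Blocks
# from their "Block mode" values, as described for VTS_PGC_CAT.

NONE, FIRST, MIDDLE, LAST = 0b00, 0b01, 0b10, 0b11

def group_blocks(block_modes):
    """Return the PGC numbers of each PGC Block, in order."""
    blocks, current = [], []
    for pgcn, mode in enumerate(block_modes, start=1):
        if mode == NONE:          # PGC not in any block
            continue
        current.append(pgcn)
        if mode == LAST:          # last PGC closes the block
            blocks.append(current)
            current = []
    return blocks

print(group_blocks([NONE, FIRST, MIDDLE, LAST, FIRST, LAST]))
# [[2, 3, 4], [5, 6]]
```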
Table 28 The following restrictions apply to FP_PGC: 1) Either no Cell (and no EVOB) or one Cell (in one EVOB) is allowed. 2) As for the PG playback mode, only "Sequential program playback" is allowed. 3) A Parental Block is not allowed. 4) A Language Block is not allowed. For details of the presentation of a PGC, refer to 3.3.6 PGC reproduction order. 5.2.3.1 Structure of PGCI PGCI comprises the Program Chain General Information (PGC_GI), the Program Chain Command Table (PGC_CMDT), the Program Chain Program Map (PGC_PGMAP), the Cell Playback Information Table (C_PBIT) and the Cell Position Information Table (C_POSIT), as shown in FIG. 60. This information shall be recorded consecutively across the LB boundary. PGC_CMDT is not necessary for a PGC in which the Navigation Commands are not used. PGC_PGMAP, C_PBIT and C_POSIT are not necessary for PGCs for which no EVOB to be presented exists. 5.2.3.2 PGC General Information (PGC_GI) PGC_GI is the information on the PGC. The contents of PGC_GI are shown in Table 29. Table 29 PGC_SPST_CTLT (Table 30) The availability flag of each Sub-picture stream and the conversion information from the Sub-picture stream number to the decoding Sub-picture stream number are described in the following format. PGC_SPST_CTLT consists of 32 PGC_SPST_CTLs. One PGC_SPST_CTL is described for each Sub-picture stream. When the number of Sub-picture streams is less than '32', enter '0b' in each bit of PGC_SPST_CTL for the unused streams.
Table 30 The content of a PGC_SPST_CTL is as follows: Table 31 PGC_SPST_CTL Fields: SD availability flag, HD availability flag, and the decoding Sub-picture stream numbers; remaining bits reserved. SD availability flag ... 1b: The SD Sub-picture stream is available in this PGC. 0b: The SD Sub-picture stream is not available in this PGC. Note: For each Sub-picture stream, this value shall be the same for all TT_PGCs in the same TT_DOM, all VMGM_PGCs in the same VMGM_DOM, or all VTSM_PGCs in the same VTSM_DOM. HD availability flag ... 1b: The HD Sub-picture stream is available in this PGC. 0b: The HD Sub-picture stream is not available in this PGC.
When the "Aspect ratio" in the current Video attribute (FP_PGCM_V_ATR, VMGM_V_ATR, VTSM_V_ATR or VTS_V_ATR) is '00b', this value shall be set to '0b'. Note 1: When the "Aspect ratio" is '00b' and the "Source picture resolution" is '1011b' (1440x1080), this value may be set to '1b'. It should be assumed that the "Aspect ratio" is '11b' in the following descriptions.
Note 2: For each Sub-picture stream, this value shall be the same for all TT_PGCs in the same TT_DOM, all VMGM_PGCs in the same VMGM_DOM, or all VTSM_PGCs in the same VTSM_DOM. 5.2.3.3 Program Chain Command Table (PGC_CMDT) PGC_CMDT is the description area for the Pre-commands (PRE_CMDs) and Post-commands (POST_CMDs) of the PGC, the Cell commands (C_CMDs) and the Resume commands (RSM_CMDs). As shown in FIG. 61A, PGC_CMDT comprises the Program Chain Command Table Information (PGC_CMDTI), zero or more PRE_CMDs, zero or more POST_CMDs, zero or more C_CMDs, and zero or more RSM_CMDs. The command numbers are assigned from '1' according to the order of description within each group of commands. A total of up to 1023 commands may be described in any combination of PRE_CMDs, POST_CMDs, C_CMDs and RSM_CMDs. PRE_CMD, POST_CMD, C_CMD and RSM_CMD are described only when necessary. The contents of PGC_CMDTI and RSM_CMD are shown in Table 32 and Table 33 respectively. Table 32 (1) PRE_CMD_Ns Describes the number of PRE_CMDs using a number between '0' and '1023'. (2) POST_CMD_Ns Describes the number of POST_CMDs using a number between '0' and '1023'. (3) C_CMD_Ns Describes the number of C_CMDs using a number between '0' and '1023'. (4) RSM_CMD_Ns Describes the number of RSM_CMDs using a number between '0' and '1023'. Note: A TT_PGC whose "RSM permission" flag is '0b' may have this command area. A TT_PGC whose "RSM permission" flag is '1b', as well as FP_PGC, VMGM_PGC and VTSM_PGC, shall not have this command area; in that case this field shall be set to '0'. (5) PGC_CMDT_EA Describes the end address of PGC_CMDT with RBN from the first byte of this PGC_CMDT.
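The PGC_CMDT constraint above (each group count in 0 to 1023, and at most 1023 commands in total across all four groups) can be checked mechanically. A minimal illustrative sketch; the function name is invented:

```python
# Sketch: validating the PGC_CMDT command counts described above.
# At most 1023 commands in total may be described across the PRE, POST,
# Cell and Resume command groups, and each count field is 0 to 1023.

def pgc_cmdt_valid(pre_ns: int, post_ns: int, c_ns: int, rsm_ns: int) -> bool:
    counts = (pre_ns, post_ns, c_ns, rsm_ns)
    return all(0 <= n <= 1023 for n in counts) and sum(counts) <= 1023

print(pgc_cmdt_valid(10, 5, 100, 0))   # True
print(pgc_cmdt_valid(512, 512, 0, 0))  # False: 1024 commands in total
```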
Table 33 RSM_CMD (1) RSM_CMD Describes the commands to be executed before a PGC is resumed. The last instruction in the RSM_CMDs shall be a Break instruction. For the details of the commands, refer to 5.2.4 Navigation Commands and Navigation Parameters. 5.2.3.5 Cell Playback Information Table (C_PBIT) C_PBIT is a table that defines the presentation order of the Cells in a PGC. The Cell Playback Information (C_PBI) shall be described continuously in the C_PBIT, as shown in FIG. 61B. The Cell numbers (CNs) are assigned from '1' in the order in which the C_PBI is described.
Basically, the Cells are presented continuously in ascending order from CN1. A group of Cells that constitutes a block is called a Cell Block. A Cell Block consists of more than one Cell. The C_PBIs in a block shall be described continuously. One of the Cells in a Cell Block is chosen for the presentation. One type of Cell Block is the Angle Cell Block. The presentation time of the Cells in an Angle Block shall be the same. When several Angle Blocks are set within the same TT_DOM, the same VTSM_DOM or the same VMGM_DOM, the number of Angle Cells (AGL_Cs) in each block shall be the same. The presentation between the Cells before or after the Angle Block and each AGL_C shall be seamless. When Angle Cell Blocks in which the Seamless Angle Change flag is designated as seamless exist contiguously, every combination of AGL_Cs between the Cell Blocks shall be presented seamlessly. In that case, all the connection points of the AGL_Cs in both blocks shall be on an Interleaved Unit boundary. When Angle Cell Blocks in which the Seamless Angle Change flag is designated as non-seamless exist contiguously, only the presentation between the AGL_Cs with the same Angle number in each block is seamless. An Angle Cell Block has a maximum of 9 Cells, where the first Cell has the number 1 (Angle Cell number 1). The rest are numbered according to the order described. The contents of a C_PBI are shown in FIG. 61B and Table 34.
Table 34 (7) C_CMD_SEQ (Table 35) Describes the Cell command sequence information, consisting of the Number of Cell commands and the Start Cell Command number. Number of Cell commands ... Describes the number of Cell commands to be executed sequentially from the Start Cell Command number in this Cell, between '0' and '8'.
The value '0' means that there is no Cell command to be executed in this Cell. Start Cell Command number ... Describes the number of the first Cell command to be executed in this Cell, between '0' and '1023'. The value '0' means that there is no Cell command to be executed in this Cell. Note: If the Seamless Playback flag in C_CAT is '1b' and there are one or more Cell commands in the previous Cell, the presentation of the previous Cell and this Cell shall be seamless. In that case, the commands in the previous Cell shall be executed within 0.5 seconds from the start of the presentation of this Cell. If the commands include an instruction to branch the presentation, the presentation of this Cell shall be terminated and the new presentation shall start according to that instruction. 5.2.4 Navigation Commands and Navigation Parameters The Navigation Commands and the Navigation Parameters form the basis for providers to generate various Titles. Providers can use the Navigation Commands and Navigation Parameters to obtain or change the status of the Player, such as the Parental Management Information and the number of Audio streams.
By combining the use of Navigation Commands and Navigation Parameters, the provider can define both simple and complex branched structures in a Title. In other words, the provider can create an Interactive Title with a complicated branching structure and Menu structure, in addition to Linear Movie Titles or Karaoke Titles. 5.2.4.1 Navigation Parameters The Navigation Parameter is the general term for the information that is held in the Player. These are classified into General Parameters and System Parameters as described below. 5.2.4.1.1 General Parameters (GPRMs) <Overview> The provider can use these GPRMs to memorize the user's operational history and to modify the behavior of the Player. These parameters can be accessed by the Navigation Commands. <Contents> GPRMs store two-byte, fixed-length numeric values. Each parameter is treated as an unsigned 16-bit integer. The Player has 64 GPRMs. <For use> GPRMs are used in a Register mode or a Counter mode. A GPRM used in Register mode holds a stored value. A GPRM used in Counter mode automatically increments its stored value every second in TT_DOM. A GPRM in Counter mode shall not be used as the first argument for arithmetic operations and bitwise operations, except for the Mov Instruction. <Initial Value> All GPRMs shall be set to zero and to Register mode under the following conditions: • On Initial Access. • When Play_Code(), Play_TTP() or Play_Time() is executed in any Domain or in the Stop State. • When Menu_live() is executed in the Stop State. <Domain> The value stored in the GPRMs (Table 36) is maintained even if the presentation point moves between Domains. Therefore, the same GPRMs are shared among all Domains.
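The GPRM behavior described above (64 unsigned 16-bit parameters, Register vs. Counter mode, one-second increments in TT_DOM) can be sketched in a short, non-normative Python model. The class and method names here are illustrative only and do not appear in the specification.

```python
# Illustrative, non-normative model of the 64 GPRMs described above.
class GprmBank:
    REGISTER, COUNTER = 0, 1

    def __init__(self):
        # All GPRMs start at zero, in Register mode (as on Initial Access).
        self.values = [0] * 64
        self.modes = [self.REGISTER] * 64

    def set_value(self, n, value, counter=False):
        # Values are unsigned 16-bit integers; wrap as a 16-bit register would.
        self.values[n] = value & 0xFFFF
        self.modes[n] = self.COUNTER if counter else self.REGISTER

    def tick_one_second(self):
        # In TT_DOM, Counter-mode GPRMs increment once per second.
        for n in range(64):
            if self.modes[n] == self.COUNTER:
                self.values[n] = (self.values[n] + 1) & 0xFFFF

bank = GprmBank()
bank.set_value(3, 0xFFFF, counter=True)
bank.tick_one_second()
print(bank.values[3])  # wraps to 0
```

This mirrors the rule that a Counter-mode GPRM advances on its own in TT_DOM while a Register-mode GPRM only changes when written.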
Table 36 General Parameters (GPRMs)
b15–b8: General Parameter value (upper byte)
b7–b0: General Parameter value (lower byte)
5.2.4.1.2 System Parameters (SPRMs) <Overview> The provider can control the Player by setting the value of the SPRMs using the Navigation Commands. These parameters can be accessed by the Navigation Commands. <Contents> SPRMs store two-byte, fixed-length values. Each parameter is treated as an unsigned 16-bit integer. The Player has 32 SPRMs. <For use> The value of an SPRM shall not be used as the first argument of Set Instructions nor as the second argument of arithmetic operations, except for the Mov Instruction.
To change the value of an SPRM, the SetSystem Instruction is used. As for the initialization of the SPRMs (Table 37), refer to 3.3.3.1 Initialization of the Parameters.
SPRM(11), SPRM(12), SPRM(13), SPRM(14), SPRM(15), SPRM(16), SPRM(17), SPRM(18), SPRM(19), SPRM(20) and SPRM(21) are called Player parameters. <Initial Values> See 3.3.3.1 Initialization of the Parameters. <Domain> There is only one set of System Parameters for all Domains. (a) SPRM(0): Current Menu Description Language Code (CM_LCD) <Purpose> This parameter specifies the code of the language to be used as the current Menu Language during the presentation. <Contents> The value of SPRM(0) can be changed by a Navigation Command (SetM_LCD). Note: This parameter shall not be changed directly by a User Operation.
Whenever the value of SPRM(21) is changed, the value is copied to SPRM(0). Table 38 SPRM(0)
b15–b8: Current Menu Description Language Code (upper byte)
b7–b0: Current Menu Description Language Code (lower byte)
(A) SPRM(26): Audio stream number (ASTN) for Menu-space <Purpose> This parameter specifies the currently selected ASTN for Menu-space. <Contents> The value of SPRM(26) can be changed by a User Operation, a Navigation Command or [Algorithm 3] shown in 3.3.9.1.1.2 Algorithm for the selection of the Audio and Sub-picture stream in Menu-space. a) In Menu-space: When the value of SPRM(26) is changed, the Audio stream to be presented shall be changed. b) In FP_DOM or TT_DOM: The value of SPRM(26) which was set in Menu-space is maintained. The value of SPRM(26) shall not be changed by a User Operation. If the value of SPRM(26) is changed in either FP_DOM or TT_DOM by a Navigation Command, it becomes valid in Menu-space. <Default Value> The default value is (Fh). Note: This parameter does not specify the audio stream number currently being decoded. For details, refer to 3.3.9.1.1.2 Algorithm for selecting the Audio and Sub-picture stream in Menu-space. Table 39 SPRM(26): Audio stream number (ASTN) for Menu-space
b15–b8: reserved
b7–b4: reserved
b3–b0: ASTN
ASTN ... 0 to 7: ASTN values. Fh: No AST is available, nor is an AST selected. Others: reserved
(B) SPRM(27): Sub-picture stream number (SPSTN) and on/off flag for Menu-space <Purpose> This parameter specifies the currently selected SPSTN for Menu-space and whether the Sub-picture is displayed or not. <Contents> The value of SPRM(27) can be changed by a User Operation, a Navigation Command or [Algorithm 3] shown in 3.3.9.1.1.2 Algorithm for the selection of the Audio and Sub-picture streams in Menu-space.
a) In Menu-space: When the value of SPRM(27) is changed, the Sub-picture stream to be displayed and the Sub-picture display status shall change. b) In FP_DOM and TT_DOM: The value of SPRM(27) which was set in Menu-space is maintained. The value of SPRM(27) shall not be changed by a User Operation. If the value of SPRM(27) is changed in either FP_DOM or TT_DOM by a Navigation Command, it becomes valid in Menu-space. c) The Sub-picture display state is defined as follows: c-1) When a valid SPSTN is selected: When the value of SP_disp_flag is '1b', the specified Sub-picture is displayed until the end of its display period. When the value of SP_disp_flag is '0b', refer to 3.3.9.2.2 Forcedly displayed Sub-picture in System-space. c-2) When an invalid SPSTN is selected: The Sub-picture is not displayed. <Default Value> The default value is 62. Note: This parameter does not specify the Sub-picture stream number currently being decoded. When this parameter is changed in Menu-space, the presentation of the current Sub-picture is discarded. For details, refer to 3.3.9.1.1.2 Algorithm for selection of the Audio and Sub-picture streams in Menu-space. Table 40 (B) SPRM(27): Sub-picture stream number (SPSTN) and on/off flag for Menu-space
b15–b8: reserved
b7: reserved
b6: SP_disp_flag
b5–b0: SPSTN
SP_disp_flag ... 0b: Sub-picture display is disabled. 1b: Sub-picture display is enabled. SPSTN ... 0 to 31: SPSTN values. 62: No SPST is available, nor is an SPST selected.
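As a concrete illustration of the Table 40 layout, the following non-normative Python sketch unpacks an SPRM(27) value into its SP_disp_flag and SPSTN fields. The exact bit position of SP_disp_flag (bit 6 here) and the function name are assumptions for illustration, not definitions from the specification.

```python
# Hedged sketch: unpacking SPRM(27) into SP_disp_flag and SPSTN.
def decode_sprm27(value: int):
    sp_disp_flag = (value >> 6) & 0x1   # 1b: display enabled, 0b: disabled (assumed bit 6)
    spstn = value & 0x3F                # 0..31 valid; 62: no SPST available/selected
    return sp_disp_flag, spstn

flag, spstn = decode_sprm27(0x3E)       # default value 62, display off
print(flag, spstn)                      # 0 62
```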
Others: reserved (C) SPRM(28): Angle number (AGLN) for Menu-space <Purpose> This parameter specifies the current AGLN for Menu-space. <Contents> The value of SPRM(28) can be changed by a User Operation or a Navigation Command. a) In FP_DOM: If the value of SPRM(28) is changed in FP_DOM by a Navigation Command, it becomes valid in Menu-space. b) In Menu-space: When the value of SPRM(28) is changed, the Angle to be displayed is changed. c) In TT_DOM: The value of SPRM(28) which was set in Menu-space is maintained.
The value of SPRM(28) shall not be changed by a User Operation. If the value of SPRM(28) is changed in TT_DOM by a Navigation Command, it becomes valid in Menu-space. <Default Value> The default value is '1'. Table 41 (C) SPRM(28): Angle number (AGLN) for Menu-space
b15–b8: reserved
b7–b4: reserved
b3–b0: AGLN
AGLN ... 1 to 9: AGLN values. Others: reserved
(D) SPRM(29): Audio stream number (ASTN) for FP_DOM <Purpose> This parameter specifies the currently selected ASTN for FP_DOM. <Contents> The value of SPRM(29) can be changed by a User Operation, a Navigation Command or [Algorithm 4] shown in 3.3.9.1.1.3 Algorithm for the selection of the Audio and Sub-picture stream in FP_DOM. a) In FP_DOM: When the value of SPRM(29) is changed, the Audio stream to be presented shall change. b) In Menu-space or TT_DOM: The value of SPRM(29) which was set in FP_DOM is maintained. The value of SPRM(29) shall not be changed by a User Operation. If the value of SPRM(29) is changed in either Menu-space or TT_DOM by a Navigation Command, it becomes valid in FP_DOM. <Default Value> The default value is (Fh). Note: This parameter does not specify the audio stream number currently being decoded. For details, refer to 3.3.9.1.1.3 Algorithm for Audio and Sub-picture stream selection in FP_DOM. Table 42 (D) SPRM(29): Audio stream number (ASTN) for FP_DOM
b15–b8: reserved
b7–b4: reserved
b3–b0: ASTN
ASTN ... 0 to 7: ASTN values. Fh: No AST is available, nor has an AST been selected.
Others: reserved (E) SPRM(30): Sub-picture stream number (SPSTN) and on/off flag for FP_DOM <Purpose> This parameter specifies the currently selected SPSTN for FP_DOM and whether the Sub-picture is displayed or not. <Contents> The value of SPRM(30) can be changed by a User Operation, a Navigation Command or [Algorithm 4] shown in 3.3.9.1.1.3 Algorithm for the selection of the Audio and Sub-picture stream in FP_DOM. a) In FP_DOM: When the value of SPRM(30) is changed, the Sub-picture stream to be displayed and the Sub-picture display status shall change. b) In Menu-space or TT_DOM: The value of SPRM(30) which was set in FP_DOM is maintained. The value of SPRM(30) shall not be changed by a User Operation. If the value of SPRM(30) is changed in either Menu-space or TT_DOM by a Navigation Command, it becomes valid in FP_DOM. c) The Sub-picture display state is defined as follows: c-1) When a valid SPSTN is selected: When the value of SP_disp_flag is '1b', the specified Sub-picture is displayed throughout its display period. When the value of SP_disp_flag is '0b', refer to 3.3.9.2.2 Forcedly displayed Sub-picture in System-space. c-2) When an invalid SPSTN is selected: The Sub-picture is not displayed. <Default Value> The default value is 62. Note: This parameter does not specify the Sub-picture stream number currently being decoded. When this parameter is changed in FP_DOM, the presentation of the current Sub-picture is discarded.
For details, refer to 3.3.9.1.1.3 Algorithm for selecting the Audio and Sub-picture stream in FP_DOM. Table 43 (E) SPRM(30): Sub-picture stream number (SPSTN) and on/off flag for FP_DOM
b15–b8: reserved
b7: reserved
b6: SP_disp_flag
b5–b0: SPSTN
SP_disp_flag ... 0b: Sub-picture display is disabled. 1b: Sub-picture display is enabled. SPSTN ... 0 to 31: SPSTN values. 62: No SPST is available, nor is an SPST selected. Others: reserved
5.3.1 EVOB Contents An Enhanced Video Object Set (EVOBS) is a collection of EVOBs, as shown in FIG. 62A. An EVOB can be divided into Cells made up of EVOBUs. An EVOB and each element in a Cell shall be restricted as shown in Table 44.
Table 44 Restrictions on each item Note 1: The definition of "Complete" is as follows: 1) The start of each stream shall start from the first data of each access unit. 2) The end of each stream shall be aligned with each access unit. Therefore, when the length of the packet comprising the last data in each stream is less than 2048 bytes, the remainder is filled with stuffing bytes or a padding packet.
Therefore, when the packet length comprising the last data in each stream is less than 2048 bytes. Note 2: The definition of "Presentation of the Sub-image is valid in the Cell" is as follows: 1) When the two Cells are presented without Joints, • The preceding Cell Presentation must be cleared at the cell boundary using the STP_DSP command in SP_DCQS or, • The presentation must be updated by the SPU that is registered in the subsequent Cell and whose presentation time is the same as the presentation time of the first upper field of the posterior Cell. 2) When two Cells are not presented without joints, • The presentation of the preceding Cell must be cleared by the Player before the presentation time of the Rear cell. 5.3.1.1 Enhanced Video Object Unit (EVOBU) An Enhanced Video Object Unit (EVOBU) is a sequence of packets in the order of registration. It starts with exactly one NV_PCK, covers all packages (if any), and ends either immediately before the next NV_PCK in the same EVOB or at the end of the EVOB. An EVOBU except the last EVOBU of a Cell represents a presentation period of at least 0.4 seconds and at most one second. The last EVOBU of a Cell represents a presentation period of at least 0.4 seconds and at most 1.2 seconds. An EVOB consists of a whole number of EVOBUs. See FIG. 62A. The following additional rules apply: 1) The period of presentation of an EVOBU is equal to an entire number of periods of the field / video frame. This is also the case when the EVOBU does not contain any video data. 2) The start and end time of the presentation of an EVOBU is defined in 90kHz units. The presentation start time of an EVOBU is equal to the presentation completion time of the previous EVOBU (except for the first EVOBU). 
3) When the EVOBU contains video: - the presentation start time of the EVOBU is equal to the presentation start time of the first video field/frame, - the presentation period of the EVOBU is equal to or longer than the presentation period of the video data. 4) When the EVOBU contains video, the video data shall represent one or more VAUs (Video Access Units). 5) When an EVOBU with video data is followed by an EVOBU without video data (in the same EVOB), the last coded picture shall be followed by a SEQ_END_CODE. 6) When the presentation period of the EVOBU is longer than the presentation period of the video it contains, the last coded picture shall be followed by a SEQ_END_CODE. 7) The video data in an EVOBU shall never contain more than one SEQ_END_CODE. 8) When the EVOB contains one or more SEQ_END_CODEs and is used in an ILVU: - the presentation period of an EVOBU is equal to an integer number of video field/frame periods; - the video data in an EVOBU shall have an I-coded Frame (refer to Annex R) for Still Pictures, or there shall be no video data; - an EVOBU that contains the I-coded Frame for Still Pictures shall have a SEQ_END_CODE; - the first EVOBU in an ILVU shall have video data. Note: The presentation period of the video contained in an EVOBU is defined as the sum of: - the difference between the PTS of the last video access unit and the PTS of the first video access unit in the EVOBU (the last and the first in terms of display order), and - the presentation duration of the last video access unit. The presentation end time of an EVOBU is defined as the sum of the presentation start time and the presentation duration of the EVOBU. Each elementary stream is identified by the stream_id in the Program Stream. Audio Presentation Data not defined by MPEG are carried in PES packets with a stream_id of private_stream_1.
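The EVOBU duration and continuity rules above can be sketched as a non-normative checker: every EVOBU except the last in a Cell lasts 0.4 to 1.0 seconds, the last one 0.4 to 1.2 seconds, and each EVOBU starts where the previous one ended. The function name and the tuple representation are assumptions for illustration; times are in 90 kHz units, as the specification defines presentation times.

```python
# Non-normative checker for the EVOBU duration rules quoted above.
CLK = 90000  # presentation times are defined in 90 kHz units

def check_cell_evobus(evobus):
    """evobus: list of (start_pts, end_pts) tuples in 90 kHz units."""
    for i, (start, end) in enumerate(evobus):
        dur = (end - start) / CLK
        last = (i == len(evobus) - 1)
        # 0.4..1.0 s for ordinary EVOBUs, 0.4..1.2 s for the last EVOBU of the Cell
        if not (0.4 <= dur <= (1.2 if last else 1.0)):
            return False
        # start time must equal the previous EVOBU's end time
        if i > 0 and start != evobus[i - 1][1]:
            return False
    return True

ok = check_cell_evobus([(0, 45000), (45000, 144000)])  # 0.5 s + 1.1 s (last)
print(ok)  # True
```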
The Navigation Data (GCI, PCI and DSI) and the Highlight Information (HLI) are carried in PES packets with a stream_id of private_stream_2. The first byte of the data area of private_stream_1 and private_stream_2 packets is used to define a sub_stream_id, as shown in Tables 45, 46 and 47. That is, when the stream_id is private_stream_1 or private_stream_2, the first byte in the data area of each packet is assigned as the sub_stream_id. The values of stream_id, of sub_stream_id for private_stream_1, and of sub_stream_id for private_stream_2 are shown in Tables 45, 46 and 47.
Table 45 NA: Not applicable Note: The identification of VC-1 streams is based on the use of the stream_id extensions defined by an amendment to MPEG-2 Systems [ISO/IEC 13818-1:2000/AMD2:2004]. When the stream_id is set to 0xFD (1111 1101b), it is the stream_id_extension field that defines the nature of the stream. The stream_id_extension field is added to the PES header using the PES extension flags present in the PES header. For VC-1 video streams, the stream identifiers to be used are: stream_id ... 1111 1101b; extended_stream_id stream_id_extension ... 101 0101b; for VC-1 (video stream) Table 46 Note 1: "reserved" for sub_stream_id means that the sub_stream_id is reserved for future extensions of the system. Therefore, the use of reserved values of sub_stream_id is forbidden. Note 2: The sub_stream_id whose value is '1111 1111b' can be used to identify a bit stream that is freely defined by the provider. However, it is not guaranteed that every Player will have a feature to play that stream. The EVOB restrictions, such as the maximum transfer rate of the total streams, shall be applied if the provider-defined bit stream exists in the EVOB.
Table 47 Note 1: "reserved" for sub_stream_id means that the sub_stream_id is reserved for future extensions of the system. Therefore, it is forbidden to use reserved values of sub_stream_id. Note 2: The sub_stream_id whose value is '1111 1111b' can be used to identify a bit stream that is freely defined by the provider. However, it is not guaranteed that every Player will have a feature to play that stream. The EVOB restrictions, such as the maximum transfer rate of the total streams, shall be applied if the provider-defined bit stream exists in the EVOB. 5.4.1 Navigation pack (NV_PCK) The Navigation pack comprises a pack header, a system header, a GCI packet (GCI_PKT), a PCI packet (PCI_PKT) and a DSI packet (DSI_PKT), as shown in FIG. 62B. The NV_PCK shall be aligned with the first pack of the EVOBU. The contents of the system header and of the packet headers of the GCI_PKT, the PCI_PKT and the DSI_PKT are shown in Tables 48 and 50. The stream_id values of the GCI_PKT, the PCI_PKT and the DSI_PKT are as follows: GCI_PKT ... stream_id: 1011 1111b (private_stream_2), sub_stream_id: 0000 0100b PCI_PKT ... stream_id: 1011 1111b (private_stream_2), sub_stream_id: 0000 0000b DSI_PKT ... stream_id: 1011 1111b (private_stream_2), sub_stream_id: 0000 0001b Table 48 System Header Note 1: Only the pack rate of the NV_PCK and of the MPEG-2 audio pack may exceed the rate defined in the "Restricted System Parameter Program Stream" of ISO/IEC 13818-1. Note 2: The sum of the target buffers for the Presentation Data defined as private_stream_1 shall be described. Note 3: "P-STD_buffer_size" for the MPEG-2, MPEG-4 AVC and SMPTE VC-1 video elementary streams is defined as below. Table 49 Note 1: For HD content, the value for the video elementary stream may be increased compared to the nominal buffer size representing 0.5 seconds of video data supplied at 29.4 Mbit/sec. The additional memory represents
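The stream_id / sub_stream_id assignments listed above can be illustrated with a short, non-normative Python sketch that classifies a private_stream_2 PES packet by the first byte of its data area. The function and dictionary names are illustrative only.

```python
# Sketch classifying a private_stream_2 PES packet by its sub_stream_id,
# using the GCI_PKT / PCI_PKT / DSI_PKT values listed above.
PRIVATE_STREAM_2 = 0b1011_1111  # 0xBF

SUB_STREAM_IDS = {
    0b0000_0100: "GCI_PKT",
    0b0000_0000: "PCI_PKT",
    0b0000_0001: "DSI_PKT",
}

def classify_nav_packet(stream_id: int, first_data_byte: int) -> str:
    if stream_id != PRIVATE_STREAM_2:
        return "not a private_stream_2 packet"
    # The first byte of the data area carries the sub_stream_id.
    return SUB_STREAM_IDS.get(first_data_byte, "reserved/provider-defined")

print(classify_nav_packet(0xBF, 0x04))  # GCI_PKT
```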
the size of an additional 1920x1080 video frame (in MPEG-4 AVC, this memory is used as an additional video reference frame). The use of the increased buffer size does not override the restriction that, after seeking to an entry point header, the decoding of the elementary stream shall not start later than 0.5 seconds after the first byte of the video elementary stream has entered the buffer. Note 2: For SD content, the value for the video elementary stream may be increased compared to the nominal buffer size representing 0.5 seconds of video data supplied at 15 Mbit/sec. The additional memory represents the size of an additional 720x576 video frame (in MPEG-4 AVC, this memory space is used as an additional video reference frame). The use of the increased reference memory size does not override the restriction that, after seeking to an entry point header, the decoding of the elementary stream shall not start with a delay greater than 0.5 seconds after the first byte of the video elementary stream has entered the buffer. Table 50 5.2.5 General Control Information (GCI) GCI is the general information data with respect to the data stored in an EVOB Unit (EVOBU), such as copyright information. GCI is composed of two pieces of information, as shown in Table 51. GCI is described in the GCI packet (GCI_PKT) in the Navigation pack (NV_PCK), as shown in FIG. 63A. Its content is renewed for each EVOBU. Regarding the details of EVOBU and NV_PCK, refer to 5.3 Primary Enhanced Video Object.
Table 51 5.2.5.1 GCI General Information (GCI_GI) GCI_GI is the general information on the GCI, as shown in Table 52. Table 52 5.2.5.2 Recording Information (RECI) RECI is the information for the video data, each audio data and the SP data which are recorded in this EVOBU, as shown in Table 53. Each item of information is described as an ISRC (International Standard Recording Code) which complies with ISO 3901.
Table 53 (1) ISRC_V Describes the ISRC of the video data which is included in the video stream. As for the description of ISRC. (2) ISRC_An Describes the ISRC of the audio data which is included in the Decoding Audio stream #n.
As for the description of ISRC. (3) ISRC_SPn Describes the ISRC of the SP data which is included in the Decoding Sub-picture stream #n selected by ISRC_SP_SEL. As for the description of ISRC. (4) ISRC_V_SEL Describes the group of Decoding Video streams for ISRC_V, i.e. whether the Main or the Sub Video stream is selected in each GCI. ISRC_V_SEL is the information in RECI shown in Table 54. Table 54 ISRC_V_SEL
b7: M/S
b6–b0: reserved
M/S ... 0b: The Main Video stream is selected. 1b: The Sub Video stream is selected. Note 1: In the Standard Content, M/S shall be set to zero (0). (5) ISRC_A_SEL Describes the group of Decoding Audio streams for ISRC_An, i.e. whether the Main or the Sub Decoding Audio stream is selected in each GCI. ISRC_A_SEL is the information in RECI shown in Table 55. Table 55 ISRC_A_SEL
b7: M/S
b6–b0: reserved
M/S ... 0b: The Main Decoding Audio streams are selected. 1b: The Sub Decoding Audio streams are selected. Note 1: In the Standard Content, M/S shall be set to zero (0). (6) ISRC_SP_SEL Describes the Decoding SP group for ISRC_SPn. Two or more SP_GRn shall not be set to '1' in each GCI. ISRC_SP_SEL is the information in RECI shown in Table 56. Table 56 ISRC_SP_SEL
b7: M/S
b6–b4: reserved
b3: SP_GR4
b2: SP_GR3
b1: SP_GR2
b0: SP_GR1
SP_GR1 ... 0b: The Decoding SP streams #0 to #7 are not selected.
1b: The Decoding SP streams #0 to #7 are selected. SP_GR2 ... 0b: The Decoding SP streams #8 to #15 are not selected. 1b: The Decoding SP streams #8 to #15 are selected. SP_GR3 ... 0b: The Decoding SP streams #16 to #23 are not selected. 1b: The Decoding SP streams #16 to #23 are selected. SP_GR4 ... 0b: The Decoding SP streams #24 to #31 are not selected. 1b: The Decoding SP streams #24 to #31 are selected. M/S ... 0b: The Main Decoding SP streams are selected. 1b: The Sub Decoding SP streams are selected.
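An illustrative, non-normative decoder for the ISRC_SP_SEL byte laid out in Table 56 follows. The bit positions (M/S at bit 7, SP_GR4..SP_GR1 at bits 3..0) are inferred from the table's left-to-right order and should be treated as an assumption, as should the function name.

```python
# Hedged sketch: decoding the ISRC_SP_SEL byte described in Table 56.
def decode_isrc_sp_sel(byte: int):
    return {
        "M/S": "sub" if byte & 0x80 else "main",  # assumed bit 7
        "SP_GR4": bool(byte & 0x08),  # Decoding SP streams #24..#31
        "SP_GR3": bool(byte & 0x04),  # Decoding SP streams #16..#23
        "SP_GR2": bool(byte & 0x02),  # Decoding SP streams #8..#15
        "SP_GR1": bool(byte & 0x01),  # Decoding SP streams #0..#7
    }

sel = decode_isrc_sp_sel(0x01)
print(sel["M/S"], sel["SP_GR1"])  # main True
```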
Note 1: In the Standard Content, M/S shall be set to zero (0). 5.2.8 Highlight Information (HLI) HLI is the information for a rectangular highlighted area in the Sub-picture display area, such as a button, and it can be stored anywhere in an EVOB. HLI is composed of three pieces of information, as shown in Table 57. HLI is described in the HLI packet (HLI_PKT) in the HLI pack (HLI_PCK), as shown in FIG. 63B. Its contents are renewed for each HLI. Regarding the details of EVOB and HLI_PCK, refer to 5.3 Primary Enhanced Video Object.
Table 57 HLI (Description Order) In FIG. 63B, the HLI_PCK can be located anywhere in the EVOB. - The HLI_PCKs shall be located after the first related SP_PCK pack. - Two types of HLI can be located in an EVOBU. With this Highlight Information, the mixing (contrast) of the Video and Sub-picture colors in the specified rectangular area can be altered. The relationship between the Sub-picture and HLI is shown in FIG. 64. The presentation period of each Sub-picture Unit (SPU) in each Sub-picture stream for the button shall be equal to or greater than the valid HLI period. Sub-picture streams other than the Sub-picture stream for the button have no relation to HLI. 5.2.8.1 Structure of HLI HLI consists of three pieces of information, as shown in Table 57. The Button Color Information Table (BTN_COLIT) consists of three (3) Button Color Information (BTN_COLI), and the Button Information Table (BTNIT) consists of 48 Button Information (BTNI). The 48 BTNIs can be used in one-group mode (48 BTNIs), two-group mode (24 BTNIs each) or three-group mode (16 BTNIs each), described in ascending order by Button Group. The Button Group is used to alter the size and position of the display area for the Buttons according to the display type (4:3, HD, Wide, Letterbox, or Pan-scan) of the Decoding Sub-picture stream. Therefore, the contents of the Buttons that share the same Button number in each Button Group shall be the same, except for the display position and size. 5.2.8.2 Highlight General Information (HL_GI) HL_GI is the information about the HLI as a whole, as shown in Table 58.
Table 58 (6) CMD_CHG_S_PTM (Table 59) Describes the start time of the Button command change in this HLI in the following format. The start time of the Button command change shall be equal to or later than the HLI start time (HLI_S_PTM) of this HLI, and before the Button selection end time (BTN_SL_E_PTM) of this HLI. When HLI_SS is '01b' or '10b', the start time of the Button command change shall be equal to HLI_S_PTM.
When HLI_SS is '11b', it describes the start time of the Button command change of the HLI which is renewed from the previous HLI. Table 59 CMD_CHG_S_PTM
b31–b24: CMD_CHG_S_PTM[31..24]
b23–b16: CMD_CHG_S_PTM[23..16]
b15–b8: CMD_CHG_S_PTM[15..8]
b7–b0: CMD_CHG_S_PTM[7..0]
Start time of Button command change = CMD_CHG_S_PTM[31..0] / 90000 [seconds] (13) SP_USE (Table 60) Describes the use of the Sub-picture streams. When the number of Sub-picture streams is less than '32', enter '0b' in each SP_USE bit for the unused streams. The content of an SP_USE is as follows: Table 60 SP_USE (b7–b0) SP_Use ... Whether this Sub-picture stream is used for the Highlight Button or not.
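The conversion formula above, time = CMD_CHG_S_PTM[31..0] / 90000 seconds, maps a 32-bit field in 90 kHz clock ticks to seconds. A direct, non-normative sketch (ptm_to_seconds is an illustrative name, not part of the specification):

```python
# 90 kHz presentation-time field to seconds, per the formula above.
def ptm_to_seconds(ptm: int) -> float:
    assert 0 <= ptm <= 0xFFFFFFFF, "CMD_CHG_S_PTM is a 32-bit field"
    return ptm / 90000.0

print(ptm_to_seconds(90000))   # 1.0 second
print(ptm_to_seconds(135000))  # 1.5 seconds
```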
0b: Other than the Highlight Button. 1b: Used for the Highlight Button during the HLI period. Decoding Sub-picture stream number for the Button ... When "SP_Use" is '1b', describes the least significant 5 bits of the sub_stream_id for the corresponding Button's Sub-picture stream number. Otherwise enter '00000b'; however, this value '00000b' does not specify the Decoding Sub-picture stream number '0'. 5.2.8.3 Button Color Information Table (BTN_COLIT) BTN_COLIT is composed of three BTN_COLIs, as shown in FIG. 65A. The Button color number (BTN_COLN) is assigned from '1' to '3' in the order in which the BTN_COLI are described. BTN_COLI consists of Selection Color Information (SL_COLI) and Action Color Information (AC_COLI), as shown in FIG. 65A. SL_COLI describes the color and contrast to be displayed when the Button is in the "Selection State". In this state, the User can move the highlight from this Button to another. AC_COLI describes the color and contrast to be displayed when the Button is in the "Action State".
In this state, the User cannot move the highlight from this Button to another.
The contents of SL_COLI and AC_COLI are as follows. SL_COLI consists of 256 color codes and 256 contrast values. The 256 color codes are divided into the four color codes specified for the Background pixels, Pattern pixels, Emphasis-1 pixels and Emphasis-2 pixels, and the other 252 color codes for the remaining pixels.
The 256 contrast values are likewise divided into the four contrast values specified for the Background pixels, Pattern pixels, Emphasis-1 pixels and Emphasis-2 pixels, and the other 252 contrast values for the remaining pixels. AC_COLI consists of 256 color codes (Table 61) and 256 contrast values (Table 62). The 256 color codes are divided into the four color codes specified for the Background pixels, Pattern pixels, Emphasis-1 pixels and Emphasis-2 pixels, and the other 252 color codes for the remaining pixels. The 256 contrast values are likewise divided into the four contrast values specified for the Background pixels, Pattern pixels, Emphasis-1 pixels and Emphasis-2 pixels, and the other 252 contrast values for the remaining pixels. Note: The four specified color codes and the four specified contrast values are used for both the 2-bit/pixel and the 8-bit/pixel Sub-picture. However, the other 252 color codes and the other 252 contrast values are used only for the 8-bit/pixel Sub-picture. Table 61 (a) Selection Color Information (SL_COLI) for the color codes
b2047–b2040: Selection color code for Background pixels
b2039–b2032: Selection color code for Pattern pixels
b2031–b2024: Selection color code for Emphasis-1 pixels
b2023–b2016: Selection color code for Emphasis-2 pixels
b2015–b2008: Selection color code for Pixel-4
...
b7–b0: Selection color code for Pixel-255
In the case of the four specified pixel types: Selection color code for Background pixels ... Describes the color code for the Background pixels when the Button is selected. If a change is not required, enter the same code as the initial value. Selection color code for Pattern pixels ... Describes the color code for the Pattern pixels when the Button is selected.
If a change is not required, enter the same code as the initial value. Selection color code for Emphasis-1 pixels ... Describes the color code for the Emphasis-1 pixels when the Button is selected. If a change is not required, enter the same code as the initial value. Selection color code for Emphasis-2 pixels ... Describes the color code for the Emphasis-2 pixels when the Button is selected. If a change is not required, enter the same code as the initial value. In the case of the other 252 pixels: Selection color code for Pixel-4 to Pixel-255 ... Describes the color code for the pixels when the Button is selected. If a change is not required, enter the same code as the initial value. Note: The initial value means the color code which is defined in the Sub-picture.
Table 62 (b) Selection Color Information (SL_COLI) for the contrast values
b2047–b2040: Selection contrast value for Background pixels
b2039–b2032: Selection contrast value for Pattern pixels
b2031–b2024: Selection contrast value for Emphasis-1 pixels
b2023–b2016: Selection contrast value for Emphasis-2 pixels
b2015–b2008: Selection contrast value for Pixel-4
...
b7–b0: Selection contrast value for Pixel-255
In the case of the four specified pixel types: Selection contrast value for Background pixels ... Describes the contrast value of the Background pixels when the Button is selected. If a change is not required, enter the same value as the initial value. Selection contrast value for Pattern pixels ... Describes the contrast value of the Pattern pixels when the Button is selected. If a change is not required, enter the same value as the initial value.
Selection contrast value for Emphasis-1 pixels: describes the contrast value of the Emphasis-1 pixels when the Button is selected. If no change is required, enter the same value as the initial value. Selection contrast value for Emphasis-2 pixels: describes the contrast value of the Emphasis-2 pixels when the Button is selected. If no change is required, enter the same value as the initial value. In the case of the other 252 pixels: Selection contrast values for Pixel-4 to Pixel-255: describe the contrast values for those pixels when the Button is selected. If no change is required, enter the same value as the initial value. Note: the initial value means the contrast value which is defined in the Sub-image. 5.2.8.4 Button Information Table (BTNIT) BTNIT consists of 48 Button Information (BTNI) entries, as shown in FIG. 65B. This table can be used as one group of 48 BTNIs, as two groups of 24 BTNIs each, or as three groups of 16 BTNIs each, according to the description in BTNGR_Ns. The BTNI description fields hold, in a fixed manner, the maximum number allowed in a Button Group; therefore, BTNI is described from the beginning of the description field of each group, and zero shall be described in the fields where no valid BTNI exists. The Button number (BTNN) is assigned from '1' in the order in which the BTNI is described in each Button Group. Note: the Buttons in the Button Group which are activated by the Button_Select_and_Activate() function are those between BTNN #1 and the value described in NSL_BTN_Ns. The User Button number is defined as follows: User Button number (U_BTNN) = BTNN + BTN_OFN. BTNI is composed of the Button Position Information (BTN_POSI), the Adjacent Button Position Information (AJBTN_POSI) and the Button Command (BTN_CMD). BTN_POSI describes the color number to be used by the Button, the rectangular display area and the action mode of the Button.
AJBTN_POSI describes the numbers of the Buttons located above, below, to the right and to the left. BTN_CMD describes the command to be executed when the Button is activated. (c) Button Command Table (BTN_CMDT) Describes the batch of eight commands to be executed when the Button is activated. The Button Command numbers are assigned from one according to the order of description, and the eight commands are executed from BTN_CMD #1 in that order. BTN_CMDT has a fixed size of 64 bytes, as shown in Table 63. Table 63 BTN_CMDT (order of description) BTN_CMD #1 to #8 describe the commands to be executed when the Button is activated. If all eight commands for a Button are not necessary, the remainder shall be filled with one or more NOP commands. Refer to 5.2.4 Navigation Commands and Navigation Parameters. 5.4.6 Highlight Information Pack (HLI_PCK) The Highlight Information pack comprises a pack header and an HLI packet (HLI_PKT), as shown in Fig. 66A. The contents of the packet header of the HLI_PKT are shown in Table 64. The stream_id and sub_stream_id of the HLI_PKT are as follows: HLI_PKT stream_id: 1011 1111b (private_stream_2); sub_stream_id: 0000 1000b. Table 64 5.5.1.2 MPEG-4 AVC Video The encoded video data shall conform to ISO/IEC 14496-10 (the MPEG-4 Advanced Video Coding standard) and shall be represented in byte stream format. The additional semantic restrictions on the video stream for MPEG-4 AVC are specified in this section. A GOVU (Group Of Video access Units) consists of one or more byte stream NAL units. The RBSP data carried in the payload of the NAL units shall start with an access unit delimiter, followed by a sequence parameter set (SPS), followed by supplemental enhancement information (SEI), followed by a picture parameter set (PPS), followed by SEI, followed by a picture which contains only I slices, followed by any subsequent combination of an access unit delimiter, a PPS, SEI and slices, as shown in Fig. 66B.
At the end of an access unit, filler data and an end of sequence may exist. At the end of a GOVU, filler data and an end of sequence may exist. The video data for each EVOBU shall be divided into an integral number of video packets and recorded on the disc, as shown in FIG. 66B. The access unit delimiter at the start of the EVOBU video data shall be aligned with the first video packet.
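The leading NAL unit order described above can be sketched as a simple check. The nal_unit_type constants below are the standard codes of ISO/IEC 14496-10 (access unit delimiter = 9, SPS = 7, SEI = 6, PPS = 8, IDR slice = 5, non-IDR slice = 1); the helper name is ours, not the specification's.

```python
# Standard H.264/AVC nal_unit_type codes (ISO/IEC 14496-10, Table 7-1)
AUD, SPS, SEI, PPS, SLICE_IDR, SLICE_NON_IDR = 9, 7, 6, 8, 5, 1

def govu_header_ok(nal_types):
    """Return True when a GOVU starts with the required sequence
    AUD, SPS, SEI, PPS, SEI, followed by a coded picture (slices)."""
    return (len(nal_types) >= 6
            and nal_types[:5] == [AUD, SPS, SEI, PPS, SEI]
            and nal_types[5] in (SLICE_IDR, SLICE_NON_IDR))
```

For the first GOVU of an EVOB the picture would additionally have to be an IDR picture, per Note 1 below.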
The detailed structure of the GOVU is defined in Table 65.
Table 65 Detailed structure of GOVU (*1) If the associated picture is an IDR picture, the recovery point SEI is optional; otherwise, it is mandatory. (*2) As for Film Grain, refer to 5.5.1.x. If nal_unit_type is 0, or from 24 to 31, the NAL unit shall be ignored. Note: SEI messages not included in [Table 5.5.1.2-1] shall be read and discarded by the player. 5.5.1.2.2 Other restrictions on MPEG-4 AVC video 1) In an EVOBU, the fields or frames coded before the I-coded field or frame which is first in coding order may refer to coded fields or frames in the preceding EVOBU. The coded fields or frames displayed after the first coded I field or frame shall not refer to coded fields or frames preceding the first coded I field or frame in display order, as shown in FIG. 67. Note 1: The first picture in the first GOVU in an EVOB shall be an IDR picture. Note 2: The picture parameter set refers to the sequence parameter set of the same GOVU.
All slices in an access unit shall refer to the picture parameter set associated with that access unit. 5.5.1.3 SMPTE VC-1 The encoded video data shall conform to VC-1 (the SMPTE VC-1 specification). The additional semantic restrictions on the video stream for VC-1 are specified in this section. The video data in each EVOBU shall start with a Sequence Start Code (SEQ_SC), followed by a Sequence Header (SEQ_HDR), followed by an Entry Point Start Code (EP_SC), followed by an Entry Point Header (EP_HDR), followed by a Frame Start Code (FRM_SC), followed by the picture data of picture type I, I/I, P/I or I/P. The video data for each EVOBU shall be divided into an integral number of video packets and recorded on the disc, as shown in Fig. 68. The SEQ_SC at the start of the EVOBU video data shall be aligned with the first video packet. 5.5.4 Sub-image Unit (SPU) for the 8-bit pixel depth The Sub-image Unit comprises the Sub-image Unit Header (SPUH), the Pixel Data (PXD) and the Display Control Sequence Table (SP_DCSQT), which consists of Sub-image Display Control Sequences (SP_DCSQ). The size of SP_DCSQT shall be equal to or less than half the size of the Sub-image Unit. SP_DCSQ describes the contents of the display control of the pixel data. Each SP_DCSQ is recorded sequentially, linked to one another, as shown in Fig. 69A.
The SPU is divided into an integral number of SP_PCKs, as shown in Fig. 69B, and then recorded on a disc. An SP_PCK may have a padding packet or stuffing bytes only when it is the last pack of an SPU. If the length of the SP_PCK comprising the last data of the unit is less than 2048 bytes, it shall be adjusted by either method. SP_PCKs other than the last pack of an SPU shall not have a padding packet. The PTS of an SPU shall be aligned with the top field. The valid period of an SPU is from the PTS of that SPU to the PTS of the SPU to be presented next. However, when a Freeze occurs in the Navigation Data during the valid period of the SPU, the valid period of the SPU lasts until the Freeze is finished. The SPU display is defined as follows: 1) When the display is turned on by the Display Control Command during the valid period of the SPU, the Sub-image is displayed. 2) When the display is turned off by the Display Control Command during the valid period of the SPU, the Sub-image is cleared. 3) The Sub-image is forcibly cleared when the valid period of the SPU reaches its end and the SPU is discarded from the decoder buffer. FIGs. 70A and 70B show the update timing of the Sub-image Unit. 5.5.4.1 Sub-image Unit Header (SPUH) The SPUH comprises the identifier information, and the size and start address of each data item in an SPU.
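The SP_PCK packetization rule above (2048-byte packs, padding only in the last pack) can be sketched as follows. This is a minimal sketch: 0xFF stuffing bytes stand in for the real padding packet / stuffing mechanism, and pack headers are omitted.

```python
PACK_SIZE = 2048  # bytes per SP_PCK

def split_spu(spu: bytes):
    """Split an SPU into 2048-byte SP_PCKs.  Only the last pack may be
    padded; every other pack is exactly full."""
    packs = []
    for off in range(0, len(spu), PACK_SIZE):
        chunk = spu[off:off + PACK_SIZE]
        if len(chunk) < PACK_SIZE:          # can only happen for the last pack
            chunk += b"\xff" * (PACK_SIZE - len(chunk))
        packs.append(chunk)
    return packs
```

For example, a 5000-byte SPU yields three packs, with only the third one padded.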
Table 66 shows the contents of the SPUH. Table 66 (1) SPU_ID The value of this field is (0000h). (2) SPU_SZ Describes the size of an SPU in number of bytes. The maximum SPU size is T.B.D. bytes. The size of an SPU in bytes shall be even; when the size is odd, an (FFh) shall be added to the end of the SPU to make the size even. (3) SP_DCSQT_SA Describes the start address of SP_DCSQT as an RBN (relative byte number) from the first byte of the SPU. 5.5.4.2 Pixel Data (PXD) PXDs are the data compressed from the bitmap data of each line by the specific run-length method described in 5.5.4.2 (a) Run-length compression rule. The number of pixels of a line in the bitmap data shall be equal to the number of pixels displayed on a line, which is set by the "SET_DAREA2" command in SP_DCCMD. Refer to 5.5.4.4 SP Display Control Command. For pixels of the bitmap data, pixel data are assigned as shown in Tables 67 and 68. Table 67 shows the four specified pixel data: Background, Pattern, Emphasis-1 and Emphasis-2. Table 68 shows the other 252 pixel data, used for gradation (grayscale) and the like.
Table 67 Assignment of the Specified Pixel Data
Background pixels: 0 0000 0000
Pattern pixels: 0 0000 0001
Emphasis-1 pixels: 0 0000 0010
Emphasis-2 pixels: 0 0000 0011
Table 68 Assignment of the Other Pixel Data
Note 1: Pixel data from "1 0000 0000b" to "1 0000 0011b" shall not be used. PXD, that is, the run-length-compressed bitmap data, are separated into fields. Within each SPU, the PXD shall be organized such that all subsets of PXD to be displayed during any one field are contiguous. A typical example is the PXD for the top field recorded first (after the SPUH), followed by the PXD for the bottom field; other arrangements are possible. (a) Run-length compression rule The coded data consist of combinations of eight patterns. <In the case of the four specified pixel data, the following four patterns are applied> 1) If only 1 pixel with the same value continues, enter the run-length compression flag (Comp) and enter the pixel data (PIX2 to PIX0) in 3 bits. In this case, Comp and PIX2 are always '0'. The 4 bits are treated as one unit.
Table 69
d0 d1 d2 d3
Comp PIX2 PIX1 PIX0
2) If 2 to 9 pixels with the same value continue, enter the run-length compression flag (Comp), enter the pixel data (PIX2 to PIX0) in 3 bits, enter the length extension flag (LEXT) and enter the run counter (RUN2 to RUN0) in 3 bits. In this case, Comp is always '1', and PIX2 and LEXT are always '0'. The decoder obtains the pixel count by adding 2 to the run counter. The 8 bits are treated as one unit.
Table 70
d0 d1 d2 d3 d4 d5 d6 d7
Comp PIX2 PIX1 PIX0 LEXT RUN2 RUN1 RUN0
3) If 10 to 136 pixels with the same value continue, enter the run-length compression flag (Comp), enter the pixel data (PIX2 to PIX0) in 3 bits, enter the length extension flag (LEXT) and enter the run counter (RUN6 to RUN0) in 7 bits. In this case, Comp and LEXT are always '1', and PIX2 is always '0'. The decoder obtains the pixel count by adding 9 to the run counter.
The 12 bits are treated as one unit.
Table 71
d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11
Comp PIX2 PIX1 PIX0 LEXT RUN6 RUN5 RUN4 RUN3 RUN2 RUN1 RUN0
4) If pixels with the same value continue to the end of a line, enter the run-length compression flag (Comp), enter the pixel data (PIX2 to PIX0) in 3 bits, enter the length extension flag (LEXT) and enter the run counter (RUN6 to RUN0) in 7 bits. In this case, Comp and LEXT are always '1', PIX2 is always '0', and the run counter is always '0'. The 12 bits are treated as one unit.
Table 72
d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11
Comp PIX2 PIX1 PIX0 LEXT RUN6 RUN5 RUN4 RUN3 RUN2 RUN1 RUN0
<In the case of the other 252 pixel data, the following four patterns are applied> 1) If only 1 pixel with the same value continues, enter the run-length compression flag (Comp) and enter the pixel data (PIX7 to PIX0) in 8 bits. In this case, Comp is always '0' and PIX7 is always '1'. The 9 bits are treated as one unit.
Table 73
d0 d1 d2 d3 d4 d5 d6 d7 d8
Comp PIX7 PIX6 PIX5 PIX4 PIX3 PIX2 PIX1 PIX0
2) If 2 to 9 pixels with the same value continue, enter the run-length compression flag (Comp), enter the pixel data (PIX7 to PIX0) in 8 bits, enter the length extension flag (LEXT) and enter the run counter (RUN2 to RUN0) in 3 bits. In this case, Comp and PIX7 are always '1', and LEXT is always '0'. The decoder obtains the pixel count by adding 2 to the run counter. The 13 bits are treated as one unit.
Table 74
d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11 d12
Comp PIX7 PIX6 PIX5 PIX4 PIX3 PIX2 PIX1 PIX0 LEXT RUN2 RUN1 RUN0
3) If 10 to 136 pixels with the same value continue, enter the run-length compression flag (Comp), enter the pixel data (PIX7 to PIX0) in 8 bits, enter the length extension flag (LEXT) and enter the run counter (RUN6 to RUN0) in 7 bits. In this case, Comp, PIX7 and LEXT are always '1'. The decoder obtains the pixel count by adding 9 to the run counter. The 17 bits are treated as one unit.
Table 75
d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11 d12 d13 d14 d15 d16
Comp PIX7 PIX6 PIX5 PIX4 PIX3 PIX2 PIX1 PIX0 LEXT RUN6 RUN5 RUN4 RUN3 RUN2 RUN1 RUN0
4) If pixels with the same value continue to the end of a line, enter the run-length compression flag (Comp), enter the pixel data (PIX7 to PIX0) in 8 bits, enter the length extension flag (LEXT) and enter the run counter (RUN6 to RUN0) in 7 bits. In this case, Comp, PIX7 and LEXT are always '1', and the run counter is always '0'. The 17 bits are treated as one unit.
Table 76
d0 d1 d2 d3 d4 d5 d6 d7 d8 d9 d10 d11 d12 d13 d14 d15 d16
Comp PIX7 PIX6 PIX5 PIX4 PIX3 PIX2 PIX1 PIX0 LEXT RUN6 RUN5 RUN4 RUN3 RUN2 RUN1 RUN0
FIG. 71 is a view for explaining the contents of the information contained in a disk-shaped storage medium according to the embodiment of the invention. The information storage medium 1 shown in FIG. 71 (a) can be configured as a high-density optical disk (a high-density or high-definition digital versatile disk: HD_DVD for brevity) which uses, for example, a red laser with a wavelength of 650 nm or a blue laser with a wavelength of 405 nm (or less). The information storage medium 1 includes, from the inner peripheral side, a lead-in area and a lead-out area 13, as shown in FIG. 71 (b).
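Returning to the run-length compression rules of 5.5.4.2 (a): the four 2-bit/pixel patterns can be sketched as an encoder. This is a minimal sketch with one assumed policy (runs longer than 136 pixels that do not reach the end of the line are split into chunks), not a normative implementation.

```python
def encode_line_2bit(pixels):
    """Run-length encode one line of the four specified pixel values
    (0..3) using the four patterns of 5.5.4.2(a).  Returns the encoded
    unit stream as a bit string."""
    bits = []
    i, n = 0, len(pixels)
    while i < n:
        v = pixels[i]
        run = 1
        while i + run < n and pixels[i + run] == v:
            run += 1
        pix = format(v, "03b")                       # PIX2..PIX0 (PIX2 = 0)
        if i + run == n:                             # pattern 4: to end of line
            bits.append("1" + pix + "1" + "0000000")  # Comp=1, LEXT=1, RUN=0
            i = n
        elif run == 1:                               # pattern 1: 4-bit unit
            bits.append("0" + pix)                   # Comp=0
            i += 1
        elif run <= 9:                               # pattern 2: 8-bit unit
            bits.append("1" + pix + "0" + format(run - 2, "03b"))
            i += run
        else:                                        # pattern 3: 12-bit unit
            chunk = min(run, 136)                    # assumed splitting policy
            bits.append("1" + pix + "1" + format(chunk - 9, "07b"))
            i += chunk
    return "".join(bits)
```

A run of three '1' pixels followed by a trailing '0' thus encodes as an 8-bit unit plus a 12-bit end-of-line unit.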
This information storage medium 1 adopts the ISO 9660 and UDF bridge structures as its file system, and has a volume/file information area 11 for the ISO 9660 and UDF volume and file structures on the lead-in side of the data area 12. The data area 12 allows mixed allocation of a video data recording area 20 used to record DVD-Video content (also called standard content or SD content), another video data recording area (an advanced content recording area for recording the advanced content) 21, and a general computer information recording area 22, as shown in Fig. 71 (c). The video data recording area 20 includes an HD video manager recording area 30 (the High Definition compatible Video Manager [HDVMG]), which records the management information associated with the entire HD_DVD video content recorded in the video data recording area 20; an HD video title set recording area 40 (the High Definition compatible Video Title Set [HDVTS], also called standard VTS), which records the management information and the video information (the video objects) together for each of the respective titles; and an advanced HD video title set recording area 50 (advanced VTS), as shown in Fig. 71 (d). The HD video manager recording area 30 (HDVMG) includes an HD video manager information area 31 (the High Definition compatible Video Manager Information [HDVMGI]), indicating the management information associated with the entire video data recording area 20; an HD video manager information backup area [HDVMGI_BUP], which records the same information as the HD video manager information area 31 as its backup; and a menu video object area 32 (HDVMGM_VOBS), which records a top menu screen indicating the entire video data recording area 20, as shown in Fig. 71 (e).
In the embodiment of the invention, the HD video manager recording area 30 further includes a menu audio object area 33 (HDMENU_AOBS), which records the audio information to be output in parallel with the menu display, and the VOBS of the first-play PGC language selection menu (FP_PGCM_VOBS), which is executed on first access immediately after the disk 1 (the information recording medium) is loaded into a disk drive and is configured to record a screen on which a menu description language code and the like can be set. The HD video title set recording area 40 (HDVTS), which records the management information and the video information (the video objects) together for each title, includes an HD video title set information area 41 (HDVTSI), which records the management information for all the content in the HD video title set recording area 40; an HD video title set information backup area 44 (HDVTSI_BUP), which records the same information as the HD video title set information area 41 as its backup data; a menu video object area 42 (HDVTSM_VOBS), which records the information of the menu screens for each video title set; and a title video object area 43 (HDVTSTT_VOBS), which records the video object data (the video information of the titles) in this video title set. Fig. 72A is a view for explaining an example of the configuration of the Advanced Content in the advanced content recording area 21. The Advanced Content can be recorded in the storage medium, or provided from a server through a network.
The Advanced Content recorded in an Advanced Content area A1 is configured to include the Advanced Navigation, which manages the playback of the Primary/Secondary Video Sets, the text/graphics presentation and the audio output, and the Advanced Data, which includes the data handled by the Advanced Navigation. The Advanced Navigation recorded in an Advanced Navigation area A11 includes the Playlist files, the Loading Information files, the Markup files (for content, styling and timing information) and the Script files. The Playlist files are recorded in a Playlist file area A111. The Loading Information files are recorded in a Loading Information file area A112. The Markup files are recorded in a Markup file area A113. The Script files are recorded in a Script file area A114. Likewise, the Advanced Data recorded in an Advanced Data area A12 includes the Primary Video Set (VTSI, TMAP and P-EVOB), a Secondary Video Set (TMAP and S-EVOB), Advanced Elements (JPEG, PNG, MNG, L-PCM, OpenType fonts, etc.), and the like. The Primary Video Set is recorded in a Primary Video Set area A121. The Secondary Video Set is recorded in a Secondary Video Set area A122. The Advanced Elements are recorded in an Advanced Element area A123. The Advanced Navigation includes a Playlist file, the Loading Information files, the Markup files (for content, styling and timing information) and the Script files. The Playlist files, the Loading Information files and the Markup files shall be encoded as XML documents. The Script files shall be encoded as text files in UTF-8 encoding. The XML documents for Advanced Navigation shall be well-formed, subject to the rules of this section. XML documents which are not well-formed XML shall be rejected by the Advanced Navigation Engine. The XML documents for Advanced Navigation shall be well-formed documents.
But if the document resources are not well-formed, they may be rejected by the Advanced Navigation Engine. The XML documents shall be valid according to their referenced document type definition (DTD). The Advanced Navigation Engine is not required to have the capability to validate content. If an XML document resource is not well-formed, the behavior of the Advanced Navigation Engine is not guaranteed. The following rules apply to the XML declaration. • The encoding declaration shall be "UTF-8" or "ISO-8859-1"; XML files shall be encoded in one of them.
• The value of the standalone document declaration in the XML declaration, if present, shall be "no". If the standalone document declaration is not present, its value shall be considered to be "no". Each resource available on the disc or on the network has an address that is encoded by a Uniform Resource Identifier defined in [URI, RFC 2396]. The supported protocols are T.B.D., and the path on the DVD disc takes the form file://dvdrom:/DVD_advnav/….xml Playlist File (Fig. 85) The Playlist file describes the initial configuration of the HD DVD player system for advanced content titles. For each Title, a set of Object Mapping Information and a Playback Sequence shall be described in the Playlist file. As for the Title, the Object Mapping Information and the Playback Sequence, refer to the Presentation Timing Model. The Playlist file shall be encoded as well-formed XML, subject to the rules in the XML Document File section. The document type of the Playlist file is defined in this section. Elements and attributes In this section, the Playlist file is defined using the XML Syntax Representation. 1) PlayList element The PlayList element is the root element of the Playlist. XML Syntax Representation of the PlayList element: <PlayList> TitleSet Configuration </PlayList> A PlayList element consists of a TitleSet element for a set of Title information and a Configuration element for the System Configuration Information. 2) TitleSet element The TitleSet element describes the information of a set of Titles for the Advanced Contents in the Playlist. XML Syntax Representation of the TitleSet element: <TitleSet> Title* </TitleSet> The TitleSet element consists of a list of Title elements. According to the document order of the Title elements, the Title number for Advanced Navigation is assigned continuously from '1'. A Title element describes the information of each Title.
3) Title element The Title element describes the information of a Title for the Advanced Contents, which consists of the Object Mapping Information and the Playback Sequence of the Title. XML Syntax Representation of the Title element: <Title id=ID hidden=(true|false) onExit=positiveInteger> PrimaryVideoTrack? SecondaryVideoTrack? ComplementaryAudioTrack? ComplementarySubtitleTrack? ApplicationTrack* ChapterList? </Title> The content of the Title element consists of an element fragment for the tracks and a ChapterList element. The element fragment for the tracks consists of a list of PrimaryVideoTrack, SecondaryVideoTrack, ComplementaryAudioTrack, ComplementarySubtitleTrack and ApplicationTrack elements. The Object Mapping Information of a Title is described by the element fragment for the tracks. The mapping of each Presentation Object on the Title Timeline is described by the corresponding element: the Primary Video Set corresponds to PrimaryVideoTrack, the Secondary Video Set to SecondaryVideoTrack, the Complementary Audio to ComplementaryAudioTrack, the Complementary Subtitles to ComplementarySubtitleTrack, and an ADV_APP to ApplicationTrack. A Title Timeline is assigned to each Title. As for the Title Timeline, refer to 4.3.20 Presentation Timing Model.
The Playback Sequence information of a Title, which consists of chapter points, is described by the ChapterList element. (a) hidden attribute Describes whether the Title can be navigated by User Operation or not. If the value is "true", the Title cannot be navigated by User Operation. The value may be omitted; the default value is "false". (b) onExit attribute T.B.D. Describes the Title which shall be played by the Player after the current Title is played. The Player shall not jump to it if the playback of the current Title stops before the end of the Title. 4) PrimaryVideoTrack element PrimaryVideoTrack describes the Object Mapping Information of the Primary Video Set in a Title. XML Syntax Representation of the PrimaryVideoTrack element: <PrimaryVideoTrack id=ID> (Clip | ClipBlock)+ </PrimaryVideoTrack> The content of PrimaryVideoTrack is a list of Clip elements and ClipBlock elements, which refer to P-EVOBs in the Primary Video Set as the Presentation Objects. The Player shall pre-assign the P-EVOB(s) on the Title Timeline using the start and end times described in the Clip elements. The P-EVOBs assigned on a Title Timeline shall not overlap each other. 5) SecondaryVideoTrack element SecondaryVideoTrack describes the Object Mapping Information of the Secondary Video Set in a Title. XML Syntax Representation of the SecondaryVideoTrack element: <SecondaryVideoTrack id=ID sync=(true|false)> Clip+ </SecondaryVideoTrack> The content of SecondaryVideoTrack is a list of Clip elements, which refer to an S-EVOB in the Secondary Video Set as the Presentation Object. The Player shall pre-assign the S-EVOB(s) on the Title Timeline using the start and end times described in the Clip elements.
The Player shall assign each Clip and ClipBlock on the Title Timeline by the titleTimeBegin and titleTimeEnd attributes of the Clip element as the start and end positions of the Clip on the Title Timeline. The S-EVOBs assigned on a Title Timeline shall not overlap each other. If the sync attribute is 'true', the Secondary Video Set shall be synchronized with the time on the Title Timeline. If the sync attribute is 'false', the Secondary Video Set shall run on its own time. (a) sync attribute If the value of the sync attribute is 'true' or is omitted, the Presentation Object in SecondaryVideoTrack is a Synchronized Object. If the value of the sync attribute is 'false', it is a Non-synchronized Object. 6) ComplementaryAudioTrack element ComplementaryAudioTrack describes the Object Mapping Information of the Complementary Audio in a Title and its assignment to an Audio Stream Number. XML Syntax Representation of the ComplementaryAudioTrack element: <ComplementaryAudioTrack id=ID streamNumber=Number languageCode=token> Clip+ </ComplementaryAudioTrack> The content of the ComplementaryAudioTrack element is a list of Clip elements, which refer to Complementary Audio as the Presentation Object. The Player shall pre-assign the Complementary Audio on the Title Timeline as described in the Clip elements. The Complementary Audio assigned on a Title Timeline shall not overlap each other. The Complementary Audio shall be assigned to the specified Audio Stream Number. If the Audio_stream_Change API selects the specified stream number of the Complementary Audio, the Player shall select the Complementary Audio instead of the audio stream in the Primary Video Set. (a) streamNumber attribute Describes the Audio Stream Number for this Complementary Audio. (b) languageCode attribute Describes the specific code and the specific code extension for this Complementary Audio.
As for the specific code and the specific code extension, refer to Annex B. The value of the languageCode attribute follows the following BNF scheme, where specificCode and specificCodeExt describe the specific code and the specific code extension, respectively: languageCode := specificCode ':' specificCodeExt specificCode := [A-Za-z][A-Za-z0-9] specificCodeExt := [0-9A-F][0-9A-F] 7) ComplementarySubtitleTrack element ComplementarySubtitleTrack describes the Object Mapping Information of the Complementary Subtitles in a Title and their assignment to a Sub-image Stream Number. XML Syntax Representation of the ComplementarySubtitleTrack element: <ComplementarySubtitleTrack id=ID streamNumber=Number languageCode=token> Clip+ </ComplementarySubtitleTrack> The content of the ComplementarySubtitleTrack element is a list of Clip elements, which refer to Complementary Subtitles as the Presentation Object. The Player shall pre-assign the Complementary Subtitles on the Title Timeline as described in the Clip elements. The Complementary Subtitles assigned on a Title Timeline shall not overlap each other. The Complementary Subtitles shall be assigned to the specified Sub-image Stream Number. If the Sub-picture_stream_Change API selects the stream number of the Complementary Subtitles, the Player shall select the Complementary Subtitles instead of the Sub-image stream in the Primary Video Set. (a) streamNumber attribute Describes the Sub-image Stream Number for these Complementary Subtitles. (b) languageCode attribute Describes the specific code and the specific code extension for these Complementary Subtitles. As for the specific code and the specific code extension, refer to Annex B. The value of the languageCode attribute follows the following BNF scheme, where specificCode and specificCodeExt describe the specific code and the specific code extension, respectively.
languageCode := specificCode ':' specificCodeExt specificCode := [A-Za-z][A-Za-z0-9] specificCodeExt := [0-9A-F][0-9A-F] 8) ApplicationTrack element The ApplicationTrack element describes the Object Mapping Information of an ADV_APP in a Title. XML Syntax Representation of the ApplicationTrack element: <ApplicationTrack id=ID loadingInformation=anyURI sync=(true|false) language=string/> The ADV_APP is scheduled over the entire Title Timeline. If the Player starts the playback of a Title, the Player shall launch the ADV_APP according to the Loading Information file specified by the loadingInformation attribute. If the Player leaves the playback of the Title, the ADV_APP in the Title shall be terminated.
If the sync attribute is 'true', the ADV_APP shall be synchronized with the time on the Title Timeline. If the sync attribute is 'false', the ADV_APP shall run on its own time. (a) loadingInformation attribute Describes the URI of the Loading Information file which describes the initialization information of the application. (b) sync attribute If the value of the sync attribute is 'true', the ADV_APP in ApplicationTrack is a Synchronized Object. If the value of the sync attribute is 'false', it is a Non-synchronized Object. 9) Clip element A Clip element describes the life period information (start time to end time) on the Title Timeline of a Presentation Object. XML Syntax Representation of the Clip element: <Clip id=ID titleTimeBegin=timeExpression titleTimeEnd=timeExpression clipTimeBegin=timeExpression src=anyURI preload=timeExpression xml:base=anyURI> (UnavailableAudioStream | UnavailableSubpictureStream)* </Clip> The life period on the Title Timeline of a Presentation Object is determined by the start time and the end time on the Title Timeline. The start time and the end time on the Title Timeline are described by the titleTimeBegin attribute and the titleTimeEnd attribute, respectively. The starting position within the Presentation Object is described by the clipTimeBegin attribute. At the start time on the Title Timeline, the Presentation Object shall be present at the starting position described by clipTimeBegin. The Presentation Object is referenced by the URI of its index information file. For the Primary Video Set, the TMAP file for a P-EVOB shall be referenced. For the Secondary Video Set, the TMAP file for an S-EVOB shall be referenced. For the Complementary Audio and the Complementary Subtitles, the TMAP file for the S-EVOB of the Secondary Video Set that includes the object shall be referenced.
The attribute values of titleTimeBegin, titleTimeEnd and clipTimeBegin, and the duration of the Presentation Object, shall satisfy the following relationship: titleTimeBegin < titleTimeEnd and clipTimeBegin + titleTimeEnd - titleTimeBegin <= duration of the Presentation Object. UnavailableAudioStream and UnavailableSubpictureStream shall be present only for a Clip element in the PrimaryVideoTrack element. (a) titleTimeBegin attribute Describes the start time of the continuous fragment of the Presentation Object on the Title Timeline. The value shall be described in timeExpression. (b) titleTimeEnd attribute Describes the end time of the continuous fragment of the Presentation Object on the Title Timeline. The value shall be described in timeExpression. (c) clipTimeBegin attribute Describes the start position in a Presentation Object. The value shall be described in timeExpression. The clipTimeBegin attribute may be omitted. If the clipTimeBegin attribute is not present, the start position shall be '0'. (d) src attribute Describes the URI of the index information file of the Presentation Object to be referenced. (e) preload attribute TBD. Describes the time, on the Title Timeline, when the Player shall begin to prefetch the Presentation Object. 10) ClipBlock element The ClipBlock element describes a group of Clips in P-EVOBS, which is called a Clip Block. One of the Clips is chosen for presentation. Representation of the XML Syntax of the ClipBlock element: < ClipBlock > Clip + < / ClipBlock > All the Clips in a ClipBlock shall have the same start time and the same end time. The ClipBlock shall be scheduled on the Title Timeline using the start and end times of the first child Clip. The ClipBlock may be used only in the PrimaryVideoTrack. The ClipBlock represents an Angle Block.
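The timing relationship above can be expressed as a small validity check. The function below is an illustrative sketch, not specification text; all times are taken as 90 kHz timeExpression integers, and the attribute names follow the cleaned-up forms titleTimeBegin, titleTimeEnd and clipTimeBegin.

```python
def clip_schedule_is_valid(title_time_begin: int,
                           title_time_end: int,
                           clip_time_begin: int,
                           object_duration: int) -> bool:
    """Check the Clip timing rule:
         titleTimeBegin < titleTimeEnd
         clipTimeBegin + (titleTimeEnd - titleTimeBegin) <= object duration
       All values are 90 kHz timeExpression ticks; clipTimeBegin defaults
       to 0 when the attribute is omitted."""
    if title_time_begin >= title_time_end:
        return False
    return clip_time_begin + (title_time_end - title_time_begin) <= object_duration
```
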
According to the document order of the Clip elements, the Angle number for Advanced Navigation shall be assigned continuously from '1'. By default, the Player will select the first Clip to be presented. If the Change_Angle API selects the specified Angle number in the ClipBlock, the Player will select the corresponding Clip to be presented. 11) UnavailableAudioStream element The UnavailableAudioStream element in a Clip element describes a Decoding Audio Stream in P-EVOBS that is not available during the presentation period of this Clip. Representation of the XML Syntax of the UnavailableAudioStream element: < UnavailableAudioStream number = integer / > The UnavailableAudioStream element shall be used only in a Clip element for a P-EVOB, which is in a PrimaryVideoTrack element. Otherwise, the UnavailableAudioStream element shall not be presented. The Player shall disable the Decoding Audio Stream specified by the number attribute. 12) UnavailableSubpictureStream element The UnavailableSubpictureStream element in a Clip element describes a Decoding Sub-picture Stream in P-EVOBS that is not available during the presentation period of this Clip. Representation of the XML Syntax of the UnavailableSubpictureStream element: < UnavailableSubpictureStream number = integer / > The UnavailableSubpictureStream element may be used only for a P-EVOB, which is in the PrimaryVideoTrack element. Otherwise, the UnavailableSubpictureStream element shall not be presented. The Player shall disable the Decoding Sub-picture Stream specified by the number attribute. 13) ChapterList element The ChapterList element in a Title element describes the Playback Sequence Information for this Title. The Playback Sequence defines the start position of each chapter by a time value on the Title Timeline. Representation of the XML Syntax of the ChapterList element: < ChapterList > Chapter + < / ChapterList > The ChapterList element consists of a list of Chapter elements.
The Chapter element describes the start position of a chapter on the Title Timeline. According to the document order of the Chapter elements in the ChapterList, the Chapter number for Advanced Navigation shall be assigned continuously from '1'. The start positions of the chapters on the Title Timeline shall increase monotonically according to the Chapter number. 14) Chapter element The Chapter element describes a start position of the chapter on the Title Timeline in a Playback Sequence. Representation of the XML Syntax of the Chapter element: < Chapter id = ID titleTimeBegin = timeExpression / > The Chapter element shall have a titleTimeBegin attribute. The timeExpression value of the titleTimeBegin attribute describes a start position of the chapter on the Title Timeline. (1) titleTimeBegin attribute Describes the start position of the chapter on the Title Timeline in a Playback Sequence. The value shall be described in the timeExpression defined in [6.2.3.3]. Data Types 1) timeExpression Describes a time code value in units of 90 kHz by a non-negative integer value. Load Information File The Load Information file is the initialization information of an ADV_APP for a title. The Player shall launch an ADV_APP according to the information in the Load Information file. The ADV_APP consists of the presentation of a Markup file and the execution of a Script. The initialization information described in a Load Information file is as follows: • The files to be stored in the File Cache initially, before executing the initial Markup file • The initial Markup file to be executed • The Script file to be executed The Load Information file shall be coded as well-formed XML, subject to the rules in 6.2.1 XML Document File. The document type of the Load Information file shall follow this section.
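Since the timeExpression data type above counts ticks of a 90 kHz clock, conversion to and from seconds is a simple multiplication. The helper below is illustrative only and not part of the specification.

```python
TICKS_PER_SECOND = 90_000  # timeExpression unit: 90 kHz time code

def seconds_to_time_expression(seconds: float) -> int:
    """Convert seconds to a non-negative 90 kHz timeExpression value."""
    if seconds < 0:
        raise ValueError("timeExpression is non-negative")
    return round(seconds * TICKS_PER_SECOND)

def time_expression_to_seconds(ticks: int) -> float:
    """Convert a 90 kHz timeExpression value back to seconds."""
    return ticks / TICKS_PER_SECOND
```
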
Elements and Attributes In this section, the syntax of the Load Information file is specified using the Representation of the XML Syntax. 1) Application element The Application element is the root element of the Load Information file. It contains the following elements and attributes. Representation of the XML Syntax of the Application element: < Application id = ID > Resource * Script ? Markup ? Boundary ? < / Application > 2) Resource element Describes a file which shall be stored in the File Cache before executing the initial Markup. Representation of the XML Syntax of the Resource element: < Resource id = ID src = anyURI / > (a) src attribute Describes the URI of the file to be stored in the File Cache. 3) Script element Describes the initial Script file for the ADV_APP. Representation of the XML Syntax of the Script element: < Script id = ID src = anyURI / > At application startup, the Script Engine shall load the script file referred to by the URI in the src attribute, and then execute it as the global code. [ECMA 10.2.10] (a) src attribute Describes the URI of the initial script file. 4) Markup element Describes the initial Markup file for the ADV_APP. Representation of the XML Syntax of the Markup element: < Markup id = ID src = anyURI / > At application startup, after the execution of the initial script file if it exists, Advanced Navigation shall load the Markup file referred to by the URI in the src attribute. (a) src attribute Describes the URI of the initial Markup file. 5) Boundary element T.B.D. Defines the list of valid URLs that the application can refer to. Markup File A Markup file is the information of the Presentation Object on the Graphics Plane. Only one Markup file is present in an application at one time. A Markup file consists of a content model, styling and timing.
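A Load Information file with the elements described above can be read with any XML parser. The snippet below builds and parses a minimal, entirely hypothetical example (the ids, file names and URIs are invented for illustration); it assumes only the element and attribute names given in this section.

```python
import xml.etree.ElementTree as ET

# Minimal, invented Load Information document using the elements above.
LOADINFO = """\
<Application id="app0">
  <Resource id="r0" src="file:///dvddisc/ADV_OBJ/logo.png"/>
  <Script id="s0" src="file:///dvddisc/ADV_OBJ/startup.js"/>
  <Markup id="m0" src="file:///dvddisc/ADV_OBJ/menu.xmu"/>
</Application>
"""

root = ET.fromstring(LOADINFO)
# Files to preload into the File Cache before the initial Markup runs:
resources = [r.get("src") for r in root.findall("Resource")]
script_src = root.find("Script").get("src")   # initial script (global code)
markup_src = root.find("Markup").get("src")   # initial markup file
```
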
For more details, see 7 Declarative Language Definition. [This Markup corresponds to the iHD markup.] Script File A Script file describes the global code of the Script.
The Script Engine executes a script file at the startup of the ADV_APP and then waits for events in the event handlers defined by the executed global code of the Script. The Script can control the Playback Sequence and the graphics on the Graphics Plane through events such as User Input Events and Player events. FIG. 84 is a view showing another exemplary secondary enhanced video object (S-EVOB) (another example is FIG. 83). In the example of FIG. 83, an S-EVOB is composed of one or more EVOBUs. In the example of FIG. 84, however, an S-EVOB is composed of one or more Time Units (TUs). Each TU can include a group of audio packs for an S-EVOB (A_PCK for Secondary) or of Timed Text packs for an S-EVOB (TT_PCK for Secondary) (for TT_PCK, refer to Table 23). Note that a Playlist file, which is described in XML (markup language), is assigned to the disc. A playback device (player) for this disc is configured to play this Playlist first (before Advanced Content playback) when the disc has Advanced Content. This Playlist file may include the following pieces of information (see FIG. 85 to be described later): * Object Mapping Information (the information that is included in each title and is used to reproduce the assigned objects on the timeline of this title); * Playback Sequence (the playback information for each title, which is described based on the title timeline); and * Configuration Information (information for system configurations, such as memory buffer alignment, etc.). Note that the Primary Video Set is configured to include the Video Title Set Information (VTSI), an Enhanced Video Object Set for the Video Title Set (VTS_EVOBS), a Backup of the Video Title Set Information (VTSI_BUP), and Time Map Information for the Video Title Set (VTS_TMAP). FIG. 73 is a view for explaining an example configuration of the video title set information (VTSI).
The VTSI describes the information of a video title set. This information makes it possible to describe the attribute information of each EVOB. The VTSI starts from a Video Title Set Information Management Table (VTSI_MAT), followed by an Enhanced Video Object Attribute Information Table of the Video Title Set (VTS_EVOB_ATRT) and an Enhanced Video Object Information Table of the Video Title Set (VTS_EVOBIT). Note that each table is aligned with the boundary between neighboring logical blocks. For this boundary alignment, each table may be followed by up to 2047 padding bytes (which may contain 00h). Table 77 is a view for explaining an example configuration of the information management table of the video title set (VTSI_MAT).
Table 77 VTSI_MAT In this table, the VTS_ID, which is assigned first as relative byte position (RBP), describes "ADVANCED_VTS" used to identify the VTSI file, using the ISO 646 character set codes (a-characters). The next VTS_EA describes the end address of the VTS of interest using a relative block number from the first logical block of that VTS. The following VTSI_EA describes the end address of the VTSI of interest using a relative block number from the first logical block of that VTSI. The following VERN describes the version number of the DVD Video specification of interest. Table 78 is a view for explaining an example configuration of a VERN. Table 78 VERN b15 b14 b13 b12 b11 b10 b9 b8: reserved; b7 b6 b5 b4 b3 b2 b1 b0: Part version of the Book. Part version of the Book ... 0010 0000b: version 2.0; others: reserved. Table 79 is a view for explaining an example configuration of a category of the video title set (VTS_CAT). This VTS_CAT is assigned after the VERN in Tables 77 and 78, and includes the information bits of an Application type. With this Application type, an Advanced VTS (= 0010b), an Interoperable VTS (= 0011b), or others can be discriminated. After the VTS_CAT in Tables 77 and 78 are assigned the end address of the VTSI_MAT (VTSI_MAT_EA), the start address of the VTS_EVOB_ATRT (VTS_EVOB_ATRT_SA), the start address of the VTS_EVOBIT (VTS_EVOBIT_SA), the start address of the VTS_EVOBS (VTS_EVOBS_SA), and others (reserved). Table 79 VTS_CAT b31 b30 b29 b28 b27 b26 b25 b24: reserved; b23 b22 b21 b20 b19 b18 b17 b16: reserved; b15 b14 b13 b12 b11 b10 b9 b8: reserved; b7 b6 b5 b4 b3 b2 b1 b0: reserved, Application type. Application type ... 0010b: Advanced VTS; 0011b: Interoperable VTS; others: reserved. FIG.
72B is a view for explaining an example of a time map (TMAP) configuration, which includes as one element the time map information (TMAPI) used to convert the playback time in a primary enhanced video object (P-EVOB) into the address of an enhanced video object unit (EVOBU). This TMAP starts from the TMAP General Information (TMAP_GI). A TMAPI Search Pointer (TMAPI_SRP) and TMAP Information (TMAPI) follow the TMAP_GI, and ILVU Information (ILVUI) is assigned at the end. Table 80 is a view for explaining an example configuration of the general time map information (TMAP_GI).
TABLE 80 TMAP_GI This TMAP_GI is configured to include: TMAP_ID, which describes "HDDVD-V_TMAP" to identify a Time Map file by the character codes of ISO/IEC 646:1983 (a-characters); TMAP_EA, which describes the end address of the TMAP of interest with a relative logical block number from the first logical block of the TMAP of interest; VERN, which describes the version number of the Book of interest; TMAPI_Ns, which describes the number of TMAPI fragments in the TMAP of interest; ILVUI_SA, which describes the start address of the ILVUI with a relative logical block number from the first logical block of the TMAP of interest; EVOB_ATR_SA, which describes the start address of the EVOB_ATR of interest with a relative logical block number from the first logical block of the TMAP of interest; copy protection information (CPI); and the like. Recorded contents can be protected against illegal or unauthorized use by the copy protection information, on a time map (TMAP) basis. Here, the TMAP can be used to convert a given presentation time in an EVOB into the address of an EVOBU or the address of a Time Unit TU (a TU represents an access unit for an EVOB that does not include any video packs). In the TMAP for a Primary Video Set, TMAPI_Ns is set to '1'. In the TMAP for a Secondary Video Set which does not have any TMAPI (for example, streaming of live content), TMAPI_Ns is set to '0'. If there is no ILVUI in the TMAP (that for a contiguous block), the ILVUI_SA is filled with '1b' (i.e., FFh) or the like. In addition, when the TMAP for a Primary Video Set does not include any EVOB_ATR, the EVOB_ATR_SA is filled with '1b' or the like.
Table 81 is a view for explaining an example configuration of the time map type (TMAP_TY). This TMAP_TY is configured to include the information bits of ILVUI, ATR, and Angle. If the ILVUI bit in the TMAP_TY is 0b, this indicates that there is no ILVUI in the TMAP of interest, that is, the TMAP of interest is for a contiguous block or others. If the ILVUI bit in the TMAP_TY is 1b, this indicates that there is an ILVUI in the TMAP of interest, that is, the TMAP of interest is for an interleaved block. Table 81 TMAP_TY b15 b14 b13 b12 b11 b10 b9 b8: reserved, ILVUI, ATR; b7 b6 b5 b4 b3 b2 b1 b0: reserved, Angle. ILVUI ... 0b: ILVUI does not exist in this TMAP, that is, this TMAP is for contiguous blocks or others. 1b: ILVUI exists in this TMAP, that is, this TMAP is for interleaved blocks. ATR ... 0b: EVOB_ATR does not exist in this TMAP, that is, this TMAP is for the Primary Video Set. 1b: EVOB_ATR exists in this TMAP, that is, this TMAP is for the Secondary Video Set. (This value is not allowed in the TMAP for a Primary Video Set.) Angle ... 00b: no Angle Block; 01b: non-seamless Angle Block; 10b: seamless Angle Block; 11b: reserved. Note: The value '01b' or '10b' in "Angle" can be set only if the ILVUI bit = '1b'. If the ATR bit in the TMAP_TY is 0b, this specifies that there is no EVOB_ATR in the TMAP of interest, and the TMAP of interest is a time map for a Primary Video Set. If the ATR bit in the TMAP_TY is 1b, this specifies that there is an EVOB_ATR in the TMAP of interest, and the TMAP of interest is a time map for a Secondary Video Set. If the Angle bits in the TMAP_TY are 00b, they specify that there is no Angle Block; if these bits are 01b, they specify a non-seamless Angle Block; and if these bits are 10b, they specify a seamless Angle Block. The Angle bits = 11b in the TMAP_TY are reserved for other purposes. Note that the value 01b or 10b in the Angle bits can be set only when the ILVUI bit is 1b.
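The TMAP_TY flags can be unpacked with simple bit operations. The layout assumed below (ILVUI at bit 9, ATR at bit 8, Angle at bits 1..0) is one interpretation of the reconstructed Table 81, not normative text.

```python
def parse_tmap_ty(tmap_ty: int) -> dict:
    """Unpack TMAP_TY, assuming ILVUI at bit 9, ATR at bit 8 and Angle at
       bits 1..0 (an interpretation of Table 81)."""
    ilvui = (tmap_ty >> 9) & 0x1
    atr = (tmap_ty >> 8) & 0x1
    angle = tmap_ty & 0x3
    angle_meaning = {0b00: "no Angle Block",
                     0b01: "non-seamless Angle Block",
                     0b10: "seamless Angle Block",
                     0b11: "reserved"}[angle]
    # 01b/10b in Angle may only be set when ILVUI = 1b (interleaved block)
    valid = not (angle in (0b01, 0b10) and ilvui == 0)
    return {"ilvui": ilvui, "atr": atr, "angle": angle_meaning, "valid": valid}
```
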
Table 82 is a view for explaining an example configuration of the time map information search pointer (TMAPI_SRP). This TMAPI_SRP is configured to include: TMAPI_SA, which describes the start address of the TMAPI with a relative logical block number from the first logical block of the TMAP of interest; VTS_EVOBIN, which describes the number of the VTS_EVOBI which is referred to by the TMAPI of interest; EVOBU_ENT_Ns, which describes the number of EVOBU_ENTI fragments for the TMAPI of interest; and ILVU_ENT_Ns, which describes the number of ILVU_ENTs for the TMAPI of interest (if there is no ILVUI in the TMAP of interest, that is, if the TMAP is for contiguous blocks, the value of ILVU_ENT_Ns is '0'). Table 82 TMAPI_SRP FIG. 74 is a view for explaining a configuration example of the time map information (TMAPI of a Primary Video Set), which starts from entry information (EVOBU_ENT#i) of one or more enhanced video object units. The TMAP Information (TMAPI), as an element of a Time Map (TMAP), is used to convert the playback time in an EVOB into the address of an EVOBU. This TMAPI includes one or more EVOBU Entries. A TMAPI for a contiguous block is stored in a single file, which is called a TMAP file. Note that one or more TMAPIs belonging to an identical interleaved block are stored in a single file. This TMAPI is configured to start from one or more EVOBU Entries (EVOBU_ENTs).
Table 83 is a view for explaining an example configuration of the entry information of enhanced video object units (EVOBU_ENTI). This EVOBU_ENTI is configured to include 1STREF_SZ (Upper), 1STREF_SZ (Lower), EVOBU_PB_TM (Upper), EVOBU_PB_TM (Lower), EVOBU_SZ (Upper) and EVOBU_SZ (Lower). Table 83 EVOBU Entry (EVOBU_ENT) b31 b30 b29 b28 b27 b26 b25 b24: 1STREF_SZ (Upper); b23 b22 b21 b20 b19 b18 b17 b16: 1STREF_SZ (Lower), EVOBU_PB_TM (Upper); b15 b14 b13 b12 b11 b10 b9 b8: EVOBU_PB_TM (Lower), EVOBU_SZ (Upper); b7 b6 b5 b4 b3 b2 b1 b0: EVOBU_SZ (Lower). 1STREF_SZ ... Describes the size of the 1st Reference Picture of this EVOBU. The size of the 1st Reference Picture is defined as the number of packs from the first pack of this EVOBU to the pack that includes the last byte of the first encoded reference picture of this EVOBU. Note (TBD): the "reference picture" is defined as one of the following: - an I-picture which is encoded as a frame structure; - a pair of I-pictures, both of which are encoded as a field structure; - an I-picture followed immediately by a P-picture, both of which are encoded as a field structure. EVOBU_PB_TM ... Describes the Playback Time of this EVOBU, which is specified by the number of video fields in this EVOBU. EVOBU_SZ ... Describes the size of this EVOBU, which is specified by the number of packs in this EVOBU.
The 1STREF_SZ describes the size of the 1st Reference Picture of the EVOBU of interest. The size of the 1st Reference Picture can be defined as the number of packs from the first pack of the EVOBU of interest to the pack that includes the last byte of the first encoded reference picture of the EVOBU of interest. Note that the "reference picture" can be defined as one of the following: an I-picture which is encoded as a frame structure; a pair of I-pictures, both of which are encoded as field structures; and an I-picture immediately followed by a P-picture, both of which are encoded as field structures. The EVOBU_PB_TM describes the playback time of the EVOBU of interest, which can be specified by the number of video fields in the EVOBU of interest. In addition, the EVOBU_SZ describes the size of the EVOBU of interest, which can be specified by the number of packs in the EVOBU of interest.
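Because EVOBU_PB_TM counts video fields rather than seconds, the duration in seconds depends on the field rate of the stream. The helper below is an illustrative sketch; the 60.0 Hz value in the test is an assumed example rate, not one mandated by this section.

```python
def evobu_playback_seconds(evobu_pb_tm: int, field_rate_hz: float) -> float:
    """EVOBU_PB_TM gives the EVOBU playback time as a count of video fields,
       so the duration in seconds is fields / field rate (e.g. ~59.94 Hz for
       interlaced NTSC-rate material -- an illustrative value only)."""
    if evobu_pb_tm < 0 or field_rate_hz <= 0:
        raise ValueError("invalid field count or field rate")
    return evobu_pb_tm / field_rate_hz
```
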
FIG. 75 is a view for explaining an example configuration of the interleaved unit information (ILVUI for a Primary Video Set), which exists when the time map information is for an interleaved block. This ILVUI includes one or more ILVU Entries (ILVU_ENTs). This information (ILVUI) exists when the TMAPI is for an Interleaved Block. Table 84 is a view for explaining an example configuration of the entry information of interleaved units (ILVU_ENTI). This ILVU_ENTI is configured to include ILVU_ADR, which describes the start address of the ILVU of interest with a relative logical block number from the first logical block of the EVOB of interest, and ILVU_SZ, which describes the size of the ILVU of interest. This size can be specified by the number of EVOBUs.
Table 84 ILVU_ENT FIG. 76 is a view showing an example of a TMAP for a contiguous block. FIG. 77 is a view showing an example of a TMAP for an interleaved block. FIG. 77 shows each of a plurality of TMAP files that individually have TMAPI and ILVUI. Table 85 is a view for explaining a list of pack types in an enhanced video object. This list of pack types has a Navigation pack (NV_PCK) configured to include General Control Information (GCI) and Data Search Information (DSI), a Main Video pack (VM_PCK) configured to include video data (MPEG-2 / MPEG-4 AVC / SMPTE VC-1, etc.), a Sub Video pack (VS_PCK) configured to include video data (MPEG-2 / MPEG-4 AVC / SMPTE VC-1, etc.), a Main Audio pack (AM_PCK) configured to include audio data (Dolby Digital Plus (DD+) / MPEG / Linear PCM / DTS-HD / Packed PCM (MLP) / SDDS (option), etc.), a Sub Audio pack (AS_PCK) configured to include audio data (Dolby Digital Plus (DD+) / MPEG / Linear PCM / DTS-HD / Packed PCM (MLP), etc.), a Sub-picture pack (SP_PCK) configured to include Sub-picture data, and an Advanced pack (ADV_PCK) configured to include Advanced Content data. Table 85 Note that the Main Video pack (VM_PCK) in the Primary Video Set follows the definition of the V_PCK in the Standard Content. The Sub Video pack in the Primary Video Set follows the definition of the V_PCK in the Standard Content, except for the stream_id and the P-STD_buffer_size (see FIG. 202). Table 86 is a view for explaining an example of transfer rate restrictions on the streams of an enhanced video object. In this example of transfer rate restrictions, an EVOB is set with a restriction of 30.24 Mbps on the total streams. The Main Video stream is set with a restriction of 29.40 Mbps (HD) or 15.00 Mbps (SD) on the total streams, and a restriction of 29.40 Mbps (HD) or 15.00 Mbps (SD) on one stream.
The Main Audio streams are set with a restriction of 19.60 Mbps on the total streams, and a restriction of 18.432 Mbps on one stream. The Sub-picture streams are set with a restriction of 19.60 Mbps on the total streams, and a restriction of 10.08 Mbps on one stream.
Table 86 Transfer rate The restriction on the Sub-picture streams in an EVOB shall be defined by the following rules: a) For all Sub-picture packs which have the same sub_stream_id (SP_PCK(i)): SCR(n) <= SCR(n+100) - T_300packs, where n: 1 to (number of SP_PCK(i)s - 100); SCR(n): SCR of the n-th SP_PCK(i); SCR(n+100): SCR of the 100th SP_PCK(i) after the n-th SP_PCK(i); T_300packs: the value 4,388,570 (= 27x10^6 x 300 x 2048 x 8 / 30.24x10^6). b) For all Sub-picture packs (SP_PCK(all)) in an EVOB which can be seamlessly connected to the succeeding EVOB: SCR(n) <= SCR(last) - T_90packs, where n: 1 to (number of SP_PCK(all)s); SCR(n): SCR of the n-th SP_PCK(all); SCR(last): SCR of the last pack in the EVOB; T_90packs: the value 1,316,570 (= 27x10^6 x 8 x 2048 x 90 / 30.24x10^6). Note: at least the first pack of the succeeding EVOB is not an SP_PCK. T_90packs plus T_10packs guarantees ten successive packs. FIGS. 78, 79 and 80 are views for explaining an example configuration of a primary enhanced video object (P-EVOB). An EVOB (here meaning a Primary EVOB, i.e. "P-EVOB") includes some of the Presentation Data and the Navigation Data. As the Navigation Data included in the EVOB, General Control Information (GCI), Data Search Information (DSI), and the like are included. As the Presentation Data, the Main/Sub video data, the Main/Sub audio data, the Sub-picture data, the Advanced Content data and the like are included. An Enhanced Video Object Set (EVOBS) corresponds to a collection of EVOBs, as shown in FIGS. 78, 79 and 80. The EVOB can be divided into one or more (a whole number of) EVOBUs. Each EVOBU includes a series of packs (the various types of packs exemplified in FIGS. 78, 79 and 80) which are arranged in the order of recording. Each EVOBU starts from an NV_PCK, and ends at an arbitrary pack which is assigned immediately before the next NV_PCK in the identical EVOB (or at the last pack of the EVOB). Except for the last EVOBU, each EVOBU corresponds to a playback time of 0.4 sec to 1.0 sec.
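The two guard constants in the rules above follow directly from the pack size (2048 bytes), the 27 MHz SCR clock, and the 30.24 Mbps total-stream ceiling; the arithmetic below reproduces them. The spec values appear to be rounded down slightly from the exact quotients, so the check allows a small tolerance.

```python
SCR_CLOCK_HZ = 27_000_000   # SCR time base (27 MHz)
PACK_BYTES = 2048           # one pack
MAX_RATE_BPS = 30_240_000   # 30.24 Mbps total-stream ceiling

def guard_ticks(packs: int) -> float:
    """SCR ticks needed to deliver `packs` packs at the maximum mux rate:
       27e6 * packs * 2048 * 8 / 30.24e6."""
    return SCR_CLOCK_HZ * packs * PACK_BYTES * 8 / MAX_RATE_BPS

t300 = guard_ticks(300)  # spec value T_300packs = 4,388,570
t90 = guard_ticks(90)    # spec value T_90packs  = 1,316,570
```
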
Also, the last EVOBU corresponds to a playback time of 0.4 sec to 1.2 sec. In addition, the following rules apply to an EVOBU: The playback time of the EVOBU is an integer multiple of the video field/frame periods (even if the EVOBU does not include any video data); The playback start and end times of the EVOBU are specified in 90 kHz units. The playback start time of the current EVOBU is set to be equal to the playback end time of the preceding EVOBU (except for the first EVOBU); When the EVOBU includes video data, the playback start time of the EVOBU is set to be equal to the playback start time of the first video field/frame. The playback period of the EVOBU is set to be equal to or longer than that of the video data; When the EVOBU includes video data, that video data consists of one or more PAUs (Picture Access Units); When an EVOBU which does not include any video data follows an EVOBU which includes video data (in an identical EVOB), a sequence end code (SEQ_END_CODE) is appended after the last encoded picture; When the playback period of the EVOBU is longer than that of the video data included in the EVOBU, a sequence end code (SEQ_END_CODE) is appended after the last encoded picture; The video data in the EVOBU does not have a plurality of sequence end codes (SEQ_END_CODE); When the EVOB includes one or more sequence end codes (SEQ_END_CODE), these are used in an ILVU. In this case, the playback period of the EVOBU is an integer multiple of the field/frame periods. Also, the video data in the EVOBU has I-picture data for a still picture, or no video data is included. An EVOBU which has I-picture data for a still picture has a sequence end code (SEQ_END_CODE). The first EVOBU in the ILVU has video data. Assume that the playback period of the video data included in the EVOBU is the sum of the following A and B: A.
a difference between the presentation time stamp (PTS) of the last video access unit (in display order) in the EVOBU and the PTS of the first video access unit (in display order); and B. a presentation duration of the last video access unit (in display order).
Each elementary stream is identified by the stream_id defined in a Program Stream. Audio Presentation Data which are not defined by MPEG are stored in PES packets with the stream_id of private_stream_1. The Navigation Data (GCI and DSI) are stored in PES packets with the stream_id of private_stream_2. The first byte of the data areas of the private_stream_1 and private_stream_2 packets is used to define the sub_stream_id; that is, if the stream_id is private_stream_1 or private_stream_2, the first byte of the data area of each packet can be assigned as the sub_stream_id.
Table 87 is a view for explaining an example of restrictions on the elementary streams in a primary enhanced video object stream.
Table 87 Note: The definition of "completed" is as follows: 1) The start of each stream shall start from the first data of an access unit. 2) The end of each stream shall be aligned with an access unit. Therefore, when the length of the packet comprising the last data in each stream is less than 2048 bytes, it shall be adjusted by one of the methods shown in [Table 5.2.1-1] [TBD].
In this example of element restrictions, as for the Main Video stream: the Main Video stream is completed within an EVOB; if a video stream carries interlaced video, the display configuration starts from a top field and ends at a bottom field; and a video stream may or may not be terminated by a sequence end code (SEQ_END_CODE). In addition, as for the Main Video stream, the first EVOBU has video data. As for the Main Audio stream: the Main Audio stream is completed within an EVOB; and when an audio stream is Linear PCM, the first audio frame is the start of a GOF. As for a Sub-picture stream: the Sub-picture stream is completed within the EVOB; the presentation time (PTM) of the last Sub-picture Unit (SPU) is equal to or less than the time prescribed by EVOB_V_E_PTM (video end time); the PTS of the first SPU is equal to or greater than EVOB_V_S_PTM (video start time); and in each Sub-picture stream, the PTS of any SPU is greater than that of the preceding SPU that has the same sub_stream_id (if any). Also, as for the Sub-picture stream, the Sub-picture stream is completed within a Cell; and the Sub-picture presentation is valid within the Cell where the SPU is recorded. Table 88 is a view for explaining an example configuration of a stream_id and the stream_id_extension.
Note: the identification of SMPTE VC-1 streams is based on the use of the stream_id extensions defined by an amendment to the MPEG-2 Systems [ISO/IEC 13818-1:2000/AMD2:2004]. When the stream_id is set to 0xFD (1111 1101b), it is the stream_id_extension field which actually defines the nature of the stream. The stream_id_extension field is added to the PES header using the PES extension flags that exist in the PES header.
In this stream_id and stream_id_extension table: stream_id = 110x 0***b specifies stream_id_extension = N/A, and stream coding = MPEG audio stream for Main (*** = decoding audio stream number); stream_id = 110x 1***b specifies stream_id_extension = N/A, and stream coding = MPEG audio stream for Sub; stream_id = 1110 0000b specifies stream_id_extension = N/A, and stream coding = video stream (MPEG-2); stream_id = 1110 0001b specifies stream_id_extension = N/A, and stream coding = video stream (MPEG-2) for Sub; stream_id = 1110 0010b specifies stream_id_extension = N/A, and stream coding = video stream (MPEG-4 AVC); stream_id = 1110 0011b specifies stream_id_extension = N/A, and stream coding = video stream (MPEG-4 AVC) for Sub; stream_id = 1110 1000b specifies stream_id_extension = N/A, and stream coding = reserved; stream_id = 1110 1001b specifies stream_id_extension = N/A, and stream coding = reserved; stream_id = 1011 1101b specifies stream_id_extension = N/A, and stream coding = private_stream_1; stream_id = 1011 1111b specifies stream_id_extension = N/A, and stream coding = private_stream_2; stream_id = 1111 1101b (extended_stream_id, see note) specifies stream_id_extension = 101 0101b, and stream coding = SMPTE VC-1 video stream for Main; stream_id = 1111 1101b (extended_stream_id, see note) specifies stream_id_extension = 111 0101b, and stream coding = SMPTE VC-1 video stream for Sub; and any other stream_id specifies stream coding = not used. Note: the identification of SMPTE VC-1 streams is based on the use of the stream_id extensions defined by an amendment to the MPEG-2 Systems [ISO/IEC 13818-1:2000/AMD2:2004]. When the stream_id is set to 0xFD (1111 1101b), the stream_id_extension field is used to define the nature of the actual stream. The stream_id_extension field is added to the PES header using the PES extension flags which exist in the PES header. Table 89 is a view for explaining an example configuration of a sub_stream_id for private_stream_1.
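The stream_id assignments listed above lend themselves to a small classifier. The function below is an illustrative sketch using standard MPEG-2 PS naming; it covers only the patterns listed in this section, and reports everything else as reserved or not used.

```python
def decode_stream_id(stream_id, stream_id_extension=None):
    """Classify a PES stream_id per the table above (illustrative sketch)."""
    if stream_id & 0b1111_0000 == 0b1100_0000:       # 110x ****b: MPEG audio
        sub = "Sub" if stream_id & 0b0000_1000 else "Main"
        return f"MPEG audio stream for {sub}, decoding number {stream_id & 0b111}"
    if stream_id == 0b1110_0000:
        return "MPEG-2 video stream (Main)"
    if stream_id == 0b1110_0001:
        return "MPEG-2 video stream (Sub)"
    if stream_id == 0b1110_0010:
        return "MPEG-4 AVC video stream (Main)"
    if stream_id == 0b1110_0011:
        return "MPEG-4 AVC video stream (Sub)"
    if stream_id == 0b1011_1101:
        return "private_stream_1"
    if stream_id == 0b1011_1111:
        return "private_stream_2"
    if stream_id == 0xFD:                            # needs stream_id_extension
        if stream_id_extension == 0b101_0101:
            return "SMPTE VC-1 video stream (Main)"
        if stream_id_extension == 0b111_0101:
            return "SMPTE VC-1 video stream (Sub)"
    return "reserved / not used"
```
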
Table 89 sub_stream_id for private_stream_1: sub_stream_id = 001* ****b: Sub-picture stream (***** = decoding sub-picture stream number); 0100 1000b: reserved; 011* ****b: reserved; 1000 0***b: reserved; 1100 0***b: Dolby Digital Plus (DD+) audio stream for Main (*** = decoding audio stream number); 1100 1***b: Dolby Digital Plus (DD+) audio stream for Sub; 1000 1***b: DTS-HD audio stream for Main (*** = decoding audio stream number); 1001 1***b: DTS-HD audio stream for Sub; 1001 0***b: reserved for SDDS; 1010 0***b: Linear PCM audio stream for Main (*** = decoding audio stream number); 1010 1***b: reserved; 1011 0***b: Packed PCM audio stream for Main (*** = decoding audio stream number); 1011 1***b: reserved; 1111 0000b: reserved; 1111 0001b: reserved; 1111 0010b to 1111 0111b: reserved; 1111 1111b: stream defined by the provider; others: reserved (for future Presentation Data). Note 1: "reserved" in sub_stream_id means that the sub_stream_id is reserved for future extensions of the system. Therefore, it is forbidden to use the reserved values of sub_stream_id. Note 2: The sub_stream_id whose value is '1111 1111b' can be used to identify a bit stream which is freely defined by the provider. However, it is not guaranteed that every player will have a feature to play that stream. The EVOB restrictions, such as the maximum transfer rate of the total streams, shall be applied if a bit stream defined by the provider exists in the EVOB.
In this sub_stream_id for private_stream_1, sub_stream_id = 001* ****b specifies Stream coding = Sub-picture stream, ***** = Decoding Sub-picture stream number; sub_stream_id = 0100 1000b specifies Stream coding = reserved; sub_stream_id = 011* ****b specifies Stream coding = reserved; sub_stream_id = 1000 0***b specifies Stream coding = reserved; sub_stream_id = 1100 0***b specifies Stream coding = Dolby Digital Plus (DD+) audio stream for Main, *** = Decoding Audio stream number; sub_stream_id = 1100 1***b specifies Stream coding = Dolby Digital Plus (DD+) audio stream for Sub; sub_stream_id = 1000 1***b specifies Stream coding = DTS-HD audio stream for Main, *** = Decoding Audio stream number; sub_stream_id = 1001 1***b specifies Stream coding = DTS-HD audio stream for Sub; sub_stream_id = 1001 0***b specifies Stream coding = reserved (SDDS); sub_stream_id = 1010 0***b specifies Stream coding = Linear PCM audio stream for Main, *** = Decoding Audio stream number; sub_stream_id = 1010 1***b specifies Stream coding = Linear PCM audio stream for Sub; sub_stream_id = 1011 0***b specifies Stream coding = Packed PCM (MLP) audio stream for Main, *** = Decoding Audio stream number; sub_stream_id = 1011 1***b specifies Stream coding = Packed PCM (MLP) audio stream for Sub; sub_stream_id = 1111 0000b specifies Stream coding = reserved; sub_stream_id = 1111 0001b specifies Stream coding = reserved; sub_stream_id = 1111 0010b to 1111 0111b specifies Stream coding = reserved; sub_stream_id = 1111 1111b specifies Stream coding = provider-defined stream; and sub_stream_id = other specifies Stream coding = reserved (for future Presentation data). Table 90 is a view for explaining an example configuration of a sub stream id for private stream 2.
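The bit patterns for private_stream_1 lend themselves to the same kind of mask-based lookup. A minimal sketch (invented function name; return strings abbreviated from the list above):

```python
def classify_substream_ps1(sub_stream_id: int) -> str:
    """Classify the sub_stream_id byte of a private_stream_1 packet."""
    if sub_stream_id & 0b11100000 == 0b00100000:   # 001* ****b
        return "Sub-picture stream"
    if sub_stream_id & 0b11111000 == 0b11000000:   # 1100 0***b
        return "DD+ audio stream for Main"
    if sub_stream_id & 0b11111000 == 0b11001000:   # 1100 1***b
        return "DD+ audio stream for Sub"
    if sub_stream_id & 0b11111000 == 0b10001000:   # 1000 1***b
        return "DTS-HD audio stream for Main"
    if sub_stream_id & 0b11111000 == 0b10011000:   # 1001 1***b
        return "DTS-HD audio stream for Sub"
    if sub_stream_id & 0b11111000 == 0b10100000:   # 1010 0***b
        return "Linear PCM audio stream for Main"
    if sub_stream_id & 0b11111000 == 0b10101000:   # 1010 1***b
        return "Linear PCM audio stream for Sub"
    if sub_stream_id & 0b11111000 == 0b10110000:   # 1011 0***b
        return "Packed PCM (MLP) audio stream for Main"
    if sub_stream_id & 0b11111000 == 0b10111000:   # 1011 1***b
        return "Packed PCM (MLP) audio stream for Sub"
    if sub_stream_id == 0xFF:                      # 1111 1111b
        return "provider-defined stream"
    return "reserved"
```

The low three bits of the audio patterns carry the Decoding Audio stream number, which a real demultiplexer would extract with `sub_stream_id & 0b111`.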
Table 90 Note 1: "reserved" for sub_stream_id means that the sub_stream_id is reserved for future extensions of the system. Therefore, use of the reserved sub_stream_id values is forbidden. Note 2: The sub_stream_id whose value is '1111 1111b' can be used to identify a bitstream which is freely defined by the provider. However, it is not guaranteed that every player has a feature to play that stream. The EVOB restrictions, such as the maximum transfer rate of the total streams, shall be applied if the provider-defined bitstream exists in the EVOB.
In this sub_stream_id for private_stream_2, sub_stream_id = 0000 0000b specifies Stream coding = reserved; sub_stream_id = 0000 0001b specifies Stream coding = DSI stream; sub_stream_id = 0000 0010b specifies Stream coding = GCI stream; sub_stream_id = 0000 1000b specifies Stream coding = reserved; sub_stream_id = 0101 0000b specifies Stream coding = reserved; sub_stream_id = 1000 0000b specifies Stream coding = Advanced stream; sub_stream_id = 1111 1111b specifies Stream coding = provider-defined stream; and sub_stream_id = other specifies Stream coding = reserved (for future Navigation data). FIGs. 81A and 81B are views for explaining an example of an Advanced pack configuration (ADV_PCK) and of the first pack of a video object unit / time unit (VOBU/TU). An ADV_PCK in FIG. 81A comprises a pack header and an Advanced packet. The Advanced data (Advanced stream) are aligned with a logical block boundary. Only in the pack carrying the last Advanced data packet (Advanced stream) can the ADV_PCK have a padding packet or stuffing bytes. Thus, when the length of the ADV_PCK that includes the last data of the Advanced stream is less than 2048 bytes, that pack length can be adjusted to 2048 bytes. The stream_id of this ADV_PCK is, for example, 1011 1111b (private_stream_2), and its sub_stream_id is, for example, 1000 0000b. A VOBU/TU in FIG. 81B comprises a pack header, the System header, and the VOBU/TU packet. In a Primary Video Set, the System header (24 bytes of data) is carried by an NV_PCK.
On the other hand, in a Secondary Video Set, the stream includes no NV_PCK, and the System header is carried by: the first V_PCK in an EVOBU when an EVOB includes EVOBUs; or the first A_PCK or the first TT_PCK when an EVOB includes TUs (TU = Time Unit, described later using FIG. 83). A Video pack (V_PCK) in a Secondary Video Set follows the definitions of a VS_PCK in a Primary Video Set. An Audio pack (A_PCK) for a Sub Audio stream in the Secondary Video Set follows the definition of an AS_PCK in the Primary Video Set; on the other hand, an Audio pack (A_PCK) for a Complementary Audio stream in the Secondary Video Set follows the definition of an AM_PCK in the Primary Video Set.
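The alignment rule for Advanced packs (2048-byte logical blocks, with stuffing allowed only in the pack that carries the end of the stream) can be sketched as follows. This is a hypothetical illustration only: a real ADV_PCK also spends part of the 2048 bytes on the pack and packet headers, which this sketch ignores.

```python
LOGICAL_BLOCK = 2048  # logical block size named in the text

def split_advanced_stream(data: bytes, pack_size: int = LOGICAL_BLOCK) -> list:
    """Split an Advanced stream into logical-block-aligned units,
    stuffing only the final unit up to the full pack size."""
    packs = []
    for i in range(0, len(data), pack_size):
        chunk = data[i:i + pack_size]
        if len(chunk) < pack_size:
            # only the last pack may be short; stuff it to 2048 bytes
            chunk += b"\xff" * (pack_size - len(chunk))
        packs.append(chunk)
    return packs
```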
Table 91 is a view for explaining an example of an Advanced pack configuration.
Table 91 Note 1: "PES_coding_control" describes the copyright status of the pack in which this packet is included. 00b: This pack does not have the specific data structure for the copyright protection system. 01b: This pack has the specific data structure for the copyright protection system. Note 2: "advanced_pkt_state" describes the position of this packet in the Advanced stream. (TBD) 00b: This packet is neither the first packet nor the last packet in the Advanced stream. 01b: This packet is the first packet in the Advanced stream. 10b: This packet is the last packet in the Advanced stream. 11b: reserved. Note 3: "manifest_fname" describes the file name of the Manifest file to which this Advanced stream refers. (TBD)
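The advanced_pkt_state flags of Note 2 give a player enough information to reassemble the Advanced stream from its packets. A minimal sketch (invented function name, simplified error handling, not from the specification):

```python
# advanced_pkt_state values from Note 2 of Table 91
FIRST, MIDDLE, LAST = 0b01, 0b00, 0b10

def reassemble(packets):
    """packets: iterable of (advanced_pkt_state, payload) pairs in
    arrival order. Returns the joined Advanced stream bytes."""
    out = bytearray()
    started = False
    for state, payload in packets:
        if state == FIRST:
            started = True
            out = bytearray(payload)          # (re)start the stream
        elif not started:
            raise ValueError("missing 'first packet' marker (01b)")
        elif state == MIDDLE:
            out += payload
        elif state == LAST:
            out += payload
            return bytes(out)                 # 10b terminates the stream
        else:
            raise ValueError("11b is reserved")
    raise ValueError("missing 'last packet' marker (10b)")
```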
In this Advanced packet, a packet_start_code_prefix field has a value of "00 0001h", a stream_id field = 1011 1111b specifies private_stream_2, and a PES_packet_length field is included. The Advanced packet has a Private data area, in which a sub_stream_id field = 1000 0000b specifies an Advanced stream, a PES_coding_control field assumes a value of "00b" or "01b" (Note 1), and an advanced_pkt_state field assumes a value of "00b", "01b", or "10b" (Note 2). Also, the Private data area includes a loading_info_fname field (Note 3), which describes the file name of a loading information file which refers to the Advanced stream of interest. Note 1: the PES_coding_control field describes the copyright status of the pack that includes this Advanced packet: 00b specifies that the pack of interest does not have any specific data structure of a copyright protection system, and 01b specifies that the pack of interest has a specific data structure of a copyright protection system. Note 2: the advanced_pkt_state field describes the position of the packet of interest (the Advanced packet) in the Advanced stream: 00b specifies that the packet of interest is neither the first packet nor the last packet in the Advanced stream, 01b specifies that the packet of interest is the first packet in the Advanced stream, and 10b specifies that the packet of interest is the last packet in the Advanced stream. 11b is reserved.
Note 3: The loading_info_fname field describes the file name of the loading information file that refers to the Advanced stream of interest. Table 92 is a view for explaining an example of the MPEG-2 video restrictions for a Main Video stream.
Table 92 MPEG-2 Video for the Main Video stream (*1) If the frame rate is 60i or 50i, "field" is used. If the frame rate is 60p or 50p, "frame" is used. (*2) If the image resolution and frame rate are equal to or less than 720x480 and 29.97, respectively, this is defined as SD. If the image resolution and frame rate are equal to or less than 720x576 and 25, respectively, this is defined as SD. Otherwise, this is defined as HD.
In the MPEG-2 video for a Main Video stream in a Primary Video Set, the number of images in a GOP is 36 fields/display frames or smaller in the case of 525/60 (NTSC) or HD/60 (in this case, if the frame rate is 60 interlaced (i) or 50i, "field" is used, and if the frame rate is 60 progressive (p) or 50p, "frame" is used). On the other hand, the number of images in the GOP is 30 fields/display frames in the case of 625/50 (PAL, etc.) or HD/50 (in this case also, if the frame rate is 60i or 50i, "field" is used, and if the frame rate is 60p or 50p, "frame" is used). The bit rate in the MPEG-2 video for the Main Video stream in the Primary Video Set assumes a constant value equal to or less than 15 Mbps (SD) or 29.40 Mbps (HD) both in the case of 525/60 or HD/60 and in the case of 625/50 or HD/50. Alternatively, in the case of a variable bit rate, the maximum variable bit rate is equal to or less than 15 Mbps (SD) or 29.40 Mbps (HD). In this case, vbv_delay is coded as FFFFh. (If the image resolution and frame rate are equal to or less than 720x480 and 29.97, respectively, this is defined as SD; likewise, if the image resolution and frame rate are equal to or less than 720x576 and 25, respectively, this is defined as SD; otherwise, this is defined as HD.) In the MPEG-2 video for the Main Video stream in the Primary Video Set, low_delay (in the sequence extension) is set to '0b' (that is, the "low-delay sequence" is not allowed).
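The GOP-length and bit-rate restrictions above can be expressed as a small validity check. This is a sketch under assumptions (invented function name; the SD/HD rule is taken from note (*2) of Table 92), not a complete conformance test.

```python
def check_main_video(gop_fields: int, bitrate_mbps: float,
                     width: int, height: int, frame_rate: float,
                     tv_system_60: bool) -> list:
    """Return a list of violated Main Video restrictions (empty = OK).
    tv_system_60 is True for 525/60 or HD/60, False for 625/50 or HD/50."""
    problems = []
    max_gop = 36 if tv_system_60 else 30        # fields/display frames per GOP
    if gop_fields > max_gop:
        problems.append(f"GOP is {gop_fields} fields/frames, max {max_gop}")
    # SD when the resolution/frame rate do not exceed 720x480@29.97
    # or 720x576@25; otherwise HD (note *2)
    is_sd = (width <= 720 and ((height <= 480 and frame_rate <= 29.97) or
                               (height <= 576 and frame_rate <= 25)))
    limit = 15.0 if is_sd else 29.40            # Mbps
    if bitrate_mbps > limit:
        problems.append(f"bit rate {bitrate_mbps} Mbps exceeds {limit} Mbps")
    return problems
```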
In the MPEG-2 video for the Main Video stream in the Primary Video Set, the Resolution (= horizontal_size / vertical_size) / Frame rate (= frame_rate_value) / Aspect ratio are the same as those in a Standard Content. More specifically, the following combinations are available, described in the order horizontal_size / vertical_size / frame_rate_value / aspect_ratio_information / aspect ratio: 1920/1080/29.97/'0011b' or '0010b'/16:9; 1440/1080/29.97/'0011b' or '0010b'/16:9; 1440/1080/29.97/'0011b'/4:3; 1280/1080/29.97/'0011b' or '0010b'/16:9; 1280/720/59.94/'0011b' or '0010b'/16:9; 960/1080/29.97/'0011b' or '0010b'/16:9; 720/480/59.94/'0011b' or '0010b'/16:9; 720/480/29.97/'0011b' or '0010b'/16:9; 720/480/29.97/'0010b'/4:3; 704/480/59.94/'0011b' or '0010b'/16:9; 704/480/29.97/'0011b' or '0010b'/16:9; 704/480/29.97/'0010b'/4:3; 544/480/29.97/'0011b' or '0010b'/16:9; 544/480/29.97/'0010b'/4:3; 480/480/29.97/'0011b' or '0010b'/16:9; 480/480/29.97/'0010b'/4:3; 352/480/29.97/'0011b' or '0010b'/16:9; 352/480/29.97/'0010b'/4:3; 352/240 (note *1, note *2)/29.97/'0010b'/4:3; 1920/1080/25/'0011b' or '0010b'/16:9; 1440/1080/25/'0011b' or '0010b'/16:9; 1440/1080/25/'0011b'/4:3; 1280/1080/25/'0011b' or '0010b'/16:9; 1280/720/50/'0011b' or '0010b'/16:9; 960/1080/25/'0011b'/16:9; 720/576/50/'0011b' or '0010b'/16:9; 720/576/25/'0011b' or '0010b'/16:9; 720/576/25/'0010b'/4:3; 704/576/50/'0011b' or '0010b'/16:9; 704/576/25/'0011b' or '0010b'/16:9; 704/576/25/'0010b'/4:3; 544/576/25/'0011b' or '0010b'/16:9; 544/576/25/'0010b'/4:3; 480/576/25/'0011b' or '0010b'/16:9; 480/576/25/'0010b'/4:3; 352/576/25/'0011b' or '0010b'/16:9; 352/576/25/'0010b'/4:3; 352/288 (note *1)/25/'0010b'/4:3.
Note *1: The interlaced SIF format (352 x 240/288) is not adopted. Note *2: When the "vertical_size" is '240', "progressive_sequence" is '1'. In this case, the meanings of "top_field_first" and "repeat_first_field" are different from those when "progressive_sequence" is '0'. When the aspect ratio is 4:3, horizontal_size / display_horizontal_size / aspect_ratio_information are as follows (DAR = Display Aspect Ratio): 720 or 704/720/'0010b' (DAR = 4:3); 544/540/'0010b' (DAR = 4:3); 480/480/'0010b' (DAR = 4:3); 352/352/'0010b' (DAR = 4:3). When the aspect ratio is 16:9, horizontal_size / display_horizontal_size / aspect_ratio_information / Display mode in FP_PGCM_V_ATR, VMGM_V_ATR, VTSM_V_ATR, and VTS_V_ATR are as follows (DAR = Display Aspect Ratio): 1920/1920/'0011b' (DAR = 16:9) / Letterbox only; 1920/1440/'0010b' (DAR = 4:3) / Pan-scan only, or both Letterbox and Pan-scan; 1440/1440/'0011b' (DAR = 16:9) / Letterbox only; 1440/1080/'0010b' (DAR = 4:3) / Pan-scan only, or both Letterbox and Pan-scan; 1280/1280/'0011b' (DAR = 16:9) / Letterbox only; 1280/960/'0010b' (DAR = 4:3) / Pan-scan only, or both Letterbox and Pan-scan; 960/960/'0011b' (DAR = 16:9) / Letterbox only; 960/720/'0010b' (DAR = 4:3) / Pan-scan only, or both Letterbox and Pan-scan; 720 or 704/720/'0011b' (DAR = 16:9) / Letterbox only; 720 or 704/540/'0010b' (DAR = 4:3) / Pan-scan only, or both Letterbox and Pan-scan; 544/540/'0011b' (DAR = 16:9) / Letterbox only; 544/405/'0010b' (DAR = 4:3) / Pan-scan only, or both Letterbox and Pan-scan; 480/480/'0011b' (DAR = 16:9) / Letterbox only; 480/360/'0010b' (DAR = 4:3) / Pan-scan only, or both Letterbox and Pan-scan; 352/352/'0011b' (DAR = 16:9) / Letterbox only; 352/270/'0010b' (DAR = 4:3) / Pan-scan only, or both Letterbox and Pan-scan. In Table 92, the still image data in the MPEG-2 video for the Main Video stream in the Primary Video Set are not supported.
However, the subtitle data in the MPEG-2 video for the Main Video stream in the Primary Video Set are supported. Table 93 is a view for explaining an example of the MPEG-4 AVC video restrictions for a Main Video stream.
Table 93 MPEG-4 AVC Video for the Main Video stream (*1) If the frame rate is 60i or 50i, "field" is used. If the frame rate is 60p or 50p, "frame" is used. (*2) If the image resolution and frame rate are equal to or less than 720x480 and 29.97, respectively, this is defined as SD. If the image resolution and frame rate are equal to or less than 720x576 and 25, respectively, this is defined as SD. Otherwise, this is defined as HD.
In the MPEG-4 AVC video for a Main Video stream in the Primary Video Set, the number of images in a GOP is 36 fields/display frames or less in the case of 525/60 (NTSC) or HD/60. On the other hand, the number of images in the GOP is 30 fields/display frames or less in the case of 625/50 (PAL, etc.) or HD/50. The bit rate in the MPEG-4 AVC video for the Main Video stream in the Primary Video Set assumes a constant value equal to or less than 15 Mbps (SD) or 29.40 Mbps (HD) both in the case of 525/60 or HD/60 and in the case of 625/50 or HD/50. Alternatively, in the case of a variable bit rate, the maximum variable bit rate is equal to or less than 15 Mbps (SD) or 29.40 Mbps (HD). In this case, vbv_delay is coded as FFFFh. In the MPEG-4 AVC video for the Main Video stream in the Primary Video Set, low_delay is set to '0b'. In the MPEG-4 AVC video for the Main Video stream in the Primary Video Set, the Resolution / Frame rate / Aspect ratio are the same as those in the Standard Content. Note that still image data in the MPEG-4 AVC video for the Main Video stream in the Primary Video Set are not supported. However, the subtitle data in the MPEG-4 AVC video for the Main Video stream in the Primary Video Set are supported.
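The SD/HD classification rule of note (*2), which decides which bit-rate ceiling applies, can be written out directly. A minimal sketch, with an invented function name:

```python
def classify_resolution(width: int, height: int, frame_rate: float) -> str:
    """SD when not exceeding 720x480@29.97 or 720x576@25; otherwise HD
    (per note (*2) of Tables 92 and 93)."""
    if width <= 720 and height <= 480 and frame_rate <= 29.97:
        return "SD"
    if width <= 720 and height <= 576 and frame_rate <= 25:
        return "SD"
    return "HD"
```

Under this rule 720x576@25 (PAL SD) and 720x480@29.97 (NTSC SD) map to the 15 Mbps ceiling, while 1280x720@59.94 and all 1080-line formats map to the 29.40 Mbps ceiling.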
Table 94 is a view for explaining an example of the SMPTE VC-1 video restrictions for a Main Video stream.
Table 94 SMPTE VC-1 Video for the Main Video stream In the SMPTE VC-1 video for a Main Video stream in the Primary Video Set, the number of images in a GOP is 36 fields/display frames or less in the case of 525/60 (NTSC) or HD/60. On the other hand, the number of images in the GOP is 30 fields/display frames or less in the case of 625/50 (PAL, etc.) or HD/50. The bit rate in the SMPTE VC-1 video for the Main Video stream in the Primary Video Set assumes a constant value equal to or less than 15 Mbps (AP@L2) or 29.40 Mbps (AP@L3) both in the case of 525/60 or HD/60 and in the case of 625/50 or HD/50. In the SMPTE VC-1 video for the Main Video stream in the Primary Video Set, the Resolution / Frame rate / Aspect ratio are the same as those in the Standard Content. Note that still image data in the SMPTE VC-1 video for the Main Video stream in the Primary Video Set are not supported. However, the subtitle data in the SMPTE VC-1 video for the Main Video stream in the Primary Video Set are supported. Table 95 is a view for explaining an example configuration of an audio pack for DD+.
Note 1: All channel configurations can include an optional Low Frequency Effects (LFE) channel. To support the mixing of the Sub Audio with the primary audio, mixing metadata shall be included in the Sub Audio stream, as defined in ETSI TS 102 366 Annex E. The number of channels present in the Sub Audio stream shall not exceed the number of channels present in the primary audio stream. The Sub Audio stream shall not contain channel locations that are not present in the primary audio stream. Sub Audio with a 1/0 audio coding mode can be panned between the Center and the Right or (when the primary audio does not include a center channel) the Left and Right channels of the primary audio, through the use of the "panmean" parameter. Valid ranges of the "panmean" value are 0 to 20 (C to R) and 220 to 239 (L to C). Sub Audio with an audio coding mode greater than 1/0 shall not contain panning metadata.
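The two valid "panmean" ranges given above can be checked with a one-line predicate. A minimal sketch (the function name is invented; the ranges are those stated in the text):

```python
def panmean_valid(panmean: int) -> bool:
    """True when panmean falls in a valid range:
    0..20 (Center toward Right) or 220..239 (toward Left)."""
    return 0 <= panmean <= 20 or 220 <= panmean <= 239
```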
In this example, the sampling frequency is set at 48 kHz, and a plurality of audio coding modes are available. All audio channel configurations can include an optional Low Frequency Effects (LFE) channel. In order to support an environment that can mix the Sub Audio with the primary audio, mixing metadata is included in a Sub Audio stream. The number of channels in the Sub Audio stream does not exceed that of a primary audio stream. The Sub Audio stream does not include any channel location which does not exist in the primary audio stream. Sub Audio with an audio coding mode of "1/0" can be panned between the Center and Right channels. Alternatively, when the primary audio does not include a center channel, the Sub Audio can be panned between the Left and Right channels of the primary audio, through the use of a "panmean" parameter. Note that the value of "panmean" has a valid range of, for example, 0 to 20 from the Center to the Right and 220 to 239 from the Center to the Left. Sub Audio of a coding mode greater than "1/0" does not include any panning parameters. FIG. 82 is a view for explaining an example of a time map (TMAP) configuration for a Secondary Video Set. This TMAP has a configuration that is partially different from that of the Primary Video Set shown in FIG. 72B. More specifically, the TMAP for the Secondary Video Set has the TMAP general information (TMAP_GI) at its leading position, which is followed by a time map information search pointer (TMAPI_SRP#1) and the corresponding time map information (TMAPI#1), and has an EVOB attribute (EVOB_ATR) at the end. The TMAP_GI for the Secondary Video Set can have the same configuration as in Table 80. However, in this TMAP_GI, the values of ILVUI, ATR, and Angle in the TMAP_TY (Table 81) assume '0b', '1b', and '00b', respectively. Also, the value of TMAPI_Ns assumes '0' or '1'. In addition, the value of ILVUI_SA is padded with '1b'.
Table 96 is a view to explain an example configuration of the TMAPI_SRP.
Table 96 TMAPI_SA The TMAPI_SRP for the Secondary Video Set is configured to include TMAPI_SA, which describes the start address of the TMAPI as a relative block number from the first logical block of the TMAP; EVOBU_ENT_Ns, which describes the number of EVOBU entries for this TMAPI; and a reserved area. If the TMAPI_Ns in the TMAP_GI is '0', no TMAPI_SRP data exists in the TMAP.
Table 97 is a view to explain an example configuration of the EVOB-ATR.
Table 97 EVOB_ATR The EVOB_ATR included in the TMAP (FIG. 82) for the Secondary Video Set is configured to include EVOB_TY, which specifies a type of EVOB; EVOB_FNAME, which specifies an EVOB file name; EVOB_V_ATR, which specifies an EVOB video attribute; EVOB_AST_ATR, which specifies an EVOB audio stream attribute; EVOB_MU_AST_ATR, which specifies an EVOB multichannel main audio attribute; and a reserved area. Table 98 is a view for explaining the elements of the EVOB_TY in Table 97. Table 98 EVOB_TY b7 b6 b5 b4 b3 b2 b1 b0 EVOB_TY ... 0000b: The Sub Video stream and the Sub Audio stream exist in this EVOB. 0001b: Only the Sub Video stream exists in this EVOB. 0010b: Only the Sub Audio stream exists in this EVOB. 0011b: The Complementary Audio stream exists in this EVOB. 0100b: The Complementary Subtitle stream exists in this EVOB. Other: reserved. Note: the Sub Video/Audio stream is used for mixing with the Main Video/Audio stream in the Primary Video Set. The Complementary Audio stream is used for replacement of the Main Audio stream in the Primary Video Set. The Complementary Subtitle stream is used for addition to the Sub-picture stream in the Primary Video Set.
The EVOB_TY included in the EVOB_ATR in Table 97 describes the existence of Video streams, Audio streams, and an Advanced stream. That is, EVOB_TY = '0000b' specifies that the Sub Video stream and the Sub Audio stream exist in the EVOB of interest. EVOB_TY = '0001b' specifies that only a Sub Video stream exists in the EVOB of interest. EVOB_TY = '0010b' specifies that only a Sub Audio stream exists in the EVOB of interest. EVOB_TY = '0011b' specifies that a Complementary Audio stream exists in the EVOB of interest. EVOB_TY = '0100b' specifies that a Complementary Subtitle stream exists in the EVOB of interest. When the EVOB_TY assumes values different from those described above, those values are reserved for other purposes of use. Note that the Sub Video/Audio stream can be used for mixing with a Main Video/Audio stream in the Primary Video Set. The Complementary Audio stream can be used for replacement of a Main Audio stream in the Primary Video Set. The Complementary Subtitle stream can be used for addition to a Sub-picture stream in the Primary Video Set. With reference to Table 98, EVOB_FNAME is used to describe the file name of the EVOB file to which the TMAP of interest relates. The EVOB_V_ATR describes an EVOB video attribute, which is defined for a Sub Video stream attribute in the VTS_EVOB_ATR and EVOB_VS_ATR.
If the audio stream of interest is a Sub Audio stream (that is, EVOB_TY = '0000b' or '0010b'), the EVOB_AST_ATR describes an EVOB audio attribute which is defined for the Sub Audio stream in the VTS_EVOB_ATR and EVOB_ASST_ATR. If the audio stream of interest is a Complementary Audio stream (that is, EVOB_TY = '0011b'), the EVOB_AST_ATR describes an EVOB audio attribute which is defined for a Main Audio stream in the VTS_EVOB_ATR and EVOB_AMST_ATR. The EVOB_MU_AST_ATR describes the respective audio attributes for multichannel use, which are defined in the VTS_EVOB_ATR and EVOB_MU_AMST_ATR. In the area of an audio stream whose "multichannel extension" in the EVOB_AST_ATR is '0b', '0b' is entered in each bit. A Secondary EVOB (S-EVOB) will be summarized below. The S-EVOB includes the Presentation Data configured by the Video data, the Audio data, the Advanced Subtitle data, and the like. The Video data in the S-EVOB are mainly used to mix with those of the Primary Video Set, and can be defined according to the Sub Video data in the Primary Video Set. The Audio data in the S-EVOB include two types, namely the Sub Audio data and the Complementary Audio data. The Sub Audio data are mainly used to mix with the Audio data in the Primary Video Set, and can be defined according to the Sub Audio data in the Primary Video Set. On the other hand, the Complementary Audio data are mainly used to replace the Audio data in the Primary Video Set, and can be defined according to the Main Audio data in the Primary Video Set. Table 99 is a view for explaining a list of packet types in a Secondary Enhanced Video Object.
Table 99 List of packet types In the Secondary Video Set, the Video pack (V_PCK), the Audio pack (A_PCK), and the Timed Text pack (TT_PCK) are used. The V_PCK stores video data of MPEG-2, MPEG-4 AVC, SMPTE VC-1, or the like. The A_PCK stores Dolby Digital Plus (DD+), MPEG, Linear PCM, DTS-HD, Packed PCM (MLP), or similar data. The TT_PCK stores the Advanced Subtitle data (Complementary Subtitle data).
FIG. 83 is a view for explaining an example configuration of a Secondary Enhanced Video Object (S-EVOB). Unlike the configuration of the P-EVOB (FIGs. 78, 79, and 80), in the S-EVOB (FIG. 83, or FIG. 84 to be described later), each EVOBU does not include any Navigation pack (NV_PCK) at its leading position.
An EVOBS (Enhanced Video Object Set) is a collection of EVOBs, and the following EVOBs are supported in the Secondary Video Set: an EVOB which includes a Sub Video stream (V_PCKs) and a Sub Audio stream (A_PCKs); an EVOB which includes only a Sub Video stream (V_PCKs); an EVOB which includes only a Sub Audio stream (A_PCKs); an EVOB which includes only a Complementary Audio stream (A_PCKs); and an EVOB which includes only a Complementary Subtitle stream (TT_PCKs). Note that an EVOB can be divided into one or more Access Units (AUs). When the EVOB includes V_PCKs and A_PCKs, or when the EVOB includes only V_PCKs, each Access Unit is called an "EVOBU". On the other hand, when the EVOB includes only A_PCKs or when the EVOB includes only TT_PCKs, each Access Unit is called a "Time Unit (TU)". An EVOBU (Enhanced Video Object Unit) includes a series of packs which are arranged in recording order, starts with a V_PCK that includes a System header, and includes all subsequent packs (if any). The EVOBU ends at the position immediately before the next V_PCK that includes a System header in the EVOB, or at the end of that EVOB. Except for the last EVOBU, each EVOBU of the EVOB corresponds to a playback period of 0.4 sec to 1.0 sec. The last EVOBU of the EVOB corresponds to a playback period of 0.4 sec to 1.2 sec. The EVOB includes an integer number of EVOBUs. Each elementary stream is identified by the stream_id defined in a Program stream. The Audio Presentation data which are defined by MPEG can be stored in PES packets with the stream_id of private_stream_1. Advanced Subtitle data can be stored in PES packets with the stream_id of private_stream_2. The first byte of the data areas of the private_stream_1 and private_stream_2 packets can be used to define the sub_stream_id. Table 100 shows a practical example of them.
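The EVOBU playback-period rule just stated (0.4 to 1.0 sec per EVOBU, with the last EVOBU allowed up to 1.2 sec) can be expressed as a small check. This is an illustrative sketch with an invented function name, not authoring-tool code.

```python
def check_evobu_durations(durations_sec):
    """Return the indices of EVOBUs whose playback period violates the
    rule: 0.4-1.0 s for every EVOBU, except the last one (0.4-1.2 s)."""
    bad = []
    for i, d in enumerate(durations_sec):
        upper = 1.2 if i == len(durations_sec) - 1 else 1.0
        if not (0.4 <= d <= upper):
            bad.append(i)
    return bad
```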
Table 100 is a view for explaining example configurations of the stream_id and extended_stream_id, that of the sub_stream_id for private_stream_1, and that of the sub_stream_id for private_stream_2.
The stream_id and the stream_id_extension can have a configuration as shown in, for example, Table 100(a) (in this example, the stream_id_extension is not applied or is optional). More specifically, stream_id = '1110 1000b' specifies Stream coding = 'Video stream (MPEG-2)'; stream_id = '1110 1001b', Stream coding = 'Video stream (MPEG-4 AVC)'; stream_id = '1011 1101b', Stream coding = 'private_stream_1'; stream_id = '1011 1111b', Stream coding = 'private_stream_2'; stream_id = '1111 1101b', Stream coding = 'extended_stream_id' (SMPTE VC-1 video stream); and stream_id = others, Stream coding = reserved for other purposes of use. The sub_stream_id for private_stream_1 may have a configuration as shown in, for example, Table 100(b). More specifically, sub_stream_id = '1111 0000b' specifies Stream coding = 'Dolby Digital Plus (DD+) audio stream'; sub_stream_id = '1111 0001b', Stream coding = 'DTS-HD audio stream'; sub_stream_id = '1111 0010b' to '1111 0111b', Stream coding = reserved for other streams; and sub_stream_id = others, Stream coding = reserved for other purposes of use. The sub_stream_id for private_stream_2 may have a configuration as shown in, for example, Table 100(c). More specifically, sub_stream_id = '0000 0010b' specifies Stream coding = GCI stream; sub_stream_id = '1111 1111b', Stream coding = provider-defined stream; and sub_stream_id = others, Stream coding = reserved for other purposes. Some of the following files can be archived as one file using (TBD) without any compression: Manifest (XML); Markup (XML); Script (ECMAScript); Image (JPEG/PNG/MNG); Audio for sound effects (WAV); Font (OpenType); and Advanced Subtitles (XML). In this specification, the archived files are called an Advanced stream. The file can be located on a disc (under the directory ADV_OBJ) or can be supplied from a server.
Also, the file can be multiplexed into an EVOB of the Primary Video Set, and in this case the file is divided into packs called Advanced packs (ADV_PCK). FIG. 85 is a view for explaining an example of the Playlist configuration. The Object Mapping Information, a Playback Sequence, and Configuration Information are respectively described in three areas arranged under a root element. This Playlist file may include the following information: Object Mapping Information (the playback object information which exists in each title, and which is mapped onto the timeline of that title); Playback Sequence (the playback information of the title, described using the title timeline); and Configuration Information (system configuration information such as data buffer alignment). FIGs. 86 and 87 are views for explaining the timeline used in the Playlist. FIG. 86 is a view for explaining an example of the mapping of Presentation Objects onto the timeline. Note that the timeline unit can use a video frame unit, units of seconds (milliseconds), clock units based on 90 kHz / 27 MHz, units specified by SMPTE, and the like. In the example of FIG. 86, two Primary Video Sets having durations of "1500" and "500" are prepared, and are mapped onto a range from 500 to 2000 and one from 2500 to 3000 on the timeline. By mapping Objects that have different durations onto the timeline as a common time axis, these Objects can be played back in a compatible manner. Note that the timeline is reset to zero for each Playlist to be used. FIG. 87 is a view for explaining an example of trick play (chapter jump or the like) of a Presentation Object on the timeline. FIG. 87 shows an example of how the time advances on the timeline upon executing an actual presentation operation.
That is, when the presentation starts, the time on the timeline begins to advance (*1). Upon depression of a Play button at time 300 on the timeline (*2), the time on the timeline jumps to 500, and the presentation of the Primary Video Set begins. After that, upon depression of a Chapter Jump button at time 700 (*3), the time jumps to the start position of the corresponding Chapter (time 1400 on the timeline), and the presentation starts from there. After that, upon depression of a Pause button (by the user of the player) at time 2550 (*4), the presentation stops after the effect of the button is validated. Upon depression of the Play button again at time 2550 (*5), the presentation restarts. FIG. 88 is a view for explaining an example of a Playlist configuration when the EVOBS includes interleaved angle blocks. Each EVOB has a corresponding TMAP file. However, the information of EVOB4 and EVOB5, as interleaved angle blocks, is written in a single TMAP file. By designating the individual TMAP files in the Object Mapping Information, the Primary Video Set is mapped onto the timeline. Also, the Applications, Advanced Subtitles, Additional Audio, and the like are mapped onto the timeline based on the description of the Object Mapping Information in the Playlist. In FIG. 88, a Title (a Menu or the like, according to its purpose of use) that has no video or the like is defined as App1 between times 0 and 200 on the timeline. Also, during a period from time 200 to 800, App2, P-Video1 (Primary Video 1) to P-Video3, Advanced Subtitle 1, and Audio1 are mapped. During a period from time 1000 to 1700, P-Video4_5 (including EVOB4 and EVOB5, which form the angle block), P-Video6, P-Video5, App3 and App4, and Advanced Subtitle 2 are mapped. The Playback Sequence defines that App1 configures a Menu as a title, App2 configures a Main Movie, and App3 and App4 configure a Director's Cut.
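The chapter-jump behavior of FIG. 87 amounts to replacing the current timeline time with the start time of the target chapter. A minimal sketch under assumptions (the "jump to the next chapter, or stay put when none follows" policy is an illustration, not a rule stated in the text):

```python
def chapter_skip(current_time: int, chapter_starts) -> int:
    """Return the new timeline time after a Chapter Jump: the start time
    of the first chapter beginning after current_time, if any."""
    for start in sorted(chapter_starts):
        if start > current_time:
            return start
    return current_time  # assumed behavior: no later chapter, no jump
```

With chapters starting at 500, 1400, and 2000, a Chapter Jump at time 700 lands on 1400, matching the (*3) step described above.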
Further, the Playback Sequence defines three Chapters in the Main Movie, and one Chapter in the Director's Cut. FIG. 89 is a view for explaining an example of a Playlist configuration when an object includes multi-story. FIG. 89 shows an image of the Playlist after multi-story is set up. By designating the TMAPs in the Object Mapping Information, these two titles are mapped onto the timeline. In this example, multi-story is implemented by using EVOB1 and EVOB3 in both titles, and by replacing EVOB2 and EVOB4. FIG. 90 is a view for explaining a description example (when an object includes angle information) of the Object Mapping Information in the Playlist. FIG. 90 shows an example of a practical description of the Object Mapping Information in FIG. 88. FIG. 91 is a view for explaining a description example (when an object includes multi-story) of the Object Mapping Information in the Playlist. FIG. 91 shows an example of describing the Object Mapping Information after multi-story is set up in FIG. 89. Note that a seq element means that its child elements are mapped sequentially on the timeline, and a par element means that its child elements are mapped simultaneously on the timeline. Also, a track element is used to designate each individual Object, and times on the timeline are expressed using the start and end attributes. At this time, when objects are mapped successively on the timeline, like App1 and App2 in FIG. 88, the end attribute can be omitted. Also, when objects are mapped so as to have a gap, such as App2 and App3, their times are expressed using the end attribute. In addition, by using a name attribute set in the seq and par elements, the state during the actual presentation can be displayed on (a display panel of) the player or on an external display. Note that Audio and Subtitles can be identified using stream numbers. FIG.
92 is a view for explaining examples (four examples in this case) of advanced object types. Advanced objects can be classified into four types, as shown in FIG. 92. Initially, objects are classified into two types depending on whether an object is played back in synchronism with the Timeline or is played back asynchronously based on its own playback time. Then, the objects of each of these two types are further classified into an object whose playback start time on the Timeline is recorded in the Playlist, and which begins to be played back at that time (scheduled object), and an object whose playback start time is arbitrarily determined, for example, by a user operation (non-scheduled object). FIG. 93 is a view for explaining an example of describing a Playlist in the case of a synchronized advanced object. FIG. 93 exemplifies the cases <1> and <2>, which must be played back in synchronism with the Timeline, of the four types mentioned above. In FIG. 93, an explanation is given using Effect Audio. Effect Audio1 corresponds to <1>, and Effect Audio2 corresponds to <2> in FIG. 92. Effect Audio1 is a model whose start and end times are defined. Effect Audio2 has its own playback time of "600", and its playable period, whose start time is arbitrarily determined by a user operation, is a period from time 1000 to 1800. When App3 starts from time 1000 and the presentation of Effect Audio2 starts at time 1050, it is played back until time 1650 on the Timeline in synchronism with it. When the presentation of Effect Audio2 starts from time 1100, it is similarly played back in synchronism until time 1700. However, a presentation that extends beyond the Application would cause conflicts if there is another Object. Therefore, a restriction is established to inhibit such presentation. 
For this reason, when the presentation of Effect Audio2 starts from time 1600, it would last until time 2000 based on its own playback time, but in practice it ends at time 1800, the application end time. FIG. 94 is a view for explaining an example of a description of a Playlist in the case of a synchronized advanced object. FIG. 94 shows an example of a description of the track elements for Effect Audio1 and Effect Audio2 used in FIG. 93. Whether or not an object is to be synchronized with the Timeline can be defined using a sync attribute, and whether its playback period on the Timeline is fixed or can be selected within a playable period by, for example, a user operation can be defined using a time attribute. Network: This chapter describes the specification of the network access functionality of the HD DVD player. In this specification, the following simple network connection model is assumed. The minimum requirements are: - The HD DVD player is connected to the Internet. - A name resolution service such as DNS is available to translate domain names into IP addresses. - A throughput of at least 512 kbps is guaranteed. Throughput is defined as the amount of data successfully transmitted from a server on the Internet to an HD DVD player in a given period of time. It takes into account retransmission due to errors and control information such as session establishment. In terms of buffer management and playback timing, the HD DVD player supports two types of download: full download and streaming (progressive download). In this specification, these terms are defined as follows: Full download: the HD DVD player has a buffer large enough to store the whole file. The transfer of the complete file from a server to the player is completed before the file is played back. 
The Advanced Navigation files and the Advanced Element files are downloaded by full download. If the file size of a Secondary Video Set is small enough to be stored in the File Cache (a part of the Data Cache), it can also be downloaded by full download.
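The rule above, full download whenever the file fits in the File Cache and streaming otherwise, can be sketched as follows. This is a minimal illustration, not part of the specification; the function name, the file-kind labels, and the cache size constant are assumptions (the 64 MB figure is the minimum Data Cache size given later in this chapter, part of which is the File Cache).

```python
# Minimal sketch of the download-mode decision described above.
# The names and sizes are illustrative assumptions, not values
# taken from the HD DVD specification.

FILE_CACHE_BYTES = 64 * 2**20  # upper bound: the 64 MB minimum Data Cache

def choose_download_mode(file_kind: str, file_size: int) -> str:
    """Return 'full' or 'streaming' for a file to be fetched from a server."""
    if file_kind in ("advanced_navigation", "advanced_element"):
        # These files are always transferred completely before use.
        return "full"
    if file_kind == "secondary_video_set":
        # A Secondary Video Set may be fully downloaded only if it fits
        # in the File Cache; otherwise it is streamed through the
        # ring-shaped Streaming Buffer.
        return "full" if file_size <= FILE_CACHE_BYTES else "streaming"
    raise ValueError(f"unknown file kind: {file_kind}")

print(choose_download_mode("advanced_element", 10_000))          # full
print(choose_download_mode("secondary_video_set", 500 * 2**20))  # streaming
```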
- Streaming (progressive download): the size of the buffer prepared for the file to be downloaded may be smaller than the size of the file. Using the buffer as a ring buffer, the player plays back the file while the download continues. Only Secondary Video Sets are downloaded by streaming. In this chapter, "download" is used to indicate both of the above. When the two types of download need to be distinguished, "full download" and "streaming" are used. The typical procedure for streaming of a Secondary Video Set is explained in FIG. 95. After the server-player connection is established, the HD DVD player requests a TMAP file using the HTTP GET method. Then, in response to the request, the server sends the TMAP file as a full download. After receiving the TMAP file, the player sends a message to the server requesting the Secondary Video Set corresponding to the TMAP. After transmission of the file from the server begins, the player starts playing back the file without waiting for completion of the download. For synchronized playback of the downloaded content, the timing of network access, as well as the presentation timing, must be pre-scheduled and explicitly described in the Playlist (TBD). This pre-scheduling makes it possible to guarantee the arrival of the data before it is processed by the Presentation Engine and the Navigation Manager. Server and Disc Authentication: To establish a secure connection that ensures secure communication between a server and an HD DVD player, an authentication process must precede the communication of data. First, the server must be authenticated using HTTPS. After that, the HD DVD disc is authenticated. The disc authentication process is optional and is initiated by the servers. Whether to request disc authentication is left to the servers, but all HD DVD players must behave as specified in this specification when it is required. 
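The sector-hash challenge-response that disc authentication uses (detailed in the steps below) can be sketched as follows. Since the specification leaves the message format and the hash function TBD, the SHA-1 choice, the dictionary-shaped messages, and the in-memory "disc" are all assumptions made purely for illustration.

```python
# Illustrative sketch of the challenge-response disc authentication
# described in this section. The message format and hash function are
# TBD in the specification; SHA-1 and the dict messages are assumptions.
import hashlib

# Stand-in for raw sector data on the disc (sector number -> bytes).
FAKE_DISC = {7: b"raw data of sector 7", 42: b"raw data of sector 42"}

def server_challenge():
    # Step 2: the server selects sector numbers for authentication.
    return [7, 42]

def player_response(sectors):
    # Step 3: the player reads the raw data of the specified sectors
    # and calculates a hash code to append to its next request.
    h = hashlib.sha1()
    for n in sectors:
        h.update(FAKE_DISC[n])
    return {"sectors": sectors, "hash": h.hexdigest()}

def server_verify(resp):
    # Step 4: the server recomputes the hash from its reference copy
    # and sends the requested file only if the codes match.
    h = hashlib.sha1()
    for n in resp["sectors"]:
        h.update(FAKE_DISC[n])
    return h.hexdigest() == resp["hash"]

print(server_verify(player_response(server_challenge())))  # True
```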
Server Authentication: At the start of network communications, an HTTPS connection must be established. During this process, the server must be authenticated using the Server Certificate in the SSL/TLS handshake protocol. Disc Authentication (FIG. 96): Disc Authentication is optional for servers, while all HD DVD players must support Disc Authentication. It is the server's responsibility to determine the need for Disc Authentication.
Disc Authentication consists of the following steps: 1. The player sends an HTTP GET request to a server. 2. The server selects the sector numbers used for Disc Authentication and responds with a message including them. 3. When the player receives the sector numbers, it reads the raw data of the specified sectors and calculates a hash code. The hash code and the sector numbers are appended to the next HTTP GET request to the server. 4. If the hash code is correct, the server sends the requested file as a response. If the hash code is not correct, the server sends an error response. The server can re-authenticate the disc at any time by sending a response message that includes sector numbers to be read. It must be taken into account that Disc Authentication can interrupt continuous playback, since it requires random access to the disc. The format of the messages for each of the steps and the hash function are TBD. Walled Garden List: The walled garden list defines a list of accessible network domains. Access to network domains that are not included in this list is prohibited. The details of the walled garden list are TBD. Download Model: Download Data Flow Model (FIG. 97). As explained above, the files transmitted from a server are stored in the Data Cache by the Network Manager. The Data Cache consists of two areas: the File Cache and the Streaming Buffer. The File Cache is used to store the files downloaded by full download, while the Streaming Buffer is used for streaming. The size of the Streaming Buffer is usually smaller than the size of the Secondary Video Set to be downloaded by streaming; therefore, this buffer is used as a ring buffer and is managed by the Streaming Buffer Manager. The data flow into the File Cache and the Streaming Buffer is modeled below. - The Network Manager manages all communications with the servers. 
It establishes the connection between the player and the servers and processes all the authentication procedures. It also requests file downloads from the servers using the appropriate protocol. The request timing is triggered by the Navigation Manager. - The Data Cache is a memory used to store the downloaded data and the data read from the HD DVD disc. The minimum size of the Data Cache is 64 MB. The Data Cache is divided into two areas: the File Cache and the Streaming Buffer. - The File Cache is a buffer used to store the data downloaded by full download. The File Cache is also used to store the data read from the HD DVD disc. - The Streaming Buffer is a buffer used to store a part of the downloaded files during streaming. The size of the Streaming Buffer is specified in the Playlist. - The Streaming Buffer Manager controls the behavior of the Streaming Buffer. It treats the Streaming Buffer as a ring buffer. During streaming, if the Streaming Buffer is not full, the Streaming Buffer Manager stores as much data as possible in the Streaming Buffer.
- The Data Supply Manager takes the data from the Streaming Buffer at the appropriate time and feeds it to the Secondary Video Decoder.
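The ring-buffer behavior described in the bullets above can be sketched as follows. This is a minimal illustration, assuming nothing beyond the text: the download side stops storing when the buffer is full, and the supply side drains bytes toward the decoder, freeing space so the transfer can resume. The class and method names are invented for the sketch.

```python
# Minimal ring-buffer sketch of the Streaming Buffer described above:
# the Streaming Buffer Manager fills it while the Data Supply Manager
# drains it toward the Secondary Video Decoder. Names are illustrative.
from collections import deque

class StreamingBuffer:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buf = deque()  # holds individual bytes (as ints)

    def free_space(self) -> int:
        return self.capacity - len(self.buf)

    def store(self, data: bytes) -> int:
        """Download side: store as much data as fits; return bytes accepted.
        When the buffer is full, input stops (accepted == 0)."""
        accepted = data[: self.free_space()]
        self.buf.extend(accepted)
        return len(accepted)

    def supply(self, n: int) -> bytes:
        """Data Supply Manager side: take up to n bytes for the decoder."""
        count = min(n, len(self.buf))
        return bytes(self.buf.popleft() for _ in range(count))

ring = StreamingBuffer(capacity=8)
assert ring.store(b"ABCDEFGHIJ") == 8  # only 8 bytes fit; transfer pauses
assert ring.supply(3) == b"ABC"        # decoder drains 3 bytes
assert ring.store(b"IJ") == 2          # freed space accepts the remainder
```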
Buffer Model for Full Download (File Cache): For full download planning, the File Cache behavior is completely specified by the following data input/output model and action timing model. FIG. 98 shows an example of the behavior of the buffer. Data input/output model: - The data input rate is 512 kbps (TBD).
- The downloaded data is removed from the File Cache at the end of the application period. Action timing model: - The download starts at the Download Start Time specified in the Playlist by the Prefetch element. - The presentation starts at the Presentation Start Time specified in the Playlist by the track element. Using this model, network access must be planned in such a way that the download is complete before the presentation time. This condition is equivalent to the condition that the margin_time calculated by the following formula is positive. margin_time = (presentation_start_time - download_start_time) - (data_size / minimum_throughput) The margin_time is a margin to absorb variations in the network throughput. Buffer Model for Streaming (Streaming Buffer): For streaming planning, the behavior of the Streaming Buffer is completely specified by the following data input/output model and action timing model. FIG. 99 shows an example of the behavior of the buffer. Data input/output model: - The data input rate is 512 kbps (TBD).
- After the presentation start time, the data is output from the buffer at the video bit rate. - When the Streaming Buffer is full, data input stops. Action timing model: - The streaming starts at the Download Start Time.
- The presentation starts at the Presentation Start Time. In the case of streaming, the margin_time calculated by the following formula must be positive. margin_time = presentation_start_time - download_start_time The size of the Streaming Buffer, which is described in the configuration in the Playlist, must satisfy the following condition. streaming_buffer_size >= margin_time * minimum_throughput In addition to these conditions, the following trivial condition must be satisfied. minimum_throughput >= video_bit_rate Data Flow Model for Random Access: In the case where a Secondary Video Set is downloaded by full download, any trick play such as fast forward and reverse playback can be supported. On the other hand, in the case of streaming, only jumps (random access) are supported. The model for random access is TBD. Download Scheduling: To achieve synchronized playback of downloaded content, network access must be pre-scheduled. The network access schedule is described as the Download Start Time in the Playlist. For network access scheduling, the following conditions are assumed: - The network throughput is always constant (512 kbps: TBD). - Only a single session can be used via HTTP/HTTPS, and multiple sessions are not allowed. Therefore, in the authoring stage, downloads must be scheduled so that no more than one file is downloaded at a time. - For streaming of a Secondary Video Set, the TMAP file of the Secondary Video Set must be downloaded in advance. - Under the network data flow model described above, full downloads and streaming must be pre-scheduled so as not to cause overflow/underflow of the buffer. The network access schedule is described by the Prefetch element for full download and by the preload attribute in the Clip element for streaming, respectively (TBD). 
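The margin and buffer conditions above can be checked numerically at authoring time. The following is a minimal sketch, with assumed units (times in seconds, sizes in bits, rates in bit/s) and invented function names; it simply encodes the formulas given in this chapter.

```python
# Sketch of the authoring-time checks derived from the formulas above.
# Units are an assumption: times in seconds, sizes in bits, rates in bit/s.

MIN_THROUGHPUT = 512_000  # 512 kbps (TBD in the specification)

def full_download_margin(pres_start, dl_start, data_size,
                         throughput=MIN_THROUGHPUT):
    # margin_time = (presentation_start_time - download_start_time)
    #               - data_size / minimum_throughput; must be positive.
    return (pres_start - dl_start) - data_size / throughput

def streaming_ok(pres_start, dl_start, buffer_size, video_bit_rate,
                 throughput=MIN_THROUGHPUT):
    margin = pres_start - dl_start                 # must be positive
    return (margin > 0
            and buffer_size >= margin * throughput  # buffer-size condition
            and throughput >= video_bit_rate)       # trivial condition

# A 10-second prefetch lead with a 4 Mbit file leaves ~2.19 s of margin.
print(full_download_margin(pres_start=20, dl_start=10, data_size=4_000_000))
print(streaming_ok(pres_start=20, dl_start=10,
                   buffer_size=6_000_000, video_bit_rate=400_000))  # True
```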
For example, the following description specifies a schedule for a full download. This description indicates that the download of snap.jpg should start at 00:10:00:00 on the title time. <Prefetch src="http://sample.com/snap.jpg" titleTimeBegin="00:10:00:00"/> Another example explains a network access schedule for streaming of a Secondary Video Set. Before the download of the Secondary Video Set begins, the TMAP corresponding to the Secondary Video Set must be completely downloaded. FIG. 100 represents the relationship between the presentation schedule and the network access schedule specified by this description. <SecondaryVideoSetTrack> <Prefetch src="http://sample.com/clip1.TMAP" titleTimeBegin="00:02:20:00"/> <Clip src="http://sample.com/clip1.TMAP" preload="00:02:40" titleTimeBegin="00:03:00:00"/> </SecondaryVideoSetTrack> This invention is not limited to the embodiments described above and can be embodied by modifying the component elements in various ways without departing from the spirit or essential character thereof, on the basis of the techniques available at present or in future implementation phases. For example, this invention can be applied not only to the DVD-Video ROM discs currently popular throughout the world but also to reproducible, recordable DVD-VR (video recording) discs, for which demand has increased significantly in recent years. In addition, the invention can be applied to the playback system or the recording and playback system of a next-generation HD DVD that is expected to become popular quickly. While certain embodiments of the invention have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. 
Indeed, the novel methods and systems described herein can be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein can be made without departing from the spirit of the inventions. The appended claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.

Claims (1)

CLAIMS 1. An information storage medium, comprising: a management area in which management information to manage content is recorded; and a content area in which the content managed on the basis of the management information is recorded, characterized in that the content area includes an object area in which a plurality of objects are recorded, and a time map area in which a time map for reproducing these objects in a specified period on a timeline is recorded, and the management area includes a play list area in which a play list for controlling the reproduction of a menu and a title, each composed of the objects, on the basis of the time map is recorded, the play list allowing the menu to be reproduced dynamically. 2. An information reproducing apparatus which reproduces an information storage medium as claimed in claim 1, characterized in that it comprises: a reading unit configured to read the play list recorded in the information storage medium; and a reproducing unit configured to reproduce the menu on the basis of the play list read by the reading unit. 3. An information reproducing method for reproducing an information storage medium as claimed in claim 1, characterized in that it comprises: reading the play list recorded in the information storage medium; and reproducing the menu on the basis of the play list. 4. A network communication system characterized in that it comprises: a player which reads information from a storage medium, requests a server for reproduction information via a network, downloads the reproduction information from the server, and reproduces the information read from the storage medium and the reproduction information downloaded from the server; and a server which provides the player with the reproduction information according to the request for the reproduction information made by the player. 
SUMMARY OF THE INVENTION An information storage medium according to an embodiment of the present invention comprises a management area in which management information to manage content is recorded, and a content area in which the content managed on the basis of the management information is recorded. The content area includes an object area in which a plurality of objects are recorded, and a time map area in which a time map for reproducing these objects in a specified period on a timeline is recorded. The management area includes a play list area in which a play list for controlling the reproduction of a menu and a title, each composed of the objects, on the basis of the time map is recorded.
[Drawing sheet 1/86] FIG. 1A: example of a Standard Content structure (VMG, Navigation Data (IFO), Video Object Data (EVOB)). FIG. 1B: example of an Advanced Content structure (Advanced Application: Advanced Navigation (Playlist, Loading Information, Script), Advanced Element (Image, Effect Sound, Font); Secondary Video Set: TMAP, S-EVOB).
MXPA06013259A 2005-03-15 2006-09-21 Information storage medium, information reproducing apparatus, information reproducing method, and network communication system. MXPA06013259A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005072136A JP2006260611A (en) 2005-03-15 2005-03-15 Information storage medium, device and method for reproducing information, and network communication system
JP2006005189 2006-09-21

Publications (1)

Publication Number Publication Date
MXPA06013259A true MXPA06013259A (en) 2007-02-28

Family

ID=40259079

Family Applications (1)

Application Number Title Priority Date Filing Date
MXPA06013259A MXPA06013259A (en) 2005-03-15 2006-09-21 Information storage medium, information reproducing apparatus, information reproducing method, and network communication system.

Country Status (1)

Country Link
MX (1) MXPA06013259A (en)


Legal Events

Date Code Title Description
FA Abandonment or withdrawal