TW201105108A - Playback device, playback method, and program - Google Patents

Playback device, playback method, and program

Info

Publication number
TW201105108A
TW201105108A (Application TW99110155A)
Authority
TW
Taiwan
Prior art keywords
stream
view video
data
dependent
video stream
Prior art date
Application number
TW99110155A
Other languages
Chinese (zh)
Other versions
TWI532362B (en)
Inventor
Shinobu Hattori
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp
Publication of TW201105108A
Application granted
Publication of TWI532362B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/156 Mixing image signals
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G11B20/12 Formatting, e.g. arrangement of data block or words on the record carriers
    • G11B20/1217 Formatting, e.g. arrangement of data block or words on the record carriers on discs
    • G11B20/1252 Formatting, e.g. arrangement of data block or words on the record carriers on discs for discontinuous data, e.g. digital information signals, computer programme data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189 Recording image signals; Reproducing recorded image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G11B20/10527 Audio or video recording; Data buffering arrangements
    • G11B2020/10537 Audio or video recording
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G11B20/12 Formatting, e.g. arrangement of data block or words on the record carriers
    • G11B2020/1264 Formatting, e.g. arrangement of data block or words on the record carriers wherein the formatting concerns a specific kind of data
    • G11B2020/1289 Formatting of user data
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00 Record carriers by type
    • G11B2220/20 Disc-shaped record carriers
    • G11B2220/25 Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B2220/2537 Optical discs
    • G11B2220/2541 Blu-ray discs; Blue laser DVR discs

Abstract

The present invention makes it possible, for instance when displaying 3D images, to determine which of the base stream and the extension stream is the stream of the left image, and then to play back the 3D content accordingly. When the value of view_type is 0, data that has been decoded from base view video packets identified by PID=0 and stored in the DPB (151) is output to the L video plane generation unit (161). A view_type value of 0 indicates that the base view video stream is an L view stream. In this case, data decoded from dependent view video packets identified by PIDs other than 0 is output to the R video plane generation unit (162). The invention can be applied to playback devices supporting the BD-ROM specification.
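The output switching described in the abstract can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the function name `route_decoded_picture`, the plane lists, and the dependent-view PID value are assumptions chosen for the example. It routes each decoded picture to the L or R video plane generator depending on the view_type flag and the packet's PID.

```python
# Sketch of the abstract's output switching (hypothetical names).
# view_type == 0 means the base view video stream is an L view stream, so
# pictures decoded from base view packets (PID == 0 here) go to the L plane
# generator and dependent view pictures go to the R plane generator.
# view_type == 1 would mean the base view carries the R image, so routing flips.

def route_decoded_picture(view_type, pid, picture, l_plane, r_plane):
    is_base_view = (pid == 0)          # base view video identified by PID = 0
    if view_type == 0:                 # base view carries the L image
        (l_plane if is_base_view else r_plane).append(picture)
    else:                              # base view carries the R image
        (r_plane if is_base_view else l_plane).append(picture)

l_plane, r_plane = [], []
route_decoded_picture(0, 0, "base#1", l_plane, r_plane)    # base view -> L plane
route_decoded_picture(0, 4113, "dep#1", l_plane, r_plane)  # dependent view -> R plane
print(l_plane, r_plane)  # prints ['base#1'] ['dep#1']
```

The same switch, driven by a flag read from playback control information, is what lets the player pair each decoded stream with the correct eye's plane.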

Description

201105108

VI. DESCRIPTION OF THE INVENTION

[Technical Field]

The present invention relates to a playback device, a playback method, and a program. More particularly, it relates to a playback device, a playback method, and a program that, when displaying a 3D image for example, can determine which of a base stream and an extension stream is the stream of the left image, and play back 3D content accordingly.

[Prior Art]

Two-dimensional image content such as movies remains the mainstream, but recently stereoscopic image content that enables stereoscopic viewing has attracted attention. Displaying stereoscopic images requires dedicated equipment; one example of such stereoscopic viewing equipment is the IP (Integral Photography) stereoscopic image system developed by NHK (Nippon Hoso Kyokai, the Japan Broadcasting Corporation).

The image data of a stereoscopic image consists of image data for a plurality of viewpoints (image data of images shot from a plurality of viewpoints). The greater the number of viewpoints and the wider their range, the more directions from which the subject can be observed, realizing a so-called "holographic television".

The stereoscopic image with the fewest viewpoints is a stereo image with two viewpoints (a so-called 3D image). The image data of a stereo image consists of left image data, the image observed with the left eye, and right image data, the image observed with the right eye.

On the other hand, high-resolution image content such as movies has a large data volume, so recording such content requires a large-capacity recording medium. Blu-ray (registered trademark) Discs (hereinafter also called BD), such as the BD-ROM (Read Only Memory), are such large-capacity recording media.

[Prior Art Documents]

[Patent Document 1] Japanese Patent Laid-Open Publication No. 2005-348314

[Summary of the Invention]

[Problems to be Solved by the Invention]

However, the BD specification does not define how the image data of stereoscopic images, including stereo images, should be recorded on a BD, nor how it should be played back. For example, the image data of a stereo image consists of two data streams: the data stream of the left image and the data stream of the right image. A decoder model capable of playing back these two data streams and displaying a stereo image must therefore be defined, and a decoder based on that model must be implemented in the playback device.

The present invention has been made in view of such circumstances, and makes it possible, for example when displaying a 3D image, to determine which of the base stream and the extension stream is the stream of the left image, and to play back 3D content accordingly.

[Technical Means for Solving the Problem]

A playback device according to one aspect of the present invention includes: acquisition means for acquiring a base stream and an extension stream obtained by encoding a plurality of pieces of video data with a predetermined encoding scheme; decoding means for decoding the base stream and the extension stream acquired by the acquisition means; and switching means for switching the output destination of the decoded data produced by the decoding means, based on a flag indicating which of the base stream and the extension stream is the stream of the left image and which is the stream of the right image.

The playback device may further include first generation means for generating a plane of the left image and second generation means for generating a plane of the right image. In this case, the switching means may output the decoded data of whichever of the base stream and the extension stream the flag identifies as the left image stream to the first generation means, and output the decoded data of the other stream to the second generation means.

The switching means may identify, based on the PID, whether data decoded by the decoding means is decoded data of the base stream or decoded data of the extension stream.

The switching means may identify, based on view identification information set in the extension stream at the time of encoding, whether data decoded by the decoding means is decoded data of the base stream or decoded data of the extension stream. The switching means may identify decoded data of the extension stream based on the set view identification information, and identify decoded data of the stream in which no view identification information is set as decoded data of the base stream.

The acquisition means may read out and acquire the base stream and the extension stream from a mounted recording medium, and also read out from the recording medium playback control information that describes the flag and is used to control playback of the base stream and the extension stream; the switching means may then switch the output destination of the decoded data produced by the decoding means based on the flag described in the playback control information read out from the recording medium by the acquisition means.

The acquisition means may acquire additional information that is attached to at least one of the base stream and the extension stream and describes the flag, and the switching means may switch the output destination of the decoded data produced by the decoding means based on the flag described in the additional information acquired by the acquisition means. The acquisition means may acquire such additional information attached to at least one of the base stream and the extension stream delivered by broadcast.

The acquisition means may receive and acquire the base stream and the extension stream as transmitted, and also receive transmission control information that describes the flag and is used to control transmission of the base stream and the extension stream; the switching means may then switch the output destination of the decoded data produced by the decoding means based on the flag described in the transmission control information received by the acquisition means. The acquisition means may receive and acquire the base stream and the extension stream transmitted via radio waves, together with transmission control information that describes the flag and controls their transmission.

A playback method according to one aspect of the present invention includes the steps of: acquiring a base stream and an extension stream obtained by encoding a plurality of pieces of video data with a predetermined encoding scheme; decoding the acquired base stream and extension stream; and switching the output destination of the decoded data based on a flag indicating which of the base stream and the extension stream is the stream of the left image and which is the stream of the right image.

A program according to one aspect of the present invention causes a computer to execute processing including the steps of: acquiring a base stream and an extension stream obtained by encoding a plurality of pieces of video data with a predetermined encoding scheme; decoding the acquired base stream and extension stream; and switching the output destination of the decoded data based on a flag indicating which of the base stream and the extension stream is the stream of the left image and which is the stream of the right image.

In one aspect of the present invention, a base stream and an extension stream obtained by encoding a plurality of pieces of video data with a predetermined encoding scheme are acquired, the acquired base stream and extension stream are decoded, and the output destination of the decoded data is switched based on a flag indicating which of the base stream and the extension stream is the stream of the left image and which is the stream of the right image.

[Effects of the Invention]

According to the present invention, when displaying a 3D image for example, it can be determined which of the base stream and the extension stream is the stream of the left image, and 3D content can be played back accordingly.

[Embodiments]

<First Embodiment>

[Configuration Example of a Playback System]

Fig. 1 shows a configuration example of a playback system that includes a playback device 1 to which the present invention is applied. As shown in Fig. 1, the playback system is configured by connecting a playback device 1 and a display device 3 with an HDMI (High Definition Multimedia Interface) cable or the like. An optical disc 2 such as a BD is mounted in the playback device 1.

The optical disc 2 records the streams necessary for displaying a stereo image with two viewpoints (a so-called 3D image). The playback device 1 is a player that supports 3D playback of the streams recorded on the optical disc 2. The playback device 1 plays back the streams recorded on the optical disc 2 and displays the resulting 3D image on the display device 3, which includes a television receiver or the like. Audio is likewise played back by the playback device 1 and output from speakers or the like provided in the display device 3.

Various methods have been proposed for displaying 3D images. Here, the type 1 and type 2 display methods described below are adopted.

The type 1 display method composes the data of a 3D image from the data of the image observed with the left eye (the L image) and the data of the image observed with the right eye (the R image), and displays the 3D image by alternately displaying the L image and the R image.

The type 2 display method displays a 3D image by displaying an L image and an R image generated using the data of a source image, which serves as the origin for generating the 3D image, and Depth data. The 3D image data used by the type 2 display method thus consists of the source image data and the Depth data that, given the source image, can generate the L image and the R image.

The type 1 display method requires glasses for viewing. The type 2 display method allows 3D images to be viewed without glasses.

The optical disc 2 records streams such that a 3D image can be displayed with either the type 1 or the type 2 display method. As the encoding scheme for recording such streams on the optical disc 2, the H.264 AVC (Advanced Video Coding)/MVC (Multi-view Video Coding) profile standard, for example, can be adopted.

[H.264 AVC/MVC Profile]

The H.264 AVC/MVC profile standard defines an image stream called Base view video and an image stream called Dependent view video. Hereinafter, the H.264 AVC/MVC profile standard is simply referred to as MVC where appropriate.

Fig. 2 shows an example of shooting. As shown in Fig. 2, the same subject is shot by a camera for the L image and a camera for the R image. Elementary streams of the video shot by the L-image camera and the R-image camera are input to an MVC encoder.

Fig. 3 is a block diagram showing a configuration example of the MVC encoder. As shown in Fig. 3, the MVC encoder 11 includes an H.264/AVC encoder 21, an H.264/AVC decoder 22, a Depth calculation unit 23, a Dependent view video encoder 24, and a multiplexer 25.

The stream of video #1 shot by the L-image camera is input to the H.264/AVC encoder 21 and the Depth calculation unit 23. The stream of video #2 shot by the R-image camera is input to the Depth calculation unit 23 and the Dependent view video encoder 24. Alternatively, the stream of video #2 may be input to the H.264/AVC encoder 21 and the Depth calculation unit 23, and the stream of video #1 to the Depth calculation unit 23 and the Dependent view video encoder 24.

The H.264/AVC encoder 21 encodes the stream of video #1 into, for example, an H.264 AVC/High Profile video stream. The H.264/AVC encoder 21 outputs the encoded AVC video stream as the Base view video stream to the H.264/AVC decoder 22 and the multiplexer 25.

The H.264/AVC decoder 22 decodes the AVC video stream supplied from the H.264/AVC encoder 21 and outputs the decoded stream of video #1 to the Dependent view video encoder 24.
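The dataflow of the MVC encoder 11 in Fig. 3 can be sketched as below. This is a toy model under stated assumptions: the tuple-tagging stand-ins replace real H.264/AVC and MVC codecs, and the function name `mvc_encoder` is invented for the example; only the wiring between the five blocks follows the description.

```python
# Toy sketch of the Fig. 3 dataflow (stand-in transforms, not real codecs).

def mvc_encoder(video1_frames, video2_frames):
    # H.264/AVC encoder 21: encodes video #1 into the Base view video stream.
    base_view_stream = [("avc", f) for f in video1_frames]
    # H.264/AVC decoder 22: decodes that stream back so the dependent encoder
    # can use the reconstructed pictures as inter-view references.
    decoded_video1 = [f for (_, f) in base_view_stream]
    # Depth calculation unit 23: derives Depth from video #1 and video #2.
    depth_stream = [("depth", a, b) for a, b in zip(video1_frames, video2_frames)]
    # Dependent view video encoder 24: encodes video #2 using decoded video #1
    # as the reference picture.
    dependent_stream = [("dep", f, ref)
                        for f, ref in zip(video2_frames, decoded_video1)]
    # Multiplexer 25: multiplexes the three streams into one output.
    return list(zip(base_view_stream, depth_stream, dependent_stream))

ts = mvc_encoder(["L0", "L1"], ["R0", "R1"])
```

Note the design point the sketch preserves: the dependent encoder references the *decoded* base view pictures, matching what a conforming decoder will reconstruct, not the raw camera frames.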

The Depth calculation unit 23 calculates Depth based on the stream of video #1 and the stream of video #2, and outputs the calculated Depth data to the multiplexer 25.

The Dependent view video encoder 24 encodes the stream of video #1 supplied from the H.264/AVC decoder 22 together with the externally input stream of video #2, and outputs a Dependent view video stream.

In Base view video, predictive coding that uses another stream as a reference image is not permitted. As shown in Fig. 4, however, Dependent view video is permitted to perform predictive coding that uses the Base view video as a reference image. For example, when encoding is performed with the L image as the Base view video and the R image as the Dependent view video, the data volume of the resulting Dependent view video stream is smaller than that of the Base view video stream.

Note that, since encoding is performed with H.264/AVC, prediction in the time direction is performed on the Base view video. For the Dependent view video as well, prediction in the time direction is performed together with inter-view prediction. To decode the Dependent view video, decoding of the corresponding Base view video that was referenced at encoding time must already have finished.
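The decoding-order constraint in the last sentence can be sketched as follows. This is an illustrative model only, with assumed names (`decode_access_units`); a real MVC decoder tracks references through its decoded picture buffer rather than a set of IDs, but the rule is the same: a Dependent view picture is decodable only after its Base view reference.

```python
# Sketch of the decoding-order constraint: a Dependent view picture can be
# decoded only after the Base view picture it references has been decoded.

def decode_access_units(units):
    decoded_base = set()
    output = []
    for kind, pic_id in units:
        if kind == "base":
            decoded_base.add(pic_id)
        elif kind == "dependent" and pic_id not in decoded_base:
            raise RuntimeError(f"base picture {pic_id} not yet decoded")
        output.append((kind, pic_id))
    return output

# Base/dependent pairs interleaved in decoding order: valid.
ok = decode_access_units([("base", 0), ("dependent", 0),
                          ("base", 1), ("dependent", 1)])
```

Presenting a dependent picture before its base reference would raise, which mirrors why the streams must be arranged so that base view data always arrives (and decodes) first.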

The Dependent view video encoder 24 outputs the Dependent view video stream, encoded using such inter-view prediction as well, to the multiplexer 25.

The multiplexer 25 multiplexes the Base view video stream supplied from the H.264/AVC encoder 21, the Dependent view video stream (Depth data) supplied from the Depth calculation unit 23, and the Dependent view video stream supplied from the Dependent view video encoder 24 into, for example, an MPEG2 TS (Motion Picture Experts Group 2 Transport Stream). The Base view video stream and the Dependent view video stream may be multiplexed into a single MPEG2 TS, or the streams may be contained in different MPEG2 TSs.

The multiplexer 25 outputs the generated TS (MPEG2 TS). The TS output from the multiplexer 25 is recorded on the optical disc 2 by a recording device together with other management data, and is provided to the playback device 1 in the form recorded on the optical disc 2.

When it is necessary to distinguish the Dependent view video used together with the Base view video in the type 1 display method from the Dependent view video (Depth) used together with the Base view video in the type 2 display method, the former is called D1 view video and the latter D2 view video.

3D playback in the type 1 display method using Base view video and D1 view video is called B-D1 playback. 3D playback in the type 2 display method using Base view video and D2 view video is called B-D2 playback.

When performing B-D1 playback according to a user instruction or the like, the playback device 1 reads out the Base view video stream and the D1 view video stream from the optical disc 2 and plays them back. When performing B-D2 playback, the playback device 1 reads out the Base view video stream and the D2 view video stream from the optical disc 2 and plays them back. When playing back an ordinary 2D image, the playback device 1 reads out only the Base view video stream from the optical disc 2 and plays it back.

Since the Base view video stream is an AVC video stream encoded with H.264/AVC, any player supporting the BD format can play back the Base view video stream and display a 2D image.

In the following, the case where the Dependent view video is the D1 view video is mainly described; where simply "Dependent view video" is written, it means the D1 view video. The D2 view video is recorded on the optical disc 2 in the same way as the D1 view video, and can likewise be played back.

[Configuration Example of a TS]

Fig. 5 shows a configuration example of a TS. In the Main TS (Main Transport Stream) of Fig. 5, the streams of Base view video, Dependent view video, Primary audio, Base PG (Base Presentation Graphic), Dependent PG (Dependent Presentation Graphic), Base IG (Base Interactive Graphic), and Dependent IG (Dependent Interactive Graphic) are multiplexed.
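The mode-dependent stream selection described above (B-D1, B-D2, ordinary 2D) can be sketched as a small lookup. The function and stream names are illustrative assumptions, not identifiers from the patent.

```python
# Sketch of which streams the player reads from the disc per playback mode.

def streams_to_read(mode):
    if mode == "B-D1":   # type 1 display: Base view + D1 view video
        return ["base_view_video", "d1_view_video"]
    if mode == "B-D2":   # type 2 display: Base view + D2 view video (Depth)
        return ["base_view_video", "d2_view_video"]
    if mode == "2D":     # ordinary playback: Base view video only
        return ["base_view_video"]
    raise ValueError(f"unknown playback mode: {mode}")
```

Because every mode includes the Base view video stream, and that stream is plain H.264/AVC, 2D playback falls out as the degenerate case any BD player can handle.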

Graphic) and Dependent IG (Dependent Interactive Graphic) streams are multiplexed. In this way, there are also cases where the Dependent view video stream is included in the Main TS together with the Base view video stream.

A Main TS and a Sub TS (Sub Transport Stream) are recorded on the optical disc 2. The Main TS is a TS that contains at least the Base view video stream. The Sub TS is a TS that contains streams other than the Base view video stream and is used together with the Main TS.

For PG and IG as well, a Base view stream and a Dependent view stream are each prepared so that, like the video, they can be displayed in 3D.

The Base view plane of PG or IG obtained by decoding each stream is composited with the Base view video plane obtained by decoding the Base view video stream, and displayed. Likewise, the Dependent view plane of PG or IG is composited with the Dependent view video plane obtained by decoding the Dependent view video stream, and displayed.

For example, when the Base view video stream is a stream of L images and the Dependent view video stream is a stream of R images, the Base view streams of PG and IG are also graphics streams of L images, and the Dependent view PG and IG streams are graphics streams of R images.

Conversely, when the Base view video stream is a stream of R images and the Dependent view video stream is a stream of L images, the Base view streams of PG and IG are also graphics streams of R images, and the Dependent view PG and IG streams are graphics streams of L images.

Fig. 6 is a diagram showing another configuration example of the TS.

In the Main TS of Fig. 6, the Base view video and Dependent view video streams are multiplexed. In the Sub TS, on the other hand, the Primary audio, Base PG, Dependent PG, Base IG, and Dependent IG streams are multiplexed. There are thus also cases where the video streams are multiplexed into the Main TS in this way, while the PG and IG streams and the like are multiplexed into the Sub TS.

Fig. 7 is a diagram showing yet another configuration example of the TS.

In the Main TS of Fig. 7A, the Base view video, Primary audio, Base PG, Dependent PG, Base IG, and Dependent IG streams are multiplexed, while the Dependent view video stream is included in the Sub TS. In this way, the Dependent view video stream and the Base view video stream are sometimes included in different TSs.

In the Main TS of Fig. 7B, the Base view video, Primary audio, PG, and IG streams are multiplexed. In the Sub TS, the Dependent view video, Base PG, Dependent PG, Base IG, and Dependent IG streams are multiplexed.
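The multiplexing patterns just described can be captured in a small table. The sketch below is illustrative only: the stream labels and dictionary layout are hypothetical, but it encodes the rule from the text that a Main TS always carries at least the Base view video stream.

```python
# Hypothetical summary of the multiplexing layouts described for Figs. 5 to 7.
# Stream names follow the text; the dict structure itself is illustrative only.
LAYOUTS = {
    "fig5": {
        "main_ts": ["base_video", "dependent_video", "audio", "base_pg",
                    "dependent_pg", "base_ig", "dependent_ig"],
        "sub_ts": [],
    },
    "fig6": {
        "main_ts": ["base_video", "dependent_video"],
        "sub_ts": ["audio", "base_pg", "dependent_pg", "base_ig", "dependent_ig"],
    },
    "fig7a": {
        "main_ts": ["base_video", "audio", "base_pg", "dependent_pg",
                    "base_ig", "dependent_ig"],
        "sub_ts": ["dependent_video"],
    },
    "fig7b": {
        "main_ts": ["base_video", "audio", "pg_2d", "ig_2d"],
        "sub_ts": ["dependent_video", "base_pg", "dependent_pg",
                   "base_ig", "dependent_ig"],
    },
}

def main_ts_always_has_base_view(layouts):
    """The rule stated in the text: a Main TS contains at least Base view video."""
    return all("base_video" in cfg["main_ts"] for cfg in layouts.values())
```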

The PG and IG streams included in the Main TS are streams for 2D playback, and the streams included in the Sub TS are streams for 3D playback. In this way, it is also possible not to share the PG and IG streams between 2D playback and 3D playback.

As described above, the Base view video stream and the Dependent view video stream are sometimes included in different MPEG2 TSs. The advantage of recording the Base view video stream and the Dependent view video stream in different MPEG2 TSs will now be described.

For example, suppose that the bit rate at which streams can be multiplexed into one MPEG2 TS is limited. In this case, when both the Base view video stream and the Dependent view video stream are included in a single MPEG2 TS, the bit rate of each stream must be lowered to satisfy that constraint. As a result, the image quality deteriorates. When the streams are included in separate MPEG2 TSs, there is no need to lower the bit rates, so the image quality does not deteriorate.

[Application Format]

Fig. 8 is a diagram showing an example of the management of AV (Audio Video) streams by the playback device 1.

As shown in Fig. 8, AV streams are managed using two layers, PlayList and Clip. An AV stream may be recorded not only on the optical disc 2 but also in the local storage of the playback device 1.

Here, a pair consisting of one AV stream and the Clip Information that accompanies it is regarded as one object, and such objects are collectively referred to as Clips. In the following, a file storing an AV stream is called an AV stream file, and a file storing Clip Information is called a Clip Information file.

An AV stream is laid out on a time axis, and the access points of each Clip are specified in a PlayList mainly by time stamps. The Clip Information file is used, for example, to find the address in the AV stream at which decoding should start.
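The role of the Clip Information file described above, mapping a time stamp given through a PlayList to a decode-start address in the AV stream, can be sketched as a simple lookup. This is a minimal illustration; the table contents and field names are hypothetical, not the recorded Clip Information syntax.

```python
import bisect

# Hypothetical miniature of a Clip Information lookup table: it pairs
# presentation time stamps with the addresses at which decoding can start.
ep_map = [(0, 0), (90000, 1536), (180000, 3210), (270000, 5120)]  # (pts, address)

def decode_start_address(table, pts):
    """Return the address of the last entry point at or before the given time."""
    times = [t for t, _ in table]
    i = bisect.bisect_right(times, pts) - 1
    if i < 0:
        raise ValueError("requested time precedes the first entry point")
    return table[i][1]
```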

A PlayList is a collection of playback sections of AV streams. One playback section within an AV stream is called a PlayItem. A PlayItem is represented by a pair consisting of the IN point and the OUT point of the playback section on the time axis. As shown in Fig. 8, a PlayList contains one or more PlayItems.

The first PlayList from the left in Fig. 8 contains two PlayItems, which reference the first half and the second half, respectively, of the AV stream contained in the Clip on the left.

The second PlayList from the left contains one PlayItem, which references the entire AV stream contained in the Clip on the right.

The third PlayList from the left contains two PlayItems, which reference a portion of the AV stream contained in the Clip on the left and a portion of the AV stream contained in the Clip on the right, respectively.

For example, when the disc navigation program designates the left PlayItem contained in the first PlayList from the left as the playback target, the first half of the AV stream contained in the left Clip, which that PlayItem references, is played. A PlayList is thus used as playback management information for managing the playback of AV streams.

In a PlayList, a playback path formed by an arrangement of one or more PlayItems is called a Main Path. Also, in a PlayList, a playback path that runs parallel to the Main Path and is formed by an arrangement of one or more SubPlayItems is called a Sub Path.

Fig. 9 is a diagram showing the structure of the Main Path and Sub Paths.
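As an illustration of the two-level structure just described, the sketch below models a PlayList as a Main Path of PlayItems plus Sub Paths of SubPlayItems, mirroring the shape of Fig. 9. The classes and the tick values used for IN/OUT points are hypothetical, not the on-disc syntax.

```python
from dataclasses import dataclass, field

# Toy in-memory model of a PlayList: one Main Path of PlayItems and a list of
# Sub Paths, each holding SubPlayItems. IN/OUT values are arbitrary clock ticks.
@dataclass
class PlayItem:
    in_time: int
    out_time: int

@dataclass
class SubPlayItem:
    in_time: int
    out_time: int

@dataclass
class PlayList:
    play_items: list = field(default_factory=list)
    sub_paths: list = field(default_factory=list)  # each entry: list of SubPlayItem

    def main_path_duration(self):
        # Sum of the IN..OUT sections of the Main Path's PlayItems.
        return sum(p.out_time - p.in_time for p in self.play_items)

# The Fig. 9 shape: three PlayItems, three Sub Paths with 1, 2, and 1
# SubPlayItems respectively. PlayItem_id / SubPath_id are simply the list
# indices, assigned in order from front to back.
pl = PlayList(
    play_items=[PlayItem(0, 100), PlayItem(100, 250), PlayItem(250, 400)],
    sub_paths=[[SubPlayItem(0, 400)],
               [SubPlayItem(0, 150), SubPlayItem(200, 400)],
               [SubPlayItem(50, 300)]],
)
```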

A PlayList can have one Main Path and one or more Sub Paths.

The Base view video stream described above is managed as a stream referenced by a PlayItem constituting the Main Path. The Dependent view video stream is managed as a stream referenced by a SubPlayItem constituting a Sub Path.

The PlayList of Fig. 9 has one Main Path, formed by an arrangement of three PlayItems, and three Sub Paths.

IDs (identifiers) are assigned to the PlayItems constituting the Main Path in order from the front. IDs are likewise assigned to the Sub Paths in order from the front: SubPath_id=0, SubPath_id=1, and SubPath_id=2.

In the example of Fig. 9, the Sub Path of SubPath_id=0 contains one SubPlayItem, the Sub Path of SubPath_id=1 contains two SubPlayItems, and the Sub Path of SubPath_id=2 contains one SubPlayItem.

The Clip AV stream referenced by one PlayItem contains at least a video stream (main image data).

The Clip AV stream may or may not also contain one or more audio streams that are played at the same timing as (in synchronization with) the video stream contained in the Clip AV stream.

The Clip AV stream may or may not contain one or more streams of bitmap subtitle data (PG (Presentation Graphic)) that are played in synchronization with the video stream contained in the Clip AV stream.

The Clip AV stream may or may not contain one or more IG (Interactive Graphic) streams that are played in synchronization with the video stream contained in the AV stream file. An IG stream is used to display graphics such as buttons operated by the user.

In the Clip AV stream referenced by one PlayItem, the video stream, zero or more audio streams played in synchronization with it, zero or more PG streams, and zero or more IG streams are multiplexed.

One SubPlayItem references a video stream, audio stream, PG stream, or the like of a stream (another stream) different from the Clip AV stream referenced by the PlayItem.

The management of AV streams using such PlayLists, PlayItems, and SubPlayItems is described in, for example, Japanese Patent Application Laid-Open No. 2008-252740 and Japanese Patent Application Laid-Open No. 2005-348314.

[Directory Structure]

Fig. 10 is a diagram showing an example of the management structure of files recorded on the optical disc 2.

As shown in Fig. 10, files are managed hierarchically by a directory structure. One root directory is created on the optical disc 2. Everything under the root directory is the range managed by one recording and playback system.

A BDMV directory is placed under the root directory.

Directly under the BDMV directory are stored the Index file, which is the file named "Index.bdmv", and the MovieObject file, which is the file named "MovieObject.bdmv".

Under the BDMV directory, a BACKUP directory, a PLAYLIST directory, a CLIPINF directory, a STREAM directory, and the like are provided.

The PLAYLIST directory stores the PlayList files that describe PlayLists. Each PlayList file is given a name combining a five-digit number with the extension ".mpls". The single PlayList file shown in Fig. 10 is given the file name "00000.mpls".

The CLIPINF directory stores the Clip Information files. Each Clip Information file is given a name combining a five-digit number with the extension ".clpi".

The three Clip Information files in Fig. 10 are given the file names "00001.clpi", "00002.clpi", and "00003.clpi". In the following, a Clip Information file is referred to as a clpi file where appropriate.

For example, the clpi file "00001.clpi" is a file describing information about the Clip of the Base view video. The clpi file "00002.clpi" is a file describing information about the Clip of the D2 view video. The clpi file "00003.clpi" is a file describing information about the Clip of the D1 view video.

The STREAM directory stores stream files. Each stream file is given a name combining a five-digit number with the extension ".m2ts", or a name combining a five-digit number with the extension ".ilvt". In the following, a file with the extension ".m2ts" is referred to as an m2ts file, and a file with the extension ".ilvt" as an ilvt file, where appropriate.

The m2ts file "00001.m2ts" is a file for 2D playback; by specifying this file, the Base view video stream is read out. The m2ts file "00002.m2ts" is the file of the D2 view video stream, and the m2ts file "00003.m2ts" is the file of the D1 view video stream.

The ilvt file "10000.ilvt" is a file for B-D1 playback; by specifying this file, the Base view video stream and the D1 view video stream are read out. The ilvt file "20000.ilvt" is a file for B-D2 playback; by specifying this file, the Base view video stream and the D2 view video stream are read out.

In addition to those shown in Fig. 10, a directory storing audio stream files and the like are also provided under the BDMV directory.

[Syntax of Each Data]

Fig. 11 is a diagram showing the syntax of the PlayList file.
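The naming convention described above (a five-digit number plus a type-specific extension, placed in the corresponding directory under BDMV) can be captured by a small helper. The directory and extension pairs follow the text and Fig. 10; the function itself is an illustrative convenience, not part of the format.

```python
# Build a path under BDMV following the five-digit naming rule in the text.
def bdmv_path(kind, number):
    table = {
        "playlist":    ("PLAYLIST", "mpls"),
        "clipinfo":    ("CLIPINF", "clpi"),
        "stream":      ("STREAM", "m2ts"),
        "interleaved": ("STREAM", "ilvt"),
    }
    directory, ext = table[kind]
    # :05d zero-pads the number to the five digits the convention requires.
    return f"BDMV/{directory}/{number:05d}.{ext}"
```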

The PlayList file is a file stored in the PLAYLIST directory of Fig. 10 and given the extension ".mpls".

The type_indicator in Fig. 11 indicates the type of the file "xxxxx.mpls".

version_number indicates the version number of "xxxxx.mpls". version_number consists of four digits. For example, in a PlayList file for 3D playback, "0240", indicating "3D Spec version", is set.

PlayList_start_address indicates the head address of PlayList(), in units of relative bytes from the first byte of the PlayList file.

PlayListMark_start_address indicates the head address of PlayListMark(), in units of relative bytes from the first byte of the PlayList file.
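A minimal sketch of reading the header fields this section describes from a .mpls file: a type indicator, a version number, and the start addresses given as byte offsets relative to the head of the file. The binary widths assumed here (4-byte ASCII fields followed by 32-bit big-endian addresses) are an assumption of this sketch; the text specifies the fields and their relative-byte semantics, not their encoding.

```python
import struct

def parse_mpls_header(data):
    """Parse the assumed .mpls header layout: 'MPLS' + version + 3 addresses."""
    type_indicator = data[0:4].decode("ascii")
    version_number = data[4:8].decode("ascii")
    playlist_addr, mark_addr, ext_addr = struct.unpack_from(">III", data, 8)
    return {
        "type_indicator": type_indicator,
        "version_number": version_number,
        "PlayList_start_address": playlist_addr,
        "PlayListMark_start_address": mark_addr,
        "ExtensionData_start_address": ext_addr,
    }

# Synthetic sample: "0240" is the 3D Spec version value named in the text;
# the address values are arbitrary.
sample = b"MPLS0240" + struct.pack(">III", 0x3A, 0x120, 0x200)
```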

ExtensionData_start_address indicates the head address of ExtensionData(), in units of relative bytes from the first byte of the PlayList file.

After ExtensionData_start_address, 160 bits of reserved_for_future_use are included.

AppInfoPlayList() stores parameters related to the playback control of the PlayList, such as playback restrictions.

PlayList() stores parameters related to the Main Path, Sub Paths, and the like. The contents of PlayList() are described below.

PlayListMark() stores the mark information of the PlayList, that is, information about the marks that serve as jump destinations (jump points) for user operations or commands instructing chapter jumps and the like.

Private data can be inserted into ExtensionData().

Fig. 12 is a diagram showing a specific example of the description of the PlayList file.

As shown in Fig. 12, a 2-bit 3D_PL_type and a 1-bit view_type are described in the PlayList file. view_type is described in, for example, AppInfoPlayList() of Fig. 11.

3D_PL_type indicates the type of the PlayList.

view_type indicates whether the Base view video stream whose playback is managed by the PlayList is a stream of L images (L view) or a stream of R images (R view).

Fig. 13 is a diagram showing the meanings of the values of 3D_PL_type.

The value 00 of 3D_PL_type indicates a PlayList for 2D playback. The value 01 indicates a PlayList for B-D1 playback in 3D playback. The value 10 indicates a PlayList for B-D2 playback in 3D playback.

For example, when the value of 3D_PL_type is 01 or 10, 3DPlayList information is registered in ExtensionData() of the PlayList file. For example, information related to reading the Base view video stream and the Dependent view video stream from the optical disc 2 is registered as the 3DPlayList information.

Fig. 14 is a diagram showing the meanings of the values of view_type.

The value 0 of view_type indicates, when 3D playback is performed, that the Base view video stream is an L view stream. When 2D playback is performed, it indicates that the Base view video stream is an AVC video stream.

The value 1 of view_type indicates that the Base view video stream is an R view stream.

Because view_type is described in the PlayList file, the playback device 1 can identify whether the Base view video stream is an L view stream or an R view stream.

For example, when outputting video signals to the display device 3 via an HDMI cable, the playback device 1 is generally expected to distinguish the L view signal from the R view signal before outputting them.

Since it can identify whether the Base view video stream is an L view stream or an R view stream, the playback device 1 can distinguish the L view signal from the R view signal and output them accordingly.

Fig. 15 is a diagram showing the syntax of PlayList() of Fig. 11.

length is a 32-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of PlayList(). That is, length indicates the number of bytes from reserved_for_future_use to the end of the PlayList.

After length, 16 bits of reserved_for_future_use are prepared.

number_of_PlayItems is a 16-bit field indicating the number of PlayItems in the PlayList. In the case of the example of Fig. 9, the number of PlayItems is 3. The values of PlayItem_id are assigned from 0 in the order in which PlayItem() appears in the PlayList. For example, PlayItem_id=0, 1, 2 of Fig. 9 are assigned.

number_of_SubPaths is a 16-bit field indicating the number of Sub Paths in the PlayList. In the case of the example of Fig. 9, the number of Sub Paths is 3. The values of SubPath_id are assigned from 0 in the order in which SubPath() appears in the PlayList. For example, SubPath_id=0, 1, 2 of Fig. 9 are assigned. In the subsequent for statements, PlayItem() is referenced as many times as the number of PlayItems, and SubPath() is referenced as many times as the number of Sub Paths.

Fig. 16 is a diagram showing the syntax of SubPath() of Fig. 15.

length is a 32-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of SubPath(). That is, length indicates the number of bytes from reserved_for_future_use to the end of the PlayList. After length, 16 bits of reserved_for_future_use are prepared.
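The value tables of Figs. 13 and 14 can be expressed directly as lookup tables. The decoder below is a hypothetical illustration; the value-to-meaning mappings follow the text.

```python
# Value tables taken from the descriptions of Figs. 13 and 14.
PL_TYPE = {
    0b00: "2D playback",
    0b01: "B-D1 playback (3D)",
    0b10: "B-D2 playback (3D)",
}
VIEW_TYPE = {
    0: "Base view video is L view",
    1: "Base view video is R view",
}

def describe_playlist(pl_type_bits, view_type_bit):
    """Map the 2-bit 3D_PL_type and 1-bit view_type to their meanings."""
    return PL_TYPE[pl_type_bits], VIEW_TYPE[view_type_bit]

def has_3d_playlist_extension(pl_type_bits):
    # Per the text, 3DPlayList information is registered in ExtensionData()
    # when 3D_PL_type is 01 or 10.
    return pl_type_bits in (0b01, 0b10)
```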

SubPath_type is an 8-bit field indicating the application type of the Sub Path. SubPath_type is used, for example, to indicate whether the Sub Path is of a type such as audio, bitmap subtitles, or text subtitles.

After SubPath_type, 15 bits of reserved_for_future_use are prepared.

is_repeat_SubPath is a 1-bit field specifying the playback method of the Sub Path; it indicates whether playback of the Sub Path is repeated during the playback section of the Main Path, or the Sub Path is played only once. It is used, for example, when the playback timing of the Clip referenced by the Main Path differs from that of the Clip referenced by the Sub Path (for instance, when the Main Path is used as the path of a slide show of still images and the Sub Path is used as the path of audio serving as BGM (background music)).

After is_repeat_SubPath, 8 bits of reserved_for_future_use are prepared.

number_of_SubPlayItems is an 8-bit field indicating the number of SubPlayItems (the number of entries) in one Sub Path. For example, number_of_SubPlayItems of the SubPlayItem of SubPath_id=0 in Fig. 9 is 1, and number_of_SubPlayItems of the SubPlayItems of SubPath_id=1 is 2. In the subsequent for statement, SubPlayItem() is referenced as many times as the number of SubPlayItems.

Fig. 17 is a diagram showing the syntax of SubPlayItem(i) of Fig. 16.

length is a 16-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of SubPlayItem().

SubPlayItem(i) in Fig. 17 is described separately for the case where the SubPlayItem references one Clip and the case where it references a plurality of Clips.

The case where the SubPlayItem references one Clip is described first.
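One way a player might act on the is_repeat_SubPath flag just described is to tile the SubPlayItem interval across the Main Path playback interval when the flag is set (the slide-show-with-BGM case mentioned above), and play it once otherwise. The sketch below is purely illustrative; a real player schedules against PTS values rather than abstract durations.

```python
def expand_subpath(main_duration, sub_duration, repeat):
    """Return the (start, end) intervals at which the Sub Path is played."""
    if not repeat:
        # Played once, clipped to the Main Path interval.
        return [(0, min(sub_duration, main_duration))]
    spans, t = [], 0
    while t < main_duration:
        end = min(t + sub_duration, main_duration)  # last repeat may be cut short
        spans.append((t, end))
        t = end
    return spans
```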

Clip_Information_file_name[0] indicates the Clip to be referenced.

Clip_codec_identifier[0] indicates the codec system of the Clip. reserved_for_future_use is included after Clip_codec_identifier[0].

is_multi_Clip_entries is a flag indicating the presence or absence of multi-Clip registration. When the is_multi_Clip_entries flag is set, the syntax for the case where the SubPlayItem references a plurality of Clips is referenced.

ref_to_STC_id[0] is information about STC (System Time Clock) discontinuity points (discontinuity points of the system time base).

SubPlayItem_IN_time indicates the start position of the playback section of the Sub Path, and SubPlayItem_OUT_time indicates the end position.

sync_PlayItem_id and sync_start_PTS_of_PlayItem indicate the time on the time axis of the Main Path at which the Sub Path starts playback.

SubPlayItem_IN_time, SubPlayItem_OUT_time, sync_PlayItem_id, and sync_start_PTS_of_PlayItem are shared among the Clips referenced by the SubPlayItem.

Next, the case of "if (is_multi_Clip_entries == 1b)", in which the SubPlayItem references a plurality of Clips, is described.

num_of_Clip_entries indicates the number of Clips to be referenced. The number of Clip_Information_file_name[SubClip_entry_id] entries specifies the number of Clips other than Clip_Information_file_name[0].

Clip_codec_identifier[SubClip_entry_id] indicates the codec system of the Clip.

ref_to_STC_id[SubClip_entry_id] is information about STC discontinuity points (discontinuity points of the system time base). reserved_for_future_use is included after ref_to_STC_id[SubClip_entry_id].

Fig. 18 is a diagram showing the syntax of PlayItem() of Fig. 15.

length is a 16-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of PlayItem().

Clip_Information_file_name[0] indicates the file name of the Clip Information file of the Clip referenced by the PlayItem. Note that the file name of the m2ts file containing the Clip and the file name of the corresponding Clip Information file contain the same five-digit number.
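The convention just noted, that a Clip's AV stream file and its Clip Information file share the same five-digit number, can be checked mechanically. A small illustrative checker:

```python
import re

def matching_clip_pair(m2ts_name, clpi_name):
    """True when both names use the five-digit convention with the same number."""
    a = re.fullmatch(r"(\d{5})\.m2ts", m2ts_name)
    b = re.fullmatch(r"(\d{5})\.clpi", clpi_name)
    return bool(a and b) and a.group(1) == b.group(1)
```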

Clip_codec_identifier[0] indicates the codec system of the Clip. reserved_for_future_use is included after Clip_codec_identifier[0], followed by is_multi_angle and connection_condition.

ref_to_STC_id[0] is information about STC discontinuity points (discontinuity points of the system time base).

IN_time indicates the start position of the playback section of the PlayItem, and OUT_time indicates the end position.

After OUT_time, UO_mask_table(), PlayItem_random_access_mode, and still_mode are included.

STN_table() contains information on the AV streams referenced by the target PlayItem. When there is a Sub Path to be played in association with the target PlayItem, it also contains information on the AV streams referenced by the SubPlayItems constituting that Sub Path.

Fig. 19 is a diagram showing the syntax of STN_table() of Fig. 18. STN_table() is set as an attribute of the PlayItem.

length is a 16-bit unsigned integer indicating the number of bytes from immediately after this length field to the end of STN_table(). After length, 16 bits of reserved_for_future_use are prepared.

number_of_video_stream_entries indicates the number of streams that are entered (registered) in STN_table() and given video_stream_id.

video_stream_id is information for identifying a video stream. For example, the Base view video stream is identified by this video_stream_id. The ID of the Dependent view video stream may be defined within STN_table(), or may be obtained by calculation, for example by adding a specific value to the ID of the Base view video stream.

video_stream_number is the video stream number, visible to the user, that is used for video switching.

number_of_audio_stream_entries indicates the number of streams of the first audio streams that are registered in STN_table() and given audio_stream_id. audio_stream_id is information for identifying an audio stream, and audio_stream_number is the audio stream number, visible to the user, that is used for audio switching.

number_of_audio_stream2_entries indicates the number of streams of the second audio streams that are registered in STN_table() and given audio_stream_id2. audio_stream_id2 is information for identifying an audio stream, and audio_stream_number is the audio stream number, visible to the user, that is used for audio switching. In this example, the audio to be played can be switched.

number_of_PG_txtST_stream_entries indicates the number of streams that are registered in STN_table() and given PG_txtST_stream_id. Registered here are PG streams, in which bitmap subtitles are run-length encoded, and text subtitle files (txtST). PG_txtST_stream_id is information for identifying a subtitle stream, and PG_txtST_stream_number is the subtitle stream number, visible to the user, that is used for subtitle switching.

number_of_IG_stream_entries indicates the number of streams that are registered in STN_table() and given IG_stream_id. Registered here are IG streams. IG_stream_id is information for identifying an IG stream, and IG_stream_number is the graphics stream number, visible to the user, that is used for graphics switching.
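The registration scheme of STN_table() described above pairs each entered stream with an id and implies a user-visible, per-kind stream number used for switching. The toy registry below illustrates that pairing; the class and the 1-based numbering are assumptions of this sketch, not the recorded syntax.

```python
# Toy STN_table-like registry: streams are entered per kind; the user-visible
# stream number is the 1-based position of the entry, while the id identifies
# the stream itself.
class StreamTable:
    def __init__(self):
        self.entries = {}  # kind -> list of stream ids, in entry order

    def register(self, kind, stream_id):
        self.entries.setdefault(kind, []).append(stream_id)
        return len(self.entries[kind])  # user-visible number, 1-based

    def id_for(self, kind, number):
        """Resolve a user-visible stream number back to the stream id."""
        return self.entries[kind][number - 1]
```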

Main TS、Sub TS 之 ID 亦登錄於 STN」able()中。stream_ attribute()中描述著上述ID並非基本串流而是TS之ID之内 容。 [播放裝置1之構成例] 圖20係表示播放裝置1之構成例之方塊圖。 控制器5 1係執行預先準備之控制程式,控制播放裝置1 之整體動作。 例如,控制器5 1對磁碟驅動器52進行控制,以讀出3D播 放用之PlayList檔案。又,控制器5 1係基於STN_table中所 登錄之ID,將Main TS與SubTS讀出並供給至解碼部56。 磁碟驅動器52係按照利用控制器5 1之控制,自光碟2中 讀出資料,並將所讀出之資料輸出至控制器5 1、記憶體 53、或解碼部56中。 145441.doc -28 - 201105108 記憶體5 3係適當記憶控制 等。 益5 1執行各種處理所需之資料 局部儲存器54係包含例如HDD(Hard以吐,硬碟 驅動器)。㈣部儲存器54中,記錄有由舰㈣下载:The ID of Main TS and Sub TS is also registered in STN "able(). The stream_ attribute() describes that the above ID is not the basic stream but the content of the ID of the TS. [Configuration Example of Playback Device 1] FIG. 20 is a block diagram showing a configuration example of the playback device 1. The controller 51 executes a control program prepared in advance to control the overall operation of the playback device 1. For example, the controller 51 controls the disk drive 52 to read the PlayList file for 3D playback. Further, the controller 51 reads and supplies the Main TS and the SubTS to the decoding unit 56 based on the ID registered in the STN_table. The disk drive 52 reads data from the optical disk 2 in accordance with the control of the controller 51, and outputs the read data to the controller 51, the memory 53, or the decoding unit 56. 145441.doc -28 - 201105108 Memory 5 3 is appropriate memory control and so on.益5 1 Information required to perform various processes The local storage 54 contains, for example, HDD (Hard to spit, hard disk drive). (4) In the storage unit 54, the record is downloaded by the ship (4):

The data recorded in the local storage 54 includes a Dependent view video stream and the like. Streams recorded in the local storage 54 are also supplied to the decoding unit 56 as appropriate.

The internet interface 55 communicates with the server 72 via the network 71 under the control of the controller 51 and supplies the data downloaded from the server 72 to the local storage 54.

Data for updating the data recorded on the optical disc 2 is downloaded from the server 72. By using a downloaded Dependent view video stream together with a Base view video stream recorded on the optical disc 2, 3D playback of content different from the content of the optical disc 2 can be realized. When a Dependent view video stream is downloaded, the contents of the PlayList are also updated as appropriate.

The decoding unit 56 decodes the streams supplied from the disk drive 52 or the local storage 54 and outputs the resulting video signal to the display device 3. The audio signal is also output to the display device 3 via a predetermined path.

The operation input unit 57 includes input devices such as buttons, keys, a touch panel, a jog dial, and a mouse, as well as a receiving unit for signals such as infrared rays transmitted from a predetermined remote controller. The operation input unit 57 detects operations by the user and supplies signals representing the contents of the detected operations to the controller 51.

Fig. 21 is a diagram showing a configuration example of the decoding unit 56.

Fig. 21 shows the configuration that processes the video signal. In the decoding unit 56, decoding of the audio signal is also performed; the result of the decoding performed on the audio signal is output to the display device 3 via a path not shown.

The PID filter 101 identifies whether a TS supplied from the disk drive 52 or the local storage 54 is a Main TS or a Sub TS, based on the PIDs of the packets constituting the TS, the IDs of the streams, and the like. The PID filter 101 outputs the Main TS to the buffer 102 and the Sub TS to the buffer 103.

The PID filter 104 sequentially reads out the packets of the Main TS stored in the buffer 102 and distributes them based on the PID.

For example, the PID filter 104 outputs the packets constituting the Base view video stream included in the Main TS to the B video buffer 106, and outputs the packets constituting the Dependent view video stream to the switch 107.

The PID filter 104 also outputs the packets constituting the Base IG stream included in the Main TS to the switch 114, and outputs the packets constituting the Dependent IG stream to the switch 118.

The PID filter 104 outputs the packets constituting the Base PG stream included in the Main TS to the switch 122, and outputs the packets constituting the Dependent PG stream to the switch 126.

As described with reference to Fig. 5 and elsewhere, there are cases where the Base view video, Dependent view video, Base PG, Dependent PG, Base IG, and Dependent IG streams are each multiplexed into the Main TS.

The PID filter 105 sequentially reads out the packets of the Sub TS stored in the buffer 103 and distributes them based on the PID.

For example, the PID filter 105 outputs the packets constituting the Dependent view video stream included in the Sub TS to the switch 107.

The PID filter 105 also outputs the packets constituting the Base IG stream included in the Sub TS to the switch 114, and outputs the packets constituting the Dependent IG stream to the switch 118.

The PID filter 105 outputs the packets constituting the Base PG stream included in the Sub TS to the switch 122, and outputs the packets constituting the Dependent PG stream to the switch 126.

As described with reference to Fig. 7, there are cases where the Dependent view video stream is included in the Sub TS. As described with reference to Fig. 6, there are also cases where the Base PG, Dependent PG, Base IG, and Dependent IG streams are each multiplexed into the Sub TS.
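The distribution performed by the PID filters can be sketched as a PID-to-destination routing table. This is a hypothetical illustration: the text fixes only PID=0 for the Base view video; the other PID values and the destination labels are invented for the example.

```python
# Hypothetical sketch of the PID-based routing done by a PID filter for the
# Main TS. PID 0 (Base view video) follows the text; every other PID value
# and the destination names are invented for illustration.

BASE_VIEW_VIDEO_PID = 0x0  # fixed PID of Base view video packets

ROUTES = {
    BASE_VIEW_VIDEO_PID: "B_video_buffer_106",
    0x12: "switch_107",   # Dependent view video (a PID other than 0)
    0x13: "switch_114",   # Base IG
    0x14: "switch_118",   # Dependent IG
    0x15: "switch_122",   # Base PG
    0x16: "switch_126",   # Dependent PG
}

def route(packet_pid):
    """Return the destination for a TS packet, decided only by its PID."""
    return ROUTES.get(packet_pid, "discard")

def demux(packets):
    """Distribute a sequence of (pid, payload) packets to their destinations."""
    out = {}
    for pid, payload in packets:
        out.setdefault(route(pid), []).append(payload)
    return out
```

Because the decision uses nothing but the PID in each packet header, the same routing works whether the packets arrive from the Main TS or the Sub TS.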

The switch 107 outputs the packets constituting the Dependent view video stream, supplied from the PID filter 104 or the PID filter 105, to the D video buffer 108.

The switch 109 sequentially reads out the Base view video packets stored in the B video buffer 106 and the Dependent view video packets stored in the D video buffer 108, in accordance with time information that specifies the decoding timing. For example, the same time information is set in a packet that stores data of a given picture of the Base view video and in the packet that stores data of the corresponding picture of the Dependent view video.

The switch 109 outputs the packets read out from the B video buffer 106 or the D video buffer 108 to the video decoder 110.

The video decoder 110 decodes the packets supplied from the switch 109 and outputs the Base view video or Dependent view video data obtained by the decoding to the switch 111.

The switch 111 outputs the data obtained by decoding Base view video packets to the B video plane generating unit 112, and outputs the data obtained by decoding Dependent view video packets to the D video plane generating unit 113.

The B video plane generating unit 112 generates a Base view video plane based on the data supplied from the switch 111 and outputs it to the synthesizing unit 130.

The D video plane generating unit 113 generates a Dependent view video plane based on the data supplied from the switch 111 and outputs it to the synthesizing unit 130.

The switch 114 outputs the packets constituting the Base IG stream, supplied from the PID filter 104 or the PID filter 105, to the B IG buffer 115.

The B IG decoder 116 decodes the packets constituting the Base IG stream stored in the B IG buffer 115 and outputs the decoded data to the B IG plane generating unit 117.

The B IG plane generating unit 117 generates a Base IG plane based on the data supplied from the B IG decoder 116 and outputs it to the synthesizing unit 130.

The switch 118 outputs the packets constituting the Dependent IG stream, supplied from the PID filter 104 or the PID filter 105, to the D IG buffer 119.

The D IG decoder 120 decodes the packets constituting the Dependent IG stream stored in the D IG buffer 119 and outputs the decoded data to the D IG plane generating unit 121.

The D IG plane generating unit 121 generates a Dependent IG plane based on the data supplied from the D IG decoder 120 and outputs it to the synthesizing unit 130.

The switch 122 outputs the packets constituting the Base PG stream, supplied from the PID filter 104 or the PID filter 105, to the B PG buffer 123.

The B PG decoder 124 decodes the packets constituting the Base PG stream stored in the B PG buffer 123 and outputs the decoded data to the B PG plane generating unit 125.

The B PG plane generating unit 125 generates a Base PG plane based on the data supplied from the B PG decoder 124 and outputs it to the synthesizing unit 130.

The switch 126 outputs the packets constituting the Dependent PG stream, supplied from the PID filter 104 or the PID filter 105, to the D PG buffer 127.

The D PG decoder 128 decodes the packets constituting the Dependent PG stream stored in the D PG buffer 127 and outputs the decoded data to the D PG plane generating unit 129.

The D PG plane generating unit 129 generates a Dependent PG plane based on the data supplied from the D PG decoder 128 and outputs it to the synthesizing unit 130.

The synthesizing unit 130 superimposes, in a predetermined order, the Base view video plane supplied from the B video plane generating unit 112, the Base IG plane supplied from the B IG plane generating unit 117, and the Base PG plane supplied from the B PG plane generating unit 125, and combines them to generate a Base view plane.
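The superimposition performed by the synthesizing unit 130 amounts to stacking one view's video, PG, and IG planes in a fixed back-to-front order. A minimal sketch follows; the paint order (video at the back, then PG, then IG in front) and the None-as-transparent convention are assumptions made for illustration, not taken from the specification.

```python
# Hypothetical sketch of combining one view's planes in a fixed order.
# Planes are flattened to pixel lists; None marks a transparent pixel.

def compose(video_plane, pg_plane, ig_plane):
    """Overlay planes back to front: video, then PG, then IG."""
    result = list(video_plane)  # the video plane is the backmost layer
    for overlay in (pg_plane, ig_plane):
        for i, pixel in enumerate(overlay):
            if pixel is not None:  # an opaque pixel covers what is below it
                result[i] = pixel
    return result

# A 4-pixel "Base view" example: a subtitle (PG) covers pixel 1 and a
# button graphic (IG) covers pixel 3.
base_view_plane = compose(
    video_plane=["v0", "v1", "v2", "v3"],
    pg_plane=[None, "pg", None, None],
    ig_plane=[None, None, None, "ig"],
)
```

Running the same composition over the Dependent view planes yields the second plane of the stereo pair.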

The synthesizing unit 130 likewise superimposes, in a predetermined order, the Dependent view video plane supplied from the D video plane generating unit 113, the Dependent IG plane supplied from the D IG plane generating unit 121, and the Dependent PG plane supplied from the D PG plane generating unit 129, and combines them to generate a Dependent view plane.

The synthesizing unit 130 outputs the data of the Base view plane and the Dependent view plane. The video data output from the synthesizing unit 130 is output to the display device 3, and 3D display is performed by alternately displaying the Base view plane and the Dependent view plane.

[First Example of the T-STD (Transport stream System Target Decoder)]

Here, the decoder and its surrounding configuration within the configuration shown in Fig. 21 will be described.

Fig. 22 is a diagram showing a configuration that processes the video streams.

In Fig. 22, the same components as those shown in Fig. 21 are denoted by the same reference numerals. Fig. 22 shows the PID filter 104, the B video buffer 106, the switch 107, the D video buffer 108, the switch 109, the video decoder 110, and a DPB (Decoded Picture Buffer) 151. Although not shown in Fig. 21, the DPB 151, which stores decoded picture data, is provided after the video decoder 110.

The PID filter 104 outputs the packets constituting the Base view video stream included in the Main TS to the B video buffer 106, and outputs the packets constituting the Dependent view video stream to the switch 107.

For example, PID=0 is assigned as a fixed PID value to the packets constituting the Base view video stream, and a fixed value other than 0 is assigned as the PID to the packets constituting the Dependent view video stream.

The PID filter 104 outputs the packets whose headers describe PID=0 to the B video buffer 106, and outputs the packets whose headers describe a PID other than 0 to the switch 107.

The packets output to the B video buffer 106 are stored in VSB1 via TB1 (Transport Buffer) and MB1 (Multiplexing Buffer). The data of the elementary stream of the Base view video is stored in VSB1.

The switch 107 is supplied not only with the packets output from the PID filter 104 but also with the packets of the Dependent view video stream extracted from the Sub TS by the PID filter 105 of Fig. 21.

When packets constituting the Dependent view video stream are supplied from the PID filter 104, the switch 107 outputs them to the D video buffer 108. Likewise, when packets constituting the Dependent view video stream are supplied from the PID filter 105, the switch 107 outputs them to the D video buffer 108.

The packets output to the D video buffer 108 are stored in VSB2 via TB2 and MB2. The data of the elementary stream of the Dependent view video is stored in VSB2.

The switch 109 sequentially reads out the Base view video packets stored in VSB1 of the B video buffer 106 and the Dependent view video packets stored in VSB2 of the D video buffer 108, and outputs them to the video decoder 110.

For example, the switch 109 outputs a Base view video packet of a given time and, immediately after it, the Dependent view video packet of the same time, so that a Base view video packet and the Dependent view video packet of the same time are output to the video decoder 110 in succession.

In a packet that stores data of a given picture of the Base view video and in the packet that stores data of the corresponding picture of the Dependent view video, the same time information, synchronized with the PCR (Program Clock Reference), is set at the time the packets are encoded. Even when the Base view video stream and the Dependent view video stream are included in different TSs, the same time information is set in the packets that store the data of corresponding pictures.

The time information consists of a DTS (Decoding Time Stamp) and a PTS (Presentation Time Stamp), and is set in each PES (Packetized Elementary Stream) packet.

That is, when the pictures of each stream are arranged in encoding/decoding order, a Base view video picture and a Dependent view video picture located at the same time correspond to each other. The same DTS is set in the PES packet storing the data of a given Base view video picture and in the PES packet storing the data of the Dependent view video picture that corresponds to it in decoding order.

Likewise, when the pictures of each stream are arranged in display order, a Base view video picture and a Dependent view video picture located at the same time correspond to each other. The same PTS is set in the PES packet storing the data of a given Base view video picture and in the PES packet storing the data of the Dependent view video picture that corresponds to it in display order.

As will be described later, when the GOP (Group of Pictures) structure of the Base view video stream and the GOP structure of the Dependent view video stream are the same, pictures that correspond in decoding order also correspond in display order.

When packets are transferred serially, the DTS1 of a packet read out from VSB1 of the B video buffer 106 at a given timing and the DTS2 of the packet read out from VSB2 of the D video buffer 108 at the immediately following timing represent the same time, as shown in Fig. 22.

The switch 109 outputs the Base view video packets read out from VSB1 of the B video buffer 106, or the Dependent view video packets read out from VSB2 of the D video buffer 108, to the video decoder 110.

The video decoder 110 sequentially decodes the packets supplied from the switch 109 and stores the picture data of the Base view video or the Dependent view video obtained by the decoding in the DPB 151.

The decoded picture data stored in the DPB 151 is read out by the switch 111 at predetermined timings. The decoded picture data stored in the DPB 151 is also used by the video decoder 110 for the prediction of other pictures.

When data is transferred serially, the PTS of the Base view video picture data output at a given timing and the PTS of the Dependent view video picture data output at the immediately following timing represent the same time.
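The alternating read-out performed by the switch 109 can be sketched as pairing the two buffers by DTS and emitting each Base view packet immediately followed by the Dependent view packet that carries the same DTS. This is a hypothetical sketch; packets are simplified to (dts, data) tuples and the buffers are assumed to already be in decoding order.

```python
# Hypothetical sketch of the switch 109: for each decoding time, output the
# Base view video packet and then the Dependent view video packet with the
# same DTS, so the decoder receives the pair in succession.

def interleave_by_dts(b_buffer, d_buffer):
    """Yield packets decoder-ready: base first, then dependent, per DTS."""
    d_by_dts = {dts: data for dts, data in d_buffer}
    out = []
    for dts, base_data in b_buffer:              # buffers are in decoding order
        out.append((dts, "base", base_data))
        out.append((dts, "dep", d_by_dts[dts]))  # same DTS -> same time
    return out

feed = interleave_by_dts(
    b_buffer=[(100, "B0"), (200, "B1")],
    d_buffer=[(100, "D0"), (200, "D1")],
)
```

Because the pairing keys on the DTS alone, it works identically whether the two streams arrived in one TS or in two separate TSs.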

As described with reference to Fig. 5 and elsewhere, there are cases where the Base view video stream and the Dependent view video stream are multiplexed into a single TS, and there are also cases where, as described with reference to Fig. 7, they are included in different TSs.

Because the decoder model of Fig. 22 is implemented, the playback device 1 can handle both the case where the Base view video stream and the Dependent view video stream are multiplexed into a single TS and the case where they are included in different TSs.

If, for example, only the supply of a single TS were assumed, as shown in Fig. 23, the case where the Base view video stream and the Dependent view video stream are included in different TSs could not be handled.

Furthermore, with the decoder model of Fig. 22, since corresponding packets have the same DTS, packets can be supplied to the video decoder 110 at the correct timing even when the Base view video stream and the Dependent view video stream are included in different TSs.

A decoder for the Base view video and a decoder for the Dependent view video may also be provided in parallel. In that case, packets of the same time are supplied to the Base view video decoder and the Dependent view video decoder at the same timing.

[Second Example]

Fig. 24 is a diagram showing another configuration for processing the video streams.

In addition to the configuration of Fig. 22, Fig. 24 shows the switch 111, an L video plane generating unit 161, and an R video plane generating unit 162. The PID filter 105 is also shown in front of the switch 107. Duplicate descriptions are omitted as appropriate.

The L video plane generating unit 161 generates the L view video plane and is provided in place of the B video plane generating unit 112 of Fig. 21.

The R video plane generating unit 162 generates the R view video plane and is provided in place of the D video plane generating unit 113 of Fig. 21.

In this example, the switch 111 must identify and output the L view video data and the R view video data.

That is, the switch 111 must identify whether the data obtained by decoding Base view video packets is the L view or the R view video data, and likewise whether the data obtained by decoding Dependent view video packets is the L view or the R view video data.

The view_type described with reference to Figs. 12 and 14 is used to identify the L view and the R view.

For example, the controller 51 outputs the view_type described in the PlayList file to the switch 111.

When the value of view_type is 0, the switch 111 outputs, from among the data stored in the DPB 151, the data obtained by decoding the Base view video packets identified by PID=0 to the L video plane generating unit 161. As described above, a view_type value of 0 indicates that the Base view video stream is the L view stream.

In this case, the switch 111 outputs the data obtained by decoding the Dependent view video packets identified by a PID other than 0 to the R video plane generating unit 162.

On the other hand, when the value of view_type is 1, the switch 111 outputs, from among the data stored in the DPB 151, the data obtained by decoding the Base view video packets identified by PID=0 to the R video plane generating unit 162. A view_type value of 1 indicates that the Base view video stream is the R view stream.

In this case, the switch 111 outputs the data obtained by decoding the Dependent view video packets identified by a PID other than 0 to the L video plane generating unit 161.

The L video plane generating unit 161 generates the L view video plane based on the data supplied from the switch 111 and outputs it to the synthesizing unit 130.

The R video plane generating unit 162 generates the R view video plane based on the data supplied from the switch 111 and outputs it to the synthesizing unit 130.

In the elementary streams of the Base view video and the Dependent view video encoded with the H.264 AVC/MVC profile standard, there is no information (field) indicating whether a stream is the L view or the R view.

Therefore, by setting view_type in the PlayList file in advance, the recording device can let the playback device 1 identify which of the Base view video stream and the Dependent view video stream is the L view stream and which is the R view stream.

The playback device 1 identifies which of the L view and the R view each of the Base view video stream and the Dependent view video stream is, and switches the output destination according to the identification result.

When L view and R view planes are also prepared for IG and PG, the L view and the R view of the video streams can be distinguished, so the playback device 1 can easily combine L view planes with each other and R view planes with each other.

As described above, when a video signal is output via an HDMI cable, the L view signal and the R view signal are required to be distinguished before being output; the playback device 1 can meet this requirement.

The identification of the data obtained by decoding Base view video packets and the data obtained by decoding Dependent view video packets, stored in the DPB 151, may also be performed based on view_id instead of the PID.

When encoding is performed with the H.264 AVC/MVC profile standard, a view_id is set in the Access Units constituting the stream that results from the encoding. The view_id makes it possible to identify to which view component each Access Unit belongs.

Fig. 25 is a diagram showing an example of Access Units.

Access Unit #1 in Fig. 25 is a unit containing data of the Base view video. Dependent Unit #2 is a unit containing data of the Dependent view video. An Access Unit (a Dependent Unit in the case of the Dependent view) is a unit that collects, for example, the data of one picture so that the data can be accessed in units of pictures.

By encoding with the H.264 AVC/MVC profile standard, the data of each picture of the Base view video and the Dependent view video is stored in such units. When encoding is performed with the H.264 AVC/MVC profile standard, an MVC header is added to each view component, as shown inside Dependent Unit #2. The MVC header includes the view_id.

In the case of the example of Fig. 25, for Dependent Unit #2, the view component stored in that unit can be identified from the view_id as being the Dependent view video.

On the other hand, as shown in Fig. 25, no MVC header is added to the view component stored in Access Unit #1, namely the Base view video.

As described above, the Base view video stream is also used as data for 2D playback. Therefore, to ensure compatibility with 2D playback, no MVC header is added to the Base view video at the time of encoding, or a temporarily added MVC header is removed. The encoding by the recording device will be described later.

In the playback device 1, a view component to which no MVC header is added is defined (recognized) as having a view_id of 0, and that view component is identified as the Base view video. In the Dependent view video, a value other than 0 is set as the view_id at the time of encoding.

The playback device 1 can thereby identify the Base view video based on the view_id recognized as 0, and can identify the Dependent view video based on the actually set view_id other than 0.

In the switch 111 of Fig. 24, the identification of the data obtained by decoding Base view video packets and the data obtained by decoding Dependent view video packets may also be performed based on this view_id.

[Third Example]

Fig. 26 is a diagram showing yet another configuration for processing the video streams.
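The identification rule described above — a view component without an MVC header is treated as view_id 0 and recognized as Base view video, while Dependent view video carries a nonzero view_id in its MVC header — can be sketched as follows. This is a hypothetical illustration; real Access Units are sequences of NAL units, not dictionaries.

```python
# Hypothetical sketch of identifying view components by view_id.
# A unit with no MVC header is treated as view_id 0 (Base view video);
# Dependent view video units carry an MVC header with a nonzero view_id.

def effective_view_id(unit):
    """Return the view_id, defining it as 0 when no MVC header is attached."""
    header = unit.get("mvc_header")
    return header["view_id"] if header is not None else 0

def identify(unit):
    """Classify a view component by its effective view_id."""
    if effective_view_id(unit) == 0:
        return "Base view video"
    return "Dependent view video"

access_unit_1 = {"data": "picture"}                           # no MVC header
dependent_unit_2 = {"mvc_header": {"view_id": 1}, "data": "picture"}
```

Treating the missing header as view_id 0 is what preserves compatibility: a 2D decoder sees plain AVC Access Units, while the 3D playback device still gets an unambiguous label for every unit.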
In the example of Fig. 26, a B video plane generating unit 112 is provided in place of the L video plane generating unit 161 of Fig. 24, and a D video plane generating unit 113 in place of the R video plane generating unit 162. A switch 171 is provided after the B video plane generating unit 112 and the D video plane generating unit 113. In the configuration shown in Fig. 26 as well, the output destination of the data is switched based on the view_type. The switch 111 outputs, of the data stored in the DPB 151, the data obtained by decoding the Base view video packets to the B video plane generating unit 112. The switch 111 also outputs the data obtained by decoding the Dependent view video packets to the D video plane generating unit 113. The data obtained by decoding the Base view video packets and the data obtained by decoding the Dependent view video packets are identified based on the PID or the view_id, as described above. The B video plane generating unit 112 generates a Base view video plane based on the data supplied from the switch 111 and outputs it. The D video plane generating unit 113 generates a Dependent view video plane based on the data supplied from the switch 111 and outputs it. The controller 51 supplies the view_type described in the PlayList file to the switch 171. When the value of the view_type is 0, the switch 171 outputs the Base view video plane supplied from the B video plane generating unit 112 to the synthesizing unit 130 as the L view video plane. A view_type value of 0 indicates that the Base view video stream is the L view stream.
In this case, the switch 171 also outputs the Dependent view video plane supplied from the D video plane generating unit 113 to the synthesizing unit 130 as the R view video plane. On the other hand, when the value of the view_type is 1, the switch 171 outputs the Dependent view video plane supplied from the D video plane generating unit 113 to the synthesizing unit 130 as the L view video plane. A view_type value of 1 indicates that the Base view video stream is the R view stream. In this case, the switch 171 outputs the Base view video plane supplied from the B video plane generating unit 112 to the synthesizing unit 130 as the R view video plane. With the configuration of Fig. 26 as well, the playback device 1 can recognize the L view and the R view and switch the output destinations according to the recognition result. [First example of the plane synthesis model] Fig. 27 shows the configuration of the synthesizing unit 130 and its preceding stage in the configuration shown in Fig. 21. In Fig. 27, the same components as those shown in Fig. 21 are denoted by the same reference numerals. Packets constituting the IG stream included in the Main TS or the Sub TS are input to the switch 181. The packets constituting the IG stream input to the switch 181 include packets of the Base view and of the Dependent view. Packets constituting the PG stream included in the Main TS or the Sub TS are input to the switch 182. The packets constituting the PG stream input to the switch 182 likewise include packets of the Base view and of the Dependent view. As described with reference to Fig. 5 and elsewhere, Base view and Dependent view streams for 3D display are also prepared for IG and PG.
Since the Base view IG is synthesized with the Base view video for display and the Dependent view IG is synthesized with the Dependent view video for display, the user can view buttons and icons in 3D, as well as the video.
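The two identification rules described above (the view_type in the PlayList file selects L view or R view, and the presence or absence of an MVC header with a view_id separates Base view from Dependent view) can be sketched as follows. The dict-based unit structure and the function names are illustrative assumptions, not part of the specification.

```python
# Sketch of the stream-identification rules described above
# (illustrative assumption: units and headers are modeled as plain dicts).

def identify_base_or_dependent(unit):
    """A view component without an MVC header is treated as view_id 0,
    i.e. Base view video; Dependent view video carries a nonzero view_id."""
    header = unit.get("mvc_header")
    view_id = 0 if header is None else header["view_id"]
    return "base" if view_id == 0 else "dependent"

def identify_l_or_r(view, view_type):
    """view_type 0 in the PlayList file means the Base view video stream
    is the L view stream; view_type 1 means it is the R view stream."""
    if view_type == 0:
        return "L" if view == "base" else "R"
    return "R" if view == "base" else "L"

access_unit = {"data": b"picture"}  # Base view: no MVC header attached
dependent_unit = {"data": b"picture", "mvc_header": {"view_id": 1}}

print(identify_base_or_dependent(access_unit))       # base
view = identify_base_or_dependent(dependent_unit)
print(view, identify_l_or_r(view, view_type=0))      # dependent R
```

With view_type 1 the same Dependent view component would instead be routed to the L view side, which is the switching behavior the playback device 1 performs.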

In addition, since the Base view PG is synthesized with the Base view video for display and the Dependent view PG is synthesized with the Dependent view video for display, the user can view subtitle text and the like in 3D, as well as the video. The switch 181 outputs the packets constituting the Base IG stream to the B IG decoder 116 and the packets constituting the Dependent IG stream to the D IG decoder 120. The switch 181 has the functions of the switch 114 and the switch 118 of Fig. 21. In Fig. 27, the buffers are omitted from the illustration. The B IG decoder 116 decodes the packets constituting the Base IG stream supplied from the switch 181 and outputs the decoded data to the B IG plane generating unit 117. The B IG plane generating unit 117 generates a Base IG plane based on the data supplied from the B IG decoder 116 and outputs it to the synthesizing unit 130. The D IG decoder 120 decodes the packets constituting the Dependent IG stream supplied from the switch 181 and outputs the decoded data to the D IG plane generating unit 121. The Base IG stream and the Dependent IG stream may also be decoded by a single decoder. The D IG plane generating unit 121 generates a Dependent IG plane based on the data supplied from the D IG decoder 120 and outputs it to the synthesizing unit 130. The switch 182 outputs the packets constituting the Base PG stream to the B PG decoder 124 and the packets constituting the Dependent PG stream to the D PG decoder 128.
The switch 182 has the functions of the switch 122 and the switch 126 of Fig. 21. The B PG decoder 124 decodes the packets constituting the Base PG stream supplied from the switch 182 and outputs the decoded data to the B PG plane generating unit 125. The B PG plane generating unit 125 generates a Base PG plane based on the data supplied from the B PG decoder 124 and outputs it to the synthesizing unit 130. The D PG decoder 128 decodes the packets constituting the Dependent PG stream supplied from the switch 182 and outputs the decoded data to the D PG plane generating unit 129. The Base PG stream and the Dependent PG stream may also be decoded by a single decoder. The D PG plane generating unit 129 generates a Dependent PG plane based on the data supplied from the D PG decoder 128 and outputs it to the synthesizing unit 130. The video decoder 110 sequentially decodes the packets supplied from the switch 109 (Fig. 22 and elsewhere) and outputs the decoded Base view video data or Dependent view video data to the switch 111. The switch 111 outputs the data obtained by decoding the Base view video packets to the B video plane generating unit 112, and the data obtained by decoding the Dependent view video packets to the D video plane generating unit 113. The B video plane generating unit 112 generates a Base view video plane based on the data supplied from the switch 111 and outputs it. The D video plane generating unit 113 generates a Dependent view video plane based on the data supplied from the switch 111 and outputs it. The synthesizing unit 130 includes the adding units 191 to 194 and the switch 195.
The adding unit 191 synthesizes the Dependent PG plane supplied from the D PG plane generating unit 129 by superimposing it on the Dependent view video plane supplied from the D video plane generating unit 113, and outputs the result to the adding unit 193. CLUT (Color Look Up Table) processing, a conversion of color information, is applied to the Dependent PG plane supplied from the D PG plane generating unit 129 to the adding unit 191. The adding unit 192 synthesizes the Base PG plane supplied from the B PG plane generating unit 125 by superimposing it on the Base view video plane supplied from the B video plane generating unit 112, and outputs the result to the adding unit 194. Conversion of color information or correction using an offset value is applied to the Base PG plane supplied from the B PG plane generating unit 125 to the adding unit 192. The adding unit 193 synthesizes the Dependent IG plane supplied from the D IG plane generating unit 121 by superimposing it on the result from the adding unit 191, and outputs the result as the Dependent view plane. Conversion of color information is applied to the Dependent IG plane supplied from the D IG plane generating unit 121 to the adding unit 193. The adding unit 194 synthesizes the Base IG plane supplied from the B IG plane generating unit 117 by superimposing it on the result from the adding unit 192, and outputs the result as the Base view plane. Conversion of color information or correction using an offset value is applied to the Base IG plane supplied from the B IG plane generating unit 117 to the adding unit 194.
In an image displayed based on the Base view plane and the Dependent view plane generated in this way, buttons and icons appear in front, subtitle text appears behind them (in the depth direction), and the video appears behind the subtitle text. When the value of the view_type is 0, the switch 195 outputs the Base view plane as the L view plane and the Dependent view plane as the R view plane. The view_type is supplied to the switch 195 from the controller 51. When the value of the view_type is 1, the switch 195 outputs the Base view plane as the R view plane and the Dependent view plane as the L view plane. Which of the supplied planes is the Base view plane and which is the Dependent view plane is identified based on the PID or the view_id. In this way, in the playback device 1, the Base view planes are synthesized with one another and the Dependent view planes with one another, for each of the video, IG, and PG planes.
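A minimal sketch of this first synthesis model: the video, PG, and IG planes are stacked within each view first, and L/R is assigned only at the output switch. The pixel model (planes as dicts of opaque pixels where an upper plane hides a lower one) is an illustrative assumption.

```python
# Sketch of the first plane-synthesis model: video <- PG <- IG are stacked
# per view by the adding units 191-194, and the switch 195 assigns L/R
# only at the output. Plane representation is an assumption for illustration.

def superimpose(lower, upper):
    merged = dict(lower)
    merged.update(upper)  # pixels of the upper plane hide the lower plane
    return merged

def compose_view(video, pg, ig):
    # subtitle (PG) over video, button/icon (IG) over both
    return superimpose(superimpose(video, pg), ig)

def output_switch_195(base_plane, dependent_plane, view_type):
    if view_type == 0:
        return {"L": base_plane, "R": dependent_plane}
    return {"L": dependent_plane, "R": base_plane}

base = compose_view({(0, 0): "video", (1, 0): "video", (2, 0): "video"},
                    {(1, 0): "subtitle"},
                    {(2, 0): "button"})
dep = compose_view({(0, 0): "video"}, {}, {})
out = output_switch_195(base, dep, view_type=0)
print(out["L"][(0, 0)], out["L"][(1, 0)], out["L"][(2, 0)])  # video subtitle button
```

The stacking order matches the description: the button is visible in front, the subtitle behind it, and the video behind the subtitle.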

At the stage after the synthesis of all the video, IG, and PG planes is complete, it is determined based on the view_type whether the result of synthesizing the Base view planes with one another is the L view or the R view, and the L view plane and the R view plane are output accordingly. Likewise, at the stage after the synthesis of all the video, IG, and PG planes is complete, it is determined based on the view_type whether the result of synthesizing the Dependent view planes with one another is the L view or the R view, and the R view plane and the L view plane are output accordingly. [Second example] Fig. 28 shows a configuration of the synthesizing unit 130 and its preceding stage. In the configuration shown in Fig. 28, the same components as those shown in Fig. 27 are denoted by the same reference numerals. In Fig. 28, the configuration of the synthesizing unit 130 differs from that of Fig. 27. The operation of the switch 111 also differs from that of the switch 111 of Fig. 27.
The L video plane generating unit 161 is provided in place of the B video plane generating unit 112, and the R video plane generating unit 162 in place of the D video plane generating unit 113. Repeated descriptions are omitted. The controller 51 supplies the same view_type value to the switch 111 and to the switches 201 and 202 of the synthesizing unit 130. Like the switch 111 of Fig. 24, the switch 111 switches, based on the view_type, the output destinations of the data obtained by decoding the Base view video packets and of the data obtained by decoding the Dependent view video packets. For example, when the value of the view_type is 0, the switch 111 outputs the data obtained by decoding the Base view video packets to the L video plane generating unit 161. In this case, the switch 111 outputs the data obtained by decoding the Dependent view video packets to the R video plane generating unit 162. On the other hand, when the value of the view_type is 1, the switch 111 outputs the data obtained by decoding the Base view video packets to the R video plane generating unit 162. In this case, the switch 111 outputs the data obtained by decoding the Dependent view video packets to the L video plane generating unit 161.
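In contrast to the first model, this second model decides L/R per plane before any synthesis takes place. A sketch of that routing step follows; the list-based plane descriptors are illustrative assumptions.

```python
# Sketch of the second model: the switches 111, 201, and 202 route decoded
# video data and IG/PG planes to the L or R side first, based on the same
# view_type, and only then are L planes combined with L planes and R planes
# with R planes. Plane descriptors are assumptions for illustration.

def route(view, view_type):
    """Return 'L' or 'R' for a Base/Dependent plane under view_type 0/1."""
    if view_type == 0:
        return "L" if view == "base" else "R"
    return "R" if view == "base" else "L"

def assemble(planes, view_type):
    sides = {"L": [], "R": []}
    for view, kind in planes:          # e.g. ("base", "video")
        sides[route(view, view_type)].append(kind)
    return sides

planes = [("base", "video"), ("dependent", "video"),
          ("base", "PG"), ("dependent", "PG"),
          ("base", "IG"), ("dependent", "IG")]
print(assemble(planes, view_type=1)["L"])  # ['video', 'PG', 'IG']
```

With view_type 1 the L side is built entirely from Dependent view planes, which is the case where the Base view video stream is the R view stream.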

The L video plane generating unit 161 generates an L view video plane based on the data supplied from the switch 111 and outputs it to the synthesizing unit 130. The R video plane generating unit 162 generates an R view video plane based on the data supplied from the switch 111 and outputs it to the synthesizing unit 130. The synthesizing unit 130 includes the switches 201 and 202 and the adding units 203 to 206. The switch 201 switches, based on the view_type, the output destinations of the Base IG plane supplied from the B IG plane generating unit 117 and of the Dependent IG plane supplied from the D IG plane generating unit 121. For example, when the value of the view_type is 0, the switch 201 outputs the Base IG plane supplied from the B IG plane generating unit 117 to the adding unit 206 as the L view plane. In this case, the switch 201 outputs the Dependent IG plane supplied from the D IG plane generating unit 121 to the adding unit 205 as the R view plane. On the other hand, when the value of the view_type is 1, the switch 201 outputs the Dependent IG plane supplied from the D IG plane generating unit 121 to the adding unit 206 as the L view plane.
In this case, the switch 201 outputs the Base IG plane supplied from the B IG plane generating unit 117 to the adding unit 205 as the R view plane. The switch 202 switches, based on the view_type, the output destinations of the Base PG plane supplied from the B PG plane generating unit 125 and of the Dependent PG plane supplied from the D PG plane generating unit 129. For example, when the value of the view_type is 0, the switch 202 outputs the Base PG plane supplied from the B PG plane generating unit 125 to the adding unit 204 as the L view plane. In this case, the switch 202 outputs the Dependent PG plane supplied from the D PG plane generating unit 129 to the adding unit 203 as the R view plane. On the other hand, when the value of the view_type is 1, the switch 202 outputs the Dependent PG plane supplied from the D PG plane generating unit 129 to the adding unit 204 as the L view plane. In this case, the switch 202 outputs the Base PG plane supplied from the B PG plane generating unit 125 to the adding unit 203 as the R view plane. The adding unit 203 synthesizes the R view PG plane supplied from the switch 202 by superimposing it on the R view video plane supplied from the R video plane generating unit 162, and outputs the result to the adding unit 205. The adding unit 204 synthesizes the L view PG plane supplied from the switch 202 by superimposing it on the L view video plane supplied from the L video plane generating unit 161, and outputs the result to the adding unit 206. The adding unit 205 synthesizes the R view IG plane supplied from the switch 201 by superimposing it on the plane resulting from the adding unit 203, and outputs the result as the R view plane.
The adding unit 206 synthesizes the L view IG plane supplied from the switch 201 by superimposing it on the plane resulting from the adding unit 204, and outputs the result as the L view plane. In this way, in the playback device 1, for each of the Base view and Dependent view planes of video, IG, and PG, it is determined before synthesis with the other planes whether the plane is an L view or an R view plane. After this determination, the video, IG, and PG planes are synthesized so that L view planes are combined with one another and R view planes with one another. [Configuration example of the recording apparatus] Fig. 29 is a block diagram showing a configuration example of the software production processing unit 301. The video encoder 311 has the same configuration as the MVC encoder 11 of Fig. 3. The video encoder 311 encodes a plurality of pieces of video data in accordance with the H.264 AVC/MVC profile standard, thereby generating a Base view video stream and a Dependent view video stream, and outputs them to the buffer 312. For example, the video encoder 311 sets the DTS and the PTS at the time of encoding with the same PCR as a reference. That is, the video encoder 311 sets the same DTS for a PES packet storing the data of a certain Base view video picture and for the PES packet storing the data of the Dependent view video picture that corresponds to it in decoding order. Likewise, the video encoder 311 sets the same PTS for a PES packet storing the data of a certain Base view video picture and for the PES packet storing the data of the Dependent view video picture that corresponds to it in display order. As described below, the video encoder 311 also sets the same information, as auxiliary information related to decoding, for a Base view video picture and the Dependent view video picture that correspond in decoding order.
Further, as described below, the video encoder 311 sets the same value, as the POC (Picture Order Count) indicating the output order of pictures, for a Base view video picture and the Dependent view video picture that correspond in display order. Also as described below, the video encoder 311 performs encoding so that the GOP structure of the Base view video stream and the GOP structure of the Dependent view video stream match. The audio encoder 313 encodes the input audio stream and outputs the resulting data to the buffer 314. The audio stream to be recorded on the disc is input to the audio encoder 313 together with the Base view video and Dependent view video streams. The data encoder 315 encodes the various data other than video and audio described above, such as the PlayList file, and outputs the encoded data to the buffer 316.
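The timestamp rule just described (the same DTS for pictures that correspond in decoding order and the same PTS for pictures that correspond in display order, both referenced to a common PCR) can be sketched as follows. The 90 kHz tick values and the one-frame reorder delay are illustrative assumptions.

```python
# Sketch of the timestamp rule used by the video encoder 311: a Base view
# picture and its corresponding Dependent view picture receive the same
# DTS and the same PTS. Tick values (90 kHz units) are assumptions.

FRAME_TICKS = 3003  # ~29.97 fps frame duration in 90 kHz units (assumed)

def timestamp_pairs(picture_count, first_dts=0):
    pairs = []
    for i in range(picture_count):
        dts = first_dts + i * FRAME_TICKS
        pts = dts + FRAME_TICKS  # simplified: a fixed one-frame reorder delay
        base = {"stream": "base", "dts": dts, "pts": pts}
        dep = {"stream": "dependent", "dts": dts, "pts": pts}
        pairs.append((base, dep))
    return pairs

for base, dep in timestamp_pairs(3):
    assert base["dts"] == dep["dts"] and base["pts"] == dep["pts"]
print("corresponding pictures share DTS and PTS")
```

Because the two PES packets of a corresponding picture pair carry identical timestamps, a decoder can keep the Base view and Dependent view pictures aligned in both decoding and display order.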

In accordance with the encoding by the video encoder 311, the data encoder 315 sets, in the PlayList file, the view_type indicating whether the Base view video stream is an L view stream or an R view stream.

Information that indicates not the type of the Base view video stream but whether the Dependent view video stream is an L view stream or an R view stream may be set instead.
Further, the data encoder 315 sets the EP_map described below in the Clip Information file of the Base view video stream and in the Clip Information file of the Dependent view video stream, respectively. The picture of the Base view video stream and the picture of the Dependent view video stream set in the EP_map as decoding start positions are corresponding pictures. The multiplexing unit 317 multiplexes the video data, the audio data, and the data other than the streams stored in the respective buffers together with a synchronization signal, and outputs the result to the error correction coding unit 318. The error correction coding unit 318 adds an error correction code to the data multiplexed by the multiplexing unit 317. The modulation unit 319 modulates the data supplied from the error correction coding unit 318 and outputs it. The output of the modulation unit 319 becomes the software recorded on the optical disc 2 that can be played by the playback device 1. The software production processing unit 301 having such a configuration is provided in the recording device. Fig. 30 shows an example of a configuration including the software production processing unit 301. Part of the configuration shown in Fig. 30 may also be provided inside the recording device. The recording signal generated by the software production processing unit 301 undergoes a mastering process in the premastering processing unit 331, which generates a signal in the format to be recorded on the optical disc 2. The generated signal is supplied to the master recording unit 333. In the recording master production unit 332, a master disc made of glass or the like is prepared, and a recording material such as a photoresist is applied onto it.
A recording master is thereby produced. In the master recording unit 333, a laser beam is modulated in accordance with the recording signal supplied from the premastering processing unit 331 and irradiated onto the photoresist on the master disc. The photoresist on the master disc is thereby exposed in accordance with the recording signal. The master disc is then developed, so that pits appear on it. In the metal master production unit 334, the master disc is subjected to processing such as electroforming to produce a metal master onto which the pits of the glass master have been transferred. A metal stamper is further produced from this metal master and used as a molding die. In the molding processing unit 335, a material such as PMMA (polymethyl methacrylate) or PC (polycarbonate) is injected into the molding die by injection molding or the like and fixed. Alternatively, 2P (ultraviolet curing resin) or the like is applied onto the metal stamper and then cured by ultraviolet irradiation. The pits on the metal stamper can thereby be transferred onto a resin replica. In the film formation processing unit 336, a reflective film is formed on the replica by vapor deposition, sputtering, or the like. Alternatively, a reflective film is formed on the replica by spin coating. In the post-processing unit 337, the inner and outer diameters of the disc are machined, and necessary processing such as bonding two discs together is performed. After a label is attached and a hub is mounted, the disc is inserted into a cartridge. In this way, the optical disc 2 on which data playable by the playback device 1 is recorded is completed. <Second Embodiment> [Use of the H.264 AVC/MVC Profile video stream 1] In the BD-ROM standard, which is the standard of the optical disc 2, encoding of 3D images is realized by adopting the H.264 AVC/MVC profile, as described above.
Further, in the BD-ROM specification, the Base view video stream is made the L view video stream, and the Dependent view video stream is made the R view video stream. Since the Base view video is encoded as an H.264 AVC/High Profile video stream, even an existing player, or a player that supports only 2D playback, can play the optical disc 2, a disc supporting 3D. That is, backward compatibility can be ensured. Specifically, even a decoder that does not support the H.264 AVC/MVC profile standard can decode (play) the Base view video stream alone. In other words, the Base view video stream is a stream that even existing BD players can necessarily play. In addition, because the Base view video stream is used in common for 2D playback and 3D playback, the authoring load can be reduced. For the AV streams, the authoring side can produce a 3D-capable disc by preparing a Dependent view video stream in addition to the work done previously. Fig. 31 shows a configuration example of a 3D video TS generating unit provided in the recording device.
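The backward-compatibility property described above can be sketched as a stream-selection step: a 2D player simply ignores the Dependent view video stream and decodes only the plain H.264 AVC Base view video stream. The stream descriptors below are illustrative assumptions, not the actual BD-ROM stream attributes.

```python
# Sketch of backward compatibility: a player without MVC support decodes
# only the Base view video (H.264 AVC/High Profile) stream and ignores the
# Dependent view video stream. Descriptors are assumptions for illustration.

DISC_STREAMS = [
    {"role": "base_view", "codec": "h264_avc_high_profile"},
    {"role": "dependent_view", "codec": "h264_mvc"},
]

def streams_to_decode(supports_mvc):
    if supports_mvc:
        return [s["role"] for s in DISC_STREAMS]   # 3D playback: both views
    return [s["role"] for s in DISC_STREAMS
            if s["codec"] == "h264_avc_high_profile"]

print(streams_to_decode(False))  # ['base_view']
print(streams_to_decode(True))   # ['base_view', 'dependent_view']
```

The same Base view video stream thus serves 2D playback on existing players and 3D playback on MVC-capable players.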

圖31之3D video TS生成部係包括MVC編碼器401、MVC 標頭刪除部402、及多工器403。將以參照圖2所說明之方 式拍攝之L view之影像#1之資料與R “6〜之影像“之資料 輸入至MVC編碼器401中。 與圖3之MVC編碼器1 i相同’ MVC編碼器401係以 H.264/AVC對L view之影像#1之資料進行編碼,並輸出編 碼所得之AVC視訊資料作為Base view video 串流。又, MVC編碼器401係基於l View之影像#1之資料與R view《 145441.doc ·55· 201105108 影像#2之資料而生成Dependentviewvide〇串流並將其輸 出。 由MVC編碼器401輸出之Base view vide〇串流係包含儲 存有Base view video之各畫面之資料的Access Unit。又, 由MVC編碼器401輸出之Dependent view vide〇串流係包含 儲存有Dependent view video之各畫面之資料的Dependent Unit。 於構成Base view video串流之各Access Unit與構成 Dependent view video 串流之各 Dependent Unit 中,包含描 述有用以識別所儲存之view c〇mp〇nent之view」d的Mvc標 頭。 作為Dependent view video之MVC標頭中所描述之 view_id之值係使用1以上之固定值。於圖32、圖33之例中 亦為相同情況。 即’ MVC編碼器401係不同於圖3之MVC編碼器11,該 MVC編碼器401係以附加有MVC:標頭之形式,生成並輸出The 3D video TS generation unit of FIG. 31 includes an MVC encoder 401, an MVC header deletion unit 402, and a multiplexer 403. The data of the image #1 of L view and the image of the image of R "6~" taken in the manner described with reference to Fig. 2 are input to the MVC encoder 401. The MVC encoder 401 encodes the data of the image #1 of the L view by H.264/AVC, and outputs the encoded AVC video data as a Base view video stream. Further, the MVC encoder 401 generates a Dependentviewvide stream based on the data of the image #1 of the View and the data of the R view "145441.doc · 55· 201105108 image #2, and outputs it. The Base view vide stream outputted by the MVC encoder 401 is an Access Unit that stores data of each picture of the Base view video. Further, the Dependent view vide stream outputted by the MVC encoder 401 includes a Dependent Unit storing data of each picture of the Dependent view video. Each of the Access Units constituting the Base view video stream and the Dependent Unit constituting the Dependent view video stream include Mvc headers describing the view "d" useful for identifying the stored view c〇mp〇nent. The value of the view_id described in the MVC header of the Dependent view video uses a fixed value of 1 or more. 
The same is true in the examples of Figs. 32 and 33. That is, the 'MVC encoder 401 is different from the MVC encoder 11 of Fig. 3, and the MVC encoder 401 is generated and output in the form of an MVC: header attached thereto.
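The MVC header referred to here corresponds to the NAL unit header MVC extension of the H.264 standard (Annex H), in which the view_id is carried as a 10-bit field. As a hedged illustration (the helper name and the returned dict layout are our own; the bit layout follows the H.264 specification), the fields can be pulled out of the 3 extension bytes that follow the 1-byte NAL unit header:

```python
def parse_mvc_nal_header_extension(ext3: bytes) -> dict:
    # ext3: the 3 bytes following the 1-byte NAL unit header of a
    # prefix NAL unit (type 14) or a coded slice extension (type 20).
    v = int.from_bytes(ext3, "big")            # 24 bits in total
    if (v >> 23) & 0x1:                        # svc_extension_flag: 0 for MVC
        raise ValueError("SVC extension, not an MVC header")
    return {
        "non_idr_flag":    (v >> 22) & 0x1,
        "priority_id":     (v >> 16) & 0x3F,
        "view_id":         (v >> 6)  & 0x3FF,  # 0 = base view; >= 1 for dependent views
        "temporal_id":     (v >> 3)  & 0x7,
        "anchor_pic_flag": (v >> 2)  & 0x1,
        "inter_view_flag": (v >> 1)  & 0x1,
    }
```

For example, a Dependent view video unit whose header encodes view_id 1 with anchor_pic_flag and inter_view_flag set decodes back to those field values.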

In the MVC encoder 11 of Fig. 3, by contrast, an MVC header is attached only to the Dependent view video, which is encoded in accordance with the H.264 AVC/MVC profile standard.

The Base view video stream output from the MVC encoder 401 is supplied to the MVC header deletion unit 402, and the Dependent view video stream is supplied to the multiplexer 403.

The MVC header deletion unit 402 deletes the MVC header contained in each Access Unit constituting the Base view video stream.

The MVC header deletion unit 402 then outputs the Base view video stream, now consisting of Access Units from which the MVC headers have been deleted, to the multiplexer 403.

The multiplexer 403 generates a TS containing the Base view video stream supplied from the MVC header deletion unit 402 and the Dependent view video stream supplied from the MVC encoder 401, and outputs the TS. In the example of Fig. 31, a TS containing the Base view video stream and a TS containing the Dependent view video stream are output separately, but as described above, the two streams may also be multiplexed into the same TS before being output.

In this way, depending on the implementation, an MVC encoder is also conceivable that takes the L view images and the R view images as input and outputs the Base view video and Dependent view video streams each with MVC headers attached.

Further, as in Fig. 3, the entire configuration shown in Fig. 31 may be contained within a single MVC encoder. The same applies to the configurations shown in Figs. 32 and 33.

Fig. 32 shows another configuration example of the 3D video TS generating unit provided in the recording device.

The 3D video TS generating unit of Fig. 32 includes a mixing processing unit 411, an MVC encoder 412, a demultiplexing unit 413, an MVC header deletion unit 414, and a multiplexer 415. The data of image #1 of the L view and image #2 of the R view are input to the mixing processing unit 411.

The mixing processing unit 411 arranges the pictures of the L view and the pictures of the R view in coding order. Since each picture of the Dependent view video is encoded with reference to the corresponding picture of the Base view video, arranging the pictures in coding order results in L view pictures and R view pictures alternating.

The mixing processing unit 411 outputs the L view and R view pictures arranged in coding order to the MVC encoder 412.

The MVC encoder 412 encodes each picture supplied from the mixing processing unit 411 in accordance with the H.264 AVC/MVC profile standard, and outputs the resulting stream to the demultiplexing unit 413. In this stream, the Base view video stream and the Dependent view video stream are multiplexed together.

The Base view video stream contained in the stream output by the MVC encoder 412 consists of Access Units storing the data of each picture of the Base view video.
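The behavior of the MVC header deletion unit 402 can be illustrated with a small sketch. In the Base view video, the MVC header is carried by a prefix NAL unit (nal_unit_type 14) attached to each Access Unit, so deleting the MVC header amounts to dropping those NAL units. The function name and the list-of-bytes representation of an Access Unit are our own assumptions:

```python
def delete_mvc_headers(access_unit):
    """access_unit: list of NAL units, each a bytes object whose first
    byte is the NAL unit header (nal_unit_type = low 5 bits).
    Drops prefix NAL units (type 14), which carry the MVC header of the
    base view, and keeps everything else (SPS, PPS, slices, ...)."""
    return [nal for nal in access_unit if (nal[0] & 0x1F) != 14]
```

NAL units of other types, such as 7 (SPS), 8 (PPS), and 5 (IDR slice), pass through unchanged.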

Similarly, the Dependent view video stream contained in the stream output by the MVC encoder 412 consists of Dependent Units storing the data of each picture of the Dependent view video.

Each Access Unit constituting the Base view video stream and each Dependent Unit constituting the Dependent view video stream contains an MVC header describing the view_id used to identify the stored view component.

The demultiplexing unit 413 demultiplexes the Base view video stream and the Dependent view video stream from the multiplexed stream supplied by the MVC encoder 412, and outputs them. The Base view video stream output from the demultiplexing unit 413 is supplied to the MVC header deletion unit 414, and the Dependent view video stream is supplied to the multiplexer 415.

The MVC header deletion unit 414 deletes the MVC header contained in each Access Unit constituting the Base view video stream supplied from the demultiplexing unit 413, and outputs the Base view video stream consisting of the MVC-header-deleted Access Units to the multiplexer 415.

The multiplexer 415 generates a TS containing the Base view video stream supplied from the MVC header deletion unit 414 and the Dependent view video stream supplied from the demultiplexing unit 413, and outputs the TS.

Fig. 33 shows yet another configuration example of the 3D video TS generating unit provided in the recording device.

The 3D video TS generating unit of Fig. 33 includes an AVC encoder 421, an MVC encoder 422, and a multiplexer 423. The data of image #1 of the L view is input to the AVC encoder 421, and the data of image #2 of the R view is input to the MVC encoder 422.

The AVC encoder 421 encodes the data of image #1 of the L view with H.264/AVC, and outputs the resulting AVC video stream as the Base view video stream to the MVC encoder 422 and the multiplexer 423. The Access Units constituting the Base view video stream output by the AVC encoder 421 contain no MVC header.

The MVC encoder 422 decodes the Base view video stream (an AVC video stream) supplied from the AVC encoder 421 to generate the data of image #1 of the L view.

Based on the decoded data of image #1 of the L view and the data of image #2 of the R view input from outside, the MVC encoder 422 then generates the Dependent view video stream and outputs it to the multiplexer 423. Each Dependent Unit constituting the Dependent view video stream output by the MVC encoder 422 contains an MVC header.

The multiplexer 423 generates a TS containing the Base view video stream supplied from the AVC encoder 421 and the Dependent view video stream supplied from the MVC encoder 422, and outputs the TS.

The AVC encoder 421 of Fig. 33 has the function of the H.264/AVC encoder 21 of Fig. 3, and the MVC encoder 422 has the functions of the H.264/AVC decoder 22 and the Dependent view video encoder 24 of Fig. 3.

In addition, the multiplexer 423 has the function of the multiplexer 25 of Fig. 3.

By providing a 3D video TS generating unit configured as described above in the recording device, encoding of an MVC header into the Access Units that store the data of the Base view video can be prohibited.

At the same time, the Dependent Units that store the data of the Dependent view video can be made to contain an MVC header in which a view_id of 1 or more is set.

Fig. 34 shows the configuration, on the playback device 1 side, for decoding the Access Units.

Fig. 34 shows the switch 109 and the video decoder 110 described with reference to Fig. 22 and elsewhere. Access Unit #1, containing the data of the Base view video, and Dependent Unit #2, containing the data of the Dependent view video, are read out from the buffer and supplied to the switch 109.

Since the Dependent view video is encoded with reference to the Base view video, the corresponding Base view video must be decoded first in order to decode the Dependent view video correctly.

In the H.264/MVC profile standard, the decoder side uses the view_id contained in the MVC header to calculate the decoding order of the units. In addition, the Base view video is stipulated to always have the minimum value set as its view_id at encoding time. By starting decoding from the unit containing the MVC header in which the minimum view_id is set, the decoder can decode the Base view video and the Dependent view video in the correct order.

However, encoding of an MVC header is prohibited in the Access Units that store the Base view video supplied to the video decoder 110 of the playback device 1.

For this reason, in the playback device 1, a view component stored in an Access Unit without an MVC header is defined to be identified by regarding its view_id as 0.

The playback device 1 can thus identify the Base view video by the view_id regarded as 0, and can identify the Dependent view video by its actually set view_id, which is a value other than 0.

The switch 109 of Fig. 34 first outputs Access Unit #1, identified by the minimum value 0 regarded as its view_id, to the video decoder 110 for decoding.

After the decoding of Access Unit #1 is finished, the switch 109 outputs the unit in which Y, a fixed value greater than 0, is set as the view_id, namely Dependent Unit #2, to the video decoder 110 for decoding.
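The selection rule just described, where an Access Unit without an MVC header is regarded as view component 0 and units are decoded in ascending view_id order, can be sketched as follows (the unit representation is our own assumption):

```python
def decoding_order(units):
    """units: list of dicts with keys 'view_id' (None when the unit
    carries no MVC header) and 'payload'. A unit without an MVC header
    is regarded as view component 0, i.e. the Base view video, so it
    always comes first; Dependent view units follow in view_id order."""
    return sorted(units, key=lambda u: 0 if u["view_id"] is None else u["view_id"])
```

With one header-less Access Unit and one Dependent Unit with view_id 1, the Base view unit is scheduled for decoding first, as required.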

The picture of the Dependent view video stored in Dependent Unit #2 is the picture corresponding to the picture of the Base view video stored in Access Unit #1.

By prohibiting the encoding of an MVC header into the Access Units storing the Base view video in this way, the Base view video stream recorded on disc 2 becomes a stream that even previous players can play.

Even when being playable on previous players is stipulated as a condition on the Base view video stream of the BD-ROM 3D standard, an extension of the BD-ROM specification, that condition can thus be satisfied.

For example, as shown in Fig. 35, if MVC headers were attached to both the Base view video and the Dependent view video and decoding proceeded from the Base view video first, the Base view video would be unplayable on previous players. For the H.264/AVC decoder mounted in a previous player, the MVC header is undefined data. When such undefined data is input, some decoders cannot ignore it, and the processing may fail.

Note that in Fig. 35 the view_id of the Base view video is X, and the view_id of the Dependent view video is Y, which is greater than X.

Also, even though the encoding of the MVC header is prohibited, by defining the view_id of the Base view video to be regarded as 0, the playback device 1 can be made to decode the Base view video first and then the corresponding Dependent view video. That is, decoding can be performed in the correct order.

[Operation 2]

About the GOP structure

In the H.264/AVC standard, the GOP (Group Of Pictures) structure of the MPEG-2 video standard is not defined.

Therefore, the BD-ROM specification, which handles H.264/AVC video streams, defines a GOP structure for H.264/AVC video streams, thereby realizing various functions that use the GOP structure, such as random access.

In the Base view video stream and the Dependent view video stream, which are video streams obtained by encoding in accordance with the H.264 AVC/MVC profile standard, there is likewise no definition of a GOP structure, just as in H.264/AVC video streams.

The Base view video stream is an H.264/AVC video stream. Therefore, the GOP structure of the Base view video stream is the same as the GOP structure of H.264/AVC video streams defined in the BD-ROM specification.

The GOP structure of the Dependent view video stream is also defined to be the same as the GOP structure of the Base view video stream, that is, the GOP structure of H.264/AVC video streams defined in the BD-ROM specification.

The GOP structure of H.264/AVC video streams defined in the BD-ROM specification has the following features.

1. Features of the stream structure

(1) Open GOP/Closed GOP structure

Fig. 36 illustrates the Closed GOP structure.

Each picture in Fig. 36 is a picture constituting an H.264/AVC video stream. A Closed GOP contains an IDR (Instantaneous Decoding Refresh) picture.

An IDR picture is an I picture, and is decoded first within the GOP that contains it. When an IDR picture is decoded, all information related to decoding, such as the state of the reference picture buffer (the DPB 151 of Fig. 22) and the frame numbers and POC (Picture Order Count) managed up to that point, is reset.

As shown in Fig. 36, in a current GOP that is a Closed GOP, the pictures of the current GOP that precede the IDR picture in display order (looking backward) are prohibited from referencing pictures of the previous GOP.

Likewise, the pictures of the current GOP that follow the IDR picture in display order (looking forward) are prohibited from referencing pictures of the previous GOP across the IDR picture. Note that in H.264/AVC, a P picture that follows an I picture in display order is allowed to reference a picture that precedes that I picture.

Fig. 37 illustrates the Open GOP structure.

As shown in Fig. 37, in a current GOP that is an Open GOP, the pictures of the current GOP that precede the non-IDR I picture (an I picture that is not an IDR picture) in display order are allowed to reference pictures of the previous GOP.

The pictures of the current GOP that follow the non-IDR I picture in display order are prohibited from referencing pictures of the previous GOP across the non-IDR I picture.

(2) In the Access Unit at the head of a GOP, an SPS (Sequence Parameter Set) and a PPS (Picture Parameter Set) must be encoded.

The SPS is header information of a sequence, containing information related to the encoding of the sequence as a whole. When a sequence is decoded, the SPS, which contains the identification information of the sequence and the like, is required first. The PPS is header information of a picture, containing information related to the encoding of the picture as a whole.

(3) Up to 30 PPSs can be encoded in the Access Unit at the head of a GOP. When a plurality of PPSs are encoded in the leading Access Unit, their ids (pic_parameter_set_id) must not be the same.

(4) In Access Units other than the one at the head of a GOP, up to one PPS can be encoded.

2. Features of the reference structure

(1) I, P, and B pictures are required to be pictures composed only of I, P, and B slices, respectively.

(2) A B picture that precedes its reference picture (an I or P picture) in display order must be encoded, in coding order, immediately after that reference picture.

(3) The coding order and the display order of reference pictures (I or P pictures) must be kept the same.

(4) Referencing a B picture from a P picture is prohibited.

(5) When a non-reference B picture (B1) precedes another B picture (B2) in coding order, B1 is also required to come first in display order. A non-reference B picture is a B picture that is not referenced by any other picture later in coding order.

(6) A reference B picture can reference the reference pictures (I or P pictures) adjacent to it, before and after, in display order.

(7) A non-reference B picture can reference the reference pictures (I or P pictures) adjacent to it in display order, or a reference B picture.

(8) The number of consecutive B pictures must be at most 3.

3. Features regarding the maximum number of frames and fields within a GOP

As shown in Fig. 38, the maximum number of frames and fields within a GOP is stipulated according to the frame rate of the video.

As shown in Fig. 38, for example, for interlaced display at a frame rate of 29.97 frames/second, the maximum number of fields that can be displayed with the pictures of one GOP is 60. For progressive display at a frame rate of 59.94 frames/second, the maximum number of frames that can be displayed with the pictures of one GOP is 60.

A GOP structure having the features described above is also defined as the GOP structure of the Dependent view video stream.

Furthermore, it is stipulated as a constraint that the structure of any GOP of the Base view video stream matches the structure of the corresponding GOP of the Dependent view video stream.

Fig. 39 shows the Closed GOP structure of the Base view video stream or the Dependent view video stream defined as described above.

As shown in Fig. 39, in a current GOP that is a Closed GOP, the pictures of the current GOP that precede the IDR picture or anchor picture in display order (looking backward) are prohibited from referencing pictures of the previous GOP. Anchor pictures are described later.

Likewise, the pictures of the current GOP that follow the IDR picture or anchor picture in display order (looking forward) are prohibited from referencing pictures of the previous GOP across the IDR picture or anchor picture.

Fig. 40 shows the Open GOP structure of the Base view video stream or the Dependent view video stream.

As shown in Fig. 40, in a current GOP that is an Open GOP, the pictures of the current GOP that precede the non-IDR anchor picture (an anchor picture that is not an IDR picture) in display order are allowed to reference pictures of the previous GOP.

The pictures of the current GOP that follow the non-IDR anchor picture in display order are prohibited from referencing pictures of the previous GOP across the non-IDR anchor picture.

By defining the GOP structures in this way, a GOP of the Base view video stream and the corresponding GOP of the Dependent view video stream have matching stream-structure characteristics, such as both being Open GOPs or both being Closed GOPs.

The characteristics of the picture reference structure also match, so that the picture of the Dependent view video corresponding to a non-reference B picture of the Base view video is necessarily a non-reference B picture as well.

Furthermore, the number of frames and the number of fields also match between any GOP of the Base view video stream and the corresponding GOP of the Dependent view video stream.

In this way, by defining the GOP structure of the Dependent view video stream to be the same as the GOP structure of the Base view video stream, corresponding GOPs of the two streams have the same characteristics.

Decoding can also be performed without problems even when it starts partway through a stream. Decoding from partway through a stream is performed, for example, during on-demand playback or random access.

If corresponding GOPs of the two streams had different structures, for example different numbers of frames, one of the streams might play normally while the other could not; this situation is prevented.

If decoding were started partway through streams whose corresponding GOPs had different structures, the Base view video picture needed to decode the Dependent view video might not yet be decoded. In that case, as a result, the Dependent view video picture could not be decoded, and 3D display would be impossible. Depending on the implementation, it might also be impossible to output the image of the Base view video; these problems are avoided as well.

[EP_map]

By exploiting the GOP structures of the Base view video stream and the Dependent view video stream, the decoding start position for random access or on-demand playback can be set in the EP_map. The EP_map is included in the Clip Information file.

The following two constraints are stipulated on the pictures that can be set in the EP_map as decoding start positions.

1. The positions that can be set in the Dependent view video stream are the positions of anchor pictures placed immediately after a SubsetSPS, or the positions of IDR pictures placed immediately after a SubsetSPS.

An anchor picture is a picture stipulated by the H.264 AVC/MVC profile standard, and is a picture of the Dependent view video stream encoded using inter-view references rather than references in the time direction.

2. When a picture of the Dependent view video stream is set in the EP_map as a decoding start position, the corresponding picture of the Base view video stream is also set in the EP_map as a decoding start position.

Fig. 41 shows an example of decoding start positions set in an EP_map that satisfies the above two constraints.

In Fig. 41, the pictures constituting the Base view video stream and the pictures constituting the Dependent view video stream are shown in decoding order.
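Choosing where to begin decoding for random access then amounts to finding the last EP_map entry at or before the requested time. A minimal sketch, assuming each entry pairs a PTS with decoding start positions in both streams (which constraint 2 above guarantees exist together); the tuple layout is our own simplification:

```python
import bisect

def find_decoding_start(ep_map, target_pts):
    """ep_map: list of (pts, base_start, dep_start) tuples sorted by
    pts. Returns the entry at or immediately before target_pts, i.e.
    the positions from which both streams can start decoding."""
    pts_values = [entry[0] for entry in ep_map]
    i = bisect.bisect_right(pts_values, target_pts) - 1
    if i < 0:
        raise ValueError("requested time precedes the first entry")
    return ep_map[i]
```

A request that falls between two entries resolves to the earlier entry, so decoding always begins at a registered anchor or IDR position.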

Among the pictures of the Dependent view video stream, picture P1, shown shaded, is an anchor picture or an IDR picture. The Access Unit containing the data of picture P1 contains a SubsetSPS.

In the example of Fig. 41, as indicated by the hollow arrow #11, picture P1 is set as a decoding start position in the EP_map of the Dependent view video stream.

Picture P11, the picture of the Base view video stream corresponding to picture P1, is an IDR picture.
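Constraint 2 above, that every decoding start position registered for the Dependent view video must have the corresponding Base view video position registered as well (as with pictures P1 and P11 here), can be sketched as a simple check (the representation of the entries as sets of PTS values is our own assumption):

```python
def satisfies_constraint_2(base_entries, dep_entries):
    """base_entries / dep_entries: collections of PTS values registered
    as decoding start positions for the Base view video stream and the
    Dependent view video stream. Every Dependent view entry must have a
    Base view entry for the corresponding (same-time) picture."""
    return set(dep_entries) <= set(base_entries)
```

A Dependent view entry with no matching Base view entry would leave the decoder without a decodable reference picture, which is exactly what the constraint rules out.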

Pn係為IDR畫面"。巾空箭頭#12所示,作為伽畫面之畫 面Ρπ亦作為解碼開始位置而設定於8咖串流之 EP_map 中。 由於接收到隨機存取或隨時點播之指示,而自晝面I與 畫面P"開始解碼時,首先進行晝面Ρπ之解碼。因idr畫面 之故,而無需參考其他畫面,便可對畫面Ριι進行解碼。 當畫面Pu之解碼結束時,其次對晝面匕進行解碼。於畫 面卩!之解碼時參考已解碼之畫面ρπ。因錨畫面或IDR畫面 之故,故只要晝面之解碼結束,便可進行晝面Pi之解 碼0 其後,按Base view Video之晝面卩丨之下一個晝面、 Dependent view video之晝面P"之下一個晝面、依此類 推進行解碼。 由於相應之GOP之結構相同’且自相應之位置開始解 碼’因此無論 Base view video 抑或是 Dependent View video ’均可無問題地對設定於Ep_map中之畫面以後的書 面進行解碼。藉此可實現隨機存取。 圖41之與垂直方向所示之點線相較排列於左側之晝面係 為未解碼之晝面。 圖42係表示於不定義Dependent view video之GOP結構之 情形時所產生之問題的圖。 於圖42之例中,標色表示之Base view video之IDR晝面 145441.doc _69· 201105108 即畫面作為解碼開始位置而設定於Ep—map中。 於自BaSe view vide〇之畫面始解碼之情形時設想 與畫面P2丨對應之Dependent view vide〇之畫面即畫面卩3丨並 非錨畫面的情形。於未定義G〇p結構之情形時,無法保證 與 Base view.Wde〇 之 IDR 畫面對應之 Dependent who 之畫面係為IDR畫面或錯畫面。 於此情形時,即便Base view vide〇之畫面Pu之解碼結 束,亦無法對畫面PS1進行解碼。於畫面Pn之解碼時亦需 要時間方向之參考,但較垂直方向所示之點線更靠左側 (按解碼順序為靠前)之畫面未被解碼。 由於無法對畫面P3,進行解碼,因此亦無法對參考畫面 P3丨之Dependent view video之其他晝面進行解碼。 可藉由預先定義Dependent view video串流之GOP結構, 而避免上述情況。 可藉由不僅對Base view video,而且對Dependent view video預先將解碼開始位置設定於Ep—爪叩中,而使播放裝 置1容易地確定解碼之開始位置。 於僅將Base View video之某一畫面作為解碼開始位置而 預先設定於EP一map中之情形時,播放裝置!必需藉由計算 而確疋與解碼開始位置之晝面對應之Dependent view video 之畫面,從而導致處理複雜化。 即便相應之 Base view video與 Dependent view video之畫 面彼此具有相同的DTS/PTS,當視訊之位元率不同時,亦 無法使TS中之位元組排列一致,因此該情形時處理變得複 145441.doc •70· 201105108 雜。 圖43係表示進行以包含Base view vide〇串流與Dependent view video串流之MVC串流為對象之隨機存取或隨時點播 時所需的晝面搜尋之概念之圖。 如圖43所示,於進行隨機存取或隨時點播時,將搜尋 non-IDR錨畫面或IDR畫面’並決定解碼開始位置。 此處,對EP_maP進行說明。說明Base view “心〇之解碼 開始位置設定於EP一map中之情形,而以口⑶心加 video之解碼開始位置亦以相同之方式設定於Dependent view video之 EP_map 中。 圖44係表示記錄於光碟2上之AV串流之結構的圖。 包含Base view video串流之TS係包括具有6144位元組之 大小之整數個對準單元(AUgned Unit)。 對準單元係包含32個來源封包(s〇urce來源封 包具有i92位元組。i個來源封包係包含4位元組之傳輸封 包額外標頭(TP_extra header)與188位元組之傳輸封包 (Transport Packet)。Pn is the IDR picture ". As shown by the towel arrow #12, the picture Ρπ as the gamma picture is also set as the decoding start position in the EP_map of the 8 stream stream. 
Since the instruction of random access or on-demand is received, and the decoding starts from the face I and the picture P", the decoding of the face Ρ π is first performed. Due to the idr screen, the picture Ριι can be decoded without reference to other pictures. When the decoding of the picture Pu ends, the next time the picture is decoded. In the painting! The decoded picture is referenced to the decoded picture ρπ. Because of the anchor screen or IDR screen, as long as the decoding of the face is finished, the decoding of the face Pi can be performed. 0 Then, press the bottom of the Base view Video and the bottom of the Dependent view video. P" The next one, and so on. Since the structure of the corresponding GOP is the same 'and the decoding is started from the corresponding position', the book after the picture set in the Ep_map can be decoded without problems, whether the Base view video or the Dependent View video ’. This allows random access. In Fig. 41, the pupil line arranged on the left side as compared with the dotted line shown in the vertical direction is an undecoded face. Figure 42 is a diagram showing the problem that occurs when the GOP structure of the Dependent view video is not defined. In the example of Fig. 42, the IDR of the Base view video indicated by the color gradation 145441.doc _69· 201105108 The screen is set as the decoding start position in the Ep_map. In the case where the picture from the BaSe view vide is decoded, it is assumed that the picture of the Dependent view vide corresponding to the picture P2 is the picture 卩3丨 and is not the anchor picture. When the G〇p structure is not defined, there is no guarantee that the screen of Dependent who corresponding to the IDR screen of Base view.Wde〇 is an IDR screen or an error screen. In this case, even if the decoding of the picture Pu of the Base view vide is completed, the picture PS1 cannot be decoded. 
Decoding picture P31 also requires a reference in the time direction, but the pictures to the left of the vertical dotted line (earlier in decoding order) have not been decoded. Since picture P31 cannot be decoded, the other Dependent view video pictures that reference picture P31 cannot be decoded either. This situation can be avoided by defining the GOP structure of the Dependent view video stream in advance. By setting decoding start positions in the EP_map in advance not only for the Base view video but also for the Dependent view video, the playback device 1 can easily determine the position at which to start decoding. If only a picture of the Base view video were set in the EP_map as a decoding start position, the playback device 1 would have to determine by calculation the Dependent view video picture corresponding to the picture at the decoding start position, which complicates processing. Even when corresponding Base view video and Dependent view video pictures have the same DTS/PTS, their byte positions in the TS do not line up when the bit rates of the two videos differ, so processing in that case also becomes complicated. Fig. 43 is a diagram showing the concept of the picture search required when performing random access or on-demand playback on an MVC stream made up of a Base view video stream and a Dependent view video stream. As shown in Fig. 43, when random access or on-demand playback is performed, a non-IDR anchor picture or an IDR picture is searched for, and the decoding start position is determined. The EP_map is now described. The case where a decoding start position of the Base view video is set in its EP_map is described; a decoding start position of the Dependent view video is set in the EP_map of the Dependent view video in the same way. Fig. 44 is a diagram showing the structure of an AV stream recorded on the optical disc 2.
A TS containing the Base view video stream consists of an integer number of Aligned Units, each 6144 bytes in size. An Aligned Unit consists of 32 source packets. A source packet is 192 bytes long: it consists of a 4-byte TP_extra_header and a 188-byte transport packet.
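The size relationships above can be sketched as follows (a minimal illustration; the constant and function names are ours, not taken from the specification):

```python
# Sketch of the packet-size relationships described above:
# a source packet = 4-byte TP_extra_header + 188-byte transport packet,
# and an Aligned Unit = 32 source packets = 6144 bytes.

TP_EXTRA_HEADER_SIZE = 4      # bytes
TRANSPORT_PACKET_SIZE = 188   # bytes
SOURCE_PACKET_SIZE = TP_EXTRA_HEADER_SIZE + TRANSPORT_PACKET_SIZE  # 192 bytes
SOURCE_PACKETS_PER_ALIGNED_UNIT = 32
ALIGNED_UNIT_SIZE = SOURCE_PACKETS_PER_ALIGNED_UNIT * SOURCE_PACKET_SIZE  # 6144

def source_packet_offset(spn: int) -> int:
    """Byte offset of source packet number `spn` from the start of the
    Clip AV stream (numbering starts at 0, as described in the text)."""
    return spn * SOURCE_PACKET_SIZE

def aligned_unit_of(spn: int) -> int:
    """Index of the Aligned Unit that contains source packet `spn`."""
    return spn // SOURCE_PACKETS_PER_ALIGNED_UNIT

print(SOURCE_PACKET_SIZE)        # 192
print(ALIGNED_UNIT_SIZE)         # 6144
print(source_packet_offset(32))  # 6144, first byte of the second Aligned Unit
```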

Data of the Base view video is packetized into MPEG-2 PES packets. A PES packet is formed by adding a PES packet header to the data portion; the PES packet header contains a stream ID that identifies the type of elementary stream the PES packet carries. The PES packets are further packetized into transport packets: a PES packet is divided into pieces the size of a transport packet payload, and a transport packet header is added to each payload to form a transport packet. The transport packet header contains the PID, which is identification information for the data stored in the payload. Each source packet is assigned a source packet number that increases one by one, the head of the Clip AV stream being numbered, for example, 0. An Aligned Unit starts from the first byte of a source packet.
The EP_map is used, when the time stamp of an access point into the Clip is given, to look up the data address at which reading of data should start within the Clip AV stream file. The EP_map is a list of entry points extracted from the elementary stream and the transport stream; it holds address information for finding the entry point at which decoding should start in the AV stream. One EP entry in the EP_map consists of a pair: a PTS, and the address within the AV stream of the Access Unit corresponding to that PTS. In AVC/H.264, the data of one picture is stored in one Access Unit. Fig. 45 shows an example of a Clip AV stream. The Clip AV stream of Fig. 45 is a video stream (the Base view video stream) made up of source packets identified by PID = X. The video stream is distinguished, source packet by source packet, by the PID contained in the header of the transport packet inside each source packet. In Fig. 45, the source packets containing the leading bytes of IDR pictures of the video stream are colored. The uncolored squares represent source packets containing data that is not a random access point, or data of other streams. For example, the source packet with source packet number X1, which contains the leading byte of a randomly accessible IDR picture of the video stream identified by PID = X, is placed at the position PTS = pts(x1) on the time axis of the Clip AV stream. Similarly, the next source packet containing the leading byte of a randomly accessible IDR picture is the source packet with source packet number X2, placed at the position PTS = pts(x2). Fig. 46 conceptually shows an example of the EP_map corresponding to the Clip AV stream of Fig. 45. As shown in Fig. 46, the EP_map includes stream_PID, PTS_EP_start, and SPN_EP_start. stream_PID is the PID of the transport packets that carry the video stream.
PTS_EP_start indicates the PTS of an Access Unit that starts from a randomly accessible IDR picture. SPN_EP_start indicates the address of the source packet containing the first byte of the Access Unit referenced by the value of PTS_EP_start. The PID of the video stream is stored in stream_PID, and a table, EP_map_for_one_stream_PID(), representing the correspondence between PTS_EP_start and SPN_EP_start is generated. For example, in EP_map_for_one_stream_PID[0] for the video stream of PID = x, PTS = pts(x1) is described in correspondence with source packet number X1, PTS = pts(x2) with source packet number X2, ..., and PTS = pts(xk) with source packet number Xk. Such a table is also generated for each of the other video streams multiplexed into the same Clip AV stream. The EP_map containing the generated tables is stored in the Clip Information file corresponding to that Clip AV stream. Fig. 47 shows an example of the data structure of the source packet pointed to by SPN_EP_start. As described above, a source packet is formed by adding a 4-byte header to a 188-byte transport packet. The transport packet portion consists of a TP header and a payload. SPN_EP_start indicates the source packet number of the source packet containing the first byte of an Access Unit that starts from an IDR picture. In AVC/H.264, an Access Unit, that is, a picture, starts from an AU delimiter (Access Unit Delimiter). The AU delimiter is followed by an SPS and a PPS, after which the leading part, or the whole, of the slice data of the IDR picture is stored. A value of 1 for payload_unit_start_indicator in the TP header of a transport packet indicates that a new PES packet starts from the payload of that transport packet, and therefore that an Access Unit starts from that source packet. Such an EP_map is prepared separately for the Base view video stream and for the Dependent view video stream. Fig. 48 shows the sub-tables contained in the EP_map.
As shown in Fig. 48, the EP_map is divided into two sub-tables, EP_coarse and EP_fine. The sub-table EP_coarse is used to search in coarse units, and the sub-table EP_fine is used to search in finer units. The sub-table EP_fine associates each entry PTS_EP_fine with an entry SPN_EP_fine. Within the sub-table, entries are numbered in ascending order, the top row being, for example, "0". In the sub-table EP_fine, the combined data width of an entry PTS_EP_fine and an entry SPN_EP_fine is 4 bytes. The sub-table EP_coarse associates entries ref_to_EP_fine_id, PTS_EP_coarse, and SPN_EP_coarse; their combined data width is 8 bytes. An entry of the sub-table EP_fine consists of bit information on the LSB (Least Significant Bit) side of each of PTS_EP_start and SPN_EP_start. An entry of the sub-table EP_coarse consists of bit information on the MSB (Most Significant Bit) side of each of PTS_EP_start and SPN_EP_start, together with the entry number, within the EP_fine table, of the corresponding EP_fine entry, that is, the EP_fine entry holding the LSB-side bit information extracted from the same PTS_EP_start. Fig. 49 shows an example of the formats of the entries PTS_EP_coarse and PTS_EP_fine. PTS_EP_start is a value 33 bits long. With the MSB as bit 32 and the LSB as bit 0, the 14 bits from bit 32 down to bit 19 of PTS_EP_start are used for the entry PTS_EP_coarse. PTS_EP_coarse allows searching with a resolution of 5.8 seconds over a range of up to 26.5 hours.
For the entry PTS_EP_fine, the 11 bits from bit 19 down to bit 9 of PTS_EP_start are used. PTS_EP_fine allows searching with a resolution of 5.7 milliseconds over a range of up to 11.5 seconds. Note that bit 19 is used in common by PTS_EP_coarse and PTS_EP_fine, and that the 9 bits from bit 0 up to bit 8 on the LSB side are not used. Fig. 50 shows an example of the formats of the entries SPN_EP_coarse and SPN_EP_fine.
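The bit layout described above lends itself to a short sketch. The extraction below follows the stated bit ranges; the 90 kHz PTS clock used to compute the resolutions is an assumption based on MPEG-2 systems, not stated in this passage:

```python
# Hedged sketch of the PTS_EP_coarse / PTS_EP_fine bit fields described
# above: PTS_EP_start is a 33-bit value; bits 32..19 go into the coarse
# entry (14 bits) and bits 19..9 into the fine entry (11 bits, sharing
# bit 19); bits 8..0 are unused.

def pts_ep_coarse(pts_ep_start: int) -> int:
    # 14 bits: bits 32 down to 19
    return (pts_ep_start >> 19) & 0x3FFF

def pts_ep_fine(pts_ep_start: int) -> int:
    # 11 bits: bits 19 down to 9
    return (pts_ep_start >> 9) & 0x7FF

# One step of the fine field is 2**9 ticks of the (assumed) 90 kHz PTS
# clock, about 5.7 ms; one step of the coarse field is 2**19 ticks,
# about 5.8 s, over a total range of 2**33 ticks, about 26.5 hours.
print(2 ** 9 / 90000)           # about 0.0057 s
print(2 ** 19 / 90000)          # about 5.8 s
print(2 ** 33 / 90000 / 3600)   # about 26.5 h
```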

SPN_EP_start is a value 32 bits long. With the MSB as bit 31 and the LSB as bit 0, all the bits of SPN_EP_start, from bit 31 down to bit 0, are used for the entry SPN_EP_coarse. For the entry SPN_EP_fine, the 17 bits from bit 16 down to bit 0 of SPN_EP_start are used. How the read start address is determined when random access is performed using EP_coarse and EP_fine is described later. The EP_map is also described, for example, in Japanese Patent Laid-Open Publication No. 2005-348314. [Operation 3] At decoding time, the value used as the POC (Picture Order Count) of each picture of the Dependent view video stream is the same value as the POC of the corresponding picture of the Base view video stream. The POC is a value indicating the display order of pictures defined in the AVC/H.264 standard, and is obtained by calculation at decoding time. For example, the POC values of the pictures of the Base view video stream are obtained by calculation, and the pictures of the Base view video stream are output from the decoder in the order indicated by the obtained values. Each picture of the Base view video stream is output simultaneously with the corresponding picture of the Dependent view video stream.
In effect, therefore, the same value as the POC of the corresponding picture of the Base view video stream serves as the POC of each picture of the Dependent view video stream.

SEI (Supplemental Enhancement Information) is also attached to the data making up each picture of the Base view video stream and the Dependent view video stream. SEI is additional information, defined by H.264/AVC, that contains auxiliary information related to decoding. Picture Timing SEI, one of the SEI messages, contains timing information such as the read time from the CPB (Coded Picture Buffer) and the read time from the DPB (Decoded Picture Buffer) at decoding, as well as information on display times, picture structure, and so on.

Fig. 51 shows the structure of an Access Unit. As shown in Fig. 51, an Access Unit of the Base view video, which contains the data of one picture of the Base view video stream, and a Dependent Unit, which contains the data of one picture of the Dependent view video stream, have the same structure. Each unit consists of a delimiter indicating the boundary of the unit, an SPS, a PPS, SEI, and picture data. At encoding time, the Picture Timing SEI attached to the pictures of the Base view video stream and the Picture Timing SEI attached to the pictures of the Dependent view video stream are operated in a unified manner.
For example, when a Picture Timing SEI indicating a CPB read time of T1 is attached to the first picture, in encoding order, of the Base view video stream, a Picture Timing SEI indicating a CPB read time of T1 is also attached to the first picture, in encoding order, of the Dependent view video stream. That is, Picture Timing SEI of identical content is attached to corresponding pictures of the Base view video stream and the Dependent view video stream in encoding order, in other words in decoding order. The playback device 1 can therefore handle view components to which the same Picture Timing SEI is attached as corresponding view components, in decoding order.
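The matching rule described here, that corresponding view components carry Picture Timing SEI of identical content in decoding order, can be illustrated with a small sketch (the list representation and function name are illustrative, not part of the patent):

```python
# Illustrative sketch (not the actual player code) of pairing view
# components purely from the streams: the n-th decoded Base view picture
# and the n-th decoded Dependent view picture carry the same CPB read
# time in their Picture Timing SEI, so they can be matched positionally
# and the match can be verified from the SEI content alone.

def pair_view_components(base_sei_list, dependent_sei_list):
    """Pair Base/Dependent view components whose Picture Timing SEI match.

    Both lists are in decoding order; each element stands in for the CPB
    read time carried in the picture's Picture Timing SEI."""
    pairs = []
    for i, (b, d) in enumerate(zip(base_sei_list, dependent_sei_list)):
        assert b == d, f"mismatched Picture Timing SEI at decoding position {i}"
        pairs.append((i, b))
    return pairs

# Example: three pictures in decoding order with CPB read times 100, 200, 300.
print(pair_view_components([100, 200, 300], [100, 200, 300]))
# [(0, 100), (1, 200), (2, 300)]
```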

The Picture Timing SEI is contained in the elementary streams of the Base view video and the Dependent view video, and is referenced by the video decoder 110 in the playback device 1. The video decoder 110 can identify corresponding view components based on information contained in the elementary streams, and can perform the decoding process in the correct decoding order based on the Picture Timing SEI. Since there is no need to refer to a PlayList or the like in order to identify corresponding view components, problems arising in the System Layer or the layers above it can be handled. A decoder implementation that does not depend on the layer in which a problem occurs can also be realized.

[Configuration of Recording Apparatus]

Fig. 52 is a block diagram showing a configuration example of a recording apparatus that performs encoding in accordance with the operations described above and records the Base view video stream and the Dependent view video stream on a recording medium. In the recording device 501 of Fig. 52, the Base view video stream is generated, and a D1 view video stream is generated as the Dependent view video stream; that is, the recording device 501 does not generate the Depth information described with reference to Fig. 3. As shown in Fig. 52, the recording device 501 includes an information generating unit 511, an MVC encoder 512, and a recording unit 513. The information generating unit 511 corresponds to the data encoder 315 of Fig. 29 described above, and the MVC encoder 512 corresponds to the video encoder 311 of Fig. 29. L image data and R image data are input to the MVC encoder 512. The information generating unit 511 generates database data consisting of a PlayList file, a Clip Information file containing an EP_map for the Base view video, and a Clip

Information file containing an EP_map for the Dependent view video. The information generating unit 511 generates the database data in accordance with input made to the recording device 501 by the user (content creator), and outputs the generated database data to the recording unit 513. The information generating unit 511 also generates the additional information for the Base view video, such as the SPS, PPS, and SEI of Fig. 51 attached to each picture of the Base view video, and the additional information for the Dependent view video, attached to each picture of the Dependent view video. The additional information for the Base view video and the additional information for the Dependent view video generated by the information generating unit 511 each include a Picture

Timing SEI. The information generating unit 511 outputs the generated additional information to the MVC encoder 512. The MVC encoder 512 encodes the L image data and the R image data in accordance with the H.264 AVC/MVC profile standard, generating the data of each picture of the Base view video by encoding the L image data, and the data of each picture of the Dependent view video by encoding the R image data. The MVC encoder 512 generates the Base view video stream by attaching the additional information for the Base view video generated by the information generating unit 511 to the data of each picture of the Base view video. Similarly, the MVC encoder 512 generates the Dependent view video stream by attaching the additional information for the Dependent view video generated by the information generating unit 511 to the data of each picture of the Dependent view video. The MVC encoder 512 outputs the generated Base view video stream and

the Dependent view video stream to the recording unit 513. The recording unit 513 records the database data supplied from the information generating unit 511, together with the Base view video stream and the Dependent view video stream supplied from the MVC encoder 512, on a recording medium such as a BD. The recording medium on which the recording unit 513 has recorded the data is provided to playback-side devices, for example, as the optical disc 2 described above. In the recording unit 513, various kinds of processing are performed before the Base view video stream and the Dependent view video stream are recorded. Examples include multiplexing the Base view video stream and the Dependent view video stream into the same TS, or multiplexing each of them together with other data into different TSs; removing the MVC header from the Access Units of the Base view video; and packetization, which divides the Base view video stream and the Dependent view video stream into source packets. Fig. 53 is a block diagram showing a configuration example of the MVC encoder 512 of Fig. 52. As shown in Fig. 53, the MVC encoder 512 includes a Base view video encoder 521 and a Dependent view video encoder 522.
L image data is input to the Base view video encoder 521 and to the Dependent view video encoder 522, and R image data is input to the Dependent view video encoder 522. The R image data may also be input to the Base view video encoder 521 and encoded as the Base view video.
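The division of labor between the two encoders, together with the picture-type pairing that keeps the two GOP structures identical (described with the flowchart of Fig. 55 below), can be sketched as follows; the function names and the GOP layout are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch: the Base view (L) picture type is decided first,
# and the corresponding Dependent view (R) picture mirrors it so the two
# streams end up with the same GOP structure: I/IDR pictures in the Base
# view are paired with anchor pictures in the Dependent view.

def base_picture_type(position_in_gop: int) -> str:
    # Assumption for illustration: the first picture of each GOP is IDR.
    return "IDR" if position_in_gop == 0 else "P"

def dependent_picture_type(base_type: str) -> str:
    # An L picture encoded as I/IDR forces the paired R picture to be an
    # anchor picture; other types are mirrored (simplified assumption).
    return "Anchor" if base_type in ("I", "IDR") else base_type

gop = [base_picture_type(i) for i in range(4)]
print(gop)                                       # ['IDR', 'P', 'P', 'P']
print([dependent_picture_type(t) for t in gop])  # ['Anchor', 'P', 'P', 'P']
```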

The Base view video encoder 521 encodes the L image data in accordance with, for example, the H.264 AVC standard. The Base view video encoder 521 attaches the additional information for the Base view video to each picture obtained by the encoding, and outputs the result as the Base view video stream.

The Dependent view video encoder 522 encodes the R image data in accordance with the H.264 AVC/MVC profile standard, referring to the L image data as appropriate. The Dependent view video encoder 522 attaches the additional information for the Dependent view video to each picture obtained by the encoding, and outputs the result as the Dependent view video stream.

[Operation of Recording Apparatus]

Here, the recording process of the recording device 501 is described with reference to the flowchart of Fig. 54. In step S1, the information generating unit 511 generates the database data, consisting of the PlayList file and the Clip Information files, and the additional information to be attached to each picture of the L image data and the R image data. In step S2, the encoding process is performed by the MVC encoder 512.
The Base view video stream and the Dependent view video stream generated by the encoding process are supplied to the recording unit 513. In step S3, the recording unit 513 records the database data generated by the information generating unit 511, and the Base view video stream and Dependent view video stream generated by the MVC encoder 512, on the recording medium. The process then ends. Next, the encoding process performed in step S2 of Fig. 54 is described with reference to the flowchart of Fig. 55. In step S11, the Base view video encoder 521 selects one picture (one frame) of the input L images as the picture to be encoded. In step S12, the Base view video encoder 521 determines whether to encode the L image to be encoded as an I picture or an IDR picture. When encoding conditions are set, such as the number of pictures making up one GOP or the number of I pictures or IDR pictures contained in one GOP, the picture type of the L image to be encoded is determined according to, for example, the position of the picture when the pictures are arranged in encoding order. When it is determined in step S12 that the picture is to be encoded as an I picture or an IDR picture, then in step S13 the Base view video encoder 521 decides the picture type of the L image to be encoded as I picture or IDR picture. In step S14, the Dependent view video encoder 522 detects, among the input R images, the one picture corresponding to the L image whose picture type was decided as I picture or IDR picture in step S13. As described above, when the pictures are arranged in display order and in encoding order, an L image and an R image at the same time and the same position are corresponding pictures. In step S15, the Dependent view video encoder 522 decides the picture type of the detected R image as anchor picture.
On the other hand, when it is determined in step S12 that the L image to be encoded is not to be encoded as an I picture or an IDR picture, then in step S16 the Base

view video encoder 521 determines the picture type according to the position of the L image to be encoded. In step S17, the Dependent view video encoder 522 detects, among the input R images, the one picture corresponding to the L image whose picture type was determined in step S16. In step S18, the Dependent view video encoder 522 decides the picture type of the detected R image in accordance with the picture type determined for the L image currently selected for encoding. In step S19, the Base view video encoder 521 encodes the L image to be encoded in accordance with the determined picture type, and the Dependent view video encoder 522 encodes the R image detected in step S14 or S17 in accordance with the determined picture type. In step S20, the Base view video encoder 521 attaches the additional information to the encoded Base view video picture, and the Dependent view video encoder 522 attaches the additional information to the encoded Dependent view video picture. In step S21, the Base view video encoder 521 determines whether the L image currently selected for encoding is the last picture.
When it is determined in step S21 that the L image currently selected as the encoding target is not the last picture, the process returns to step S11, the picture to be encoded is switched, and the above processing is repeated. When it is determined in step S21 that the currently selected L image is the last picture, the process returns to step S2 of Fig. 54 and the subsequent processing is performed.

By the above processing, the data of the L images and the data of the R images are encoded such that the encoded Base view video stream and Dependent view video stream have the same GOP structure.

In addition, additional information of the same content can be added to each picture of the Base view video and to the corresponding picture of the Dependent view video.

[Configuration of Playback Device]

Fig. 56 is a block diagram showing a configuration example of a playback device that plays back a recording medium on which data has been recorded by the recording device 501.

As shown in Fig. 56, the playback device 502 includes an acquisition unit 531, a control unit 532, an MVC decoder 533, and an output unit 534. The acquisition unit 531 corresponds to, for example, the disk drive 52 of Fig. 20, and the control unit 532 corresponds to the controller 51 of Fig. 20. The MVC decoder 533 corresponds to part of the configuration of the decoding unit 56 of Fig. 20.

Under the control of the control unit 532, the acquisition unit 531 reads data from a recording medium on which data has been recorded by the recording device 501 and which is mounted in the playback device 502. The acquisition unit 531 outputs the database data read from the recording medium to the control unit 532, and outputs the Base view video stream and the Dependent view video stream to the MVC decoder 533.

The control unit 532 controls the overall operation of the playback device 502, such as the reading of data from the recording medium.
For example, the control unit 532 obtains the database data by controlling the acquisition unit 531 to read it from the recording medium. When playback of a playlist for 3D playback included in the obtained database data (a playlist whose 3D_PL_type value in Fig. 13 is 01) is instructed, the control unit 532 supplies information such as the stream IDs described in the playlist to the acquisition unit 531, and causes the Base view video stream and the Dependent view video stream to be read from the recording medium. The control unit 532 also controls the MVC decoder 533 to decode the Base view video stream and the Dependent view video stream.

The MVC decoder 533 decodes the Base view video stream and the Dependent view video stream under the control of the control unit 532, and outputs the data obtained by the decoding to the output unit 534. For example, in accordance with view_type (Fig. 14), the MVC decoder 533 outputs the data obtained by decoding the Base view video stream as L image data, and the data obtained by decoding the Dependent view video stream as R image data.

The output unit 534 outputs the L image data and the R image data supplied from the MVC decoder 533 to a display, which displays the L images and the R images.

Fig. 57 is a block diagram showing a configuration example of the MVC decoder 533.

As shown in Fig. 57, the MVC decoder 533 includes a CPB 541, a decoder 542, and a DPB 543. The CPB 541 includes the B video buffer 106 and the D video buffer 108 of Fig. 22. The decoder 542 corresponds to the video decoder 110 of Fig. 22, and the DPB 543 corresponds to the DPB 151 of Fig. 22. Although not shown, a circuit corresponding to the switch 109 of Fig. 22 is also provided between the CPB 541 and the decoder 542.
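The routing described above — the MVC decoder 533 assigns the decoded Base view data and Dependent view data to the L and R outputs according to view_type (Fig. 14) — might be sketched as follows. The convention that the value 0 means "the Base view is the L image" is an assumption for illustration.

```python
# Sketch: route decoded Base/Dependent view pictures to L/R outputs
# according to view_type. The meaning of value 0 ("Base view is the
# L image") is an assumed convention, not quoted from the text.

def route_views(view_type, base_picture, dependent_picture):
    if view_type == 0:
        return {'L': base_picture, 'R': dependent_picture}
    return {'L': dependent_picture, 'R': base_picture}

out = route_views(0, 'base_pic', 'dep_pic')
assert out == {'L': 'base_pic', 'R': 'dep_pic'}
```

The point of carrying view_type in the database data is exactly this: the decoder itself does not know which view is left and which is right, so the playlist-level flag decides the mapping.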
The CPB 541 stores the data of the Base view video stream and the data of the Dependent view video stream supplied from the acquisition unit 531. The data of the Base view video stream stored in the CPB 541 is read out by the decoder 542 in units of the data constituting one Access Unit. Similarly, the data of the Dependent view video stream stored in the CPB 541 is read out by the decoder 542 in units of the data constituting one Dependent Unit.

The decoder 542 decodes the data read from the CPB 541 and outputs the data of each decoded picture of the Base view video and the Dependent view video to the DPB 543.

The DPB 543 stores the data supplied from the decoder 542. The data of each picture of the Base view video and the Dependent view video stored in the DPB 543 is referred to as appropriate by the decoder 542 when decoding subsequent pictures in decoding order. The data of each picture stored in the DPB 543 is output in accordance with the display time of the picture indicated by its Picture Timing SEI, or the like.

[Operation of Playback Device]

Here, the playback processing of the playback device 502 will be described with reference to the flowchart of Fig. 58.

Note that in Fig. 58 the steps are shown as if the Dependent view video stream is processed after the Base view video stream, but in practice the Base view video stream processing and the Dependent view video stream processing are performed in parallel as appropriate. The same applies to the other flowcharts relating to the processing of the Base view video stream and the Dependent view video stream.

In step S31, the acquisition unit 531 reads data from the recording medium mounted in the playback device 502. The acquisition unit 531 outputs the read database data to the control unit 532, and outputs the data of the Base view video stream and the data of the Dependent view video stream to the MVC decoder 533.

In step S32, the MVC decoder 533 performs decoding processing.

In step S33, the output unit 534 outputs the L image data and the R image data supplied from the MVC decoder 533 to the display, which displays the L images and the R images. Thereafter, the processing ends.

Next, the decoding processing performed in step S32 of Fig. 58 will be described with reference to the flowcharts of Figs. 59 and 60.

In step S41, the CPB 541 stores the data of the Base view video stream and the data of the Dependent view video stream. The data stored in the CPB 541 is read out as appropriate by the control unit 532.

In step S42, the control unit 532 refers to the data stored in the CPB 541 and detects the boundaries of the Access Units of the Base view video stream. The boundaries of the Access Units are detected, for example, by detecting Access Unit delimiters. The data from one boundary to the next constitutes one Access Unit. The data of one Access Unit includes the data of one picture of the Base view video and the additional information added to it.

In step S43, the control unit 532 determines whether a Picture Timing SEI is encoded (contained) in the one Access Unit of the Base view video whose boundary has been detected.

When it is determined in step S43 that a Picture Timing SEI is encoded, the control unit 532 reads out the Picture Timing SEI in step S44.

In step S45, in accordance with the extraction time (read-out time) described in the read Picture Timing SEI, the control unit 532 causes the picture data of the Base view video in the data of the one Access Unit whose boundary has been detected to be supplied from the CPB 541 to the decoder 542.

On the other hand, when it is determined in step S43 that no Picture Timing SEI is encoded, in step S46 the control unit 532 causes the picture data of the Base view video in the data of the one Access Unit whose boundary has been detected to be supplied from the CPB 541 to the decoder 542 in accordance with the system information (DTS).

In step S47, the decoder 542 decodes the data supplied from the CPB 541. When a picture of the Base view video is decoded, the decoded pictures stored in the DPB 543 are referred to as appropriate.

In step S48, the DPB 543 stores the data of the Base view video picture obtained by the decoding.

In step S49, the control unit 532 calculates and stores the POC of the decoded Base view video picture.

In step S50, the control unit 532 detects the boundaries of the Dependent Units of the Dependent view video stream, and detects the Dependent Unit of the Dependent view video stream corresponding to the Access Unit of the Base view video stream whose boundary was detected in step S42.

In step S51, the control unit 532 determines whether a Picture Timing SEI is encoded in the one Dependent Unit of the Dependent view video whose boundary has been detected.

When it is determined in step S51 that a Picture Timing SEI is encoded, the control unit 532 reads out the Picture Timing SEI in step S52.

In step S53, in accordance with the extraction time described in the read Picture Timing SEI, the control unit 532 causes the picture data of the Dependent view video in the data of the one Dependent Unit whose boundary has been detected to be supplied from the CPB 541 to the decoder 542.

On the other hand, when it is determined in step S51 that no Picture Timing SEI is encoded, in step S54 the control unit 532 causes the picture data of the Dependent view video in the data of the one Dependent Unit whose boundary has been detected to be supplied from the CPB 541 to the decoder 542 in accordance with the system information.

Note that when a decoder for Base view video and a decoder for Dependent view video are provided separately in the MVC decoder 533, the picture data of the Dependent view video stored in the CPB 541 is supplied to the decoder for Dependent view video at the same timing as the timing at which the picture data of the Base view video is supplied from the CPB 541 to the decoder for Base view video.

In step S55, the decoder 542 decodes the data supplied from the CPB 541. When a picture of the Dependent view video is decoded, the decoded pictures of the Base view video and the decoded pictures of the Dependent view video stored in the DPB 543 are referred to as appropriate.

In step S56, the DPB 543 stores the data of the Dependent view video picture obtained by the decoding. By repeating the above processing, a plurality of Base view video pictures whose POC values have been calculated and the corresponding Dependent view video pictures are stored in the DPB 543.
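The timing selection made in steps S43 through S46 and S51 through S54 — use the extraction time from a Picture Timing SEI when one is encoded in the unit, otherwise fall back to the system information (DTS) — can be sketched as follows. The dictionary field names are illustrative assumptions.

```python
# Sketch: choose the CPB read-out time for an Access Unit or Dependent
# Unit. Prefer the extraction time carried in a Picture Timing SEI when
# present; otherwise fall back to the DTS from the system information.
# Field names ('picture_timing_sei', 'extraction_time', 'dts') are
# illustrative, not taken from the standard.

def cpb_read_time(unit):
    sei = unit.get('picture_timing_sei')
    if sei is not None:
        return sei['extraction_time']
    return unit['dts']

# Unit carrying a Picture Timing SEI: the SEI time wins.
assert cpb_read_time({'picture_timing_sei': {'extraction_time': 900},
                      'dts': 1000}) == 900
# Unit without one: the DTS is used.
assert cpb_read_time({'picture_timing_sei': None, 'dts': 1000}) == 1000
```

Both branches deliver the same picture data from the CPB to the decoder 542; only the time at which the transfer occurs differs.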

No POC value calculation is performed for the Dependent view video pictures.

In step S57, the control unit 532 causes the picture with the smallest POC value among the Base view video pictures stored in the DPB 543 to be output from the DPB 543, and causes the corresponding Dependent view video picture to be output from the DPB 543 at the same timing. The pictures output from the DPB 543 are supplied to the output unit 534.
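The output rule of step S57 — pop the Base view picture with the smallest POC from the DPB and emit the paired Dependent view picture at the same timing — can be sketched with a toy DPB. The tuple layout is an illustrative assumption.

```python
# Sketch: output the base-view picture with the smallest POC from the
# DPB together with the dependent-view picture decoded for the same
# display time. POC is tracked for base-view pictures only, as in the
# text. The (poc, base, dependent) tuple layout is illustrative.

def output_next(dpb):
    """dpb: list of (poc, base_picture, dependent_picture) tuples."""
    entry = min(dpb, key=lambda e: e[0])
    dpb.remove(entry)
    poc, base, dep = entry
    return base, dep  # both views are output at the same timing

dpb = [(2, 'B2', 'D2'), (0, 'B0', 'D0'), (1, 'B1', 'D1')]
assert output_next(dpb) == ('B0', 'D0')
assert output_next(dpb) == ('B1', 'D1')
```

Because only the Base view POC is consulted, the Dependent view picture never needs its own POC: it simply rides along with its paired Base view picture.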

When a Picture Timing SEI is added to a Base view video picture, that picture is output in accordance with the display time described in the Picture Timing SEI. On the other hand, when no Picture Timing SEI is added, the picture is output in accordance with the display time indicated by the system information (PTS).

In step S58, the control unit 532 determines whether the output of all the pictures of the Base view video and the Dependent view video has been completed. When the control unit 532 determines in step S58 that the output of all the pictures has not been completed, the process returns to step S41 and the above processing is repeated. When it is determined in step S58 that the output of all the pictures has been completed, the process returns to step S32 of Fig. 58 and the subsequent processing is performed.

By the above processing, a Base view video stream and a Dependent view video stream that have been encoded so as to have the same GOP structure, and to which the same additional information has been added for each picture, can be decoded.

Next, the random access playback processing of the playback device 502 using EP_map will be described with reference to the flowchart of Fig. 61.
In step S71, the control unit 532 controls the acquisition unit 531 to read the Clip Information files of the Clip of the Base view video stream and the Clip of the Dependent view video stream. The control unit 532 also obtains the EP_map for Base view video and the EP_map for Dependent view video. As described above, an EP_map for Base view video and an EP_map for Dependent view video are prepared separately.

In step S72, the control unit 532 obtains a PTS representing the start time of random access playback, based on a user operation or the like. For example, when a chapter set in the video stream is selected from a menu screen, the PTS of the selected chapter is obtained.

In step S73, the control unit 532 determines, from the EP_map for Base view video, the source packet number indicated by the SPN_EP_start corresponding to the obtained PTS of the playback start time. The control unit 532 also reads out the address, on the recording medium, at which the source packet identified by the determined source packet number is recorded, and sets it as the start address.

For example, using the 14 bits on the MSB side of the 32 bits constituting the PTS, EP_coarse, which is a sub-table of the EP_map for Base view video, is searched to determine PTS_EP_coarse and the corresponding ref_to_EP_fine_id and SPN_EP_coarse. Then, based on the determined ref_to_EP_fine_id, EP_fine is searched to determine the entry PTS_EP_fine corresponding to the 11-bit value starting from the 10th bit on the LSB side.

The source packet number indicated by the SPN_EP_coarse corresponding to PTS_EP_fine is determined, the address at which the source packet identified by that source packet number is recorded is read out, and it is decided as the start address. The address of each source packet on the recording medium is determined by the file system that manages the data recorded on the recording medium.
In step S74, the control unit 532 determines, from the EP_map for Dependent view video, the source packet number indicated by the SPN_EP_start corresponding to the obtained PTS of the playback start time. The source packet number indicated by the SPN_EP_start corresponding to the PTS is likewise determined using the sub-tables constituting the EP_map for Dependent view video. The control unit 532 then reads out the address, on the recording medium, at which the source packet identified by the determined source packet number is recorded, and sets it as the start address.

In step S75, the acquisition unit 531 starts reading the data of each source packet constituting the Base view video stream from the read start address set in step S73. The acquisition unit 531 also starts reading the data of each source packet constituting the Dependent view video stream from the read start address set in step S74.

The read data of the Base view video stream and the data of the Dependent view video stream are supplied to the MVC decoder 533. By performing the processing described with reference to Figs. 59 and 60, decoding is performed from the playback start position designated by the user.

In step S76, the control unit 532 determines whether a further search is to be performed, that is, whether random access playback from another position has been instructed. When it is determined that it has been instructed, the processing from step S71 onward is repeated.
When it is determined in step S76 that random access playback from another position has not been instructed, the processing ends.

[Buffer Control Information]

As described above, the H.264 AVC/MVC profile standard defines the Base view video stream, which is the video stream serving as the base, and the Dependent view video stream, which is the video stream encoded and decoded on the basis of the Base view video stream.

The H.264 AVC/MVC profile standard permits the Base view video stream and the Dependent view video stream to exist as a single video stream, and also permits them to exist as separate, independent video streams.

Fig. 62A is a diagram showing a state in which the Base view video stream and the Dependent view video stream exist as a single video stream.

In the example of Fig. 62A, the entire Base view video stream and the entire Dependent view video stream are each divided into specific intervals, and one elementary stream is formed such that the intervals of the two streams are interleaved. In Fig. 62A, the intervals labeled "B" represent intervals of the Base view video stream, and the intervals labeled "D" represent intervals of the Dependent view video stream.

Fig. 62B is a diagram showing a state in which the Base view video stream and the Dependent view video stream exist as separate, independent video streams.

In the BD-ROM 3D standard, as shown in Fig. 62B, the Base view video stream and the Dependent view video stream are each required to be recorded on the disc as independent elementary streams. The Base view video stream is also required to be a stream encoded in the H.264/AVC standard. These restrictions allow a BD player that does not support 3D playback to play back at least the Base view video stream (2D playback).

Therefore, in the BD-ROM 3D standard, whether only the Base view video stream encoded in the H.264/AVC standard is played back, or the Base view video stream and the Dependent view video stream are played back together, the streams must be encoded in advance on the recording device side so that playback is performed correctly. Specifically, they must be encoded in advance in such a way that no buffer underflow or overflow occurs.

In the H.264/AVC standard, two kinds of buffer control information can be encoded in a stream so that buffer underflow and the like do not occur. In the BD-ROM 3D standard as well, both decoding of only the Base view video stream and decoding of the Base view video stream together with the Dependent view video stream are assumed, and the buffer control information must be encoded in the streams in advance.

Among playback devices that support the BD-ROM 3D standard, some decode the Base view video stream and the Dependent view video stream with a single decoder, and some decode them with two decoders, one for Base view video and one for Dependent view video. The BD-ROM 3D standard does not specify the number of decoders.

Therefore, in the BD-ROM 3D standard, whether decoding is performed with one decoder or with two decoders, the buffer control information must be encoded in the streams in advance on the recording device side so that playback is performed correctly.

Accordingly, in the recording device, the buffer control information is encoded as follows:

1. In the Base view video stream, a value is encoded that allows correct playback when only the Base view video stream is played back.

2. In the Dependent view video stream, a value is encoded that allows correct playback when the Dependent view video stream is played back with an independent decoder (a decoder for Dependent view video).

3. In the Dependent view video stream, a value is encoded that allows correct playback when the Base view video stream and the Dependent view video stream are played back together with a single decoder.
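The three encoding rules above can be viewed as three independent rate constraints that the recording device must verify at authoring time. The sketch below is schematic; the bit-rate figures are illustrative and are not taken from the BD-ROM specification.

```python
# Sketch: schematic authoring-time check of the three buffer-control
# values described above (rates in bits/s; the concrete numbers used in
# the example are illustrative, not values from the BD-ROM standard).

def check_buffer_control(base_rate, dep_rate_independent, combined_rate,
                         base_hrd, dep_hrd_sps, dep_hrd_mvc_vui):
    ok = True
    ok &= base_rate <= base_hrd                # 1. Base view stream alone
    ok &= dep_rate_independent <= dep_hrd_sps  # 2. Dependent view, own decoder
    ok &= combined_rate <= dep_hrd_mvc_vui     # 3. both views, one decoder
    return bool(ok)

assert check_buffer_control(40_000_000, 30_000_000, 60_000_000,
                            40_000_000, 30_000_000, 60_000_000)
assert not check_buffer_control(50_000_000, 30_000_000, 60_000_000,
                                40_000_000, 30_000_000, 60_000_000)
```

Each constraint corresponds to one of the three playback paths a compliant player may take, which is why a single limit value cannot cover all cases.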
[Specific Example of Encoding Positions]

In the Base view video stream and the Dependent view video stream, HRD (Hypothetical Reference Decoder) parameters and max_dec_frame_buffering are encoded as the buffer control information.

The HRD parameters include information indicating the maximum bit rate of the input from the CPB to the decoder. They may also include information indicating the maximum bit rate of the input to the CPB, information indicating the buffer size of the CPB, and a flag indicating whether the HRD is CBR (Constant Bit Rate).

max_dec_frame_buffering is information indicating the maximum number of pictures (reference pictures) that can be stored in the DPB.

Fig. 63 is a diagram showing an example of the encoding position of the HRD parameters in the Base view video stream.

As shown in Fig. 63, the HRD parameters are encoded as one piece of information of the SPS contained in each Access Unit constituting the Base view video stream. In the example of Fig. 63, they are encoded as one piece of information of the VUI (Video Usability Information) contained in the SPS.

The HRD parameters of Fig. 63 indicate the maximum bit rate of the input to the decoder when only the Base view video stream is played back. When the bus between the CPB and the decoder is used to transfer only the data of the Base view video stream, the transfer rate is limited to at most the bit rate indicated by the HRD parameters.

Note that the AUD in Fig. 63 corresponds to the AU delimiter described with reference to Fig. 51, and Slices correspond to the data of the one picture contained in the Access Unit of Fig. 63.

Fig. 64 is a diagram showing the description format of seq_parameter_set_data() (SPS) when the HRD parameters are encoded at the position shown in Fig. 63.

As shown in Fig. 64, hrd_parameters() (the HRD parameters) is described in vui_parameters() (the VUI) in seq_parameter_set_data().
Fig. 65 is a diagram showing an example of the encoding position of max_dec_frame_buffering in the Base view video stream.

As shown in Fig. 65, max_dec_frame_buffering is also encoded as one piece of information of the SPS contained in each Access Unit constituting the Base view video stream. In the example of Fig. 65, it is encoded as one piece of information of the VUI contained in the SPS.

The max_dec_frame_buffering of Fig. 65 indicates the maximum number of pictures that can be stored in the DPB when only the Base view video stream is played back. When one DPB is used to store only the decoded pictures of the Base view video stream, the number of pictures stored in the DPB is limited to at most the number indicated by max_dec_frame_buffering.

Fig. 66 is a diagram showing the description format of seq_parameter_set_data() when max_dec_frame_buffering is encoded at the position shown in Fig. 65.

As shown in Fig. 66, max_dec_frame_buffering is described in vui_parameters() in seq_parameter_set_data().

Hereinafter, the HRD parameters encoded in the Base view video stream in the manner shown in Fig. 63 are referred to as the first HRD parameters as appropriate. Likewise, the max_dec_frame_buffering encoded in the Base view video stream in the manner shown in Fig. 65 is referred to as the first max_dec_frame_buffering.

Fig. 67 is a diagram showing an example of the encoding position of the HRD parameters in the Dependent view video stream.

As shown in Fig. 67, the HRD parameters are encoded as one piece of information of the SubsetSPS contained in each Dependent Unit constituting the Dependent view video stream. In the example of Fig. 67, they are encoded as one piece of information of the SPS contained in the SubsetSPS.

The HRD parameters encoded as one piece of information of the SPS indicate the maximum bit rate of the input to the decoder for Dependent view video when the Dependent view video stream is played back with an independent decoder.
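The limit that max_dec_frame_buffering imposes — the number of pictures held in the DPB must never exceed the encoded value — can be modeled with a toy DPB class. This is a sketch of the constraint only, not of a real decoder's reference-picture management.

```python
# Sketch: a toy DPB that refuses to hold more pictures than
# max_dec_frame_buffering allows. A conforming encoder must produce a
# stream that never requires exceeding this limit; this model only
# illustrates the bound, not H.264's actual reference-list handling.

class DPB:
    def __init__(self, max_dec_frame_buffering):
        self.limit = max_dec_frame_buffering
        self.pictures = []

    def store(self, picture):
        if len(self.pictures) >= self.limit:
            raise OverflowError("DPB full: stream exceeds "
                                "max_dec_frame_buffering")
        self.pictures.append(picture)

dpb = DPB(max_dec_frame_buffering=4)
for n in range(4):
    dpb.store(f"pic{n}")   # fills the DPB exactly to its limit
try:
    dpb.store("pic4")      # a fifth picture would violate the limit
    overflowed = False
except OverflowError:
    overflowed = True
assert overflowed
```

The same model explains why a second, larger value is needed when one DPB holds both views' pictures: the bound applies to the total occupancy, whichever streams contribute to it.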
When the bus between the CPB and the independent decoder is used to transfer only the data of the Dependent view video stream, the transfer rate is limited to the bit rate indicated by the HRD parameters. Fig. 68 is a diagram showing the description format of subset_seq_parameter_set_data() (SubsetSPS) when the HRD parameters are encoded as one piece of information of the SPS. SubsetSPS is a description of parameters extending the SPS of H.264/AVC, and includes information indicating the dependency between views. As shown in Fig. 68, hrd_parameters() is described in vui_parameters() in seq_parameter_set_data() in subset_seq_parameter_set_data(). In the example of Fig. 67, the HRD parameters are also encoded as one piece of information of the MVC VUI Ext included in the SubsetSPS. The HRD parameters encoded as one piece of information of the MVC VUI Ext represent the maximum bit rate of the input to the decoder when the Base view video stream and the Dependent view video stream are played together by one decoder. When the bus between the CPB and the one decoder is used to transfer the data of the Base view video stream and the data of the Dependent view video stream, the transfer rate is limited to the bit rate indicated by the HRD parameters. Fig. 69 is a diagram showing the description format of subset_seq_parameter_set_data() when the HRD parameters are encoded as one piece of information of the MVC VUI Ext. As shown in Fig. 69, hrd_parameters() is described in mvc_vui_parameters_extension() (MVC VUI Ext) in subset_seq_parameter_set_data(). Hereinafter, the HRD parameters encoded as one piece of information of the SPS in the Dependent view video stream in the manner shown in Fig. 67 (the left side of Fig. 67) are referred to as the second HRD parameters, as appropriate. Likewise, the HRD parameters encoded as one piece of information of the MVC VUI Ext in the Dependent view video stream (the right side of Fig. 67) are referred to as the third HRD parameters. Fig. 70 is a diagram showing an example of the encoding position of max_dec_frame_buffering in the Dependent view video stream. As shown in Fig. 70, max_dec_frame_buffering is encoded as one piece of information of the SubsetSPS included in each Dependent Unit constituting the Dependent view video stream. In the example of Fig. 70, it is encoded as one piece of information of the SPS included in the SubsetSPS. The max_dec_frame_buffering encoded as one piece of information of the SPS represents the maximum number of pictures that can be stored in the DPB when the Dependent view video stream is played by an independent decoder. When one DPB is used to store the decoded pictures of the Dependent view video stream only, the number of pictures stored in the DPB is limited to the number indicated by max_dec_frame_buffering. Fig. 71 is a diagram showing the description format of subset_seq_parameter_set_data() when max_dec_frame_buffering is encoded as one piece of information of the SPS. As shown in Fig. 71, max_dec_frame_buffering is described in vui_parameters() in seq_parameter_set_data() in subset_seq_parameter_set_data(). In the example of Fig. 70, max_dec_frame_buffering is also encoded as one piece of information of the SEI. The max_dec_frame_buffering encoded as one piece of information of the SEI represents the maximum number of pictures that can be stored in the DPB when the Base view video stream and the Dependent view video stream are played together by one decoder. When one DPB is used to store the decoded pictures of the Base view video stream and the decoded pictures of the Dependent view video stream, the number of pictures stored in the DPB is limited to the number indicated by max_dec_frame_buffering. Fig. 72 is a diagram showing the description format of sei_message() (SEI) when max_dec_frame_buffering is encoded as one piece of information of the SEI. As shown in Fig. 72, max_dec_frame_buffering is described in view_scalability_info() (view scalability information SEI) in sei_message().
Hereinafter, the max_dec_frame_buffering encoded as one piece of information of the SPS in the Dependent view video stream in the manner shown in Fig. 70 (the left side of Fig. 70) is referred to as the second max_dec_frame_buffering, as appropriate. Likewise, the max_dec_frame_buffering encoded as one piece of information of the SEI in the Dependent view video stream (the right side of Fig. 70) is referred to as the third max_dec_frame_buffering. As described above, three kinds of HRD parameters and three kinds of max_dec_frame_buffering are encoded in the Base view video stream and the Dependent view video stream. [Configuration of the devices] A recording device that records data including the buffer control information on a BD has the same configuration as the recording device 501 shown in Fig. 52, and a playback device that plays the data recorded on the BD has the same configuration as the playback device 502 shown in Fig. 56. Hereinafter, the configurations of the recording device and the playback device that use the buffer control information will be described with reference to the configurations of Figs. 52 and 56. Descriptions overlapping the above description will be omitted as appropriate.
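As a recap before the device description, the three kinds of buffer control information and their carriage positions can be collected into a small lookup table. The dictionary below is a hypothetical summary structure; the numbering and positions follow the text above.

```python
# Hypothetical summary of the three kinds of buffer control information.
BUFFER_CONTROL = {
    1: {"stream": "Base view video",
        "position": "SPS (VUI)",
        "scope": "playing only the Base view video stream"},
    2: {"stream": "Dependent view video",
        "position": "SubsetSPS (SPS)",
        "scope": "playing the Dependent view video stream with an "
                 "independent decoder"},
    3: {"stream": "Dependent view video",
        "position": "SubsetSPS (MVC VUI Ext) for the HRD parameters, "
                    "SEI for max_dec_frame_buffering",
        "scope": "playing both streams together with one decoder"},
}

for kind, entry in sorted(BUFFER_CONTROL.items()):
    print(kind, entry["stream"], "-", entry["scope"])
```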

The information generating unit 511 of the recording device 501 generates the database data, including the PlayList file and the Clip Information file, and also generates additional information for Base view video and additional information for Dependent view video. The additional information for Base view video contains the first HRD parameters and the first max_dec_frame_buffering. The additional information for Dependent view video contains the second and third HRD parameters and the second and third max_dec_frame_buffering. The information generating unit 511 outputs the generated database data to the recording unit 513, and outputs the additional information to the MVC encoder 512.

The MVC encoder 512 encodes the L image data and the R image data in accordance with the H.264 AVC/MVC profile standard, and generates the data of each picture of the Base view video obtained by encoding the L image data and the data of each picture of the Dependent view video obtained by encoding the R image data. The MVC encoder 512 also generates the Base view video stream by adding the additional information for Base view video generated by the information generating unit 511 to the data of each picture of the Base view video. In the Base view video stream, the first HRD parameters are encoded at the position shown in Fig. 63, and the first max_dec_frame_buffering is encoded at the position shown in Fig. 65. Similarly, the MVC encoder 512 generates the Dependent view video stream by adding the additional information for Dependent view video generated by the information generating unit 511 to the data of each picture of the Dependent view video. In the Dependent view video stream, the second and third HRD parameters are encoded at the position shown in Fig. 67, and the second and third max_dec_frame_buffering are encoded at the position shown in Fig. 70. The MVC encoder 512 outputs the generated Base view video stream and Dependent view video stream to the recording unit 513.

The recording unit 513 records on the BD the database data supplied from the information generating unit 511 and the Base view video stream and Dependent view video stream supplied from the MVC encoder 512. The BD on which the data has been recorded by the recording unit 513 is provided to the playback device 502.

The acquisition unit 531 of the playback device 502 reads data from the BD on which the data has been recorded by the recording device 501 and which is mounted in the playback device 502. The acquisition unit 531 outputs the database data read from the BD to the control unit 532, and outputs the Base view video stream and the Dependent view video stream to the MVC decoder 533. The control unit 532 controls the overall operation of the playback device 502, such as the reading of data from the recording medium. For example, when only the Base view video stream is to be played, the control unit 532 reads the first HRD parameters and the first max_dec_frame_buffering from the Base view video stream, and controls the decoding of the Base view video stream by the MVC decoder 533 based on the read information. When the Base view video stream and the Dependent view video stream are to be played (3D playback) and the MVC decoder 533 has one decoder, the control unit 532 reads the third HRD parameters and the third max_dec_frame_buffering from the Dependent view video stream, and controls the decoding of the Base view video stream and the Dependent view video stream by the MVC decoder 533 based on the read information. In accordance with the control by the control unit 532, the MVC decoder 533 decodes only the Base view video stream, or decodes the Base view video stream and the Dependent view video stream, and outputs the decoded data to the output unit 534. The output unit 534 outputs the image supplied from the MVC decoder 533 to a display, and causes a 2D image or a 3D image to be displayed.

[Operation of the devices] Here, the recording process of the recording device 501 will be described with reference to the flowchart of Fig. 73. In step S101, the information generating unit 511 generates the database data and the additional information, containing the buffer control information, to be added to each picture of the Base view video and the Dependent view video.
In step S102, the encoding process is performed by the MVC encoder 512. Here, the same process as that described with reference to Fig. 55 is performed, and the buffer control information generated in step S101 is added to each picture of the Base view video and the Dependent view video.

The Base view video stream and the Dependent view video stream generated by the encoding process are supplied to the recording unit 513. In step S103, the recording unit 513 causes the BD to record the database data generated by the information generating unit 511 and the Base view video stream and Dependent view video stream generated by the MVC encoder 512.
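Steps S101 through S103 can be sketched as a small pipeline. The function names and dictionary shapes below are illustrative stand-ins for the information generating unit 511, the MVC encoder 512, and the recording unit 513, not real interfaces.

```python
def generate_info():
    # S101: database data plus per-view additional information
    database = {"playlist": "PlayList file", "clip": "Clip Information file"}
    base_extra = {"hrd": [1], "max_dec": [1]}        # first parameters
    dep_extra = {"hrd": [2, 3], "max_dec": [2, 3]}   # second and third
    return database, base_extra, dep_extra

def encode(base_extra, dep_extra):
    # S102: encode L/R pictures, attaching the buffer control information
    base_stream = {"view": "Base", "buffer_control": base_extra}
    dep_stream = {"view": "Dependent", "buffer_control": dep_extra}
    return base_stream, dep_stream

def record(database, base_stream, dep_stream):
    # S103: record the database data and both streams on the BD
    return {"database": database, "streams": (base_stream, dep_stream)}

database, base_extra, dep_extra = generate_info()
bd = record(database, *encode(base_extra, dep_extra))
print(len(bd["streams"]))  # 2
```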

Thereafter, the process ends.

Next, the playback process of the playback device 502 will be described with reference to the flowchart of Fig. 74. In step S111, the acquisition unit 531 reads data from the BD mounted in the playback device 502. The acquisition unit 531 outputs the read database data to the control unit 532 and, when 3D playback is to be performed, for example, outputs the data of the Base view video stream and the data of the Dependent view video stream to the MVC decoder 533. In step S112, the control unit 532 reads the buffer control information from the stream data read from the BD and supplied, and sets parameters in the MVC decoder 533. As described below, the stream serving as the read source of the buffer control information changes according to the streams read from the BD or according to the configuration of the MVC decoder 533. In step S113, the MVC decoder 533 performs the decoding process described with reference to Figs. 59 and 60 in accordance with the parameters set by the control unit 532. In step S114, the output unit 534 outputs the image data obtained by the decoding process of the MVC decoder 533 to the display. Thereafter, the process ends.

[Specific example of parameter setting] A specific example of parameter setting performed using the buffer control information will now be described. Here, assume that the maximum bit rate of the input to the decoder when only the Base view video stream is played is 40 Mbps.

Assume also that the maximum bit rate of the input to the decoder for Dependent view video when the Dependent view video stream is played by an independent decoder is 40 Mbps, and that the maximum bit rate of the input to the decoder when the Base view video stream and the Dependent view video stream are played together by one decoder is 60 Mbps. In this case, in the recording device 501, a value representing 40 Mbps is encoded as both the value of the first HRD parameters and the value of the second HRD parameters, and a value representing 60 Mbps is encoded as the value of the third HRD parameters. Likewise, assume that the maximum number of pictures that can be stored in the DPB when only the Base view video stream is played is 4, that the maximum number of pictures that can be stored in the DPB when the Dependent view video stream is played by an independent decoder is 4, and that the maximum number of pictures that can be stored in the DPB when the Base view video stream and the Dependent view video stream are played together by one decoder is 6. In this case, in the recording device 501, a value representing 4 pictures is encoded as both the value of the first max_dec_frame_buffering and the value of the second max_dec_frame_buffering, and a value representing 6 pictures is encoded as the value of the third max_dec_frame_buffering.

Fig. 75 is a diagram showing an example of a case where only the Base view video stream is decoded in the MVC decoder 533 having one decoder. In this case, as shown in Fig. 75, the first HRD parameters and the first max_dec_frame_buffering encoded in the Base view video stream are read out by the control unit 532.
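The example values above, and the choice of which variant the control unit reads for each decoder configuration (worked through for Figs. 75 to 81), can be laid out as follows. The mode names and the selection function are hypothetical; only the numeric limits come from the text.

```python
HRD_MAX_BIT_RATE_BPS = {1: 40_000_000, 2: 40_000_000, 3: 60_000_000}
MAX_DEC_FRAME_BUFFERING = {1: 4, 2: 4, 3: 6}

def select_buffer_control(mode, decoders=1, shared_dpb=True):
    """Return which HRD parameters / max_dec_frame_buffering to read."""
    if mode == "2D":                 # Fig. 75 / Fig. 78: Base view only
        return {"hrd": [1], "max_dec": [1]}
    if decoders == 1:                # Fig. 76: one decoder for both views
        return {"hrd": [3], "max_dec": [3]}
    if shared_dpb:                   # Fig. 79: two decoders, common DPB
        return {"hrd": [1, 2], "max_dec": [3]}
    return {"hrd": [1, 2], "max_dec": [1, 2]}   # Fig. 80: fully separate

# The combined limits never exceed the sums of the per-view limits.
assert HRD_MAX_BIT_RATE_BPS[3] <= HRD_MAX_BIT_RATE_BPS[1] + HRD_MAX_BIT_RATE_BPS[2]
assert MAX_DEC_FRAME_BUFFERING[3] <= (MAX_DEC_FRAME_BUFFERING[1]
                                      + MAX_DEC_FRAME_BUFFERING[2])
print(select_buffer_control("3D", decoders=2, shared_dpb=True))
```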

The buffer control information D1, indicated by hatching on the Base view video stream, represents the first HRD parameters and the first max_dec_frame_buffering. The control unit 532 sets the maximum bit rate of the input from the CPB 541 to the decoder 542 to 40 Mbps, based on the first HRD parameters. For example, the maximum bit rate is set by securing 40 Mbps as the bandwidth of the bus between the CPB 541 and the decoder 542. Further, the control unit 532 sets the maximum number of pictures that can be stored in the DPB 543 to 4, based on the first max_dec_frame_buffering. For example, the maximum number of storable pictures is set by securing, in the storage area of the DPB 543, an area capable of storing four decoded pictures. The Base view video stream can thus be decoded with one decoder, as intended on the recording side. If the Base view video stream is encoded on the recording side in such a manner that it can be decoded within the constraints, buffer failure on the playback side can be prevented.

Fig. 76 is a diagram showing an example of a case where the Base view video stream and the Dependent view video stream are decoded in the MVC decoder 533 having one decoder. In this case, as shown in Fig. 76, the third HRD parameters and the third max_dec_frame_buffering encoded in the Dependent view video stream are read out by the control unit 532. The buffer control information D2, indicated by hatching on the Dependent view video stream, represents the second HRD parameters and the second max_dec_frame_buffering, and the buffer control information D3 represents the third HRD parameters and the third max_dec_frame_buffering. The control unit 532 sets the maximum bit rate of the input from the CPB 541 to the decoder 542 to 60 Mbps, based on the third HRD parameters, and sets the maximum number of pictures that can be stored in the DPB 543 to 6, based on the third max_dec_frame_buffering. The Base view video stream and the Dependent view video stream can thus be decoded as intended on the recording side. If the Base view video stream and the Dependent view video stream are encoded on the recording side in such a manner that they can be decoded within the constraints, buffer failure on the playback side can be prevented.

Fig. 77 is a block diagram showing another configuration example of the MVC decoder 533. In the configuration shown in Fig. 77, the same components as those shown in Fig. 57 are denoted by the same reference numerals, and overlapping descriptions will be omitted as appropriate. In the example of Fig. 77, two decoders, a decoder 542-1 and a decoder 542-2, are provided. The decoder 542-1 is a decoder for Base view video, and the decoder 542-2 is a decoder for Dependent view video. The data of the Base view video stream stored in the CPB 541 is read out by the decoder 542-1 in units of the data constituting one Access Unit, and the Dependent view video stream stored in the CPB 541 is read out by the decoder 542-2 in units of the data constituting one Dependent Unit. The decoder 542-1 decodes the data read from the CPB 541 and outputs the data of each picture of the Base view video obtained by the decoding to the DPB 543. The decoder 542-2 decodes the data read from the CPB 541 and outputs the data of each picture of the Dependent view video obtained by the decoding to the DPB 543. The case where the MVC decoder 533 has two decoders in this manner will now be described.

Fig. 78 is a diagram showing an example of a case where only the Base view video stream is decoded in the MVC decoder 533 having two decoders. In this case, as shown in Fig. 78, the first HRD parameters and the first max_dec_frame_buffering encoded in the Base view video stream are read out by the control unit 532. The control unit 532 sets the maximum bit rate of the input from the CPB 541 to the decoder 542 to 40 Mbps, based on the first HRD parameters, and sets the maximum number of pictures that can be stored in the DPB 543 to 4, based on the first max_dec_frame_buffering. In Fig. 78, the decoder 542-2 is shown with a broken line, which indicates that no processing is performed in the decoder 542-2.

Fig. 79 is a diagram showing an example of a case where the Base view video stream and the Dependent view video stream are decoded in the MVC decoder 533 having two decoders. In this case, as shown in Fig. 79, the first HRD parameters encoded in the Base view video stream and the second HRD parameters and the third max_dec_frame_buffering encoded in the Dependent view video stream are read out by the control unit 532. The control unit 532 sets the maximum bit rate of the input from the CPB 541 to the decoder 542-1 to 40 Mbps based on the first HRD parameters, and sets the maximum bit rate of the input from the CPB 541 to the decoder 542-2 to 40 Mbps based on the second HRD parameters. Further, the control unit 532 sets the maximum number of pictures that can be stored in the DPB 543 to 6, based on the third max_dec_frame_buffering. Since the DPB 543 is used in common for the Base view video and the Dependent view video, the third max_dec_frame_buffering is used as the parameter for setting the maximum number of pictures that can be stored in the DPB 543.

Fig. 80 is a diagram showing another example of a case where the Base view video stream and the Dependent view video stream are decoded in the MVC decoder 533 having two decoders. In the MVC decoder 533 of Fig. 80, a CPB and a DPB are each provided separately for Base view video and for Dependent view video. In this case, as shown in Fig. 80, the first HRD parameters and the first max_dec_frame_buffering encoded in the Base view video stream are read out by the control unit 532, and the second HRD parameters and the second max_dec_frame_buffering encoded in the Dependent view video stream are read out by the control unit 532. The control unit 532 sets the maximum bit rate of the input from the CPB 541-1, which is the CPB for Base view video, to the decoder 542-1 to 40 Mbps, based on the first HRD parameters, and sets the maximum bit rate of the input from the CPB 541-2, which is the CPB for Dependent view video, to the decoder 542-2 to 40 Mbps, based on the second HRD parameters. Further, the control unit 532 sets the maximum number of pictures that can be stored in the DPB 543-1, which is the DPB for Base view video, to 4, based on the first max_dec_frame_buffering, and sets the maximum number of pictures that can be stored in the DPB 543-2, which is the DPB for Dependent view video, to 4, based on the second max_dec_frame_buffering.

Fig. 81 is a diagram showing yet another example of a case where the Base view video stream and the Dependent view video stream are decoded in the MVC decoder 533 having two decoders. In the MVC decoder 533 of Fig. 81, a CPB is provided separately for Base view video and for Dependent view video, but the DPB is used in common for the Base view video and the Dependent view video. The data transfer between the CPB 541-1, which is the CPB for Base view video, and the decoder 542-1 and the data transfer between the CPB 541-2, which is the CPB for Dependent view video, and the decoder 542-2 are performed via the same bus. In this case, as shown in Fig. 81, the third HRD parameters and the third max_dec_frame_buffering encoded in the Dependent view video stream are read out by the control unit 532. The control unit 532 sets the maximum bit rate of the bus used for the data transfer between the CPB 541-1 and the decoder 542-1 and for the data transfer between the CPB 541-2 and the decoder 542-2 to 60 Mbps, based on the third HRD parameters. Further, the control unit 532 sets the maximum number of pictures that can be stored in the DPB 543 to 6, based on the third max_dec_frame_buffering.

[Verification device] Fig. 82 is a diagram showing a verification device that verifies whether a video stream recorded on a BD by the recording device 501 can be correctly played back by the playback device 502. The verification device 551 of Fig. 82 is configured by a computer, and the video stream read from the BD is input to the verification device 551. In the Base view video stream input to the verification device 551 as the video stream, the first HRD parameters and the first max_dec_frame_buffering are encoded. In the Dependent view video stream, the second and third HRD parameters and the second and third max_dec_frame_buffering are encoded. In the verification device 551, a control unit 551A is realized by a CPU (Central Processing Unit) executing a predetermined program. The control unit 551A verifies whether the input video stream can be correctly played back by the playback device 502, and outputs information representing the verification result. The verification result is displayed, for example, on a display and checked by the user who performs the verification using the verification device 551. Also, in the verification device 551, an HRD (Hypothetical Reference Decoder) is realized by the CPU executing a predetermined program. The HRD virtually reproduces the MVC decoder 533 of the playback device 502. The functional configuration of the HRD is shown in Fig. 83.

As shown in Fig. 83, the HRD 561 includes a CPB 571, a decoder 572, and a DPB 573. The CPB 571 stores the data of the input Base view video stream and the data of the input Dependent view video stream. The data of the Base view video stream stored in the CPB 571 is read out by the decoder 572 in units of the data constituting one Access Unit, and the data of the Dependent view video stream stored in the CPB 571 is likewise read out by the decoder 572 in units of the data constituting one Dependent Unit. The decoder 572 decodes the data read from the CPB 571, and outputs the data of each picture of the Base view video and the Dependent view video obtained by the decoding to the DPB 573. The DPB 573 stores the data supplied from the decoder 572. The data of each picture of the Base view video and the Dependent view video stored in the DPB 573 is output in accordance with, for example, the display time of each picture represented by the Picture Timing SEI.

A specific example of the verification will now be described. As in the above example, values representing 40 Mbps, 40 Mbps, and 60 Mbps are encoded as the values of the first, second, and third HRD parameters, respectively, and values representing 4, 4, and 6 pictures are encoded as the values of the first, second, and third max_dec_frame_buffering, respectively.

Fig. 83 is a diagram showing an example of a case where only the Base view video stream is decoded. In this case, as shown in Fig. 83, the first HRD parameters and the first max_dec_frame_buffering encoded in the Base view video stream are read out by the control unit 551A. The control unit 551A sets the maximum bit rate of the input from the CPB 571 to the decoder 572 to 40 Mbps, based on the first HRD parameters, and sets the maximum number of pictures that can be stored in the DPB 573 to 4, based on the first max_dec_frame_buffering. In this state, the control unit 551A verifies whether the decoding of the Base view video stream can be performed correctly, and outputs information representing the verification result. When it is determined that decoding can be performed correctly, the input Base view video stream is a stream that can be correctly played back, in the manner described with reference to Figs. 75, 78, and 80, on the basis of the first HRD parameters and the first max_dec_frame_buffering encoded therein.

Fig. 84 is a diagram showing an example of a case where only the Dependent view video stream is decoded using a decoder for Dependent view video. In this case, as shown in Fig. 84, the second HRD parameters and the second max_dec_frame_buffering encoded in the Dependent view video stream are read out by the control unit 551A. The control unit 551A sets the maximum bit rate of the input from the CPB 571 to the decoder 572 to 40 Mbps, based on the second HRD parameters, and sets the maximum number of pictures that can be stored in the DPB 573 to 4, based on the second max_dec_frame_buffering.
In this state, the control unit 551A verifies whether the decoding of the Dependent view video stream can be performed correctly, and outputs information representing the verification result. When it is determined that decoding can be performed correctly, the input Dependent view video stream is a stream that can be correctly played back by the decoder for Dependent view video, in the manner described with reference to Fig. 80, on the basis of the second HRD parameters and the second max_dec_frame_buffering encoded therein. Note that the Base view video stream is required when decoding the Dependent view video stream. The decoded picture data of the Base view video stream is also input to the decoder 572 of Fig. 84 as appropriate, and is used for decoding the Dependent view video stream.

Fig. 85 is a diagram showing an example of a case where the Base view video stream and the Dependent view video stream are decoded by one decoder. In this case, as shown in Fig. 85, the third HRD parameters and the third max_dec_frame_buffering encoded in the Dependent view video stream are read out by the control unit 551A. The control unit 551A sets the maximum bit rate of the input from the CPB 571 to the decoder 572 to 60 Mbps, based on the third HRD parameters, and sets the maximum number of pictures that can be stored in the DPB 573 to 6, based on the third max_dec_frame_buffering. In this state, the control unit 551A verifies whether the decoding of the Base view video stream and the Dependent view video stream can be performed correctly, and outputs information representing the verification result. When it is determined that decoding can be performed correctly, the input Base view video stream and Dependent view video stream are streams that can be correctly played back, in the manner described with reference to Fig. 76, on the basis of the third HRD parameters and the third max_dec_frame_buffering.

[Position of view_type] In the above, the view_type indicating whether the Base view video stream is a stream of L images or a stream of R images is described in the PlayList, in the manner described with reference to Fig. 12, but it may be described in other positions. For example, the Base view video stream and the Dependent view video stream may be multiplexed into the same TS, or multiplexed into different TSs, and transmitted via broadcast waves or a network. In such a case, the view_type is described, for example, in the PSI (Program Specific Information), which is transmission control information, or in the Base view video stream or the Dependent view video stream (elementary stream).

Fig. 86 is a diagram showing an example of a case where the view_type is described in the PMT (Program Map Table) included in the PSI. As shown in Fig. 86, MVC_video_stream_descriptor() may be newly defined as a descriptor for MVC, and the view_type may be described in MVC_video_stream_descriptor(). Note that, for example, 65 is assigned as the value of descriptor_tag. In the playback device 1 that has received the TS, it is determined, based on the view_type value described in the PMT, whether the Base view video stream multiplexed in the TS is a stream of L images or a stream of R images, and the processing described with reference to Figs. 24 and 26, such as switching the output destination of the decoding result data, is performed. The view_type may also be described in another position, such as the SIT (Selection Information Table), instead of in the PMT.

Fig. 87 is a diagram showing an example of a case where the view_type is described in the elementary stream. As shown in Fig. 87, the view_type may be described in MVC_video_stream_info() in the SEI. As described above, the SEI is additional information added to the data of each picture constituting the Base view video stream and the Dependent view video stream. The SEI containing the view_type is added to each picture of at least one of the Base view video stream and the Dependent view video stream. In the playback device 1 that has read the SEI, it is determined, based on the view_type value described in the SEI, whether the Base view
video stream is a stream of L images or a stream of R images, and the processing described with reference to Figs. 24 and 26, such as switching the output destination of the decoding result data, is performed.

The series of processes described above can be executed by hardware, or can be executed by software. When the series of processes is executed by software, the program constituting the software is installed, from a program recording medium, into a computer incorporated in dedicated hardware, a general-purpose personal computer, or the like. Fig. 88 is a block diagram showing a configuration example of the hardware of a computer that executes the series of processes described above by a program. A CPU (Central Processing Unit) 701, a ROM (Read Only Memory) 702, and a RAM (Random Access
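The per-configuration check performed by the control unit 551A (Figs. 83 to 85) can be sketched as a simplified simulation. The access-unit records, the fixed frame rate, and the pass/fail rules below are illustrative assumptions; a real HRD verifier tracks CPB and DPB occupancy over time rather than applying per-access-unit budgets.

```python
def verify_stream(access_units, max_bit_rate_bps, max_dpb_pictures, fps=24):
    """Hypothetical check: every access unit must fit the per-frame bit
    budget implied by the HRD maximum bit rate, and must never require
    more stored pictures than max_dec_frame_buffering allows."""
    budget = max_bit_rate_bps / fps          # bits deliverable per frame
    for au in access_units:
        if au["bits"] > budget:
            return False                     # the CPB could not be fed in time
        if au["pictures_held"] > max_dpb_pictures:
            return False                     # the DPB would overflow
    return True

stream = [{"bits": 1_200_000, "pictures_held": 4} for _ in range(48)]
print(verify_stream(stream, 40_000_000, 4))   # within the first limits: True
print(verify_stream(stream, 20_000_000, 4))   # too slow a bus: False
```

Running the same stream against the first, second, or third limits corresponds to the three verification modes described in the text.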
Further, the buffer control 寊§fl D3 indicates that the third HRD parameters and the third max_dec_frame_buffering ° and the maximum bit rate of the input from the CPB 541 to the decoder 542 are 60 Mbps by the control unit 532. Set based on the 3rd HRD parameters. Further, the control unit 532 sets the maximum number of sheets that can be memorized in the DPB 543 to six sheets and sets them based on the third max_dec_frame_buffering. In this way, the decoding of the stream and the Dependent view video stream can be performed on the basis of the assumption that the B a s e v i e w v i d e 〇 stream. If the Base view vide stream and the Dependent view video stream are encoded on the recording side in a manner that can be decoded within the constraint, the buffer failure on the playback side can be prevented. 77 is a block diagram showing another configuration example of the MVC decoder 533. The same components as those shown in Fig. 57 in the configuration shown in Fig. 7 are denoted by the same reference numerals. Repeated descriptions will be omitted as appropriate. In the example of Fig. 77, two decoders of the decoder 542_1 and the decoder 542_2 are provided. The decoder 542-1 is a decoder for Base view vide, and the decoder 542-2 is a solution for Dependent view video. The data of the Base view vide stream in the CPB 541 is read by the decoder 542 1 in the data unit constituting one Access Unit. Further, 145441.doc •106- 201105108 The Dependent view video stream stored in the CPB 541 is read by the decoder 542-2 by the data unit constituting one Dependent Unit. The decoder 542-1 decodes the data read from the CPB 541 and outputs the data of each side of the decoded Base view video to the DPB 543. The decoder 542-2 decodes the data read from the CPB 541 and outputs the data of each picture of the decoded Dependent view video to the DPB 543. In this manner, the case where the MVC decoder 533 has two decoders will be described. Fig. 
78 is a diagram showing an example of a case where only the Base view video stream is decoded in the MVC decoder 533 having two decoders. In this case, as shown in Fig. 78, the first HRD parameters and the first max_dec_frame_buffering encoded in the Base view video stream are read by the control unit 532. Further, the control unit 532 sets the maximum bit rate from the CPB 541 to the decoder 542 to 40 Mbps, and sets it based on the first HRD parameters. Further, the control unit 532 sets the maximum number of sheets that can be stored in the DPB 543 to four, and sets them based on the first max_dec_frame_buffering. In Fig. 78, the decoder 542-2 is indicated by a broken line and is not yet processed in the decoder 542-2. Figure 79 is a diagram showing an example of the case of decoding a Base view video stream and a Dependent view video stream in an MVC decoder 533 having two decoders 145441.doc -107 - 201105108. In this case, as shown in FIG. 79, the first HRD parameters encoded in the Base view video stream, the second HRD parameters encoded in the Dependent view video stream, and the third max_dec_frame buffering error are controlled by the control unit. Read out 5 3 2 . Further, the control unit 532 sets the maximum bit rate of the input from the CPB 541 to the decoder 542-1 to 40 Mbps ' and sets it based on the first HRD parameters', and inputs the input from the CPB 541 to the decoder 542-2. The maximum bit rate is 40 Mbps ' and is set based on the second HRD parameters. Further, the control unit 532 causes the maximum number of sheets that can be memorized in the DPB 543 to be six sheets and enters the third max_dec_frame_buffering based on the third max_dec_frame_buffering. The Base view video and the Dependent view video are commonly used. The DPB543 '' is therefore used as the third max-dec_frame buffering as a parameter for setting the maximum number of sheets that can be memorized in the DPB 543. Fig. 
80 is a diagram showing another example of the case where the Base view video stream and the Dependent view video stream are decoded in the MVC decoder 533 having two decoders. In the MVC decoder 533 of Fig. 80, the CPB541 and the DPB 543 are also provided with a Base view video user and a Dependent view vide user, respectively. In this case, the first HRD parameters and the first max-dec_frame_buffering coded in the Base viexv video stream are read as shown in Fig. 80 by the control unit 532. Further, the second HRD parameters and the second max_dec_frame_buffering 145441.doc •108-201105108 encoded in the Dependent view vide〇 stream are read by the control unit 532. The control unit 532 sets the maximum bit rate of the CPB541-1 input from the Base view video, that is, the CPB541-1, to the decoder 542-1 to 40 Mbps, and sets it based on the first HRD parameters. Further, the maximum bit rate of the input from the CPB54, which is the CPB for the Dependent view video, to the decoder 542-2 is 40 Mbps, and is set based on the second HRD parameters. Further, the control unit 532 sets the maximum number of sheets that can be stored in the DPB543-1, which is a DPB for Base view video, to four, and sets it based on the first max_dec_frame_buffering. Further, the maximum number of pictures that can be memorized in the DPB of the Dependent view video, that is, the DPB543-2, is four, and is set based on the second max_dec_frame_buffering. Figure 81 is a diagram showing still another example of the case where the Base view video stream and the Dependent view video stream are decoded in the MVC decoder 533 having two decoders. In the MVC decoder 533 of FIG. 81, the CPB is provided with a Base view video user and a Dependent view video user, but the DPB is commonly used in the Base view video and the Dependent view video. 
In addition, the data transfer between the CPB 541-1, which is the CPB for Base view video, and the decoder 542-1, and the data transfer between the CPB 541-2, which is the CPB for Dependent view video, and the decoder 542-2, are performed via the same bus. In this case, as shown in Fig. 81, the third HRD parameters and the third max_dec_frame_buffering encoded in the Dependent view video stream are read out by the control unit 532. The control unit 532 sets the maximum bit rate of the bus used for the data transfer between the CPB 541-1 and the decoder 542-1 and for the data transfer between the CPB 541-2 and the decoder 542-2 to 60 Mbps, based on the third HRD parameters. Further, the control unit 532 sets the maximum number of pictures that can be stored in the DPB 543 to six, based on the third max_dec_frame_buffering.

[Verification device]

Fig. 82 is a diagram showing a verification device for verifying whether or not a video stream recorded on a BD by the recording device 501 can be correctly played back in the playback device 502. The verification device 551 of Fig. 82 includes a computer, to which the video stream read from the BD is input. In the Base view video stream input to the verification device 551 as a video stream, the first HRD parameters and the first max_dec_frame_buffering are encoded. In the Dependent view video stream, the second and third HRD parameters and the second and third max_dec_frame_buffering are encoded. In the verification device 551, a control unit 551A is realized by a specific program being executed by a CPU (Central Processing Unit). The control unit 551A verifies whether or not the input video stream can be correctly played back in the playback device 502, and outputs information indicating the verification result.
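In the Fig. 81 layout the two CPB-to-decoder transfers share one bus, so the bus is provisioned from the third HRD parameters rather than from the per-stream ones. The sketch below is a hypothetical helper, not from the patent; the fallback of summing the per-stream maxima when no combined value is signalled is our own conservative assumption. Note that in the example the signalled combined value (60 Mbps) is lower than the sum of the per-stream maxima (80 Mbps).

```python
def shared_bus_rate_bps(hrd1_bps, hrd2_bps, hrd3_bps=None):
    """Maximum bit rate for the bus shared by both CPB-to-decoder
    transfers (Fig. 81). If the 3rd HRD parameters are signalled they
    give the combined requirement directly; otherwise fall back to the
    sum of the per-stream maxima (an assumption for illustration)."""
    if hrd3_bps is not None:
        return hrd3_bps
    return hrd1_bps + hrd2_bps
```

With the example values, `shared_bus_rate_bps(40_000_000, 40_000_000, 60_000_000)` selects the signalled 60 Mbps, while omitting the third parameter would fall back to 80 Mbps.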
The verification result is displayed, for example, on a display, and is checked by the user who performs verification using the verification device 551. Further, in the verification device 551, an HRD (Hypothetical Reference Decoder) is realized by a specific program being executed by the CPU. The HRD virtually reproduces the MVC decoder 533 of the playback device 502. The functional configuration of the HRD is shown in Fig. 83. As shown in Fig. 83, the HRD 561 includes a CPB 571, a decoder 572, and a DPB 573. The data of the input Base view video stream and the data of the input Dependent view video stream are stored in the CPB 571. The data of the Base view video stream stored in the CPB 571 is read by the decoder 572 in units of the data constituting one Access Unit. The data of the Dependent view video stream stored in the CPB 571 is similarly read by the decoder 572 in units of the data constituting one Dependent Unit. The decoder 572 decodes the data read from the CPB 571, and outputs the decoded data of each of the Base view video and the Dependent view video to the DPB 573. The DPB 573 is a memory that stores the data supplied from the decoder 572. The data of each of the Base view video and the Dependent view video stored in the DPB 573 is output in accordance with the display time of each picture indicated by the Picture Timing SEI. A specific example of verification will now be described. As in the examples above, values of 40 Mbps, 40 Mbps, and 60 Mbps are encoded as the values of the first, second, and third HRD parameters, respectively. Further, values of 4, 4, and 6 pictures are encoded as the values of the first, second, and third max_dec_frame_buffering, respectively. Fig. 83 is a diagram showing an example of a case where only the Base view video stream is decoded. In this case, as shown in Fig. 83, the first HRD parameters and the first max_dec_frame_buffering encoded in the Base view video stream are read out by the control unit 551A.
By the control unit 551A, the maximum bit rate of the input from the CPB 571 to the decoder 572 is set to 40 Mbps, based on the first HRD parameters. Further, by the control unit 551A, the maximum number of pictures that can be stored in the DPB 573 is set to four, based on the first max_dec_frame_buffering. In this state, the control unit 551A verifies whether or not the decoding of the Base view video stream can be performed correctly, and outputs information indicating the verification result. When it is judged that the decoding can be performed correctly, the input Base view video stream is a stream that can be correctly played back in the manner described with reference to Fig. 75, Fig. 78, and Fig. 80, based on the first HRD parameters and the first max_dec_frame_buffering encoded therein.

Fig. 84 is a diagram showing an example of a case where only the Dependent view video stream is decoded by a decoder for Dependent view video. In this case, as shown in Fig. 84, the second HRD parameters and the second max_dec_frame_buffering encoded in the Dependent view video stream are read out by the control unit 551A. By the control unit 551A, the maximum bit rate of the input from the CPB 571 to the decoder 572 is set to 40 Mbps, based on the second HRD parameters. Further, by the control unit 551A, the maximum number of pictures that can be stored in the DPB 573 is set to four, based on the second max_dec_frame_buffering. In this state, the control unit 551A verifies whether or not the Dependent view video stream can be decoded correctly, and outputs information indicating the verification result. When it is judged that the decoding can be performed correctly, the input Dependent view video stream is a stream that can be correctly played back with a decoder for Dependent view video in the manner described with reference to Fig. 80, based on the second HRD parameters and the second max_dec_frame_buffering encoded therein. Note that the Base view video stream is required in order to decode the Dependent view video stream. The decoded picture data of the Base view video stream is also input to the decoder 572 of Fig. 84 and used for decoding the Dependent view video stream.

Fig. 85 is a diagram showing an example of a case where the Base view video stream and the Dependent view video stream are decoded by one decoder. In this case, as shown in Fig. 85, the third HRD parameters and the third max_dec_frame_buffering encoded in the Dependent view video stream are read out by the control unit 551A. By the control unit 551A, the maximum bit rate of the input from the CPB 571 to the decoder 572 is set to 60 Mbps, based on the third HRD parameters. Further, by the control unit 551A, the maximum number of pictures that can be stored in the DPB 573 is set to six, based on the third max_dec_frame_buffering. In this state, the control unit 551A verifies whether or not the Base view video stream and the Dependent view video stream can be decoded correctly, and outputs information indicating the verification result. When it is judged that the decoding can be performed correctly, the input Base view video stream and Dependent view video stream are streams that can be correctly played back in the manner described with reference to Fig. 76, based on the third HRD parameters and the third max_dec_frame_buffering.

[Location of view_type]

In the above, the case where the view_type, which indicates whether the Base view video stream is a stream of L images or a stream of R images, is described in the PlayList has been explained, but the view_type may also be described in locations other than the PlayList. For example, it is conceivable that the Base view video stream and the Dependent view video stream are multiplexed into the same TS, or each into different TSs, and transmitted via broadcast waves or a network.
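The three verification cases differ only in which encoded parameter set the control unit 551A reads before exercising the HRD. The sketch below summarizes that selection as a lookup (the case labels are ours), together with a toy stand-in for the HRD 561 that only captures the DPB bookkeeping; real output order follows the Picture Timing SEI display times, whereas plain FIFO is used here for brevity.

```python
from collections import deque

# Which encoded parameter set the control unit 551A reads in each case
# (case labels are ours; the cases come from Figs. 83-85).
VERIFICATION_PARAMS = {
    "base_only":        ("1st HRD parameters", "1st max_dec_frame_buffering"),
    "dependent_only":   ("2nd HRD parameters", "2nd max_dec_frame_buffering"),
    "both_one_decoder": ("3rd HRD parameters", "3rd max_dec_frame_buffering"),
}

class ToyHRD:
    """Simplified stand-in for the HRD 561: decode() places one decoded
    picture in the DPB and reports False if the signalled DPB limit would
    be exceeded; output() releases the next picture for display."""
    def __init__(self, max_dec_frame_buffering):
        self.capacity = max_dec_frame_buffering
        self.dpb = deque()

    def decode(self, picture):
        if len(self.dpb) >= self.capacity:
            return False  # DPB overflow: the stream violates its own limit
        self.dpb.append(picture)
        return True

    def output(self):
        # FIFO here; a real HRD orders output by Picture Timing SEI times.
        return self.dpb.popleft()
```

For the combined case of Fig. 85, for instance, the verifier would size the DPB from the third max_dec_frame_buffering (six pictures) before feeding both streams through.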
In this case, the view_type is described in, for example, PSI (Program Specific Information), which is transmission control information, or in the Base view video stream or the Dependent view video stream (elementary stream). Fig. 86 is a diagram showing an example of a case where the view_type is described in the PMT (Program Map Table) included in the PSI. As shown in Fig. 86, MVC_video_stream_descriptor() may be newly defined as a descriptor for MVC, and the view_type may be described in MVC_video_stream_descriptor(). Furthermore, for example, the value 65 is allocated as the value of descriptor_tag. In the playback device 1 that receives the TS, based on the view_type value described in the PMT, it is determined whether the Base view video stream multiplexed in the TS is a stream of L images or a stream of R images, the output destination of the data of the decoding results is switched accordingly, and the processing described with reference to Figs. 24 and 26 is performed. The view_type may also be described in a location other than the PMT, such as the SIT (Selection Information Table). Fig. 87 is a diagram showing an example of a case where the view_type is described in the elementary stream. As shown in Fig. 87, the view_type may be described in MVC_video_stream_info() in the SEI. As described above, the SEI is additional information attached to the data of each picture constituting the Base view video stream and the Dependent view video stream. The SEI including the view_type is attached to each picture of at least one of the Base view video stream and the Dependent view video stream. In the playback device 1 that reads the SEI, based on the view_type value described in the SEI, it is determined whether the Base view video stream is a stream of L images or a stream of R images, the output destination of the data of the decoding results is switched, and the processing described with reference to Figs.
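Reading the view_type out of the PMT descriptor loop could look roughly like the sketch below. The descriptor_tag value 65 comes from the text; the payload layout (view_type as the first payload byte) is purely our assumption for illustration, since the field packing of MVC_video_stream_descriptor() is not spelled out here.

```python
MVC_VIDEO_STREAM_DESCRIPTOR_TAG = 65  # descriptor_tag value allocated in the text

def view_type_from_pmt(descriptors):
    """Scan a PMT descriptor loop, given as (tag, payload_bytes) pairs,
    and return the view_type, or None if no MVC descriptor is present.
    Assumes view_type is the first payload byte (illustrative only)."""
    for tag, payload in descriptors:
        if tag == MVC_VIDEO_STREAM_DESCRIPTOR_TAG and payload:
            return payload[0]
    return None
```

A receiver would call this on the descriptor loop of the video elementary stream entry and then route the decoded Base view pictures to the L or R plane accordingly.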
24 and 26 is performed. The above series of processing can be executed by hardware, or can be executed by software. When the series of processing is executed by software, the program constituting the software is installed in a computer incorporated in dedicated hardware, a general-purpose personal computer, or the like. Figure 88 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program. A CPU (Central Processing Unit) 701, a ROM (Read Only Memory) 702, and a RAM (Random Access

Memory) 703 are connected to one another by a bus 704. An input/output interface 705 is further connected to the bus 704. An input unit 706 including a keyboard, a mouse, and the like, and an output unit 707 including a display, a speaker, and the like are connected to the input/output interface 705. Also connected to the bus 704 are a storage unit 708 including a hard disk, a non-volatile memory, or the like, a communication unit 709 including a network interface or the like, and a drive 710 that drives a removable medium 711.

In the computer configured as described above, the CPU 701 performs the above-described series of processing by, for example, loading the program stored in the storage unit 708 into the RAM 703 via the input/output interface 705 and the bus 704, and executing it. The program executed by the CPU 701 is recorded on, for example, the removable medium 711, or is provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital broadcasting, and is installed in the storage unit 708. Note that the program executed by the computer may be a program that is processed in time series in the order described in this specification, or may be a program that is processed in parallel or at necessary timing, such as when a call is made.

Embodiments of the present invention are not limited to the above-described embodiments, and various modifications can be made without departing from the spirit of the present invention.

[Brief description of the drawings]

Fig. 1 is a diagram showing a configuration example of a playback system including a playback device to which the present invention is applied.
Fig. 2 is a diagram showing an example of shooting.
Fig. 3 is a block diagram showing a configuration example of an MVC encoder.
Fig. 4 is a diagram showing an example of reference images.
Fig. 5 is a diagram showing a configuration example of a TS.
Fig. 6 is a diagram showing another configuration example of a TS.
Figs. 7A and 7B are diagrams showing still another configuration example of a TS.
Fig. 8 is a diagram showing an example of management of AV streams.
Fig. 9 is a diagram showing the structure of a Main Path and a Sub Path.
Fig. 10 is a diagram showing an example of the management structure of files recorded on an optical disc.
Fig. 11 is a diagram showing the syntax of a PlayList file.
Fig. 12 is a diagram showing an example of how reserved_for_future_use in Fig. 11 is used.
Fig. 13 is a diagram showing the meaning of the values of 3D_PL_type. Fig.
14 is a diagram showing the meaning of the values of view_type.
Fig. 15 is a diagram showing the syntax of PlayList() of Fig. 11.
Fig. 16 is a diagram showing the syntax of SubPath() of Fig. 15.
Fig. 17 is a diagram showing the syntax of SubPlayItem(i) of Fig. 16.
Fig. 18 is a diagram showing the syntax of PlayItem() of Fig. 15.
Fig. 19 is a diagram showing the syntax of STN_table() of Fig. 18.
Fig. 20 is a block diagram showing a configuration example of a playback device.
Fig. 21 is a diagram showing a configuration example of the decoding unit of Fig. 20.
Fig. 22 is a diagram showing a configuration for performing video stream processing.
Fig. 23 is a diagram showing a configuration for performing video stream processing.
Fig. 24 is a diagram showing another configuration for performing video stream processing.
Fig. 25 is a diagram showing an example of an Access Unit.
Fig. 26 is a diagram showing still another configuration for performing video stream processing.
Fig. 27 is a diagram showing the configuration of the synthesizing unit and its preceding stage.
Fig. 28 is another diagram showing the configuration of the synthesizing unit and its preceding stage.
Fig. 29 is a block diagram showing a configuration example of a software production processing unit.
Fig. 30 is a diagram showing examples of configurations including the software production processing unit.
Fig. 31 is a diagram showing a configuration example of a 3D video TS generating unit provided in a recording device.
Fig. 32 is a diagram showing another configuration example of the 3D video TS generating unit provided in the recording device.
Fig. 33 is a diagram showing still another configuration example of the 3D video TS generating unit provided in the recording device.
Fig. 34 is a diagram showing the configuration on the playback device side for decoding Access Units.
Fig. 35 is a diagram showing decoding processing.
Fig. 36 is a diagram showing a Closed GOP structure.
Figure 37 is a diagram showing the structure of the Open GP. Fig. 38 is a view showing the maximum number of frames and columns in the GOP. Figure 39 is a diagram showing the structure of a closed GOP. 145441.doc •118· 201105108 Figure 40 is a diagram showing the structure of the Open GOP. Fig. 41 is a diagram showing an example of a decoding start position set in the EP_map. Figure 42 is a diagram showing the problem that occurs when the GOP structure of the Dependent view video is not defined. Figure 4 3 shows a diagram of the concept of faceted search. Figure 44 is a diagram showing the AV stream structure recorded on a compact disc. Fig. 4 is a diagram showing an example of a Clip AV stream. Fig. 46 is a diagram conceptually showing EP_ma.p corresponding to the Clip AV stream of Fig. 45. Figure 47 is a diagram showing an example of the data structure of the source packet indicated by SPN_EP_start. Fig. 48 is a diagram showing a sub-table included in the EP_map. Fig. 49 is a view showing an example of the format of the registered PTS_EP_coarse and the registered PTS_EP_fine. Fig. 50 is a view showing an example of the format of the login SPN_EP_coarse and the login SPN_EP_fine. Fig. 51 is a view showing the configuration of an Access Unit. Fig. 52 is a block diagram showing an example of the configuration of a recording apparatus. Figure 53 is a block diagram showing an example of the configuration of the MVC encoder of Figure 52. Figure 54 is a flow chart showing the recording process of the recording device. Figure 55 is a flow chart for explaining the encoding process performed in step S2 of Figure 54. Fig. 56 is a block diagram showing an example of the configuration of a playback apparatus. Figure 57 is a block diagram showing an example of the configuration of the MVC decoder of Figure 56. Figure 58 is a flow chart showing the playback processing of the playback device. 145441.doc .119 - 201105108 FIG. 
59 is a flowchart illustrating the decoding processing performed in step S32 of Fig. 58.
Fig. 60 is a flowchart, continued from Fig. 59, illustrating the decoding processing performed in step S32 of Fig. 58.
Fig. 61 is a flowchart illustrating the random-access playback processing of the playback device.
Figs. 62A and 62B are diagrams showing the states of a Base view video stream and a Dependent view video stream.
Fig. 63 is a diagram showing an example of the encoding position of the HRD parameters in the Base view video stream.
Fig. 64 is a diagram showing the description format used when the HRD parameters are encoded at the position shown in Fig. 63.
Fig. 65 is a diagram showing an example of the encoding position of max_dec_frame_buffering in the Base view video stream.
Fig. 66 is a diagram showing the description format used when max_dec_frame_buffering is encoded at the position shown in Fig. 65.
Fig. 67 is a diagram showing an example of the encoding position of the HRD parameters in the Dependent view video stream.
Fig. 68 is a diagram showing the description format used when the HRD parameters are encoded at the position shown in Fig. 67.
Fig. 69 is a diagram showing another description format used when the HRD parameters are encoded at the position shown in Fig. 67.
Fig. 70 is a diagram showing an example of the encoding position of max_dec_frame_buffering in the Dependent view video stream.
Fig. 71 is a diagram showing the description format used when max_dec_frame_buffering is encoded at the position shown in Fig. 70.
Fig. 72 is a diagram showing another description format used when max_dec_frame_buffering is encoded at the position shown in Fig. 70.
Fig. 73 is a flowchart illustrating the recording processing of the recording device.
Fig. 74 is a flowchart illustrating the playback processing of the playback device.
Fig. 75 is a diagram showing an example of parameter setting.
Fig. 76 is a diagram showing another example of parameter setting. Fig.
77 is a block diagram showing another configuration example of an MVC decoder.
Fig. 78 is a diagram showing another example of parameter setting.
Fig. 79 is a diagram showing an example of parameter setting.
Fig. 80 is a diagram showing another example of parameter setting.
Fig. 81 is a diagram showing still another example of parameter setting.
Fig. 82 is a diagram showing a verification device.
Fig. 83 is a diagram showing the functional configuration of an HRD.
Fig. 84 is a diagram showing an example of verification.
Fig. 85 is a diagram showing another example of verification.
Fig. 86 is a diagram showing a description example of view_type.
Fig. 87 is a diagram showing another example of the description of view_type.
Fig. 88 is a block diagram showing an example of the hardware configuration of a computer.

[Description of main reference numerals]

1: playback device, 2: optical disc, 3: display device, 11: MVC encoder, 21: H.264/AVC encoder, 22: H.264/AVC decoder, 23: Depth calculation unit, 24: Dependent view video encoder, 25: multiplexer, 51: controller, 52: disk drive, 53: memory, 54: local storage, 55: Internet interface, 56: decoding unit, 57: operation input unit

Claims (1)

Scope of the patent application:
1. A playback device, comprising: acquisition means for acquiring a basic stream and an extended stream obtained by encoding a plurality of pieces of image data by a predetermined encoding method; decoding means for decoding the basic stream and the extended stream acquired by the acquisition means; and switching means for switching the output destination of the data of the decoding results produced by the decoding means, based on a flag indicating which of the basic stream and the extended stream is a stream of left images and which is a stream of right images.
2. The playback device according to claim 1, further comprising: first generating means for generating a plane of left images; and second generating means for generating a plane of right images; wherein the switching means outputs, of the basic stream and the extended stream, the data of the decoding result of the stream indicated by the flag to be the stream of left images to the first generating means, and outputs the data of the decoding result of the other stream to the second generating means.
3. The playback device according to claim 1, wherein the switching means identifies, based on a PID, whether the data decoded by the decoding means is data of the decoding result of the basic stream or data of the decoding result of the extended stream.
4. The playback device according to claim 1, wherein the switching means identifies, based on view identification information set in the extended stream at the time of encoding, whether the data decoded by the decoding means is data of the decoding result of the basic stream or data of the decoding result of the extended stream.
5. The playback device according to claim 4, wherein the switching means identifies the data of the decoding result of the extended stream based on the set view identification information, and identifies the data of the decoding result of a stream in which no view identification information is set as the data of the decoding result of the basic stream.
6. The playback device according to claim 1, wherein the acquisition means reads out and acquires the basic stream and the extended stream from a mounted recording medium, and reads out, from the recording medium, playback control information that describes the flag and is used for controlling playback of the basic stream and the extended stream; and the switching means switches the output destination of the data of the decoding results produced by the decoding means, based on the flag described in the playback control information read out from the recording medium by the acquisition means.
7. The playback device according to claim 1, wherein the acquisition means acquires additional information that is attached within at least one of the basic stream and the extended stream and describes the flag; and the switching means switches the output destination of the data of the decoding results produced by the decoding means, based on the flag described in the additional information acquired by the acquisition means.
8. The playback device according to claim 7, wherein the acquisition means acquires additional information that is attached within at least one of the broadcast basic stream and extended stream and describes the flag.
9. The playback device according to claim 1, wherein the acquisition means receives and acquires the transmitted basic stream and extended stream, and receives transmission control information that describes the flag and is used for controlling transmission of the basic stream and the extended stream; and the switching means switches the output destination of the data of the decoding results produced by the decoding means, based on the flag described in the transmission control information received by the acquisition means.
10. The playback device according to claim 9, wherein the acquisition means receives and acquires the basic stream and the extended stream transmitted via radio waves, and receives transmission control information that describes the flag and is used for controlling transmission of the basic stream and the extended stream.
11. A playback method, comprising the steps of: acquiring a basic stream and an extended stream obtained by encoding a plurality of pieces of image data by a predetermined encoding method; decoding the acquired basic stream and extended stream; and switching the output destination of the data of the decoding results, based on a flag indicating which of the basic stream and the extended stream is a stream of left images and which is a stream of right images.
12. A program causing a computer to execute processing comprising the steps of: acquiring a basic stream and an extended stream obtained by encoding a plurality of pieces of image data by a predetermined encoding method; decoding the acquired basic stream and extended stream; and switching the output destination of the data of the decoding results, based on a flag indicating which of the basic stream and the extended stream is a stream of left images and which is a stream of right images.
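The switching means of claim 1 can be illustrated with a short sketch: decoded pictures are routed to the left-image plane or the right-image plane according to the flag. This is a hypothetical Python illustration of the routing logic only; the claim itself is implementation-agnostic, and the names are ours.

```python
def route_decoded(decoded, base_is_left):
    """Route (stream, picture) pairs, with stream in {"base", "dependent"},
    to (left_plane, right_plane) lists. base_is_left models the flag that
    says the basic stream carries the left images."""
    left, right = [], []
    for stream, picture in decoded:
        goes_left = (stream == "base") == base_is_left
        (left if goes_left else right).append(picture)
    return left, right
```

When the flag is inverted, the same decoded pictures are swapped between the two planes, which is exactly the switching behaviour the flag controls.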
TW099110155A 2009-04-08 2010-04-01 A playback device, a recording medium, and an information processing method TWI532362B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009094258 2009-04-08
JP2010065113A JP5267886B2 (en) 2009-04-08 2010-03-19 REPRODUCTION DEVICE, RECORDING MEDIUM, AND INFORMATION PROCESSING METHOD

Publications (2)

Publication Number Publication Date
TW201105108A true TW201105108A (en) 2011-02-01
TWI532362B TWI532362B (en) 2016-05-01

Family

ID=42936241

Family Applications (1)

Application Number Title Priority Date Filing Date
TW099110155A TWI532362B (en) 2009-04-08 2010-04-01 A playback device, a recording medium, and an information processing method

Country Status (7)

Country Link
US (1) US9049427B2 (en)
EP (1) EP2288173B1 (en)
JP (1) JP5267886B2 (en)
KR (1) KR20120068658A (en)
CN (1) CN102282858B (en)
TW (1) TWI532362B (en)
WO (1) WO2010116957A1 (en)

JP2002016919A (en) 2000-04-28 2002-01-18 Sony Corp Information transmissions method and device, information receiving method and device, information recording method and device, and information recording regenerating method and device
US7765567B2 (en) 2002-01-02 2010-07-27 Sony Corporation Content replacement by PID mapping
WO2003103288A1 (en) * 2002-05-29 2003-12-11 Diego Garrido Predictive interpolation of a video signal
KR100523052B1 (en) * 2002-08-30 2005-10-24 한국전자통신연구원 Object base transmission-receive system and method, and object-based multiview video encoding apparatus and method for supporting the multi-display mode
KR100556826B1 (en) * 2003-04-17 2006-03-10 한국전자통신연구원 System and Method of Internet Broadcasting for MPEG4 based Stereoscopic Video
JP4608953B2 (en) 2004-06-07 2011-01-12 ソニー株式会社 Data recording apparatus, method and program, data reproducing apparatus, method and program, and recording medium
US7515759B2 (en) * 2004-07-14 2009-04-07 Sharp Laboratories Of America, Inc. 3D video coding using sub-sequences
US9131247B2 (en) * 2005-10-19 2015-09-08 Thomson Licensing Multi-view video coding using scalable video coding
JP2007180981A (en) * 2005-12-28 2007-07-12 Victor Co Of Japan Ltd Device, method, and program for encoding image
KR101357982B1 (en) * 2006-01-09 2014-02-05 톰슨 라이센싱 Method and apparatus for providing reduced resolution update mode for multi-view video coding
WO2007081177A1 (en) * 2006-01-12 2007-07-19 Lg Electronics Inc. Processing multiview video
JP4793366B2 (en) * 2006-10-13 2011-10-12 日本ビクター株式会社 Multi-view image encoding device, multi-view image encoding method, multi-view image encoding program, multi-view image decoding device, multi-view image decoding method, and multi-view image decoding program
JP2008252740A (en) 2007-03-30 2008-10-16 Sony Corp Remote commander and command generating method, playback apparatus and playback method, program, and recording medium
KR101385884B1 (en) * 2008-01-30 2014-04-16 고려대학교 산학협력단 Method for cording and decording multiview video and apparatus for the same
KR101506217B1 (en) * 2008-01-31 2015-03-26 삼성전자주식회사 Method and appratus for generating stereoscopic image data stream for temporally partial three dimensional data, and method and apparatus for displaying temporally partial three dimensional data of stereoscopic image
JPWO2010038409A1 (en) * 2008-09-30 2012-03-01 パナソニック株式会社 REPRODUCTION DEVICE, RECORDING MEDIUM, AND INTEGRATED CIRCUIT
MY151243A (en) * 2008-09-30 2014-04-30 Panasonic Corp Recording medium, playback device, system lsi, playback method, glasses, and display device for 3d images
US9288470B2 (en) * 2008-12-02 2016-03-15 Lg Electronics Inc. 3D image signal transmission method, 3D image display apparatus and signal processing method therein
BRPI0922722A2 (en) * 2008-12-09 2016-01-05 Sony Corp image processing device and method
WO2010076933A1 (en) * 2008-12-30 2010-07-08 (주)엘지전자 Digital broadcast receiving method providing two-dimensional image and 3d image integration service, and digital broadcast receiving device using the same
CN104618708B (en) * 2009-01-28 2017-07-07 Lg电子株式会社 Broadcasting receiver and its video data handling procedure
KR20100089705A (en) * 2009-02-04 2010-08-12 삼성전자주식회사 Apparatus and method for encoding and decoding 3d video
EP2521363B1 (en) * 2009-02-19 2014-05-14 Panasonic Corporation Playback device
WO2010113454A1 (en) * 2009-03-31 2010-10-07 パナソニック株式会社 Recording medium, reproducing device, and integrated circuit
JP5267886B2 (en) * 2009-04-08 2013-08-21 ソニー株式会社 REPRODUCTION DEVICE, RECORDING MEDIUM, AND INFORMATION PROCESSING METHOD
KR20110139304A (en) * 2009-04-22 2011-12-28 엘지전자 주식회사 Reference picture list changing method of multi-view video
CN102461183B (en) * 2009-06-16 2015-08-19 Lg电子株式会社 Broadcast transmitter, broadcasting receiver and 3D method for processing video frequency thereof
US8446958B2 (en) * 2009-09-22 2013-05-21 Panasonic Corporation Image coding apparatus, image decoding apparatus, image coding method, and image decoding method
US9014276B2 (en) * 2009-12-04 2015-04-21 Broadcom Corporation Method and system for 3D video coding using SVC temporal and spatial scalabilities
JP5916624B2 (en) * 2010-01-06 2016-05-11 ドルビー ラボラトリーズ ライセンシング コーポレイション Scalable decoding and streaming with adaptive complexity for multi-layered video systems

Also Published As

Publication number Publication date
EP2288173A1 (en) 2011-02-23
EP2288173A4 (en) 2013-07-03
JP2010263616A (en) 2010-11-18
TWI532362B (en) 2016-05-01
CN102282858A (en) 2011-12-14
US9049427B2 (en) 2015-06-02
EP2288173B1 (en) 2019-03-20
CN102282858B (en) 2015-08-19
KR20120068658A (en) 2012-06-27
WO2010116957A1 (en) 2010-10-14
JP5267886B2 (en) 2013-08-21
US20110075989A1 (en) 2011-03-31

Similar Documents

Publication Publication Date Title
TWI458337B (en) Information processing apparatus, information processing method, revenue apparatus, and reproductive method
TWI428016B (en) A playback device, a playback method, and a recording method
TWI516092B (en) A playback device, a playback method, and a recording method
TWI532362B (en) A playback device, a recording medium, and an information processing method
JP2010245970A (en) Reproduction device, reproduction method, and program
JP2010245968A (en) Recording device, recording method, reproduction device, reproduction method, recording medium, and program
JP4985883B2 (en) REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING METHOD
JP4993044B2 (en) REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING METHOD

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees