TW201215102A - Signaling for multiview 3D video - Google Patents

Signaling for multiview 3D video

Info

Publication number
TW201215102A
TW201215102A (application TW100124502A)
Authority
TW
Taiwan
Prior art keywords
view
display
data
video
pixel
Prior art date
Application number
TW100124502A
Other languages
Chinese (zh)
Inventor
Philip Steven Newton
Bart Kroon
Gunnewiek Reinier Bernardus Maria Klein
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V.
Publication of TW201215102A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/349 Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • H04N13/351 Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking for displaying simultaneously
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/111 Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/172 Processing image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/178 Metadata, e.g. disparity information

Abstract

A video processing device (100) processes three-dimensional [3D] video information and is coupled to an auto-stereoscopic 3D display device (120), e.g. a TV having a lenticular 3D display (123). The video processing device has an input unit (101) for receiving the 3D video data and a video processor (106) for generating a 3D display signal (110) representing the 3D video data and overlaid auxiliary data. The video processing device receives view data, including view mask data, from the 3D display device; the view mask data defines a pixel arrangement of the multiple views to be displayed by the 3D display device. The video processor generates the multiple views according to the view mask data and includes the multiple views in the display signal. Advantageously, the auxiliary data is combined with the main data when generating the multiple views, which avoids artifacts.

Description

[Technical Field]

The present invention relates to a video processing device for processing three-dimensional [3D] video information, the 3D information comprising 3D video data and auxiliary data, the device comprising:
- input means for receiving the video data according to an input format,
- a video processor for receiving the 3D video information and generating a 3D display signal representing the 3D video data and the auxiliary data according to a display format, and
- a display interface for interfacing with a 3D display device to transfer the 3D display signal.

The invention further relates to a display device for displaying 3D video information comprising 3D video data and auxiliary data, the device comprising:
- an interface for interfacing with a video processing device to transfer a 3D display signal representing the 3D video data and the auxiliary data according to a display format,
- a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by the respective eyes of a viewer, and
- a display processor for providing, based on the 3D display signal, a display control signal representing the multiple views to the 3D display.

The invention further relates to a 3D display signal, a method and a computer program for transferring 3D video information via an interface between a video processing device and a display device.

The invention concerns the field of rendering 3D video on an autostereoscopic display by generating multiple views, different views being perceived by the respective eyes of a viewer.

[Prior Art]

A 3D video processing device, such as a BD player or a set-top box, may be coupled to a 3D display device, such as a TV set or a monitor, to transfer the 3D video data via a display signal over a suitable interface, preferably a high-speed digital interface such as HDMI.

In addition to the main 3D video, auxiliary information such as subtitles, graphics, a menu or a further video signal may be combined with the main video data to be displayed. The video data defines the content of the main video to be displayed; the auxiliary data defines any other data that may be displayed in combination with the main video data, such as graphics data or subtitles. The auxiliary data is combined with the main data at a depth, for example in front of any object in the main video, for display overlaid on the 3D video data.

The 3D display device receives the 3D display signal via the interface and provides different images to the respective eyes of a viewer to create a 3D effect. The display device may be a stereoscopic device, for example one used with shutter glasses that pass sequentially displayed left and right views to a viewer's respective left and right eyes. However, the display device may also be an autostereoscopic display generating multiple views (e.g. 9 views), different views being perceived by the respective eyes of a viewer not wearing glasses.

The invention concentrates on this particular type of 3D display, commonly called an autostereoscopic display, which provides multiple images in a spatial distribution so that a viewer does not need to wear glasses. When the viewer is correctly positioned relative to that spatial distribution, the spatial arrangement comprises multiple views (usually at least 5), and pairs of the different views are arranged to be perceived by the viewer's respective eyes to create a 3D effect.

The article "Integrating 3D Point Clouds with Multi-viewpoint Video" by Feng Chen, Irene Cheng and Anup Basu, Dept. of Computing Sc., Univ. of Alberta, Canada, IEEE 3DTV-CON 2009, describes combining 3D main video and graphics objects. One of the key problems in such a system is reconstructing the depth information of a captured scene to enable the combination. The main video usually provides only two views. In most of the methods proposed to solve this problem, recovering the depth information z is converted into estimating the disparity d, which is inversely related to the depth.

The 3D video processing device generates a display signal that is transferred to the display device. Usually the display signal provides a left view and a right view. Hence an autostereoscopic device needs to generate the multiple views based on the input from that display signal, which is not trivial.

[Summary of the Invention]

To generate multiple views based on a stereo input signal, depth information must be regenerated. In particular, when auxiliary information such as a graphics object has already been combined with the main video, some video information may be occluded.

It is an object of the invention to provide a system for displaying 3D video information including auxiliary data on a multi-view display that avoids difficulties in generating the multiple views and avoids artifacts.

To this end, according to a first aspect of the invention, the video processing device as described in [Technical Field] is arranged to receive, from the 3D display device, view data including view mask data, the view mask data defining a pixel arrangement of the multiple views to be displayed by the 3D display device; and the video processor is arranged to generate the multiple views according to the view mask data and to include the multiple views in the display signal, the display format being different from the input format.

To this end, according to a further aspect of the invention, the display device as described in [Technical Field] is arranged to transfer view data including view mask data to the video processing device, the view mask data defining the pixel arrangement of the multiple views, and the display processor is arranged to provide the display control signal based on retrieving, from the display signal, the multiple views according to the view mask data.

Moreover, a 3D display signal is provided for transferring 3D video information, comprising 3D video data and auxiliary data, via an interface between a video processing device and a display device, wherein:
- the 3D display signal represents the 3D video data and the auxiliary data according to a display format,
- the display device comprises a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by the respective eyes of a viewer,
and the 3D display signal comprises:
- view data including view mask data, to be transferred from the display device to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
- the multiple views according to the view mask data, transferred from the video processing device to the display device.

Moreover, a method of transferring 3D video information via an interface between a video processing device and a display device is provided.
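The signaling scheme above rests on one operation: the video processing device composes a single panel frame by sampling, for each sub-pixel position, the view that the mask assigns to that position. The following sketch is not from the patent; the data layout (a per-sub-pixel grid of view indices) is an illustrative assumption of what "view mask data" could look like.

```python
# Illustrative only: a view mask as a 2D grid of view indices, one per
# panel sub-pixel, and the interleaving step a video processor would run
# after rendering all views in full.

def interleave(views, mask):
    """Compose one panel frame from fully rendered views.

    views : list of 2D grids (rows x cols) of sub-pixel values,
            one grid per view (e.g. 9 grids for a 9-view display).
    mask  : 2D grid of the same size holding, per sub-pixel,
            the index of the view to sample (the "view mask data").
    """
    rows, cols = len(mask), len(mask[0])
    return [[views[mask[r][c]][r][c] for c in range(cols)]
            for r in range(rows)]

# Toy 2-view example with a repeating mask period of [0, 1] per row.
view0 = [["A"] * 4 for _ in range(2)]           # content of view 0
view1 = [["B"] * 4 for _ in range(2)]           # content of view 1
mask = [[c % 2 for c in range(4)] for _ in range(2)]

frame = interleave([view0, view1], mask)        # alternates A and B
```

Because the views are rendered in full before interleaving, any auxiliary data overlaid on them never occludes video that a later view-synthesis step would need, which is the effect the claims describe.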

該3D视訊資訊包括3D 之一介面傳遞3D視訊資訊之方法 視訊資料及輔助資料, 一對 -該顯示裝置包括用於顯示多重視圖之一扣顯示器 不同的視圖經配置以藉由—檢視者之各自眼睛ΙΓ 該方法包括 -處理該3D視訊資訊並產生— 也斗_ 3D顯不化唬,該3D顯示信號 表示該3D視訊資料及根據一顯示格式之辅助資料, -經由該介面傳遞該3D顯示信號至該顯示裝置, 該方法包括 •將包含視圖遮罩資料之視圖資料自該顯示裝置傳遞至該 視訊處理裝置,該視圖遮罩資料定義該多重視圖之一像素 配置,及 -將根據該視_罩資料之該多重視圖包含於㈣顯 號中。 ’° 而且’提供-種用於經由一視訊處理裝置與一顯示裝置 之間之—介面傳遞扣視訊資訊之__電腦程式產品,該程式 經操作以使一處理器執行如上文定義之方法。 上述特徵具有下列效果。該視圖資料定義顯示性質及檢 視組態,1包含於該視圖資料中之該視圖遮罩資料定義如 藉由該3D顯示裝置產生之該多重視圖之組態及性質,特定 言之係結合-雙凸透鏡或屏障光學元件之-顯示器面板上 之像素之配置及類型’該光學元件引導該等像素之輸出光 使得多重視圖係展示在-空間分佈中。自動立體顯示器本 身已為所熟知。 157190.doc 201215102 包含該視圖遮罩資料之該視圖資料係藉由該3 D顯示裝置 予以輸出並藉由該視訊處理裝置予以接收。該視訊處理裝 置現在意識到該3D顯示器之特定組態及需求,且現在產^ -亥夕重視圓。特定言之,在該視訊處理裝置中執行該輔助 資料與該主視訊之組合,該視訊處理裝置的確具有初始、 完整及未遮蔽主視訊資料。因此包含該輔助資訊之該多重 視圖係產生於該視訊處理裝置中。有利的是,在該3d顯示 裝置中,㈣灰復深度或像差資料。特定言之,不存在歸 因於首先在該主視訊上覆疊該輔助資料且隨後恢復深度以 產生夕重視圖而發生在先前技術中之遮蔽區域。 本發明亦係基於下列認知。傳統上自動立體顯示器必須 基於具有-左視圖及一右視圖之一顯示信號產生大量視圖 (例如’ 9個視圖)。特定言之,如字幕或選單之辅助資訊可 能已覆疊在該主視訊上方。當產生額外的視圖時,該主視 Λ之些。卩分被该輔助資訊遮蔽,且必須在產生其他視圖 寺予以内插或估什。發明者已明白該主視訊完全可用於該 視Λ處理裝置中。發明者已提出傳遞定義各自多重視圖之 性質之視圖遮罩資料至該視訊處理裝置以使該裝置產生該 多重視圖,且隨後覆疊該輔助資料而未引起假像或不需要 估計遮蔽區域。注意,該視圖遮罩資料本身大致上可針對 不同顯不而不同。傳統上,此視圖遮罩資料係僅内部地使 用於該顯不裝置中,例如,當設計該顯示處理器或嵌入式 軟體時。發明者已提出定義適應傳遞視圖遮罩資料之全部 相關參數之一標準化格式。接收該視圖遮罩資料之該視訊 157190.doc 201215102 處理裝置具備藉由兮^目 圖3^罩資料控制之處理功能以產生 耦合至該顯示介面 面之特疋3D顯示裝置所需要之各自視 視圖組係在該鞀+ # $上_ 〇〇 5唬中傳遞(例如,在單獨的圖框中或 在單圖框中傳遞),該等視圖經交錯對應於該顯示裝 置中之像素之最終配置。有利的是,可使用該視訊處理裝 置中可用之處理能力,同時該扣顯示裝置需要較少處理能 力0 在,亥視A處理裝置之—實施例中,該顯示介面經配置用 於該經由該3D顯示信號自該3D顯示裝置接收包含視圖遮 罩資料之視圖資料。此具有之優點在於,該3D顯示裝置經 由傳遞該多重視圖至該扣顯示裝置之相同介面直接傳遞相 關視圖遮罩資料至該視訊處理裝置。在—進—步實施例 中’該顯示介面係一高清晰度多媒體介面[hdmi],其經 配置用於該經由增強擴展顯示識別資料[e_edid]自該3D顯 示裝置接收包含視圖遮罩資料之視圖資料。此具有之優點 在於,該HDMI標準經擴展以能夠產生並傳遞多重視圖。 在該系統之一實施例中,該視圖遮罩資料包括下列之至 少一者: -指示各自視圖之像素之一位置之像素結構資料; -指示一雙凸透鏡顯示器之配置之一顯示器類型指示符; -指示該多重視圖之性質之多視圖資料; -才θ示指派給各自視圖之像素之一重複型樣之性質之遮罩 週期資料; -指示用於各自色彩之子像素之一結構之子像素資料; 15719〇.<j〇c -10- 201215102 -指示在该顯示器之该專像素上組態之一透鏡之配置之透 鏡資料。 此具有之優點在於,使用上述資料元素之一合適的組 合,多種3D顯示器之性質係可定義的。 在該系統之一實施例中,該視圖遮罩資料包括一像素處 理清晰度’且该視訊處理器經配置以執行該像素處理清晰 度以產生該多重視圖。該處理清晰度定義必須如何產生該 多重視圖之像素。藉由提供並執行此此程式碼,產生多重 視圖之一極靈活的系統可製作而成。此具有之優點在於, 仍可適應具有不匹配視圖遮罩資料之一預定組參數之一像 
素配置或性質之顯示器。 在該系統之一實施例中,該視圖資料包括下列之至少一 者: -指示3D檢視之設定之使用者參數; -指示該3D顯示器之能力之顯示參數; 且該視訊處理器經配置以基於該等參數調適該多重視 圖。如取決於該顯示器之結構元件之視圖之性質,使用者 參數(例如,一較佳深度範圍或深度限制)或顯示參數傳遞 至該視訊處理裝置並使該多重視圖被調適 输至-最小深度)。此具有之優點在於:該3 = °〇負經調適而適於使用者偏好及/或檢視。 根據本發明之裝置及該方法之進一步較佳實施例係在附 加技術方案中給定’該等附加技術方案之内容以引用併入 本文。在附屬技術方案中針對-特定方法或裝置定義之特 157190.doc •11· 201215102 徵對應地適用於其他裝置或方法。 【實施方式】 參考藉由下列描述中之實例描述之實施例及參考隨附圖 式將進一步瞭解並闡明本發明之此等及其他態樣。 該等圖式僅係概略圖的且並未按比例繪製。在該等圖式 中,對應於已描述之元件之元件具有相同參考數字。 圖1展示處理三維(3D)視訊資訊之一系統。該3D視訊資 訊包含亦稱為主視訊資料之3D視訊資料及諸如字幕、圖形 及八他額外視覺資訊之輔助資料。一 3d視訊處理裝置1 〇〇 係耦合至一3D顯示裝置120以傳遞一3D顯示信號丨丨^。 該3D視訊處理裝置具有用於接收根據—輸入格式之扣 視訊資料之輸入構件,其包含擷取該3〇視訊資料之一輸入 2兀101(例如,一視訊碟播放器、媒體播放器或一機上 盒)。舉例而言,t玄等輸入構件可包含自一光學記錄載體 1〇5(如,一 DVD或—藍光光碟(BD))榻取視訊及輔助資訊 之一光碟單元1G3。在—實施例中,該等輸人構件可包含 輕口 t Γ網路Μ例如,網際網路或—廣播網路)之一網路 :面:兀102。可自—廣播、遠端媒體伺服器或網站擷取 該3D視訊處理裝置亦可為-衛星接收器或直接 顯示信號之—媒體餘器輸出待耦合至一 設:使用2 3D顯示信號之任意視訊裝置。該裝置可具備 元件。偏好(例如’戰訊之演現參數)之使用者控制 鶴視訊處理裝置具有耗合至該輪入單元⑻之 157l90.doc -12· 201215102 處理器106以處理視訊資訊,用於產生待經由-顯示介面 單元_專遞至該顯示裝置之一職示信號11〇。該輔 料可添加至該視訊資料(例如,在該主視訊上覆疊字幕卜 該視訊處理||1()6經配置用於將該視崎訊包含於待傳遞 至該3D顯示裝置120之3D顯示信號11〇中。 該3D顯示裝置12〇係用於顯示3D視訊資訊。該裝置具有 接收職示控制信號以藉由產生多重㈣而顯㈣視_ 讯之一 3D顯示器123,例如一雙凸透鏡1^1)。進一步參考 圖2至圖4闡明該3D顯示器。該裝置具有用於接收包含自該 3D視訊處理裝置100傳遞之3D視訊資訊之sd顯示信號丨ι〇 之一顯示介面單元12卜該裝置具有耦合至該介面i2i之一 顯示處理器U2。經傳遞之視訊資料係在該顯示處理器122 中處理,以基於該3D視訊資料而產生在—3D顯示器123上 演現3D視訊資訊之3D顯示控制信號。該顯示裝置12〇可為 提供多重視圖之任意類型立體顯示器,且具有藉由箭頭 124私示之一顯不深度範圍。該顯示裝置可具備用於設定 該顯示器之顯示參數(例如,對比、色彩或深度參數)之使 用者控制元件。 該輸入單元101經配置以自一源擷取視訊資料。可於該 裝置中產生辅助資料(例如,選單或按鈕),或亦可自一外 部源(例如,經由網際網路)接收輔助資料,或可藉由該源 連同該主視訊資料一起提供輔助資料(例如,各種語言之 子幕’可錯由使用者選擇該等語言之一者)。 該視訊處理器106經配置以處理該3D視訊資訊,如下所 157190.doc -13· 201215102 示°魏訊處㈣處理㈣視訊資訊並產生該聊示信 號。該3D顯示信號表示㈣視訊資料及根據—顯示格式 (例如,HDMI)之輔助資料。_ β顯不"面介接該3D顯示 裝置120以傳遞該3〇顯示信號。該視訊處理裝置⑽經配置 以(例如)在耦合至該顯示裝置時自該3D顯示裝置動態地接 收。3視圖遮罩資料之視圖資料。該視圖遮罩資料定義該 多重視圖之-像素配置’如下文詳細討論。舉例而言可 經由該顯示介面傳遞該視圖資料之至少部分。 該顯示處理器122經配置以基於如在該介面121上接收到 之該3D顯示信號而提供表示該多重視圖之_顯示控制信號 給該3D顯示器。該顯示裝置經配置以傳遞包含視圖遮罩資 料之視圖資料至該視訊處理裝置。該視圖遮罩資料可儲存 於(例如)該3D顯示裝置之生產期間提供之一記憶體中。該 顯示處理器或一進一步控制器可經由該介面傳遞該視圖遮 罩資料’即’沿朝該視訊處理裝置之方向。該顯示處理器 經配置以基於自該顯示信號擷取根據該視圖遮罩資料之多 重視圖而提供該顯示控制信號。 
在該視訊處理裝置之一實施例中,該顯示介面經配置用 於該經由該3D顯示信號自該3D顯示裝置接收包含視圖遮 罩資料之視圖資料。可藉由該3D顯示裝置將該視圖資料包 含於經由一合適的高速數位視訊介面傳遞之一雙向31)顯示 信號中(例如,包含於使用熟習的HDMI介面(參見2006年11 月 10 曰「High Definition Multimedia Interface Specification」 版本1.3a)之一HDMI信號中),特定言之參見關於經擴展以 157190.doc •14- 201215102 疋義下文疋義之視圖資料之經由增強擴展顯示識別資料 (E-EDID資料結構)章節8 3。因此,在一進一步實施例 中,该顯不介面係—高清晰度多媒體介面[HDMI],其經 配置用於该經由增強擴展顯示識別資料[E_EDID]自該31)顯 不裝置接收包含視圖遮罩資料之視圖資料。參考圖9至圖 13描述特定實例。 在一實施例中,視圖資料經由一單獨的路徑(例如,經 由一區域網路或網際網路)傳遞,舉例而言,該顯示裝置 之製造商可經由一網站、一軟體更新、一裝置内容表 (device property table),經由一光碟或一USB記憶體裝置 等等提供該視圖資料之至少部分。 «亥視圖寊料包含疋義藉由該3 d顯示裝置顯示之多重視.圖 之一像素配置之視圖遮罩資料。該視訊處理器經配置以根 據該視圖遮罩資料而產生多重視圖。待組合之任意辅助資 料係覆疊在該主視訊資料上。隨後將該多重視圖包含於該 顯示信號中。 庄思,邊3 D顯示彳§號之顯示格式不同於該輸入格式,特 定吕之,關於該多重視圖之數目。在該輸入格式中,視圖 之數目通常係2,即,一左視圖及一右視圖。如現在闡 明,在輸出格式中,視圖之數目係藉由該31)顯示裝置判定 (例如,7或9)。 =2展示提供多重視圖之一3〇顯示器。一顯示器面板以 之一水平掃描線經示意地展示,且提供藉由7個分散箭頭 指示之多重視圖23。該等視圖係產生於該顯示器前方之一 157190.doc •15· 201215102 合適的檢視距離處(例如,於一電視接收機前方2米p 一對 不同的視圖係藉由一檢視者22之各自眼睛感知。在每一視 圖中’藉由該面板之—各自像素24產生—經感知之像素, 7個像素之一序列對應於7個視圖之各者中之一像素。注 意’貫際上’需要不同的子像素來提供彩色演現3D視訊之 至少二種色彩。沿掃描線重複7個像素24之序列,且光學 元件(如透鏡或屏障)係位於該顯示器面板前方以引導自該 等各自像素發出之光至各自不同視圖。 圖3展示一雙凸透鏡勞幕。該圖式展示具有組成一 6個視 圖3D顯示之一週期之6個像素之一重複型樣之一顯示器面 板31 (例如,一 LCD面板)。—雔几、未拉〜〆 ; 又凸透鏡32係安裝於該顯示 器面板前方以朝多重視圖34產 丄 y 。 座生光束33。该面板中之像素 係編號為1、2、 6,曰· …且6亥荨視圖係對應地編號。一檢視 者35之一眼睛感知第三視 力眼睛感知第四視圖。 雙凸透鏡顯示器係能夠針對不同水平檢視方向展示多個 通常為8或9)之一視差扣顯示器。以此方法,該檢視 者可經歷運動視差及立體線. 索(Stereosc〇Pic cue)。存在兩 種效果,這係因為眼睛感知 μ 芽砍知不问視圖,且藉由水平移動頭 。[5而改變所感知之視圖。 該等雙凸透鏡適應:對於一 特疋視角,該檢視者僅看j 基礎LCD之一子組子像素。 .^尺将疋§之,若針對各種檢名 方向對相關聯之像素設定適告 k田值,則該檢視者將看見來自 不冋的私視方向之不同的影像。 ^ . 
此貫現决現立體影像之习 此性。各種類型多視圖顯示器 身已為所熟知。一基本類 157190.doc • 16 - 201215102 型具有垂直的透鏡薄片,使得為視圖而犧牲水平解析产。 為平衡垂直及水平方向之解析度損失,已研發出傾斜的透 鏡。-第三類型係-分率式視圖顯示器,在該分率式視圖 顯示器中,透鏡之節距比該(子)像素寬度寬—非整數倍。 因此,哪一個像素組成哪一個視圖係組態特有的,且必須 基於一可用的視訊輸入信號而產生用於各自視圖之對應顯 示控制信號。 圖4展示產生多重視圖。示意地指示產生多重視圖料之 處理。於一輸入單元中接收一輸入信號47用於一解多工步 驟40。該解多工步驟擷取各自視訊資料41(在該實例中係 一2D圖框及深度圖z,及視需要來自該輸入信號之進一步 輔助資料、音訊資料等等)。特定言之,亦可自(例如)包含 於該輸入信號中之一標題或資料訊息擷取控制參數43以調 整該視訊資訊之演現。在執行產生9個多重視圖料之一演 現粒序42之一處理器中處理該視訊資訊,每一視圖係自一 梢祕不同位置檢視之相同場景。在一進一步處理步驟 中’父織έ亥多重視圖’即,按照控制該3 〇顯示器面板(例 如,如上文參考圖2及圖3討論之一面板)格式化該多重視 圖。注意,傳統上,在直接耦合至該顯示器面板之一顯示 處理器中執行該處理。然而,在根據本發明之系統中,在 該視訊處理裝置中執行該處理,其中交織之一最終步驟根 據一顯不格式(例如’ HDMI)而產生輸出3D顯示信號。舉 例而言,可循序傳遞該等視圖,或可傳遞一單一交織圖 框’該單一交織圖框具有對應於該3D顯示裝置中之像素之 157190.doc •17· 201215102 實體位置(如藉由該視圖遮罩資料定義)的像素資料。 圖5展示-顯示器之一視圖遮[該圖式示意地展示呈 有用於各自原色紅、綠及藍色之子像素行52 之—顯 示器面板5卜實際上,可使用更多或不同的色彩十亥面員 板上提供-傾斜的雙凸透鏡53。歸因於該雙凸.透鏡及針、 -特定方向而檢視,一檢視者僅可看見一些像素,該—些 像素組成如該圖式之右半部中所示之一視圖54。在該面: 51上藉由粗體矩形55反白顯示可見之各自像素。^ 對於此9個視圖顯示,可針對9個檢視方向識別9個不同 的子組像素。目此’可同時顯示9個不同的影像。綠製此 等9個影像(每個影像對應於基礎矩陣顯示之子像素之其相 關聯之子組)之程序係被稱為交錯。當照亮一子像素時目 其照明該子像素上方之整個透鏡’從如該右邊—半圖式中 可見其對應的檢視方向來看。 一單一視圖之各自像素之位置係被稱為一視圖遮罩。該 ,圖遮罩可藉由-組視圖遮草資料(例如,-各自視圖之 母:像素相對於—起始點之位置)定義。通常該視圖遮罩 之型樣係重複的,且該視圖遮罩資料僅需要定義該型樣之 一單一週期(即,總螢幕之僅一小部分)。然而,具有一非 重複型樣或-極複雜型樣之3D多視圖顯示器亦係可能的。 對於此等顯示器,可針對整個螢幕大小提供—視圖遮罩。 該視圖遮罩資料亦可包含該透鏡之參數(如寬度或傾角)。 下文給定視圖遮罩資料之進一步實例。 圖6展示—視圖演現程序。注意,傳統上係在一多視圖 157190.doc 201215102 3D顯示裝置中執行總程序。3D資料内容61係可用作含有 根據一視訊格式之視訊資訊之一資料串流,諸如L+R(Left +Right)、2D+深度(2維資料及深度圖)、多視圖深度The 3D video information includes a 3D video interface for transmitting 3D video information, and a pair of display devices including a display for displaying multiple views. 
A different view of the display is configured to be viewed by the viewer The respective methods include: - processing the 3D video information and generating - doubling the 3D display signal indicating the 3D video data and auxiliary data according to a display format - transmitting the 3D via the interface Displaying a signal to the display device, the method comprising: transmitting view data including view mask data from the display device to the video processing device, the view mask data defining a pixel configuration of the multiple view, and - based on The multiple views of the view data are included in the (4) display. And providing a computer program product for transferring video information via a interface between a video processing device and a display device, the program being operative to cause a processor to perform the method as defined above. The above features have the following effects. The view data defines the display property and the view configuration, and the view mask data defined in the view data defines the configuration and properties of the multiple views generated by the 3D display device, and the specific words are combined - The configuration and type of lenticular or barrier optics - the pixels on the display panel - directs the output light of the pixels such that the multiple views are displayed in a - spatial distribution. Autostereoscopic displays are well known per se. 157190.doc 201215102 The view data including the view mask data is output by the 3D display device and received by the video processing device. The video processing device is now aware of the specific configuration and needs of the 3D display, and is now producing a focus on the circle. Specifically, the combination of the auxiliary material and the primary video is performed in the video processing device, and the video processing device does have initial, complete, and unmasked primary video material. 
Thus the multiple views containing the auxiliary information are generated in the video processing device. Advantageously, in the 3d display device, (iv) gray depth or aberration data. In particular, there is no occlusion region that occurred in the prior art due to the first overlay of the auxiliary material on the primary video and subsequent restoration of depth to produce a gradual view. The invention is also based on the following recognition. Traditionally, autostereoscopic displays have to generate a large number of views (e.g., '9 views) based on one of the -left and one right views. In particular, auxiliary information such as subtitles or menus may have been overlaid on the main video. When the extra view is generated, the main view is a bit more. The score is obscured by the auxiliary information and must be interpolated or evaluated in the generation of other views. The inventors have appreciated that the primary video is fully available in the video processing device. The inventors have proposed passing view mask data defining the properties of the respective multiple views to the video processing device to cause the device to generate the multiple views, and then overlaying the auxiliary material without causing artifacts or estimating the masked region. . Note that the view mask data itself can be roughly different for different purposes. Traditionally, this view mask data has been used internally only in the display device, for example, when designing the display processor or embedded software. The inventors have proposed a standardized format that defines one of all relevant parameters adapted to pass the view mask data. 
Receiving the video of the view mask data 157190.doc 201215102 The processing device is provided with a processing function controlled by the data to generate a special view of the special 3D display device coupled to the display interface The group is passed in the 鼗+#$上_〇〇5唬 (eg, in a separate frame or in a single frame), the views being interleaved to correspond to the final configuration of the pixels in the display device . Advantageously, the processing power available in the video processing device can be used while the buckle display device requires less processing power. In the embodiment of the Haishi A processing device, the display interface is configured for the The 3D display signal receives view data including view mask data from the 3D display device. This has the advantage that the 3D display device directly passes the associated view mask data to the video processing device via the same interface that passes the multiple views to the buckle display device. In the embodiment of the present invention, the display interface is a high definition multimedia interface [hdmi] configured to receive the view mask data from the 3D display device via the enhanced extended display identification data [e_edid]. View data. This has the advantage that the HDMI standard is extended to be able to generate and deliver multiple views. 
In one embodiment of the system, the view mask material includes at least one of: - pixel structure data indicating a location of a pixel of a respective view; - a display type indicator indicating a configuration of a lenticular display; a multi-view data indicating the nature of the multi-view; - θ indicates the mask period data of the nature of the repeat pattern assigned to one of the pixels of the respective view; - sub-pixel data indicating the structure of one of the sub-pixels for the respective color 15719〇.<j〇c -10- 201215102 - Indicates the lens data for configuring the configuration of one of the lenses on the dedicated pixel of the display. This has the advantage that the properties of the various 3D displays can be defined using a suitable combination of one of the above data elements. In one embodiment of the system, the view mask data includes a pixel processing resolution' and the video processor is configured to perform the pixel processing sharpness to produce the multiple view. This processing definition defines how the pixels of the multiple view must be generated. By providing and executing this code, a very flexible system that produces multiple views can be made. This has the advantage that it can still accommodate displays having a pixel configuration or nature that is one of the predetermined set of parameters that does not match the view mask data. In one embodiment of the system, the view material includes at least one of: - a user parameter indicating a setting of the 3D view; - a display parameter indicating a capability of the 3D display; and the video processor is configured to be based on These parameters adapt the multiple views. Depending on the nature of the view of the structural elements of the display, user parameters (eg, a preferred depth range or depth limit) or display parameters are passed to the video processing device and the multiple views are adapted to a minimum depth ). 
This has the advantage that the 3 = °〇 is adapted to suit the user's preferences and/or view. Further preferred embodiments of the device according to the invention and of the method are given in the appended technical solutions. The contents of these additional technical solutions are incorporated herein by reference. In the subsidiary technical solution, the specific method or device definition is specifically applied to other devices or methods. 157190.doc •11·201215102 is applicable to other devices or methods. [Embodiment] These and other aspects of the present invention will be further understood and clarified by reference to the embodiments of the invention. The drawings are only schematic and not drawn to scale. In the figures, elements that correspond to the elements that have been described have the same reference numerals. Figure 1 shows a system for processing three-dimensional (3D) video information. The 3D video information includes 3D video data, also known as main video data, and auxiliary materials such as subtitles, graphics and additional visual information. A 3d video processing device 1 is coupled to a 3D display device 120 for transmitting a 3D display signal. The 3D video processing device has an input component for receiving a video data according to an input format, and includes inputting one of the three video data inputs 2兀101 (for example, a video disc player, a media player or a On-board box). For example, the input member such as t Xuan may include a disc unit 1G3 that receives video and auxiliary information from an optical record carrier 1〇5 (for example, a DVD or a Blu-ray Disc (BD). In an embodiment, the input components may comprise a network of light ports, for example, an internet or a broadcast network: face: 兀 102. The 3D video processing device can be captured from a broadcast, a remote media server or a website. 
The satellite receiver or the direct display signal can also be coupled to a device: any video using 2 3D display signals. Device. The device can be equipped with components. The user control crane video processing device having a preference (such as the 'competition parameter of the warfare') has a processor 106 that is consuming the 157l90.doc -12·201215102 processor of the wheeling unit (8) to process the video information for generating a to-be- The display interface unit _ is delivered to one of the display devices. The auxiliary material can be added to the video material (for example, overlaying the subtitle on the main video), the video processing ||1() 6 is configured to include the visual image in the 3D to be transmitted to the 3D display device 120. The display signal 11 is displayed. The 3D display device 12 is used for displaying 3D video information. The device has a 3D display 123, such as a lenticular lens, for receiving a job control signal to generate multiple (4) video signals. 1^1). The 3D display is further explained with reference to Figs. 2 to 4 . The device has a display interface unit 12 for receiving a 3D video signal transmitted from the 3D video processing device 100. The device has a display processor U2 coupled to the interface i2i. The transmitted video data is processed in the display processor 122 to generate a 3D display control signal for the 3D video information on the 3D display 123 based on the 3D video data. The display device 12A can be any type of stereoscopic display that provides multiple views and has a depth range that is shown by one of the arrows 124. The display device can be provided with a user control element for setting display parameters (e. g., contrast, color or depth parameters) of the display. The input unit 101 is configured to retrieve video material from a source. 
Auxiliary materials (eg, menus or buttons) may be generated in the device, or auxiliary materials may be received from an external source (eg, via the Internet), or auxiliary materials may be provided by the source together with the primary video material (For example, a sub-screen of various languages may be wrongly selected by the user for one of the languages). The video processor 106 is configured to process the 3D video information as follows: 157190.doc -13· 201215102 shows that the Wei Wei office (4) processes (4) the video information and generates the chat signal. The 3D display signal indicates (4) video data and auxiliary data according to a display format (for example, HDMI). The _β display does not face the 3D display device 120 to transmit the 3 〇 display signal. The video processing device (10) is configured to dynamically receive from the 3D display device, for example, when coupled to the display device. 3 view mask data view data. The view mask data defines the multi-view-pixel configuration as discussed in detail below. For example, at least a portion of the view material can be communicated via the display interface. The display processor 122 is configured to provide a display control signal representative of the multiple view to the 3D display based on the 3D display signal as received on the interface 121. The display device is configured to communicate view data including view mask data to the video processing device. The view mask data can be stored in one of the memories provided during production of, for example, the 3D display device. The display processor or a further controller can communicate the view mask data 'i' to the direction of the video processing device via the interface. The display processor is configured to provide the display control signal based on multi-views of the view mask data from the display signal. 
In an embodiment of the video processing device, the display interface is arranged for receiving the view data comprising the view mask data from the 3D display device via the 3D display signal. The view data may be included in the bidirectional 3D display signal by the 3D display device via a suitable high-speed digital video interface, for example in an HDMI signal (see "High Definition Multimedia Interface Specification Version 1.3a" of 10 November 2006), in particular via the Enhanced Extended Display Identification Data (E-EDID) structure described in section 8.3 thereof, extended for this purpose. Hence, in a further embodiment, the display interface is a High-Definition Multimedia Interface [HDMI] arranged for receiving the view data comprising the view mask data from the 3D display device via Enhanced Extended Display Identification Data [E-EDID]. A specific example is described with reference to Figures 9 to 13. In an embodiment, the view data is transferred via a separate path, for example via a local network or the Internet. For example, the manufacturer of the display device may provide at least part of the view data in a device property table, via a website, a software or device update, or via an optical disc, a USB memory device or the like. The view data contains view mask data defining the pixel arrangement of the multiple views to be displayed by the 3D display device. The video processor is arranged for generating the multiple views in dependence on the view mask data; any auxiliary data to be combined is overlaid on the main video data, and the multiple views are then included in the display signal. Hence the display format of the 3D display signal differs from the input format, in particular with respect to the number of the multiple views. In the input format the number of views is typically two, i.e. a left view and a right view.
As now explained, in the output format the number of views is determined by the 3D display device, for example 7 or 9. Figure 2 shows a 3D display providing multiple views. A display panel 21 is schematically shown as horizontal scan lines, providing the multiple views 23 indicated by 7 diverging arrows. The views are generated at a suitable viewing distance in front of the display, for example 2 meters in front of a television receiver. A pair of different views is perceived by the respective eyes of a viewer 22. In each view, a respective pixel 24 of the panel provides a perceived pixel; each of seven consecutive pixels corresponds to one of the seven views. Note that, in practice, differently colored sub-pixels are needed to provide full-color 3D video. The sequence of 7 pixels 24 repeats along the scan line, and optical elements (such as lenses or barriers) located in front of the display panel direct the light emitted by the respective pixels into the different views. Figure 3 shows a lenticular screen. The figure shows a display panel 31 (for example an LCD panel) having a number of pixels constituting the six views of the 3D display, and a lenticular lens 32 mounted in front of the display panel to generate the light beams 33 that create the multiple views 34. The pixels in the panel are numbered 1, 2, ..., 6, and the six views are numbered correspondingly. One eye of a viewer 35 perceives the third view while the other eye perceives the fourth view. A lenticular display is capable of displaying a plurality of views for different horizontal viewing directions (for example 8 or 9). In this way, the viewer can experience both motion parallax and stereoscopic cues: each eye perceives a different view, and moving the head horizontally changes the perceived views.
The lenticular lenses have the effect that, for a particular viewing angle, the viewer only sees a subset of the sub-pixels of the underlying LCD. If the correct view information is set on the associated sub-pixels for each viewing direction, the viewer sees different images from different viewing directions. This is the basis of autostereoscopic imaging. Various types of multi-view display are well known. A basic class has a vertical lenticular sheet, which sacrifices horizontal resolution to create the views. To balance the resolution loss between the vertical and horizontal directions, slanted lenses have been developed. A third class is the fractional-view display, in which the pitch of the lens is not an integer multiple of the (sub-)pixel width. Hence, which pixels constitute which view is specific to the display configuration, and the corresponding display control signals for the respective views must be generated from the available video input signal. Figure 4 schematically indicates the process of generating the multiple views. An input signal 47 is received in an input unit for a demultiplexing step 40. The demultiplexing step retrieves the respective video data 41 (in this example, a 2D frame and a depth map Z) from the input signal, and further auxiliary data, audio data, etc., as needed. In particular, control parameters 43 may also be retrieved, for example from a header or data message included in the input signal, to adjust the rendering of the video information. The video information is processed in a processor that performs the rendering of, for example, nine views, each view showing the same scene from a different position. In a further processing step, the multiple views are formatted in accordance with the display panel configuration, for example one of the panels discussed above with reference to Figures 2 and 3.
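By way of illustration only, the rendering of several views from a 2D frame plus depth map can be sketched as below. The linear disparity model (a fixed gain around a zero plane at 0.5) and the nearest-pixel hole filling are assumptions for the sketch; the document does not prescribe a particular rendering equation.

```python
# Minimal sketch: render N views from one 2D scanline plus a depth map
# by shifting each pixel horizontally in proportion to its depth.

def render_views(scanline, depth, n_views=9, gain=4.0):
    width = len(scanline)
    views = []
    for v in range(n_views):
        offset_scale = v - (n_views - 1) / 2.0   # camera offset per view
        out = [None] * width
        for x in range(width):
            # disparity grows with deviation from the zero plane (0.5)
            d = int(round(offset_scale * gain * (depth[x] - 0.5)))
            tx = x + d
            if 0 <= tx < width:
                out[tx] = scanline[x]            # forward-map the pixel
        # fill holes left by disocclusion with the last written pixel
        last = scanline[0]
        for x in range(width):
            if out[x] is None:
                out[x] = last
            else:
                last = out[x]
        views.append(out)
    return views
```

With a constant depth at the zero plane, every rendered view is identical to the input scanline; depth variation introduces view-dependent shifts.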
Note that this processing has traditionally been performed in a display processor directly coupled to the display panel. In the system according to the invention, however, the processing is performed in the video processing device, where a final interleaving step produces the output 3D display signal according to a display format, for example HDMI. For example, the views may be transferred sequentially, or a single interleaved frame may be transferred, the single interleaved frame having the pixel data at the physical positions corresponding to the pixels in the 3D display device (for example, as defined by the view mask data). Figure 5 shows a view mask of a display. The figure schematically shows a display panel 51 having rows of sub-pixels 52 for the respective primary colors red, green and blue; in practice, more or different colors may be used. A slanted lenticular lens 53 is provided on the panel. Due to the lenticular lens, for a particular viewing direction a viewer can only see some of the sub-pixels, which together constitute one view 54, as shown in the right half of the figure. On the panel 51, the visible sub-pixels of that view are highlighted by bold rectangles 55. For such a 9-view display, 9 different subsets of sub-pixels can be identified for 9 viewing directions, so that 9 different images can be displayed simultaneously. The process of assigning the nine images, each image corresponding to its associated subset of the sub-pixels of the underlying matrix display, is called interleaving. When a sub-pixel is illuminated, it illuminates the entire lens above that sub-pixel as seen from its corresponding viewing direction, as visible in the right half of the figure. The set of positions of the sub-pixels of a single view is referred to as a view mask. The mask may be defined per view, for example as the positions of the pixels of each respective view relative to an origin.
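The interleaving step just described can be sketched as follows. Representing each view mask as a list of (x, y) sub-pixel positions and frames as row-major lists is an illustrative assumption; the document does not fix a data layout.

```python
# Minimal sketch of interleaving: each view has a view mask listing the
# (x, y) positions of the sub-pixels that belong to it, and the single
# interleaved output frame is assembled by copying each masked sub-pixel
# from its view.

def interleave_frame(views, masks, width, height):
    # views: list of frames (rows of sub-pixel values), one per view
    # masks: per view, the (x, y) positions belonging to that view
    frame = [[0] * width for _ in range(height)]
    for view, mask in zip(views, masks):
        for (x, y) in mask:
            frame[y][x] = view[y][x]
    return frame
```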
Usually the pattern of the view mask repeats, and the view mask data then only needs to define a single period of the pattern, i.e. only a small portion of the total screen. However, a 3D multi-view display having a non-repetitive or very complex pattern is also possible. For such displays, a view mask may be provided for the entire screen size. The view mask data may also include parameters of the lens, such as its width or slant. Further examples of view mask data are given below. Figure 6 shows a view rendering process. Note that this process is traditionally performed in a multi-view 3D display device. The 3D content 61 may be available as a data stream containing video information according to a video format, such as L+R (left + right), 2D+depth (2D data and a depth map), or multi-view plus depth

(MVD; see C. L. Zitnick et al., "High-Quality Video View Interpolation Using a

Layered Representation," ACM SIGGRAPH and ACM Trans. on Graphics, Los Angeles, California, August 2004), or as a program interfacing a suitable API, such as

OpenGL (see http://www.opengl.org) or DirectX (see http://msdn.microsoft.com/en-us/directx/default.aspx). For such content, the views are rendered in step 62 (render views) by morphing the image features in a way that depends on the feature depth and on a horizontal translation specific to each view. If aliasing has not been taken into account in step 62, the views must be filtered in step 63 (anti-alias), for example smoothed to prevent aliasing. After this, the views are interleaved in step 64 (interleave) to form a frame at the native resolution of the screen. Additional filtering may be applied in a crosstalk step, for example to reduce crosstalk between views. Finally, the 3D display control signal is generated, and the frame is shown on the screen in the display step. Figure 7 shows a 3D player model. In a traditional system, such a player is coupled to a multi-view 3D display as described with reference to Figure 6. The model applies to DVB set-top boxes, Blu-ray Disc (BD) players and similar equipment. The video information 71 is obtained from an input stream, for example from a BD or from DVB.
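The anti-aliasing step (63) mentioned above can be sketched as a simple low-pass filter applied to each rendered view before interleaving. The [1, 2, 1]/4 kernel and edge clamping are illustrative choices; the document does not prescribe a particular filter.

```python
# Minimal sketch of the anti-aliasing step: each view is low-pass filtered
# so that the sub-sampling performed by the view mask does not alias.

def anti_alias(scanline):
    n = len(scanline)
    out = []
    for x in range(n):
        left = scanline[max(x - 1, 0)]      # clamp at the borders
        right = scanline[min(x + 1, n - 1)]
        out.append((left + 2 * scanline[x] + right) / 4.0)
    return out
```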

In this model, the video content is placed on the first of the available 3D planes, plane 72. The other planes are dedicated to auxiliary data, such as graphics 73 (for example subtitles or picture-in-picture) and the player's on-screen display (OSD) 74. The process of merging these planes before sending them to the screen is called compositing 75. For the L+R and multi-view formats this is a straightforward operation, since all views to be provided at the output for the video and the graphics are present in the player. For intermediate formats such as 2D+depth or MVD, however, and for APIs such as OpenGL, occlusion tests or rules are needed to determine which content should go into an occlusion layer, as these formats require. Typically, a player or PC graphics card has more 3D data available than fits into the format transferred to the screen.
For example, if the 3D video content is overlaid with a semi-transparent menu, artifacts may occur with image+depth, image+depth+occlusion or stereo+depth as a format. Other problems concern z-buffer-to-depth conversion. In 3D graphics, a z-buffer is used to keep the z-values of the objects remaining after hidden-surface removal. The values in this buffer depend on the camera projection set by the application, and must be converted to be suitable for use as "depth" values. The z-buffer values are, moreover, often unreliable, because they depend strongly on the way a game was programmed. These problems could in principle be avoided by sending all 3D data to the screen, but this creates bandwidth problems and makes the display more expensive. Moving the task of rendering the views from the multi-view 3D display device to the video processing device, such as a disc player, a PC, or a separate device between the player and the display (for example a 3D receiver), solves the above problems. As a result, the highest display quality is maintained, because all available 3D data is used in the rendering process, and the bandwidth problem is solved, because only frames at the native resolution of the screen have to be sent over a link such as HDMI or DisplayPort. Achieving this solution requires taking into account that many different displays exist, and that data defining the 3D display must be provided. This data is called view mask data, and it is preferably standardized so that players and displays of different brands can cooperate. Via the view mask data, the display device can describe its physical configuration in such a way that the player does not need to be aware of the display details when generating the multiple views, and can describe its display function, without exposing its internals, in terms of the multiple views provided in the display signal at the input interface of the 3D display.
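The z-buffer-to-depth conversion mentioned above can be sketched as follows, assuming a standard perspective projection that stores z non-linearly in [0, 1]. The near/far planes are exactly the application-set camera parameters the text points to, and the normalization of the linearized distance to a [0, 1] "depth" value is an illustrative choice.

```python
# Minimal sketch of converting z-buffer values into usable depth values:
# first invert the non-linear perspective mapping, then normalise.

def zbuffer_to_depth(z_buf, near, far):
    depths = []
    for z in z_buf:
        # invert the perspective mapping: eye-space distance to the camera
        eye_z = (near * far) / (far - z * (far - near))
        # normalise so near -> 0.0 and far -> 1.0
        depths.append((eye_z - near) / (far - near))
    return depths
```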
In a preferred embodiment, it is proposed to extend the current E-EDID specification by adding view mask data containing the view mapping configuration, relating to the lenses, in a new data block. The new data block can be made compatible with the Consumer Electronics Association (CEA) standard "CEA-861-E, A DTV Profile for Uncompressed

High Speed Digital Interfaces, March 2008". The view mask parameters are sent to the playback device, and the playback device calculates the correct views and their mapping for display based on these parameters. Alternatively, in an embodiment, a new data block is defined that allows the display to send information about the required processing itself. In that case the view mask data includes a pixel processing definition, and the video processor in the video processing device is arranged for executing the pixel processing definition to generate the multiple views. For example, the processing definition is a pixel shader specific to the display attached to the playback device; the pixel shader is then executed on the video processor of the playback device to produce the rendered output. Note that a pixel shader is a compute kernel function that computes the color and other attributes of each pixel. The additional view mask data defining this compute kernel function is transferred between the TV and the rendering/playback device.
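The pixel-shader embodiment can be sketched as follows. Modelling the display-supplied kernel as a plain Python callable, and the particular slanted three-view formula, are illustrative assumptions; in practice the kernel would be a GPU shader conveyed in some standardized form.

```python
# Minimal sketch: the display supplies a per-sub-pixel kernel and the
# player applies it over the output frame.

def apply_pixel_kernel(kernel, width, height, views):
    # kernel(x, y, views) -> drive value for the sub-pixel at (x, y)
    return [[kernel(x, y, views) for x in range(width)]
            for y in range(height)]

# Example kernel for a hypothetical slanted 3-view panel: the view index
# shifts by one sub-pixel per scan line (slant = 1).
def slanted_kernel(x, y, views):
    return views[(x - y) % len(views)][y][x]
```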
The advantage of this approach is that it supports display-specific post-processing and filtering. Figure 8 shows a system architecture using view mask data. A 3D player 78 is coupled to a 3D display 79 for transferring a display signal 77. The 3D display transfers view data, including the view mask data 80, from a property data memory to the 3D player. In HDMI, this display data function is known as EDID 87. In the player, the 3D data 81 is used to render the multiple views 82 based on the view mask data 80. As described above, the views may be further filtered 83 for anti-aliasing and interleaved 84; these functions can now be performed in the video processing device, based on the provided view mask data. The 3D display device 79 may perform crosstalk filtering 85 and perform the display 86. Note that the anti-aliasing and view rendering steps can be moved to the player, while display-specific filtering (for example, to reduce crosstalk) may remain in the display. The player and the display are connected by a link such as HDMI. The player is able to query from the 3D display device the view mask data describing the view masks of the screen and other parameters relevant to the processing of the multiple views. In an embodiment, each view has an associated view mask, the view mask being a binary image in which each sub-pixel indicates, for example by a binary value 1 or 0, whether or not it belongs to that view. Alternatively, since every sub-pixel belongs to exactly one view, a more efficient way to store the view masks of a multi-view display is as a single image in which the value of each sub-pixel is a view index, for example an ordinal value (0..N-1) for N views (typically 9, but possibly many more; experimental displays already have more). Interleaving is then performed by copying, from each view image, only the sub-pixels belonging to that view.
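The view-index representation described above can be sketched as below; the row-major list layout is an illustrative assumption. One index image serves both as the combined view mask and as the driver of interleaving.

```python
# Minimal sketch: a single image whose sub-pixel values are view indices
# 0..N-1 encodes all view masks at once.

def interleave_by_index(views, index_image):
    # copy every sub-pixel from the view its index selects
    return [[views[index_image[y][x]][y][x]
             for x in range(len(index_image[0]))]
            for y in range(len(index_image))]

def masks_from_index(index_image, n_views):
    # recover per-view binary masks (1 = sub-pixel belongs to the view)
    return [[[1 if v == view else 0 for v in row] for row in index_image]
            for view in range(n_views)]
```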
For the various types of lenticular display (for example, basically vertical, slanted, and fractional-view lenticular displays), the structure of the view mask is periodic and depends on the type of lenticular display. To describe the entire view mask it is sufficient to provide:
- the position and color of a reference sub-pixel in a view (xref, yref, cref);
- the distance (Δx), in sub-pixel units (p), between two sub-pixels of the same view on the same scan line;
- the black matrix size, i.e. the definition of the sub-pixel structure;
- the horizontal shift or slant (s) when moving down one scan line; and
- the order of the RGB components.
Hence, in an embodiment, the view mask data comprises: sub-pixel data of a reference sub-pixel in a respective view; a distance between sub-pixel units on a scan line; the size of a black matrix; the order of the sub-pixel colors in the sub-pixel unit; and a slant indicating the position difference between adjacent scan lines. For example, the distance between sub-pixel units on a scan line corresponds to the lens pitch. The lens pitch itself may be indicated in this parameter, i.e. the width of a micro-lens expressed in terms of the sub-pixel pitch (for example, 4.5). The order of the RGB components can also be used for sub-pixel smoothing of glyphs. Furthermore, the view mask data may contain lenticular view configuration metadata: the 3D display device can send its lenticular configuration to the player (or PC). The view mask may be defined by including a limited number of parameters, as discussed in the previous paragraphs. To be more future-proof, an alternative implementation includes the complete view mask in the view mask data. However, since the view mask is usually periodic, only a fraction, i.e. one period, may be sent. Since future lenticular screens may have many views, sending separate per-view masks would require a large amount of bandwidth.
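Expanding such a parametric description into a full view mask can be sketched as below. The exact phase formula is display-specific; this fractional-view variant, built from the lens pitch (in sub-pixel units) and the slant, is an illustrative assumption.

```python
# Minimal sketch: derive the view index of every sub-pixel from a lens
# pitch of `pitch` sub-pixels, a slant `slant` (horizontal shift per scan
# line) and a reference offset x_ref.

def build_view_mask(width, height, n_views, pitch, slant, x_ref=0.0):
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            # phase of this sub-pixel under its lens, in [0, 1)
            phase = ((x - x_ref - slant * y) / pitch) % 1.0
            row.append(int(phase * n_views) % n_views)
        mask.append(row)
    return mask
```

With a slant of zero the mask reduces to vertical stripes of view indices; a non-zero slant shifts the pattern by one sub-pixel per scan line, as for a slanted lenticular.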
In an embodiment, the view mask data comprises, for each sub-pixel, a value encoding the view number that the sub-pixel represents. Additionally, the view mask data may contain the numbering of the pixels combined with the reported physical display size, the orientation of the viewing cones, and the optimal viewing distance, which together provide sufficient information to render the 3D data correctly. In an embodiment, the view mask data comprises at least one of:
- pixel structure data indicating the positions of the pixels of a respective view;
- a display type indicator indicating the configuration of a lenticular display;
- multi-view data indicating properties of the multiple views;
- mask period data indicating properties of a repeating pattern of the pixels assigned to the respective views;
- sub-pixel data indicating a structure of the sub-pixels for the respective colors;
- lens data indicating the arrangement of a lens configured on the pixels of the display.
Examples of the above view mask data are discussed below with reference to Figures 9 to 13. The view mask can be added to the E-EDID specification by defining a new CEA data block. For example, one of the Extended Tag Codes reserved for video-related blocks (i.e. value 6) may be used for a "3D View Mask Data Block". Figure 9 shows a 3D view mask data block. The figure shows a table of the data block, which is formatted according to the CEA-861-E standard, as indicated for the various fields in the table. A new field (bytes 0, 1, 32, 33, 64, 65) indicates the new Extended Tag Code and identifies the type of data in the data block. The field at byte 2 defines the number of views. The field at bytes 3-4 defines the size (height and width) of one period of the view mask, i.e. of its repeating pattern. The parameters in bytes 5-6 provide an example of view data relevant to the rendering of the multiple views.
Part of the data block has a variable size, based on the number of views and the size of the view mask. Bytes 7-31 and 34-63 are defined by the tables of Figures 10 and 11, and fields 66-76 and 77-85 by the tables of Figures 12 and 13, respectively. Figure 10 shows a view description. A table 91 gives the description of one view, i.e. a length parameter and the displacement of the view at the optimal viewing distance of the center pixel. The table may be repeated for each view. Figure 11 shows view mask data for sub-pixels. A table 92 gives the view numbers for a group of sub-pixels of the respective colors; the value of each sub-pixel provides the view mask. Figure 12 shows a sub-pixel structure, also referred to as the black matrix. A table 93 gives the parameters defining the pixel structure. The pixel structure parameter may be an identification code into a table stored in the rendering device, the table providing a mapping between the identification code and the pixel structure. The pixel layout parameter may similarly be an identification code into a table stored in the rendering device, providing a mapping between the identification code and the pixel layout (i.e. RGB or BGR, V-RGB, etc.). Figure 13 shows lens configuration data, which may be included in the view mask data. A table 94 gives the lens parameters. The lens type may be an identification code 0-255 into a table stored in the rendering device, providing a mapping between the lens type and the type of "lens" used (i.e. barrier, lenticular lens, micro-lens array, etc.). The lens parameter is an identification code 0-255 into a table stored in the rendering device, providing a mapping between the lens parameter identification code and specific lens characteristics (shape, viewing angle, etc.).
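Parsing the fixed header of such a data block can be sketched as below, following the Figure 9 layout described above (byte 2: number of views; bytes 3-4: height and width of one mask period). The extended tag value 6 (the reserved video-related code mentioned in the text) and the byte order are assumptions for illustration.

```python
# Minimal sketch of parsing the proposed 3D View Mask Data Block header.

def parse_view_mask_block(block):
    tag = block[0] >> 5                 # CEA data block: tag in bits 7..5
    length = block[0] & 0x1F            # payload length in bits 4..0
    if tag != 7:                        # 7 = "use extended tag" in CEA-861
        raise ValueError("not an extended data block")
    extended_tag = block[1]             # hypothetical 3D view mask code
    n_views = block[2]
    period_h = block[3]
    period_w = block[4]
    return {"extended_tag": extended_tag, "views": n_views,
            "period": (period_w, period_h), "length": length}
```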
This value depends on the value of the lens type. Note that the depth capabilities of displays may differ between designs, for example due to the thickness of a glass plate in front of the pixels, or of laminated or glued lenses. Therefore, in an embodiment, the view data is extended to include depth parameters indicating the depth capabilities of the 3D display, and the video processor is arranged for adapting the multiple views based on these depth parameters. By applying this view data, the video processor can adjust the multiple views to the depth capabilities. In an embodiment, user preference metadata may be included in the view data. People have different preferences regarding depth in 3D video: some like a large amount of depth, while others prefer little depth. The same applies, for example, to the crispness of the depth and to the zero plane. The view data is therefore extended to convey user parameters indicating settings for 3D viewing; by applying this view data, the video processor can adjust the multiple views to the user preferences. In an embodiment, the video processing device is arranged for including depth metadata in the display signal towards the 3D display device. The depth metadata may be a parameter indicating the minimum depth of the current video information, or a depth map indicating the depths occurring in various parts of the screen. Note that this depth metadata relates to the combined main and auxiliary data as processed in the video processing device. The depth metadata enables the 3D display to position further auxiliary data, such as a menu or a button, in front of any other data present in the 3D video information. It is noted that the invention may be implemented in hardware and/or software using programmable components. A method of implementing the invention has steps corresponding to the functions defined for the system described with reference to Figure 1.
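The depth-capability adaptation mentioned above can be sketched as a remapping of content depth into the display's reported range. The linear compression around the zero plane (0.5) is an illustrative assumption; the document only states that the views are adapted based on the depth parameters.

```python
# Minimal sketch: compress depth values around the zero plane by the
# ratio of the display's depth range to the content's depth range.

def adapt_depth(depth_map, content_range, display_range):
    scale = min(1.0, display_range / content_range)
    return [0.5 + (d - 0.5) * scale for d in depth_map]
```

The adapted map would then be used when rendering the multiple views, so that no view exceeds the display's depth capability.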
It will be appreciated that, for clarity, the above description has referred to different functional units and processors. However, any suitable distribution of functionality between different functional units or processors may be used without detracting from the invention. For example, functionality illustrated as being performed by separate units, processors or controllers may be performed by the same processor or controller. Hence, references to specific functional units are only to be seen as references to suitable means for providing the described functionality. The invention can be implemented in any suitable form, including hardware, software, or any combination of these. It is noted that in this document the word "comprising" does not exclude the presence of elements or steps other than those listed,
and the word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements; any reference signs do not limit the scope of the claims; the invention may be implemented by means of both hardware and software; and several "means" or "units" may be represented by the same item of hardware or software, while a processor may fulfill the function of one or more units, possibly in cooperation with hardware elements. Further, the invention is not limited to the embodiments, and the invention lies in each and every novel feature or combination of features described above or recited in mutually different dependent claims.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows a system for processing 3D video information;
Figure 2 shows a 3D display providing multiple views;
Figure 3 shows a lenticular screen;
Figure 4 shows the generation of multiple views;
Figure 5 shows a view mask of a display;
Figure 6 shows a view rendering process;
Figure 7 shows a 3D player model;
Figure 8 shows a system architecture using view mask data;
Figure 9 shows a 3D view mask data block;
Figure 10 shows a view description;
Figure 11 shows view mask data for sub-pixels;
Figure 12 shows a sub-pixel structure; and
Figure 13 shows lens configuration data.
LIST OF REFERENCE SIGNS
21 display panel; 22 viewer; 23 multiple views; 24 pixels; 31 display panel; 32 lenticular lens; 33 light beams; 34 multiple views; 35 viewer; 40 demultiplexing step; 41 video data; 43 control parameters; 47 input signal; 51 display panel; 52 sub-pixel rows; 53 lenticular lens; 54 view; 55 bold rectangles; 61 3D data content; 71 video information; 72 first plane; 73 graphics; 74 on-screen display (OSD); 75 compositing; 77 display signal; 78 3D player; 79 3D display; 80 view mask data; 81 3D data; 82 multiple views; 83 anti-aliasing filtering; 84 interleaving; 85 crosstalk filtering; 86 display; 87 display identification data (EDID); 91-94 tables; 100 video processing device; 101 input unit; 102 network interface unit; 103 optical disc unit; 104 network; 105 optical record carrier; 106 video processor; 107 display interface unit; 110 3D display signal; 120 autostereoscopic 3D display device; 121 display interface unit; 122 display processor; 123 3D display; 124 display depth range.

Claims (1)

1. A video processing device for processing three-dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data, the device (100) comprising:
input means (101, 102, 103) for receiving the 3D video data according to an input format;
a video processor (106) for processing the 3D video information and generating a 3D display signal, the 3D display signal representing the 3D video data and the auxiliary data according to a display format; and
a display interface (107) for interfacing with a 3D display device (120) to transfer the 3D display signal (110),
the video processing device being arranged to receive, from the 3D display device, view data comprising view mask data, the view mask data defining a pixel arrangement of multiple views displayed by the 3D display device, and
the video processor being arranged to generate the multiple views according to the view mask data and to include the multiple views in the display signal, the display format being different from the input format.

2. The video processing device as claimed in claim 1, wherein the display interface (107) is arranged to receive the view data comprising the view mask data from the 3D display device (120) via the 3D display signal (110).

3. The video processing device as claimed in claim 2, wherein the display interface (107) is a High-Definition Multimedia Interface [HDMI] arranged to receive the view data comprising the view mask data from the 3D display device via Enhanced Extended Display Identification Data [E-EDID].

4. The video processing device as claimed in claim 1, wherein the view mask data comprise at least one of:
pixel structure data indicating a position of the pixels of a respective view;
a display type indicator indicating the configuration of a lenticular display;
multi-view data indicating properties of the multiple views;
mask period data indicating properties of a repeating pattern of pixels assigned to respective views;
sub-pixel data indicating a structure of sub-pixels for respective colors;
lens data indicating the arrangement of a lens configured over the pixels of the display.

5. The video processing device as claimed in claim 1, wherein the view mask data comprise:
sub-pixel data of a reference sub-pixel in a respective view;
a distance between sub-pixel units on a scan line;
a size of a black matrix;
an order of the sub-pixel colors in the sub-pixel unit;
a slant indicating a position difference between adjacent scan lines.

6. The video processing device as claimed in claim 1, wherein the view mask data comprise a pixel processing definition, and the video processor (106) is arranged to execute the pixel processing definition to generate the multiple views.

7. The video processing device as claimed in claim 1, wherein the view data comprise at least one of:
user parameters indicating settings for 3D viewing;
depth parameters indicating the depth capabilities of the 3D display;
and wherein the video processor is arranged to adapt the multiple views based on the parameters.

8. A display device for displaying three-dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data, the device (120) comprising:
an interface for interfacing with a video processing device to transfer a 3D display signal (110) representing the 3D video data and the auxiliary data according to a display format;
a 3D display (123) for displaying multiple views, a pair of different views being arranged to be perceived by the respective eyes of a viewer; and
a display processor (122) for providing, based on the 3D display signal, a display control signal representing the multiple views to the 3D display,
the display device being arranged to transfer view data comprising view mask data to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
the display processor being arranged to provide the display control signal based on retrieving from the display signal the multiple views according to the view mask data.

9. A 3D display signal for transferring three-dimensional [3D] video information via an interface between a video processing device and a display device, the 3D video information comprising 3D video data and auxiliary data, the 3D display signal representing the 3D video data and the auxiliary data according to a display format, the display device comprising a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by the respective eyes of a viewer, the 3D display signal comprising:
view data comprising view mask data, to be transferred from the display device to the video processing device, the view mask data defining a pixel arrangement of the multiple views; and
the multiple views according to the view mask data, to be transferred from the video processing device to the display device.

10. A method of transferring three-dimensional [3D] video information via an interface between a video processing device and a display device, the 3D video information comprising 3D video data and auxiliary data, the display device comprising a 3D display for displaying multiple views, a pair of different views being arranged to be perceived by the respective eyes of a viewer, the method comprising:
processing the 3D video information and generating a 3D display signal, the 3D display signal representing the 3D video data and the auxiliary data according to a display format; and
transferring the 3D display signal via the interface to the display device;
the method further comprising:
transferring, from the display device to the video processing device, view data comprising view mask data, the view mask data defining a pixel arrangement of the multiple views; and
including the multiple views according to the view mask data in the 3D display signal.

11. A method of processing three-dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data, the method comprising:
receiving the 3D video data according to an input format;
processing the 3D video information and generating a 3D display signal, the 3D display signal representing the 3D video data and the auxiliary data according to a display format; and
interfacing with a 3D display device (120) to transfer the 3D display signal (110);
wherein the receiving is arranged to receive, from the 3D display device, view data comprising view mask data, the view mask data defining a pixel arrangement of multiple views to be displayed by the 3D display device, and
the generating is arranged to generate the multiple views according to the view mask data and to include the multiple views in the display signal, the display format being different from the input format.

12. A method of displaying three-dimensional [3D] video information, the 3D video information comprising 3D video data and auxiliary data, the method comprising:
interfacing with a video processing device (100) to transfer a 3D display signal (110) representing the 3D video data and the auxiliary data according to a display format;
displaying multiple views, a pair of different views being arranged to be perceived by the respective eyes of a viewer; and
providing, based on the 3D display signal, a display control signal representing the multiple views to the 3D display;
wherein the interfacing is arranged to transfer view data comprising view mask data to the video processing device, the view mask data defining a pixel arrangement of the multiple views, and
the providing is arranged to provide the display control signal based on retrieving from the display signal the multiple views according to the view mask data.

13. A computer program product for processing three-dimensional [3D] video information, the program being operative to cause a processor to perform the method as claimed in any one of claims 10, 11 or 12.
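The view mask fields recited in claims 4 and 5 (a mask period, a slant between adjacent scan lines, and a reference sub-pixel) are exactly what a renderer needs to decide which view each sub-pixel belongs to. A minimal sketch of such an assignment, assuming a classic slanted-lenticular mapping; the names `ViewMask` and `view_of_subpixel` and the exact formula are illustrative and not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class ViewMask:
    """Illustrative container for a few view-mask fields (cf. claims 4-5)."""
    num_views: int   # number of views in the repeating pattern (mask period)
    slant: float     # horizontal shift of the pattern per scan line, in sub-pixels
    x_offset: float  # column of the reference sub-pixel of view 0

def view_of_subpixel(mask: ViewMask, x: int, y: int) -> int:
    """Map the sub-pixel at column x (counting R, G, B separately) on
    scan line y to the view index it is assigned to."""
    return int(x - mask.x_offset + mask.slant * y) % mask.num_views

mask = ViewMask(num_views=9, slant=1.5, x_offset=0.0)
# Neighbouring sub-pixels on one scan line cycle through the views:
assert [view_of_subpixel(mask, x, 0) for x in range(9)] == [0, 1, 2, 3, 4, 5, 6, 7, 8]
# The slant shifts the repeating pattern between scan lines:
assert view_of_subpixel(mask, 0, 2) == 3
```

Because this mapping depends on panel-specific values (period, slant, reference position), the claims have the display send them to the video processing device, rather than hard-coding them in the player.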
TW100124502A 2010-07-12 2011-07-11 Signaling for multiview 3D video TW201215102A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP10169210 2010-07-12

Publications (1)

Publication Number Publication Date
TW201215102A true TW201215102A (en) 2012-04-01

Family

ID=44583206

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100124502A TW201215102A (en) 2010-07-12 2011-07-11 Signaling for multiview 3D video

Country Status (2)

Country Link
TW (1) TW201215102A (en)
WO (1) WO2012007867A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI630815B (en) 2012-06-14 2018-07-21 杜比實驗室特許公司 Depth map delivery formats for stereoscopic and auto-stereoscopic displays
MX345274B (en) * 2012-10-25 2017-01-24 Lg Electronics Inc Method and apparatus for processing edge violation phenomenon in multi-view 3dtv service.
EP2765774A1 (en) * 2013-02-06 2014-08-13 Koninklijke Philips N.V. System for generating an intermediate view image
EP2949121B1 (en) 2013-02-06 2020-07-15 Koninklijke Philips N.V. Method of encoding a video data signal for use with a multi-view stereoscopic display device
EP2765775A1 (en) * 2013-02-06 2014-08-13 Koninklijke Philips N.V. System for generating intermediate view images
US10212532B1 (en) 2017-12-13 2019-02-19 At&T Intellectual Property I, L.P. Immersive media with media device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007072289A2 (en) * 2005-12-20 2007-06-28 Koninklijke Philips Electronics N.V. Autostereoscopic display device
US20110037830A1 (en) * 2008-04-24 2011-02-17 Nokia Corporation Plug and play multiplexer for any stereoscopic viewing device
JP5642695B2 (en) * 2008-11-24 2014-12-17 コーニンクレッカ フィリップス エヌ ヴェ 3D video player with flexible output

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428354A (en) * 2019-06-25 2019-11-08 福建华佳彩有限公司 The panel method of sampling, storage medium and computer
CN110428354B (en) * 2019-06-25 2023-04-07 福建华佳彩有限公司 Panel sampling method, storage medium and computer
WO2022225977A1 (en) * 2021-04-19 2022-10-27 Looking Glass Factory, Inc. System and method for displaying a three-dimensional image
US11736680B2 (en) 2021-04-19 2023-08-22 Looking Glass Factory, Inc. System and method for displaying a three-dimensional image

Also Published As

Publication number Publication date
WO2012007867A1 (en) 2012-01-19

Similar Documents

Publication Publication Date Title
TWI444036B (en) 2d to 3d user interface content data conversion
US20160154563A1 (en) Extending 2d graphics in a 3d gui
KR101749893B1 (en) Versatile 3-d picture format
TW201215102A (en) Signaling for multiview 3D video
US9083963B2 (en) Method and device for the creation of pseudo-holographic images
EP2235685B1 (en) Image processor for overlaying a graphics object
EP2362671B1 (en) 3d display handling of subtitles
JP5809064B2 (en) Transfer of 3D image data
US20100091012A1 (en) 3 menu display
TWI508521B (en) Displaying graphics with three dimensional video
EP2347597B1 (en) Method and system for encoding a 3d image signal, encoded 3d image signal, method and system for decoding a 3d image signal
US20110298795A1 (en) Transferring of 3d viewer metadata
US20100045780A1 (en) Three-dimensional video apparatus and method providing on screen display applied thereto
US20110293240A1 (en) Method and system for transmitting over a video interface and for compositing 3d video and 3d overlays
CN103155579B (en) 3D rendering display device and display packing thereof
EP2400766A2 (en) Method and apparatus for processing video image
JP2011508557A5 (en)
KR101314601B1 (en) apparatus for transmitting contents, apparatus for outputting contents, method for transmitting contents and method for outputting contents
CN102439553B (en) Apparatus and method for reproducing stereoscopic images, providing a user interface appropriate for a 3d image signal
US20120249872A1 (en) Video signal processing apparatus and video signal processing method