TW201230817A - Method and system for producing video archive on film - Google Patents

Method and system for producing video archive on film Download PDF

Info

Publication number
TW201230817A
TW201230817A TW100137382A TW100137382A TW201230817A TW 201230817 A TW201230817 A TW 201230817A TW 100137382 A TW100137382 A TW 100137382A TW 100137382 A TW100137382 A TW 100137382A TW 201230817 A TW201230817 A TW 201230817A
Authority
TW
Taiwan
Prior art keywords
video
film
color
data
archive
Prior art date
Application number
TW100137382A
Other languages
Chinese (zh)
Inventor
Chris Scott Kutcka
Joshua Pines
Original Assignee
Thomson Licensing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing filed Critical Thomson Licensing
Publication of TW201230817A publication Critical patent/TW201230817A/en

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/84Television signal recording using optical recording
    • H04N5/87Producing a motion picture film from a television signal
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B21/00Projectors or projection-type viewers; Accessories therefor
    • G03B21/10Projectors with built-in or built-on screen
    • G03B21/11Projectors with built-in or built-on screen for microfilm reading
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B27/00Photographic printing apparatus
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing
    • G11B20/12Formatting, e.g. arrangement of data block or words on the record carriers
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing
    • G11B20/12Formatting, e.g. arrangement of data block or words on the record carriers
    • G11B20/1261Formatting, e.g. arrangement of data block or words on the record carriers on films, e.g. for optical moving-picture soundtracks
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B23/00Record carriers not specific to the method of recording or reproducing; Accessories, e.g. containers, specially adapted for co-operation with the recording or reproducing apparatus ; Intermediate mediums; Apparatus or processes specially adapted for their manufacture
    • G11B23/38Visual features other than those contained in record tracks or represented by sprocket holes the visual signals being auxiliary signals
    • G11B23/40Identifying or analogous means applied to or incorporated in the record carrier and not intended for visual display simultaneously with the playing-back of the record carrier, e.g. label, leader, photograph
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B7/00Recording or reproducing by optical means, e.g. recording using a thermal beam of optical radiation by modifying optical properties or the physical structure, reproducing using an optical beam at lower power by sensing optical properties; Record carriers therefor
    • G11B7/002Recording, reproducing or erasing systems characterised by the shape or form of the carrier
    • G11B7/003Recording, reproducing or erasing systems characterised by the shape or form of the carrier with webs, filaments or wires, e.g. belts, spooled tapes or films of quasi-infinite extent
    • G11B7/0032Recording, reproducing or erasing systems characterised by the shape or form of the carrier with webs, filaments or wires, e.g. belts, spooled tapes or films of quasi-infinite extent for moving-picture soundtracks, i.e. cinema
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11BINFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10Digital recording or reproducing
    • G11B20/12Formatting, e.g. arrangement of data block or words on the record carriers
    • G11B2020/1291Formatting, e.g. arrangement of data block or words on the record carriers wherein the formatting serves a specific purpose
    • G11B2020/1298Enhancement of the signal quality

Abstract

A method and system are disclosed for archiving video content to film and recovering the video from the film archive. Video content and a characterization pattern associated with the content are provided as encoded data, which is recorded onto a film and processed to produce a film archive. By encoding the video data using a non-linear transformation between video codes and film density codes, the resulting film archive allows a film print to be produced at a higher quality compared to other film archive techniques. The characterization pattern contains spatial, temporal and colorimetric information relating to the video content, and provides a basis for recovering the video content from the film archive.

Description

201230817 六、發明說明: 【發明所屬之技術領域】 本發明係關於一種建立視訊内容之影片存檔及自影片存 檔恢復視訊内容之方法及系統。 本專利申請案主張均在2010年1〇月15曰申請之美國臨時 專利申請案第6^93,865號「用於在影片中製作視訊存檔 之方法及系統」(「Method and System f0r Producing Vide〇 Archive on Filmj )及美國臨時專利申請案第6i/393 858號 「將視訊存檔至影片之方法及系統」(「Meth〇d and System of Archiving Video t〇 Film」)之優先權之利益。兩個臨時 專利申請案之教示全文以引用方式明確併入本文中。 【先前技術】 雖然存在許多可用於存檔目的之媒體格式’但影片存檔 仍具有超越其他格式之優點,包含超過5〇年之一經證實存 檔奇命。除了降級問題之外,其他媒體(諸如錄影帶及數 位格式)亦可變得過時,潛在關注係關於用於讀取磁性或 數位格式之設備將來是否仍可用。 用於將視訊傳送至影片之傳統方法涉及拍攝一顯示監視 器上之視讯内容。在一些情況中,此意指著透過分開彩色 濾光片拍攝顯不在一黑白監視器上之彩色視訊。結果係視 訊影像之一照片。一電視電影(telecine)用於自存檔照片擷 取或恢復視訊影像。藉由一視訊攝影機觀看影片之每一圖 框’且可現場直播或記錄所得視訊影像。此存檔及擷取過 程之缺點係最終視訊係與原始視訊不相同之「一視訊顯示 159347.doc V'- &lt; 201230817 照片之一視訊攝影機之影像」β 自此類型影片存檔恢復視訊内容通常需要手動、藝術干 預以恢復顏色及原始影像品質。儘管那樣,恢復的視訊經 常展現空間、時間及/或色度假像。在視訊顯示之拍攝捕 獲或拍攝存檔之視訊攝影機捕獲中,空間假像可歸因於不 同原因而出現,例如,在顯示視訊影像中若存在任何空間 失準。 時間假像可歸因於捕獲相鄰線對之時間之差異而起因於 一交錯視訊顯示之照片。在視訊圖框速率及影片圖框速率 並非1:1之情況中,影片影像可產生起因於圖框速率不匹 配(例如,電視電影顫抖)之時間假像。舉例而言,此可發 生在影片具有24圖框每秒伽)之—圖框速率且視訊具有⑼ fps(在美國)或50 fps(在歐洲)之一圖框速率時且一影片 之一圖框重複兩個或兩個以上視訊圖框。 此外’由於顯示器、影片與視訊攝影機之間之位變異構 而引入色度假像,即,由顯示哭本止 田.·廣不态產生的不同顏色可顯現為 與影片相同的顏色,且存檔影片φ 廿化〜月中之不同顏色可顯現為與 該視訊攝影機相同的顏色。 【發明内容】 先刚技術方法中之此等問題在本發明之一方法中被克 服’其中影片媒體之動態範圍用於以一自編文件、可準確 恢復、抵抗降級及人類可讀格式維持數位視訊資料。根據 本發明,ϋ由基於-非線性關係(例如,使用一顏色杳詢 表)將至少該數位視訊資料編竭為影片密度碼,及提供盘 159347.doc 201230817 該視訊資料相關聯之用於解碼存檔的一特徵化圖案來建立 一影片存檔。可利用或不利用該顏色查詢表編碼該特徵化 圖案。所得存檔具有適合於與電視電影或一影片印刷機搭 配使用之足夠品質以用於產生極為接近原始視訊之影片影 像之一視訊’同時相比於原始視訊允許該視訊恢復為具有 可忽略之空間、時間及色度假像,且對於顏色恢復或色域 再映射不需要人類干預。 本發明之一態樣提供一種用於將視訊内容存檔至影片上 之方法,該方法包含:藉由基於一非線性變換至少將數位 視訊資料轉換成影片密度碼來編碼該數位視訊資科;.提供 包含經編碼數位視訊資料及與該數位視訊資料相關聯之一 特徵化圖案之經編碼資料;根據該等影片密度碼將該經編 碼資料記錄在影片上;及自具有該記錄的經編碼資料之影 片製作一影片存檔》 本發明之另一態樣提供一種用於自一影片存檔恢復視訊 内谷之方法,5亥方法包含:掃描含有編碼為基於影片的資 料之數位視訊資料及與該數位視訊資料相關聯之一特徵化 圖案的該影片存檔之至少一部分;其中藉由一非線性變換 將。亥數位視訊資料編碼為基於影片的資料;及基於該特徵 化圖案中含有的資訊解碼該影片存檔。 本發明之又一態樣提供一種用於將視訊内容存檔至影片 上之系統’該系統包含:-編碼器’其用於產生含有對應 於數位視訊資料之基於影片的資料及與該視訊資料相關聯 之一特徵化圖案之經編碼資料,其中該數位視訊資料及該 159347.doc • 6 · 201230817 特徵化圖案之像素值係藉由一非線性變換而編碼為該基於 影片的資料;一影片記錄器,其用於將該經編碼資料記錄 在一影片上;及一影片處理器,其用於處理該影片以製作 一影片存檔。 本發明之又一態樣提供一種用於自一影片存檔恢復視訊 内容之系統,該系統包含:一影片掃描器,其用於掃描該 影片存檔以產生基於影片的資料;一解碼器,其用於識別 來自該基於影片的資料之一特徵化圖案且用於基於該特徵 化圖案解碼該基於影片的資料以產生用於在恢復該視訊内 谷中使用之視訊資料;其中該基於影片的資料係藉由一非 線性變換而與該視訊資料相關。 【實施方式】 藉由結合隨附圖式考慮以下詳細描述可容易理解本發明 之教示。 本發明提供一種用於製作視訊内容之一影片存檔並且用 於自該存槽恢復該視訊内容之方法及系統。編碼視訊資 料接著將其連同與該視訊資料相關聯之一特徵化圖案一 起記錄在影片上,其允許恢復原始視訊資料。該視訊資料 經編碼使得自該影片存檔產生之電視電影或影片印刷品可 產生較接近該原始視訊之一視訊或影片影像,僅輕微損及 該原始視tfl資料之可恢復性。舉例㈣,對於該視訊資料 之至少部分可能在量化雜訊上有增加。在一些實施例中, 對於該視訊㈣之-些部分可能在量化雜訊㈣小,但整 體存在淨增加。當使該影片顯影時,所得影片提供一存槽201230817 VI. Description of the Invention: [Technical Field of the Invention] The present invention relates to a method and system for creating a video archive of video content and restoring video content from a video archive. U.S. Patent Application Serial No. 6/93,865, entitled "Method and System for Creating Video Archives in Films", filed on January 15th, 2010. ("Method and System f0r Producing Vide〇Archive" On Filmj) and U.S. Provisional Patent Application No. 6i/393,858, entitled "Meth〇d and System of Archiving Video t〇Film". The teachings of both Provisional Patent Applications are hereby expressly incorporated by reference in their entirety. [Prior Art] Although there are many media formats that can be used for archiving purposes, but video archiving still has advantages over other formats, including one of more than 5 years of proven storage. In addition to downgrade issues, other media (such as videotapes and digital formats) can become obsolete, with the potential concern that devices for reading magnetic or digital formats will still be available in the future. The traditional method for transmitting video to a video involves taking a video content on a display monitor. In some cases, this means taking a color video that is not displayed on a black and white monitor by separating the color filters. The result is a photo of the video image. A telecine is used to capture or restore video images from an archived photo. The video frame is viewed by a video camera and the resulting video image can be broadcast live or recorded. The shortcomings of this archiving and retrieval process are that the final video system is different from the original video. "One video display 159347.doc V'- &lt; 201230817 One video camera image" β Recovering video content from this type of video archive usually requires Manual, artistic intervention to restore color and original image quality. In spite of this, recovered video often exhibits space, time and/or color vacation images. In video camera captures where video capture captures or captures an archive, spatial artifacts can occur for different reasons, such as any spatial misalignment in displaying video images. Time artifacts can be attributed to a photo of an interlaced video display due to the difference in time to capture adjacent pairs. In the case where the video frame rate and the movie frame rate are not 1:1, the movie image may produce a time artifact resulting from a frame rate mismatch (e.g., tv movie trembling). For example, this can occur when the film has a frame rate of 24 frames per second—the frame rate and the video has a frame rate of (9) fps (in the US) or 50 fps (in Europe) and a picture of a movie The box repeats two or more video frames. In addition, due to the metamorphism between the display, the film and the video camera, the color holiday image is introduced, that is, the different colors produced by the display of the crying and the end of the field can be displayed as the same color as the film, and the archived movie The different colors of φ 廿 〜 ~ Month can be displayed in the same color as the video camera. SUMMARY OF THE INVENTION These problems in the prior art method are overcome in one of the methods of the present invention. The dynamic range of the film medium is used to maintain the digits in a self-documenting, accurately recoverable, resistant to degradation and human readable format. Video material. According to the present invention, at least the digital video data is compiled into a film density code by a non-linear relationship (for example, using a color query table), and a disk 159347.doc 201230817 is provided for the video data associated with the decoding. A characterization of the archive is used to create a movie archive. The characterization pattern can be encoded with or without the color lookup table. The resulting archive has sufficient quality suitable for use with a television movie or a film printer for producing a video image that is very close to the original video image while allowing the video to be restored to a negligible space compared to the original video, Time and color vacation images, and no human intervention is required for color recovery or gamut remapping. One aspect of the present invention provides a method for archiving video content onto a movie, the method comprising: encoding the digital video constellation by converting at least digital video data into a film density code based on a non-linear transformation; Providing encoded data comprising encoded digital video material and a characterization pattern associated with the digital video material; recording the encoded data on the video based on the film density codes; and encoding the encoded data from the recording </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> </ RTI> At least a portion of the video archive of one of the characterization patterns associated with the video material; wherein by a non-linear transformation. The digital video data is encoded as video-based data; and the video archive is decoded based on information contained in the characterization pattern. Yet another aspect of the present invention provides a system for archiving video content onto a movie. The system includes: an encoder for generating video-based material corresponding to digital video data and associated with the video material Combining the encoded data of the characterization pattern, wherein the digital video data and the pixel value of the 159347.doc • 6 · 201230817 characterization pattern are encoded into the film-based data by a nonlinear transformation; And for recording the encoded material on a movie; and a video processor for processing the movie to create a movie archive. Yet another aspect of the present invention provides a system for restoring video content from a movie archive, the system comprising: a video scanner for scanning the movie archive to generate film-based material; a decoder for use Identifying a characterization pattern from the video-based material and for decoding the video-based material based on the characterization pattern to generate video material for use in restoring the video intra-valley; wherein the video-based material is Associated with the video material by a non-linear transformation. The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings. The present invention provides a method and system for creating a video archive of video content and for recovering the video content from the storage slot. The encoded video material is then recorded on the film along with a characterization pattern associated with the video material, which allows the original video material to be restored. The video material is encoded such that the television movie or video print produced from the video archive can produce a video or video image that is closer to the original video, and only slightly detracts from the recoverability of the original video. For example (4), at least part of the video data may be increased in quantization noise. In some embodiments, some of the portions of the video (4) may be small in the quantization noise (4), but there is a net increase in the whole. When the film is developed, the resulting movie provides a storage slot

S 159347.doc 201230817 品質儲存媒體,該儲存媒體可透過一電視電影讀取或經攝 影印刷。當掃描該存檔用於恢復時,該特徵化圖案提供用 於解碼影片圖框至視訊之基礎。即使出現數十年的影片染 料之褪色,影片圖框掃描資料之隨後解碼仍然產生類似於 原始視訊之視訊。 不像演現視訊内容作為記錄在影片上之一圖片之先前技 術(例如,藉由使用一顯像管或電影攝影機拍下顯示在一 監視器上之每一視訊圖框之一圖片),本發明之存檔製作 系統將視訊信號處理為數字㈣,可藉由制該特徵化圖 案實質上準確恢復該數字資料。 圖1 A展示本發明之一影片存檔系統100之一實施例,其 包含:一編碼器112,其用於提供含有視訊内容1〇8及一特 歡化圖案11〇之-經編碼槽案114; _影片記錄器ιΐ6,其 用於s己錄s亥經編碼檔案;及一影片處理器丨24,其用於處 理該經記錄檔案並且製作該視訊内容之一影片存檔以6。 如本文使用,結合編碼器1丨2之整體活動,術語「編碼」 包含自視訊資料格式變換為影片資料格式(例如,自 709碼(表示三個視訊顯示原色之分率貢獻)變換為影片密度 碼(表示具有〇至1〇23範圍内之值之一底片中之三種染料之 各自密度’例如,Cineon碼))及空間與時間格式化(例如, 視訊資料丨08及特徵化圖案110中之像素被映射至該影片記 錄器116之影像空間中之適當像素)。在此背景中f時間格 式化指根據該視訊資料之時間序列’像素自視訊映射至影 片影像㈣,例如’視訊中之連續圖片映射至影片之連續 159347.doc 201230817 圖框中。對於漸進式視訊,個別視訊圖框記錄為單一影片 圖框,而交錯視訊記錄為分開圖場(例如,形成一圖場之 奇數列像素及形成另一圖場之偶數列像素),一圖框之分 開圖場記錄在相同影片圖框中。 經由一視訊源104提供原始視訊内容1 〇2至該系統丨〇〇。 此内容之實例包含當前儲存在錄影帶上之電視節目(呈數 位或類比形式)。適合與原始視訊内容1 〇2之格式搭配使用 之該視訊源104(例如,一錄影帶播放器)提供該内容至視訊 數位化器10 6以產生視訊資料1 〇 8。在一實施例中,視訊資 料108係或可轉換為RGB(紅色、綠色、藍色)碼值,此係因 為其等相比於其他格式引起可忽略的假像。雖然視訊資料 108可以非RGB格式提供至該編碼器112(例如,作為明度 值及色度值)’但使用此等格式之存檔及視訊轉換過程中 之各種缺點及争擾可在所恢復視訊中引入假像。 可由數位化器106以不同視訊格式提供視訊資料1〇8,舉 例而言’包含高清晰度格式(諸如「Rec 7〇9」),其提供 用於使用數字值編碼視訊像素之一協定。根據Rec 7〇9標 準(由瑞士日内瓦城之國際電信聯盟無線電通信部門或 ITU-R出版之Rec〇mmendati〇n bT7〇9),一相容視訊顯示 將對該視訊資料應用一 2.4功率函數(亦稱為具有2·4之一伽 馬(gamma)) ’使得具有一RGB碼值χ(例如來自數位化器 106)之一像素在適當顯示時將產生與χ2.4成比例之一光輸 出。其他視訊標準提供其他功率函數,舉例而言,遵從 SRGB標準之一監視器將具有2.2之一伽馬。若來自源之視 159347.doc 201230817 訊内容已經以數位形式提供,例如,專業等級錄影帶播放 器上之SDI視訊輸出(「串列數位介面」),則可省略該視 訊數位化器106。 在一些組態中,該原始視訊内容丨〇2可表示為明度值及 色度值,即,為YCrCb碼(或,用於一類比表示,Yprpb)或 可轉譯為RGB碼值之其他編碼。此外,原始視訊内容1〇2 可經子取樣,舉例而言4:2:2(其中對於每四個像素,明度 「γ」由四個樣本表示,而色度分量「Cr」及「Cb」各自 僅經兩次取樣),使所需要頻寬減小1/3 ,而不明顯影響影 像品質。 與該内容之該視訊資料相關聯且下文結合圖4A至圖4b 更詳細闡述之特徵化圖案110被提供至一編碼器112以在建 立一存檔時創建該存檔之空間、色度及/或時間組態(或此 等組態之至少一者)。 此外’ 一顏色查詢表(CLUT)128被提供至編碼器112,該 編碼器112根據特徵化圖案110及乩1^ 128編碼視訊資料 1〇8。該視訊資料係使用cLut 128予以編碼或處理,該 cLUT 128提供用於將視訊資料自數位視訊碼轉換為影片密 度碼之一非線性變換。經編碼檔案114含有經編碼視訊資 料及特徵化圖案110 ’可利用或不利用cLUT 128處理或編 碼該特徵化圖案110,如下文結合圖5及圖7闡述。在該經 編碼檔案中亦可能包含該特徵化圖案之僅一部分,只要存 在足夠資訊可用於一解碼器解碼該影片存檔即可。 在經編碼檔案114中,特徵化圖案11〇可定位在該經編碼 159347.doc •10- 201230817 視訊資料前面(例如’如圖4A至圖4B中)或可提供在與該經 編碼視訊資料相同的影像圖框中(圖中未展示)^在此方法 中使用一 cLUT(或者更一般而言,一非線性變換)導致最適 合於製作相對高品質之一影片印刷品之一影片存檔。此一 影片印刷品可經投射用於與自該影片存檔恢復之視訊内容 之視覺比較(若龠要)。 藉由編碼器112之空間及時間編碼存在於特徵化圖案! 1〇 中,其指不視訊資訊之每一圖框在該存檔之每一圖框中被 發現之處。若交錯圖場存在於視訊内容1〇2中,則特徵化 圖案11 〇亦&amp;不時間上相異圖場之由編碼器η 2執行之一空 間編碼。 此資訊可提供為該圖案1附含有之資料或文字或係基 於該圖案之空間組態或佈局,其等之任一者適用於機器或 SI!性。舉例而言,圖案U〇可含有關於影像資料之 位置及佈局之文字,例如, 話ι 影像資料完全在紅色邊 • 4除該紅色邊界」(例如,參考圖4Β,元素451), 二=定資訊可特別有助於不熟悉該存播格式之 圖案(舉例而言,指示該原始視訊之格 式,例如,「192〇x1〇8〇、夺 ^ 圖框之時間碼(其中在:、6〇 HZ」)’且可印刷每-至少一部分)。 存檔中週期性提供校準圖案之 此外,特定元素(例如, 一 112指示資料之實體範 5扣不線)可用於對編碼器 個資料區域之兩個此等立置,且對應於一圖框中之兩 几素(或一雙高度元素)之存在可用 I59347.doc •11· 201230817 於指示每圖框兩個待交錯圖場之存在。 在另一貫施例中,諸如二進位值之一集合之資料可提供 為明冗像素及黑暗像素,視情況結合幾何參考標記(指示 用於水平及垂直座標之參考圖框及比例)。可使用此一基 於數值的位置及比例取代圖形描繪資料區域之邊界。此一 一進位圖案亦可表示用於每一圖框之適當時間碼。S 159347.doc 201230817 Quality storage media, which can be read by a TV movie or by photo. The characterization pattern provides the basis for decoding the video frame to the video when the archive is scanned for recovery. Even after decades of fading of the film dye, the subsequent decoding of the scanned material of the film frame produces a video similar to the original video. Unlike the prior art in which video content is recorded as one of the pictures recorded on the video (for example, by using a picture tube or a movie camera to take a picture of each of the video frames displayed on a monitor), the present invention The archive production system processes the video signal into a number (4), and the digital data can be substantially accurately restored by making the characterization pattern. 1A shows an embodiment of a video archive system 100 of the present invention, comprising: an encoder 112 for providing an encoded slot 114 containing video content 1 8 and a special greeting pattern 11 ; _ movie recorder ιΐ6, which is used for recording a file; and a video processor 24 for processing the recorded file and making a video archive of the video content to 6. As used herein, in conjunction with the overall activity of the encoder 1丨2, the term "encoding" includes converting a video data format into a video material format (eg, from 709 code (representing the contribution of the three video display primary colors) to a film density). a code (representing a respective density of three dyes in a film having a value ranging from 〇 to 1〇23, eg, Cineon code) and spatial and temporal formatting (eg, in video data 丨08 and characterization pattern 110) The pixels are mapped to the appropriate pixels in the image space of the film recorder 116). In this context, the f-time format refers to the time series 'pixel self-visual mapping to the video image (4) according to the video data, for example, the continuous picture in the video is mapped to the continuous film 159347.doc 201230817 frame. For progressive video, individual video frames are recorded as a single video frame, and interlaced video recordings are separate fields (eg, forming odd-column pixels of one field and even-numbered columns of another field), a frame The separate fields are recorded in the same movie frame. The original video content 1 〇 2 is provided to the system via a video source 104. Examples of this content include television programs (in digital or analog form) that are currently stored on the videotape. The video source 104 (e.g., a video tape player) suitable for use with the original video content 1 〇 2 format provides the content to the video digitizer 106 for generating video data 1 〇 8. In one embodiment, the video material 108 can be converted to RGB (red, green, blue) code values because it causes negligible artifacts compared to other formats. Although the video material 108 can be provided to the encoder 112 in a non-RGB format (eg, as a brightness value and a chrominance value), the various shortcomings and contention in the archiving and video conversion process using such formats can be recovered in the video. Introduce an artifact. The video material 1 〇 8 may be provided by the digitizer 106 in different video formats, for example, including a high definition format (such as "Rec 7 〇 9"), which provides for the agreement of encoding one of the video pixels using digital values. According to the Rec 7〇9 standard (Rec〇mmendati〇n bT7〇9 published by the International Telecommunications Union Radiocommunication Sector in Geneva, Switzerland or ITU-R), a compatible video display will apply a 2.4 power function to the video material ( Also known as having a gamma gamma of '2 such that one of the pixels having an RGB code value χ (eg, from digitizer 106) will produce a light output proportional to χ2.4 when properly displayed. . Other video standards provide other power functions. For example, one monitor that complies with the SRGB standard will have 2.2 one gamma. If the content from source 159347.doc 201230817 has been provided in digital form, for example, the SDI video output ("serial digital interface") on a professional-grade video tape player, the video digitizer 106 can be omitted. In some configurations, the original video content 丨〇2 can be represented as a luma value and a chrominance value, i.e., a YCrCb code (or, for an analogy representation, Yprpb) or other encoding that can be translated into RGB code values. In addition, the original video content 1〇2 can be subsampled, for example 4:2:2 (where for every four pixels, the brightness "γ" is represented by four samples, and the chrominance components "Cr" and "Cb" Each sample is sampled twice, reducing the required bandwidth by 1/3 without significantly affecting image quality. A characterization pattern 110 associated with the video material of the content and described in greater detail below in connection with Figures 4A-4b is provided to an encoder 112 to create the space, chrominance and/or time of the archive when an archive is created. Configuration (or at least one of these configurations). In addition, a color lookup table (CLUT) 128 is provided to the encoder 112, which encodes the video material 1〇8 based on the characterization pattern 110 and 乩1^128. The video data is encoded or processed using cLUT 128, which provides a non-linear transformation for converting video data from a digital video code to a video density code. The encoded file 114 containing the encoded video material and characterization pattern 110&apos; may or may not be processed or encoded by the cLUT 128, as explained below in connection with Figures 5 and 7. Only a portion of the characterization pattern may also be included in the encoded file, as long as sufficient information is available for a decoder to decode the movie archive. In the encoded file 114, the characterization pattern 11 can be located in front of the encoded 159347.doc • 10 - 201230817 video material (eg, as in Figures 4A-4B) or can be provided in the same as the encoded video material The image frame (not shown) ^ using a cLUT (or, more generally, a non-linear transformation) in this method results in a movie archive that is best suited for producing one of the relatively high quality film prints. This video print can be projected for visual comparison (if desired) with the video content restored from the video archive. The spatial and temporal coding by the encoder 112 exists in the characterization pattern! In 1〇, it refers to where each frame of the non-visual information is found in each frame of the archive. If the interlaced field exists in the video content 1 〇 2, then the characterization pattern 11 & also &amp; does not temporally differ from the picture field by the encoder η 2 to perform a spatial encoding. This information may be provided for the material or text attached to the pattern 1 or based on the spatial configuration or layout of the pattern, any of which may be applicable to the machine or SI! For example, the pattern U〇 may contain text regarding the position and layout of the image data, for example, the ι image data is completely on the red side • 4 except the red border (for example, refer to FIG. 4Β, element 451), The information may be particularly helpful in unfamiliar with the pattern of the stored format (for example, indicating the format of the original video, for example, "192 〇 x1 〇 8 〇, capture the time code of the frame (in the :, 6 〇) HZ")' and can print every - at least a part). In addition, the calibration pattern is periodically provided in the archive. In addition, a specific element (for example, a physical indicator of the data indicating a data item of 5) can be used for two such positions of the encoder data area, and corresponds to a frame. The presence of two or more elements (or a pair of height elements) can be found in I59347.doc •11·201230817 to indicate the presence of two fields to be interlaced in each frame. In another embodiment, data such as a set of binary values can be provided as bright and dark pixels, optionally in conjunction with geometric reference marks (indicating reference frames and scales for horizontal and vertical coordinates). This numerically based position and scale can be used instead of the boundary of the graphical depiction data area. This one-bit pattern can also represent the appropriate time code for each frame.

關於藉由編碼器1Π之色度編碼,特徵化圖案11〇包含形 成選擇碼值之一預定空間配置之色票(patch)o可選擇該等 選擇碼值㈣,視訊白色、黑色、灰色、色度藍色、色 度、彔色、各種肉色調、泥土色調、天藍色及其他顏色), 此係因為其等對於一影像之校正技術演現是重要的對人 類感知是重要的或係、—寬泛顏色範圍之實例。每—預定顏 色具有一預定位置(例如,此顏色將演現在該色票内之 處),所以該解碼器知道在哪裡發現預定顏色。用於此等 色不之該等碼值經選擇以實質上涵蓋視訊碼值之全部範 圍Ιέ母顏色分量之極值處或附近之值,以便允許足 夠準確地内插或外推非選擇值,尤其若涵蓋稀疏。若亦使 用該cLUT編碼該特徵化圖案,則全部範圍之視訊碼(對應 於存檔的視訊内容)可在藉由該cLUT編碼之前表示在色票 中例如,選擇該等碼值為視訊碼之實質上整個範圍之一 稀疏表示。在不使用該cLUT編碼或處理該特徵化圖案之 情況中,該等色票應具有預定密度值,且自此之任何偏差 了用於判疋對於該存檔中(例如,源自老齡化,或源自影 片處理中之變動)之任何漂移之一補償。當結合倒置cLUT 159347.doc •12· 201230817 使用時如此判定之一補償將允許準確恢復原始視訊資料 碼。特徵化圖案110中供應之該等色票之子集可與其他分 量分開地或獨立於其他分量而呈現顏色分量(即,其他^ 量之值係固定的或為零)及/或以變化之組合(例如,其中所 有分量具有相同值之灰階;及/或非灰度值之不同集合)呈 現顏色分量。 分開呈現分量之特徵化圖案110的一個用途是允許隨著 一存檔老化(連同染料串擾之任何影響),容易地特徵化彩 色染料之線性及褪色。然而,具有各種顏色分量組合之色 票亦可用於傳達類似資訊。使該特徵化圖案中之色票之空 間配置及碼值可供-解碼器用於自影片存槽恢復視訊。舉 例而言,關於一色票之位置(絕對或相對於一參考位置)之 資訊及其顏色或碼值表示將允許該解碼器適當地解譯該色 票,不考慮整體處理變動或存檔老化之介入問題。 無論視訊數位化器106產生RGB碼值或一些其他表示, 該視訊資料108均包含為RGB碼值或可轉換至RGB碼值之 碼值。該等RGB碼值通常係10位元表示,但該等表示可更 小或更大(例如,8位元或12位元)^ 視訊資料108之RGB碼之範圍(例如,由該視訊數位化器 106之組態或轉換至RGB時選擇之一處理所判定,或由該 原始視訊内容102或視訊源104之表示予以預定)應對應於 該特徵化圖案110中表示的碼之範圍。換言之,該特徵化 圖案較佳至少涵蓋該等視訊像素值可使用之碼範圍使得 不需要外推該範圍。(此外推不可能非常準確。舉例而 I59347.doc •13* 201230817 言,若該圖案涵蓋100至900之範圍中之碼,但該視訊涵蓋 64至940之範圍,則在該視訊之端部子範圍64至1〇〇及9〇〇 至940中,需要自最近兩個或三個相鄰者外推(即其可為 每百個計數)。此問題起因於需要基於視訊碼1〇〇、2〇〇及 300等等之轉換估計視訊碼64之一轉換其假設視訊碼64 處之影片行為以類似於其在視訊碼】〇〇、2〇〇等等處回應之 方式回應於光,其可能不是實情,此係因為—影片之特性 曲線在低及高曝光極限附近通常具有非線性回應。 舉例而S,若特徵化圖案1 1 〇使用丨〇位元碼值,且若用 於視訊資料108之編碼僅係8位元,則作為藉由編碼器i 12 之編碼操作之部分,視訊資料1〇8可左移位且用〇填補以產 生1 〇位元值,其中8個最高有效位元對應於原始8位元值。 在另實例中,若該特徵化圖案110使用比視訊資料1 〇 8之 表不更少的位元,則可截去視訊資料1〇8之過多的最低有 效位元(四捨五入或不捨位)以四配該特徵化圖案表示之大 /J\ 〇 取決於該圖案之特定實施或設計,將利用cLUT 128編碼 之該特徵化圖案11〇併入經編碼檔案114中可提供用於解譯 一存檔之自編文件或自身足夠資訊,包含該存檔之老化效 應。舉例而言,可基於色度元素(諸如表示用於該視訊資 料之碼值之全部範圍之一密度梯度)解釋老化效應,此係 因為該#徵化圖案中之元素可具有與該存檔中之視訊影像 相同的老化效應。若彩色圖案經設計以表示該視訊内容之 整個顏色範圍,則在該解碼器不具有關於該圖案之先前知 159347.doc 201230817 識或預定資訊之情況中,亦可能演算或啟發式解喝該圖 案。在另-實施例中’用於存檔解譯之文字指令可包含在 2特徵化圖案中,使得一解碼器在不具有關於該圖案之先 前知識情況下可解碼該存檔。 在不利用cLUT 128編碼該特徵化圖案11〇(代之,使用數 位像素值與影片密度碼之間之—線性變換或使用_識別碼 變換來編碼該特徵化圖案110)之一實施例中,藉由使用該 特徵化圖案中之密度梯度解釋該存槽之老化效應,但將需 要呈原始cLUT 128或其倒置(圖⑺中之元素148)形式之額 外文件或知識以解譯一存檔。 /諸存在一記憶體器件(圖中未展示)中且稍後當解竭器ιΐ2 操作時即時重新叫用或串流傳輸之該經編碼槽案ιΐ4提供 至〜片。己錄器116,該影片記錄II i i 6根據該經編碼槽案資 料曝光彩色膠片m以產生具有潛在存槽資料之影片輸出 122(即,曝光影片),該影片輸出122經顯影且固定在化學 影片處理器124中以產生影片存檔126。 〜片s己錄器116之目的係接受經編碼檔案114中用於每一 像素之一密度碼值且使膠片118產生曝光,其引起由影片 處理器124產生之影片存檔126上之一特定彩色影片密度。 為改良提現至該影片記錄器丨j 6之碼值與該影片存檔上之 所得密度之間之關係或關聯,使用來自一校準程序之資料 120來校準影片記錄器116。可提供在一查詢表中用於將影 片密度碼轉換至影片密度之該校準資料120取決於膠片118 之特定製造及該影片處理器124之期望設定。在膠片ιΐ8之Regarding the chrominance coding by the encoder 1 特征, the characterization pattern 11 〇 includes a patch that forms a predetermined spatial configuration of one of the selection code values. The selection code value (4) can be selected, and the video is white, black, gray, and color. Degrees of blue, chromaticity, ochre, various flesh tones, earth tones, sky blue, and other colors), which are important for human perception because they are important for the interpretation of an image correction technique. An example of a broad range of colors. Each of the predetermined colors has a predetermined position (e.g., the color will appear in the color ticket), so the decoder knows where to find the predetermined color. The code values for such color combinations are selected to substantially cover values at or near the extreme values of the full range of video code values to allow for sufficiently accurate interpolation or extrapolation of non-selected values, particularly If the coverage is sparse. If the characterization pattern is also encoded using the cLUT, the full range of video codes (corresponding to the archived video content) may be represented in the color ticket before being encoded by the cLUT, for example, selecting the code values as the essence of the video code. One of the entire range is sparsely represented. In the case where the cLUT is not used to encode or process the characterization pattern, the color tickets should have a predetermined density value, and any deviation therefrom is used to determine for the archive (eg, from aging, or One of any drift from the change in film processing). When combined with the inverted cLUT 159347.doc •12· 201230817, one of the determinations will allow accurate restoration of the original video data. The subset of the color tickets supplied in the characterization pattern 110 may present color components separately or independently of other components (ie, other values are fixed or zero) and/or combinations of variations (For example, gray scales in which all components have the same value; and/or different sets of non-gray values) present color components. One use of the characterization pattern 110 that presents the components separately is to allow for easy characterization of the linearity and fading of the color dye as a result of archival aging (along with any effect of dye crosstalk). However, color tickets with a combination of various color components can also be used to convey similar information. The spatial arrangement and code value of the color ticket in the characterization pattern is used by the decoder to recover video from the video storage slot. For example, information about the position of a color ticket (absolute or relative to a reference position) and its color or code value representation will allow the decoder to properly interpret the color ticket, regardless of overall processing changes or archival aging interventions. problem. Regardless of whether the video digitizer 106 produces an RGB code value or some other representation, the video material 108 includes a code value that is an RGB code value or that can be converted to an RGB code value. The RGB code values are typically represented by 10 bits, but the representations may be smaller or larger (eg, 8-bit or 12-bit) ^ the range of RGB codes of the video material 108 (eg, digitized by the video) The configuration of the device 106 or the selection of one of the processing options when converting to RGB, or predetermined by the representation of the original video content 102 or the video source 104, should correspond to the range of codes represented in the characterization pattern 110. In other words, the characterization pattern preferably covers at least the range of codes over which the video pixel values can be used such that the range does not need to be extrapolated. (In addition, the push cannot be very accurate. For example, I59347.doc •13* 201230817, if the pattern covers a code in the range of 100 to 900, but the video covers the range of 64 to 940, then at the end of the video Ranges 64 to 1 and 9 to 940 need to be extrapolated from the last two or three neighbors (ie, they can be counted per hundred). This problem arises from the need to be based on video code 1〇〇, Conversion of 2〇〇 and 300, etc., one of the video codes 64 converts the behavior of the video at the hypothetical video code 64 to respond to light in a manner similar to that of the video code 〇〇, 2〇〇, etc. This may not be the case, because the characteristic curve of the film usually has a non-linear response near the low and high exposure limits. For example, if the characterization pattern 1 1 丨〇 uses the 丨〇 bit code value, and if used for video data The encoding of 108 is only 8 bits, and as part of the encoding operation by encoder i 12, video data 1 〇 8 can be left shifted and padded with 〇 to produce 1 〇 bit value, of which 8 most significant bits The element corresponds to the original 8-bit value. In another example, if the The escrow pattern 110 uses less bits than the video data 1 〇8, and the excess least significant bits (rounded or unrounded) of the video data 1 〇 8 can be truncated to match the characterization pattern. The representation /J\ 表示 depends on the particular implementation or design of the pattern, and the characterization pattern 11 encoded with the cLUT 128 is incorporated into the encoded archive 114 to provide a self-editing file for interpreting an archive or itself. Sufficient information, including the aging effect of the archive. For example, the aging effect can be interpreted based on a chrominance element (such as a density gradient representing one of the full range of code values for the video material) because of the # 征化 pattern The element in the middle may have the same aging effect as the video image in the archive. If the color pattern is designed to represent the entire color range of the video content, then the decoder does not have a prior knowledge about the pattern 159347.doc 201230817 Or in the case of scheduled information, it is also possible to calculate or heuristically dissolve the pattern. In another embodiment, the text instruction for archival interpretation may be included in the 2 characterization pattern so that A decoder can decode the archive without prior knowledge of the pattern. The characterization pattern 11 is encoded without using the cLUT 128 (and instead, using a linear transformation between the digital pixel value and the film density code) Or using one of the characterization codes to encode the characterization pattern 110), the aging effect of the sump is interpreted by using the density gradient in the characterization pattern, but would need to be the original cLUT 128 or inverted ( Additional documentation or knowledge in the form of element 148) in Figure (7) to interpret an archive. / Present in a memory device (not shown) and later re-call or stream when the decompressor ιΐ2 operates The encoded slot file ι 4 is transmitted to the ~ slice. The recorder 116, the film record II ii 6 exposes the color film m according to the encoded slot data to generate a film output 122 (ie, an exposure film) having potential slot data, the film output 122 being developed and fixed in chemistry A movie archive 126 is generated in the movie processor 124. The purpose of the s-recorder 116 is to accept a density code value for each pixel in the encoded file 114 and to cause exposure of the film 118, which causes a particular color on the movie archive 126 produced by the film processor 124. Film density. To improve the relationship or association between the value of the movie recorder 6j 6 and the resulting density on the video archive, the material logger 116 is calibrated using data 120 from a calibration procedure. The calibration data 120 that can be provided in a lookup table for converting the film density code to the film density depends on the particular fabrication of the film 118 and the desired settings of the film processor 124. In the film ιΐ8

S 159347.doc •15· 201230817 特性曲線(即’ l〇glQ曝光量(勒克斯秒)與密度(其係透射率 之倒數之log丨〇)之間之關係)具有任何非線性之程度上,校 準資料120產生線性化使得橫跨密度碼值之整個範圍,密 度碼值之一給定改變使密度產生一固定改變。此外,該校 準資料可包含用於染料靈敏度之串擾之一補償矩陣。 在一實施例中’膠片118係一中間膠片(例如,由美國紐 約州羅切斯特市之柯達(Kodak)公司製造的Eastman colc)I· Internegative II Film 5272),尤其經設計以與一影片記錄 器搭配使用的膠片(例如,亦由柯達公司製造之K〇(jak VISI0N3彩色數位中間膠片5254),且該膠片118經設計以 具有一更為線性之特性曲線。圖12八展示此膠片在特定曝 光及處理條件下藍色、綠色及紅色之特性曲線。 可搭配不同對應校準資料12〇而使用其他類型的膠片。 圖12B展示用於此等膠片之一特性曲線(例如,對於一種顏 色)之另一實例,其可展現一較短線性區域(即,相比於圖 12A,線性區域BC内之一較小曝光值範圍)。此外,該特 性曲線具有:一更實質(例如,一較大曝光範圍)「腳趾」 區域AB (即,5亥曲線中之一較小斜率),其在低曝光下具有 降低的膠片靈敏度,其中一增量曝光相比於該線性區域 BC產生一相對小增量密度;及較高曝光下之一「肩膀」 區域CD’其具有由曝光決定之一類似降低的膠片靈敏 度。對於此等膠片,整個特性曲線具有一更明顯的s形形 狀。但是,對應校準資料12〇可用於線性化記錄在該影片 存檔上之像素碼值與密度之間之關係。然而,所得影片存 159347.docS 159347.doc •15· 201230817 Characteristic curve (ie the relationship between 'l〇glQ exposure (lux seconds) and density (the log 倒 of the reciprocal of the transmission)) has any degree of nonlinearity, calibration The data 120 is linearized such that across the entire range of density code values, a given change in one of the density code values produces a fixed change in density. In addition, the calibration data may include a compensation matrix for crosstalk of dye sensitivity. In one embodiment, the film 118 is an intermediate film (e.g., Eastman Colc manufactured by Kodak Company, Rochester, NY, USA), especially designed to match a film recorder. The film used (for example, K〇 (jak VISI0N3 color digital intermediate film 5254) also manufactured by Kodak Company, and the film 118 is designed to have a more linear characteristic curve. Figure 12 shows the film at a specific exposure and Characteristic curves for blue, green and red under processing conditions. Other types of film can be used with different corresponding calibration data. Figure 12B shows another characteristic curve for one of these films (for example, for one color) An example, which can exhibit a shorter linear region (ie, a smaller range of exposure values within the linear region BC than in Figure 12A). Moreover, the characteristic has: a more substantial (eg, a larger exposure range) "Toe" area AB (ie, one of the smaller slopes of the 5 hai curve), which has reduced film sensitivity at low exposures, with an incremental exposure compared to The linear region BC produces a relatively small incremental density; and one of the "shoulder" regions CD' of the higher exposure has a film sensitivity that is similarly reduced by one of the exposure decisions. For these films, the overall characteristic curve has a more pronounced The s-shaped shape. However, the corresponding calibration data 12〇 can be used to linearize the relationship between the pixel code value and the density recorded on the video archive. However, the resulting video is stored 159347.doc

S 201230817 檔126將對影片記錄器116及影片處理器124之準確性的變 動更靈敏。此外,因為該特性曲線之線性區域BC比K〇dak Intemegative Π Film 5272更陡(即,密度的變動對於一給 定增量曝光變化將更大),所以此膠片將更傾向於在此中 - 間區域中有雜訊(且在低曝光區域或高曝光區域中有較少 雜訊)。 因此,為產生一影片存檔,來自經編碼檔案ιΐ4之一數 字密度碼值「c」(例如,對應於一像素之顏色中之紅色原 色之量)被提供至影片記錄器116以用於基於校準資料 轉換至一對應基於影片的參數(例如,影片密度(經常以稱 為「狀態-M」之單位測量))。校準提供密度碼值「c」盘 一所得密度之間之-精確預定線性關係。在—通常使用的 實例中,言亥影片記錄器經校準以提供按每增量碼值〇侧 之-增量密度。產生期望影片密度需要的曝光由膠片特性 曲線(類似於圖12A至圖12B)且應用於該膠片其在藉由該 影片處理器m處理之後引起—影片存檔。為自該影片存 檀操取視訊内容,藉由-經校準影片掃描器將影片密度轉 換回瑪值rc」,如下文在圖1B之存樓操取系統中閣述。 圖18展示用於自-影片存檔(例如,由存檔製作系統100 製作之影存檔126)恢復視訊之—存槽讀取或摘取系統 m之-實例。影片存檔126最近可由影片存棺系統_製 作或可實質上已老化(即,存槽讀取系統130在建立存槽之 後可對存樓126操作約50年)。因為該視訊資料係基於一非 線性變換(例如,使用CLUT)自數位視訊轉換至影片密度The S 201230817 file 126 will be more sensitive to changes in the accuracy of the film recorder 116 and the film processor 124. In addition, because the linear region BC of the characteristic curve is steeper than K〇dak Intemegative Π Film 5272 (ie, the change in density will be greater for a given incremental exposure), the film will be more prone to this - There is noise in the area (and there is less noise in the low or high exposure area). Thus, to generate a video archive, a digital density code value "c" from one of the encoded files ι4 (eg, the amount corresponding to the red primary color in the color of a pixel) is provided to the film recorder 116 for calibration based The data is converted to a corresponding film-based parameter (eg, film density (often measured in units called "state-M"). Calibration provides a precisely predetermined linear relationship between the density of the density code "c" disc and the resulting density. In the example that is commonly used, the photographic film recorder is calibrated to provide a delta density on the side of each incremental code value. The exposure required to produce the desired film density is derived from the film characteristic curve (similar to Figures 12A-12B) and applied to the film after it has been processed by the film processor m - a movie archive. In order to capture video content from the video, the film density is converted back to the rm value by a calibrated film scanner, as described below in Figure 1B. Figure 18 shows an example of a memory-reading or picking system m for restoring video from a-video archive (e.g., shadow archive 126 made by archive production system 100). The movie archive 126 may be recently made by the video storage system or may be substantially aged (i.e., the slot reading system 130 may operate the store 126 for about 50 years after the slot is created). Because the video data is converted from video to film density based on a non-linear transformation (for example, using CLUT)

S 159347.doc •17· 201230817S 159347.doc •17· 201230817

碼所以本發明之該影片存標具有改良的品質(相比 用,訊資料與影片密度碼之間之一線性變換之其他存 使仔藉由影片印刷輸出系統16〇自該存檔產生之—影片芒 刷品具有適於投射或顯示之足夠品質。 〜P 影片存檔126由影片掃描器132掃描以將影片密度轉換至 影片資料136(即,由密度碼值表示)。影片掃描器1321有 校準資料134,其類似於校準資料12〇,該校準資料…係 線性化且正規化該掃描器對影片密度之回應之—參數值集 ('J如T為非線性之偏移、縮放比例,可能是其自身 之:顏色查詢表利用—經校準掃描器,影片存槽⑵上 之雄、度經測量且產生影片資料136中之線性碼值(即,一增 量:值表示影片存檔126中之至少整個密度範圍内之㈣ 的讀變化)。在另—實施例中,校準資料134可線性化可 由衫片掃描器132測量之整個密度範圍内之密度碼。利用 一經適當校準掃描器(例如,利用密度碼值與影片密度之 間之一線性關係),由掃描器132讀取或測量用對應於來自 祕編碼槽案114之-碼值「c」之一密度記錄之一影像部 刀’且所#數字密度難(排除任何老化效應或處理漂移) 將約等於「C」(非精確)。 為創建用於空間及時間解碼之參數,解碼器138讀取且 檢查影片資料136以找到對應於特徵化圖案11〇之部分,進 步檢查該部分以識別影片資料136内之資料區域之位置 (即一’含有視訊資料108之表示之區域)。缝查將透露該視 訊貢料1〇8是否包含一漸進式或交錯光柵,且在何處找到 159347.doc 201230817 對應於圖框或圖場之資料區域。 為解碼δ亥影片存檔之比色法(即,將影片密度或影片密 度碼轉換成數位視訊碼),可基於來自該特徵化圖案11〇之 資λ由該解碼益創建一色度查詢表。取決於起初在該存稽 中如何編碼該特徵化圖案(即,是否使用與該視訊資料相 同的cLUT編碼該特徵化圖案),可使用此查詢表以獲得用 於解碼5亥影片存檔中之影像資料之資訊或變換。 若使用cLUT 128編碼該存檔中之該特徵化圖案,則解碼 :138(基於關於該特徵化圖案或自該特徵化圖案獲得之先 月'J头識或資。凡)邊識到景夕片資料1 3 6中之哪些密度碼值對應 於特徵化圖案UO中之原始像素碼,且在解碼器138内建立 -色度查詢表。舉例而言’關於該圖案之先前知識可預定 或分開提供至該解碼器,或資訊可包含在該圖案自身中 (明確地或按慣例已知)。此查詢表(其可為稀疏的)經建立 特定用於解料片資料136。隨後,使用此查詢表(視需 要,包含藉由内插),可解碼在影月資料136之部分中讀取 之對應於視訊内容資料的密度碼值(即,轉換成視訊資 科)。在此實施例中不需要—外部提供的倒置eLUT 148用 於解碼該存檔,此係因為該特徵化圖案含有足夠資訊供該 解碼器建構—倒置eLUT作為解碼㈣之部分。此係因為 對於原始特徵化圖案110中表示之該等視訊碼值之每一 者’嵌入在自該影片存檔126恢復之影片資料136中 化圖案現在包括對應的實際影片密度值。預定視訊資料值 與對應的觀察影月审片# 在A值之集合對於此等值係一精確倒置 159347.doc 201230817 cLUT,可内插該倒置cLUT以處理不在内部建構的倒置 CLUT中表不之值。結合圖6進一步閣述且說明此解碼方 法。 右不使用cLUT 128編碼該存檔中之該特徵化圖案11(), 則解碼器138(亦基於關於該圖案之先前知識或自該圖案獲 侍之資訊)認識到影片資料136中之哪些密度碼值對應於特 徵化圖案11〇中之原始像素碼,且在解碼器138内建立一查 δ旬表(其可為稀疏的)。接著透過一倒置cLut 148使此查詢 表加倍,產生特定適用於對應於視訊資料1〇8之影片資料 136之部分之一解碼變換。隨後,使用該解碼變換(視需 要,包含藉由内插),可解碼影片資料136之部分中對應視 汛資料108之密度碼值(即,轉換成視訊資料格式)。此解碼 程序可理解為:1)藉由使用基於該圖案建立之該查詢表來 變換該等影片密度碼值解釋該存檔之老化效應,及2)該倒 置cLUT接著將「去老化」(即,移除老化效應)的密度碼值 轉譯或變換成視訊碼值。 在此實施例中,需要該倒置cLUT 148(其係用於編碼該 視訊資料之該cLUT 128之倒置)來恢復該原始視訊資料。 將結合圖8及圖11進一步闡述且說明此解碼方法。 因此’視情況’逐圖場或逐圖框’藉由解碼器丨3 8自影 片資料136提取且色度解碼視訊資料。恢復的視訊資料14〇 由視訊輸出器件142讀取,該視訊輸出器件142可將該視訊 資料140格式化為適合於視訊記錄器M4之一視訊信號以產 生經再生之視訊内容146。 159347.doc -20- 201230817 舉例而言,視訊記錄器144可為一錄影帶或數位視訊碟 記錄器。或者,代替視訊記錄器144,可使用一廣播或内 容串流系統,且可在不具有一中間記錄形式之情況下直接 提供恢復的視訊資料140用於顯示。 作為存檔製作系統100及存檔讀取系統13〇之效應之一品 質檢查或證明,可利用視訊比較系統15〇檢查原始視訊内 容102及經再生視訊内容146,該視訊比較系統15〇可包含 顯不器152及154以允許一操作者觀察並排呈現之原始視訊 及恢復的視訊。在比較系統15〇之另一實施例中,一 A/B切 換器可交替在一共同顯示器上屐示一視訊且接著展示另一 視讯。在又另一實施例中,該兩個視訊可展示在一「蝶 开/」顯示器上,§亥顯示器在相同顯示器上呈現一原始視訊 之一半及該恢復的視訊之相同一半之一鏡射影像。此一顯 示器提供超越一雙重(例如,並排)顯示器之一優點,此係 因為該兩個視訊之對應部分呈現在類似環境中(例如,在 其等之各自背景下具有類似對比度),因此,有利於該兩 個視訊之間之視覺對比。自根據本發明之該影片存檔產生 之該視訊内谷146實質上等同於原始視訊内容1〇2。 此外’影片印刷輸出系統16〇使用一特定影片印刷膠片 162將影片存擋126供應至一經良好調整的影片印刷機 164(包含一顯影處理器,圖中未分開展示),以產生影片印 刷品166,接著使用投射系統168投射該影片印刷品166。 當利用原始視訊内容102或經再生視訊内容146之一顯示器 觀看影片印刷品166之投射時,假設影片存檔126與影片印 159347.doc 201230817 刷品166均未實質上老化,一操作者應發現兩個呈現係實 質匹配(即’不需要對影片顏色重定時序以匹配視訊顯示 器 152/154)。 圖2及圖3展示在一影片存檔126内編碼之視訊資料之圖 框之例示性實施例。在影片存檔2〇〇中,若干漸進式掃描 視訊圖框經編碼為影片上之圖框F i、F2及F3,且在影片存 檔300中,交錯掃描視訊圖框經編碼為分開的連續圖場(諸 如Fl-fl、F2-f2等等),其中F1_f1&amp;Fi_f2指代相同圖框fi 内之不同圖場fl、f2。影片存檔2〇〇及3〇〇分別儲存或寫在 膠片202及302上,對應穿孔(諸如204及3〇4)用於創建例示 性影片圖框220及320之各自位置及間隔。每一影片存檔可 具有一選用之聲軌206、306(其可為類比或數位或兩者)或 與分開存檔之一音軌同步之一時間碼轨(圖中未展示)。 影片存檔200之資料區域21〇、211及212與影片存檔300 之資料區域310、311、312、313、314及315含有在其等之 對應影片圖框(例示性圖框220及320)内間隔開之個別視訊 圖場之表示。此等資料區域具有:自該等對應影片圖框之 邊緣之水平間隔224、225、324及325 ;自該等對應影片圖 框之開始之垂直間隔221、321 ;垂直高度222及322,且交 錯圖場具有圖場間間隔323。此等參數或尺寸全部由特徵 化圖案中提供之空間及時間描述識別且下文結合圖4 A至圖 4B予以更詳細描述。 圖4A展示影片存檔126内記錄為一標頭400之一特徵化圖 案110,且在此實例中,用於具有交錯圖場之原始視訊内 159347.doc •22· 201230817 容1 02。影片圖框高度420係與形成一習知4穿孔(「4_ perf」)影片圖框之一系列四個穿孔(繪示為穿孔4〇4)相同 的長度。在一替代實施例中,可選擇一不同整數個影片穿 孔作為影片圖框高度。 在繪示的實施例中,在每一 4-perf影月圖框中,資料區 域412及413含有兩個視訊圖場(例如,類似於影片存標3〇〇 中之圖場312、313)之表示且可由其等之各自邊界加以界 疋。在此實例中’該資料區域之每一邊界由三個矩形標 不’如圖4B中更詳細展示,圖4B表示對應於形成資料區 域412之邊界之矩形451、452及453之角落部分之區域45〇 之一放大視圖。換言之’圖4A中具有角落區域450之矩形 包含二個矩形:451、452及453,其等在影片400上繪製為 像素,例如,每一矩形為一個像素厚。矩形452之顏色及/ 或影片密度不同於其相鄰矩形451及453,且由一雜湊圖案 展不該矩形452。在此實例中,用於圖場412之資料區域包 含定位在矩形452上或内之像素(即,矩形452内部之區域 412包含矩形453中之此等像素),但排除矩形451中之像 素或其外部的像素。矩形451可呈現為一可容易辨識之顏 色(例如,紅色)以有利於偵測資料區域與非資料區域之間 之邊界。 因此,在影片存檔300之每一各自含有資料圖框中,第 -圖場及第二圖場(例如’ F2_fl*F2_f2)佈置在對應影片 圖框(例如’圖框32〇)内,正如區域412及413佈置在特徵化 圖案圖框420内(包含邊界矩形452之外)。在此實施例中, 159347.doc •23· 201230817 需要影片S己錄器116及影片掃描器132分別準確且可重複地 4膠片118及衫片存檔i26,以確保將該經編碼槽案η4 可重現且準確地映射至—影片存標中且在視訊恢復期間自 該影片存檔映射至影片資料136中。 因此,當由掃描器132讀取時,矩形451至453精確指定 母影片圖框中之第—圖場之位置或邊界。該影片記錄器 及衫片掃描器根據能夠利用子像素準確性相對於該等穿孔 定位》亥影片之原理進行操#。因此,相對於影片之四 穿孔304 ’每第—圖場(例如,Fl-fl、F2-f2及F3-fl)與 其他奇數圖場一樣對於其圖框之四個穿孔具有相同的空間 關係,且第二圖場F1_f2、F2_f2&amp;F3_f2同樣如此。此等同 =間關係亦適用於特徵化圖案4〇〇,該特徵化圖案4〇〇界定 定位該等第一圖場及第二圖場之區域。因此,區域412(由 特疋邊界組態(諸如矩形d、452及4S3)表示)指定第一 圖場Fl-fl、F2_fl&amp;F3_fl等等之位置。 β類似地,圍繞資料區域413之矩形指定找到個別第二圖 ,(例如,Fl-n、F2_f2及Fs_f2)之處。對於一漸進式掃描 實施例,具有對應邊界(例如,類似於圖4B中詳述的矩形) 之一單一資料區域指定在隨後影片圖框(例如22〇)内找到漸 進式圖框視訊資料區域(例如,21〇至212)之處。 第—圖場412之頂部412T在圖4A及圖化中均予展示且界 疋頭部間隙421。連同側間隙424及425,及區域413下方之 一尾部間隙420,選擇頂部間隙421以確保資料區域412及 3充刀位於影片圖框“ο中,使得影片記錄器j j 6可可靠 159347.doc •24· 201230817 地定址全體資料區域412及413用於寫入,且影片掃描器 132可可靠地存取全體該等資料區域用於讀取。圖場交錯 視訊内容之存檔中之圖場間間隙423之存在(相比於第—圖 場412及第二圖場413以放大比例展示)確保可精確且清楚 地儲存並且恢復每一圖場,而不在經掃描影像中引入可起 因於掃描器中影片失準之明顯誤差。在另一實施例中,可 能不具有圖場間間隙423(即,實際上係〇之一間隙),兩個 圖場彼此接合。然而,在不具有一圖場間間隙423情況 下,該掃描器中之一失準可引起一圖場之一邊緣附近之像 素被讀取或掃描為一相鄰圖場之像素。 舉例而言,影片圖框420中之特徵化圖案包含色度元素 430至432。該等色度元素可包含一中和梯度43〇,在一實 例中,該中和梯度430係涵蓋彩色染料之每一者中自最小 值至最大值之密度範圍(例如,在約〇 .丨5之步階中自約〇 至3.05之一密度,假設可自新影片存檔126内之膠片118達 成此等密度)之一 21步階灰階。如先前提到,一密度梯度 可用作為用於老化效應之一自校準工具。舉例而言,若在 將來掃描時發現梯度430之明亮端(即,最小密度為1〇%密 集,則解碼器Π8可藉由使存檔影片中之最亮或最低密度 減小一對應量來校正此老化效應。若該梯度之黑暗端 (即,最大密度)係5%較不密集,則該存檔影片中之類似黑 暗像素將增加—對應量。此外’可基於來自該梯度之兩個 讀取且藉由使用橫跨梯度430之額外讀取進行任何密度值 之一線性内插,該系統可補償非線性老化效應。 159347.doc •25- 201230817 該等色度元素亦可包含-或多個原色或輔色梯度43i, 在一實例中,該梯度431係實質上僅—種㈣⑺㈣量原 色)或兩種染料(用以測量輔色)之約最小密度至最大密度之 - 2】步階灰階。類似於上文對於該中和密度梯度之描述, 亦可測量且補償提供起因於個別染料老化之密度漂移。 對於一更完整的特徵化,該等色度元素可包含表示特定 顏色之色票432之一集合。一例示性顏色集合一般係用於 顏色傳遞及控制之ANSI叮8標準中找到的類似顏色,例 如’ IT8.7/1 R2003圖形技術·用於輸入掃描器校準之顏色 傳輸目標「!T8.7/I R2003 Graphic Techn〇l〇gy _ c〇i〇rTherefore, the film of the present invention has an improved quality (compared to the use of a linear transformation between the data and the film density code, the other is generated by the film print output system 16 from the archive - the film The ram brush has sufficient quality for projection or display. The ~P movie archive 126 is scanned by the film scanner 132 to convert the film density to the film material 136 (i.e., represented by the density code value). The film scanner 1321 has calibration data. 134, which is similar to the calibration data 12〇, which linearizes and normalizes the scanner's response to the density of the film—the set of parameter values ('J such as T is a nonlinear offset, scaling, may be Its own: the color lookup table utilizes a calibrated scanner, the male on the film storage slot (2), the degree of measurement and produces a linear code value in the movie material 136 (ie, an increment: the value represents at least the entire of the movie archive 126) In the other embodiment, the calibration data 134 linearizes the density code over the entire density range that can be measured by the tablet scanner 132. The quasi-scanner (e.g., utilizing a linear relationship between the density code value and the film density) is read or measured by the scanner 132 using a density record corresponding to a code value "c" from the secret code slot 114. An image portion of the knife 'and the # digital density is difficult (excluding any aging effects or processing drift) will be approximately equal to "C" (inexact). To create parameters for spatial and temporal decoding, the decoder 138 reads and checks the film. The data 136 is found to correspond to the portion of the characterization pattern 11〇, and the portion is inspected to identify the location of the data region within the video material 136 (i.e., an 'region containing the representation of the video material 108). The seam inspection will reveal the video tribute Whether material 1〇8 contains a progressive or interlaced grating, and where to find 159347.doc 201230817 corresponds to the data area of the frame or field. To decode the colorimetric method of δ hai film archive (ie, the film density or Converting the film density code into a digital video code, a chrominance lookup table can be created from the decoding benefit based on the λ from the characterization pattern. Depending on how the code was originally encoded in the file. The pattern (i.e., whether the characterization pattern is encoded using the same cLUT as the video material) can be used to obtain information or transformations for decoding image data in a 5 liter video archive. If cLUT 128 is used to encode the pattern The characterization pattern in the archive is decoded: 138 (based on the characterization pattern or the syllabus obtained from the characterization pattern, the head of the syllabus or the capital.) The density code value corresponds to the original pixel code in the characterization pattern UO, and a chrominance lookup table is established within the decoder 138. For example, prior knowledge about the pattern may be predetermined or separately provided to the decoder, or information It may be included in the pattern itself (either explicitly or by convention). This lookup table (which may be sparse) is created specifically for the defragmentation material 136. Subsequently, using this lookup table (including interpolation by interpolation), the density code value corresponding to the video content material read in the portion of the shadow month data 136 can be decoded (i.e., converted to a video asset). What is not needed in this embodiment is that the externally provided inverted eLUT 148 is used to decode the archive because the characterization pattern contains sufficient information for the decoder to construct - invert the eLUT as part of the decoding (4). This is because the mediation pattern embedded in the video material 136 restored from the movie archive 126 for each of the video code values represented in the original characterization pattern 110 now includes the corresponding actual film density value. The predetermined video data value and the corresponding observation shadow review film # in the set of A values for this value is a precise inversion 159347.doc 201230817 cLUT, the inverted cLUT can be interpolated to handle the inverted CLUT not internally constructed. value. This decoding method is further described and illustrated in conjunction with FIG. Without characterization of the characterization pattern 11() in the archive using cLUT 128, the decoder 138 (also based on prior knowledge of the pattern or information from the pattern) recognizes which density codes in the film material 136 The value corresponds to the original pixel code in the characterization pattern 11 , and a look-up table (which may be sparse) is established within the decoder 138. The lookup table is then doubled by an inverted cLut 148 to produce a decoding transform that is specific to one of the portions of the video material 136 corresponding to the video material 1-8. Subsequently, using the decoding transform (including interpolation by interpolation), the density code value of the corresponding video data 108 in the portion of the video material 136 can be decoded (i.e., converted to a video material format). The decoding process can be understood as: 1) interpreting the aging effect of the archive by using the lookup table established based on the pattern to transform the film density code values, and 2) the inverted cLUT will then "de-age" (ie, The density code value of the aging effect is translated or converted into a video code value. In this embodiment, the inverted cLUT 148 (which is used to invert the cLUT 128 encoding the video material) is required to recover the original video material. This decoding method will be further explained and illustrated in conjunction with FIGS. 8 and 11. Therefore, the video data is extracted from the video material 136 by the decoder 丨3, and the video data is decoded by the decoder 丨8, depending on the situation. The recovered video data 14 is read by the video output device 142, which can format the video data 140 into a video signal suitable for the video recorder M4 to produce the reproduced video content 146. 159347.doc -20- 201230817 For example, video recorder 144 can be a video tape or digital video recorder. Alternatively, instead of video recorder 144, a broadcast or content streaming system can be used and the recovered video material 140 can be provided directly for display without an intermediate recording format. As a quality check or proof of the effect of the archive production system 100 and the archive reading system 13 , the original video content 102 and the reproduced video content 146 can be checked by the video comparison system 15 , and the video comparison system 15 can include The devices 152 and 154 allow an operator to view the original video and recovered video presented side by side. In another embodiment of the comparison system 15A, an A/B switcher can alternately display a video on a common display and then display another video. In still another embodiment, the two video displays can be displayed on a "butterfly/" display, and the HI display displays one and a half of the original video and one of the same half of the recovered video on the same display. . This display provides the advantage of overcoming a dual (eg, side-by-side) display because the corresponding portions of the two video are presented in a similar environment (eg, having similar contrast in their respective contexts), thus, A visual comparison between the two videos. The video intra-valley 146 generated from the video archive in accordance with the present invention is substantially identical to the original video content 1〇2. In addition, the 'movie print output system 16' uses a particular film print film 162 to supply the film archive 126 to a well-adjusted film printer 164 (including a development processor, not shown separately) to produce a film print 166, The movie print 166 is then projected using a projection system 168. When viewing the projection of the print 166 with the original video content 102 or one of the regenerated video content 146, it is assumed that both the movie archive 126 and the print 159347.doc 201230817 brush 166 are not substantially aged, and an operator should find two The rendering is a substantial match (ie, 'there is no need to reorder the movie colors to match the video display 152/154). 2 and 3 illustrate an exemplary embodiment of a frame of video material encoded within a video archive 126. In the video archive 2, a number of progressive scan video frames are encoded as frames F i, F2 and F3 on the movie, and in the video archive 300, the interlaced scan video frames are encoded as separate continuous fields. (such as Fl-fl, F2-f2, etc.), where F1_f1 &amp; Fi_f2 refer to different fields f1, f2 in the same frame fi. The video archives 2 and 3 are stored or written on the films 202 and 302, respectively, and the corresponding perforations (such as 204 and 3〇4) are used to create the respective positions and intervals of the exemplary movie frames 220 and 320. Each movie archive can have an optional soundtrack 206, 306 (which can be analog or digital or both) or a time code track (not shown) synchronized with one of the separate archived tracks. The data areas 21, 211, and 212 of the video archive 200 and the data areas 310, 311, 312, 313, 314, and 315 of the video archive 300 are spaced within their corresponding movie frames (exemplary frames 220 and 320). The representation of individual video fields. The data areas have horizontal intervals 224, 225, 324, and 325 from the edges of the corresponding movie frames; vertical intervals 221, 321 from the beginning of the corresponding movie frames; vertical heights 222 and 322, and interlaced The field has an inter-field spacing 323. These parameters or dimensions are all identified by the spatial and temporal description provided in the characterization pattern and are described in more detail below in connection with Figures 4A-4B. 4A shows a characterization file 110 recorded as a header 400 within a movie archive 126, and in this example, for use in an original video with an interlaced field 159347.doc • 22· 201230817 容 102. The frame height 420 is the same length as the series of four perforations (shown as perforations 4〇4) of a conventional 4 perforated ("4_ perf") film frame. In an alternate embodiment, a different integer number of film apertures can be selected as the movie frame height. In the illustrated embodiment, in each 4-perf shadow frame, the data areas 412 and 413 contain two video fields (eg, similar to the fields 312, 313 in the movie). The representations may be bounded by their respective boundaries. In this example, 'each boundary of the data area is represented by three rectangles' as shown in more detail in FIG. 4B, and FIG. 4B shows an area corresponding to the corner portions of the rectangles 451, 452, and 453 forming the boundary of the data area 412. One of 45 放大 magnified view. In other words, the rectangle having the corner area 450 in Fig. 4A contains two rectangles: 451, 452, and 453, which are drawn as pixels on the film 400, for example, each rectangle is one pixel thick. The color and/or film density of the rectangle 452 is different from its adjacent rectangles 451 and 453, and the rectangle 452 is not represented by a hash pattern. In this example, the data area for field 412 includes pixels positioned on or within rectangle 452 (ie, area 412 inside rectangle 452 contains such pixels in rectangle 453), but excludes pixels in rectangle 451 or Its external pixels. The rectangle 451 can be rendered as an easily identifiable color (e.g., red) to facilitate detection of the boundary between the data area and the non-data area. Therefore, in each of the video archives 300, each of which contains a data frame, the first field and the second field (eg, 'F2_fl*F2_f2) are arranged in the corresponding movie frame (eg, 'frame 32〇), just as the area 412 and 413 are disposed within the characterization pattern frame 420 (including the boundary rectangle 452). In this embodiment, 159347.doc • 23· 201230817 requires the film S recorder 116 and the film scanner 132 to accurately and reproducibly 4 film 118 and the shirt file i26 respectively, to ensure that the coded slot η4 can be Reproduces and accurately maps to the video indicia and maps from the movie archive to the movie material 136 during video recovery. Thus, when read by the scanner 132, the rectangles 451 through 453 accurately specify the position or boundary of the first field in the parent film frame. The film recorder and the film scanner operate according to the principle that the sub-pixel accuracy can be used to locate the perforation relative to the perforation. Therefore, each of the four perforations 304' per first field (eg, Fl-fl, F2-f2, and F3-fl) has the same spatial relationship to the four perforations of its frame as other odd fields. The same is true for the second field F1_f2, F2_f2 &amp; F3_f2. This equivalent=inter-ratio also applies to the characterization pattern 4〇〇, which defines the regions in which the first field and the second field are located. Therefore, the area 412 (represented by the feature boundary configuration (such as rectangles d, 452, and 4S3) specifies the positions of the first field F1-fl, F2_fl &amp; F3_fl, and the like. Similarly, the rectangular representation of the data area 413 is used to find individual second maps (e.g., Fl-n, F2_f2, and Fs_f2). For a progressive scanning embodiment, a single data area having a corresponding boundary (e.g., similar to the rectangle detailed in Figure 4B) specifies that the progressive frame video data area is found within the subsequent movie frame (e.g., 22 inches) ( For example, 21〇 to 212). The top 412T of the first field 412 is shown in both Figure 4A and the figure and defines the head gap 421. Together with the side gaps 424 and 425, and one of the tail gaps 420 below the area 413, the top gap 421 is selected to ensure that the data areas 412 and 3 are located in the film frame "o, making the film recorder jj 6 reliable 159347.doc • 24· 201230817 The local address all data areas 412 and 413 are used for writing, and the film scanner 132 can reliably access all of the data areas for reading. The inter-field gap 423 in the archive of the interlaced video content. The presence (shown in an enlarged scale compared to the first field 412 and the second field 413) ensures that each field can be accurately and clearly stored and restored without introducing a film in the scanner that can result from the scanner A significant error in misalignment. In another embodiment, there may be no inter-field gap 423 (ie, one of the gaps in the system), the two fields being joined to each other. However, there is no inter-field gap In the case of 423, one of the scanners may cause a pixel near one edge of a field to be read or scanned as a pixel of an adjacent field. For example, a characterization pattern in film frame 420 Inclusion color Elements 430 to 432. The chromaticity elements may comprise a neutralization gradient 43 〇, in an example, the neutralization gradient 430 encompasses a range of densities from a minimum to a maximum in each of the color dyes (eg, In the step of 〇.丨5, from a density of about 3.0 to 3.05, assuming that the film 118 in the new film archive 126 achieves such a density, one of the 21 steps gray scale. As mentioned earlier, a density The gradient can be used as a self-calibration tool for aging effects. For example, if the bright end of the gradient 430 is found in future scans (ie, the minimum density is 1% dense, the decoder Π8 can be made in the archived movie) The lightest or lowest density is reduced by a corresponding amount to correct for this aging effect. If the dark end of the gradient (ie, the maximum density) is 5% less dense, then similar dark pixels in the archived film will increase - the corresponding amount In addition, the system can compensate for nonlinear aging effects based on two reads from the gradient and linear interpolation of any density value by using an additional read across the gradient 430. 159347.doc • 25- 201230817 The color The degree element may also comprise - or a plurality of primary colors or a secondary color gradient 43i, which in an example is substantially only a (four) (7) (four) amount of primary color) or a minimum density of two dyes (to measure the secondary color) to The maximum density of - 2] step gray scale. Similar to the description of the neutralization density gradient above, it can also be measured and compensated for the density drift resulting from the aging of individual dyes. For a more complete characterization, the color The degree element may contain a collection of color tickets 432 representing a particular color. An exemplary color set is generally a similar color found in the ANSI 叮 8 standard for color transfer and control, such as 'IT8.7/1 R2003 graphics technology· Color transfer target for input scanner calibration "! T8.7/I R2003 Graphic Techn〇l〇gy _ c〇i〇r

Transmission Target f〇r Input Scanner ㈤化㈤仙」,由華 盛頓美國標準研究所出I,其通t用於校準掃描器;或由 美國密西根州Grand Rapids公司以X_Rite商標出售之 Munsell ColorChecker。此等顏色強調一色域之一更加中 和部分,提供比灰階或純原色或輔色更能表示肉色調及葉 子之顏色樣本。 可在一單一影片圖框42〇之標頭中提供該特徵化圖案。 在一替代實施例中,可在若干額外圖框之每一者中相同地 重現圖框420之該特徵化圖案,優點在於可基於多個讀取 及適當濾波拒斥雜訊(例如,自影響影片記錄、處理或掃 描之一污點)。在又另一實施例中,除在影片圖框42〇中提 供該特徵化圖案外,亦可在多個影片圖框(圖中未展示)之 標頭中提供該特徵化圖案,舉例而言,以提供更多特徵化 資訊(例如’額外色票或階狀梯度)。舉例而言,一 159347.docTransmission Target f〇r Input Scanner, issued by the Washington Institute of Standards, USA, for calibration scanners; or Munsell ColorChecker sold under the X_Rite trademark by Grand Rapids, Michigan, USA. These colors emphasize a more neutral part of a gamut, providing color samples that are more representative of flesh tones and leaves than grayscale or pure primary or secondary colors. The characterization pattern can be provided in a header of a single movie frame 42. In an alternate embodiment, the characterization pattern of frame 420 can be identically reproduced in each of a number of additional frames, with the advantage that noise can be rejected based on multiple reads and appropriate filtering (eg, from Affects the film's recording, processing, or scanning of one of the stains). In still another embodiment, in addition to providing the characterization pattern in the movie frame 42A, the characterization pattern may also be provided in a header of a plurality of movie frames (not shown), for example, To provide more characterization information (such as 'extra color tickets or step gradients'). For example, a 159347.doc

B -26- 201230817 ”可包含:提供在許多影片圖框上之一序列不同測試圖 案’例如,用於測試灰階之一第一圖框中之一測試圖案. 用於測試個別顏色(例如,分別為紅色、綠色及藍色)之三 個圖框中之三個不同測試圖t ;及具有涵蓋有用葉子及膚 色調色板之測試圖案之另四個圖框。此一特徵化圖案可考 慮為在八個圖框上延伸之—特徵化圖案或者作為提供在八 個圖框中之不同特徵化圖案。 圖5展示用於在影片上建立一可印刷視訊存槽之過程· :一實例。過程5〇〇(其可由諸如圖1A中之一影片存標系統 η施)在步驟510處開始,數位視訊資料1〇8被提供至一編 碼器112(或由該編碼器接收)。在步驟512處,亦提供與該 視訊資料㈣聯之-對應特徵化圖案⑽。該特徵化圖案 (d其具有與該編碼器相容(且亦與用於恢復該視訊之-解碼 器相容)之一格式)可提供為具有關於該視訊資料之資訊之 一文字檔案或作為與視訊®框合併之影像。可藉由前置附 加為標頭(以形成具有該特徵化圖案之-引導)或被包含或 作為連同影像資料之一或多個圖框之複合物來完成此併 ^ ’但在不含影像資料之可讀/可寫區域(諸如圖框内間隙 區域)中。g特徵化圖案包含經設計用於傳達關於以下之 ^者之貝訊之一或多個元素:視訊格式、視訊圖框之 時間碼、貝料區域之位置、顏色或密度值、影片存槽之老 化、影片記錄器及/或掃描器中之非線性或失真等等。 在步驟514處,使用該eLUT 128(下文結合圖$及圖⑺閣 述其之建立)編碼該視訊資料1〇8(例如,以Rec 7〇9格式)及B-26-201230817" can include: providing a sequence of different test patterns on one of many movie frames', for example, one of the test patterns used to test one of the grayscales in the first frame. For testing individual colors (for example, Three different test charts in three frames of red, green, and blue; and four other frames with test patterns covering the useful leaf and skin color palette. This characterization pattern can be considered To extend the eight frames - to characterize the pattern or to provide different characterization patterns in the eight frames. Figure 5 shows a process for creating a printable video slot on a movie: an example. Process 5〇〇 (which may be initiated by a video capture system η, such as in FIG. 1A) begins at step 510, and digital video data 〇8 is provided to (or received by) an encoder 112. At 512, a corresponding characterization pattern (10) is also provided in conjunction with the video material (4). The characterization pattern (d has compatibility with the encoder (and is also compatible with the decoder for restoring the video) a format) can be provided with A text file of the information of the data or as an image combined with the video frame. It can be attached as a header (to form a guided pattern with the characterization pattern) or included or as one or more of the image data. a composite of frames to accomplish this and 'but in a readable/writable area that does not contain image data (such as the interstitial area within the frame). The g-characterized pattern contains designs designed to convey the following One or more elements of the video: video format, time frame of the video frame, location of the bedding area, color or density value, aging of the movie slot, non-linearity in the video recorder and/or scanner Distortion, etc. At step 514, the video material 1 〇 8 (eg, in Rec 7 〇 9 format) is encoded using the eLUT 128 (described below in conjunction with FIG. $ and FIG. 7)

S 159347.doc -27- 201230817 特徵化圖案11 〇之全部像素值以產生經編碼資料114,其等 係對應於各自像素值之密度碼值。取決於藉由該特徵化圖 案描述之佈局,特徵化圖案及視訊資料像素兩者可存在於 或共同駐存於經編碼資料114之一或多個圖框中或該圖 案及視訊資料像素可佔據分開圖框(例如,在將該圖案前 置附加為標頭之情況中)。 使用CLUT編碼特徵化圖案或視訊資料之像素值专味著 基於一非線性變換將該圖案或視訊之資料轉換為對應密度 碼值。圖11之曲線U30係一 cLUT之一實例,該cLUT提二 視訊碼值與密度碼值之間之一非線性映射或關係。在此實 例中,來自該特徵化圖案中之各種元素(例如,中和梯度 430、原色或辅色梯度431或特定色票432)之原始像素碼由 該曲線1130上之實際資料點(圓點)表示。 在步驟516處,藉由影片記錄器1丨6將該經編碼資料1 μ 寫至膠片118»在已基於密度碼值(例如,Cine〇n碼值)與影 片密度值之間之一線性關係校準該記錄器的情況下,藉由 根據各自密度碼值之合適曝光而在底片上形成潛像。在步 驟5 18處,使用已知或習知技術處理或顯影曝光的膠片以 在步驟520處製作影片存檔126。 取決於所使用之該cLUT 128,可將可印刷影片存檔126 印刷至影片或直接以一電視電影轉換為視訊。一cLUT 128 可經最佳化用於印刷至一特定膠片或使用在具有一特定校 準之一電視電影上。可預測,印刷於一不同膠片上或使用 在一不同校準電視電影上將具有較低保真度結果。該 159347.doc •28- 201230817S 159347.doc -27- 201230817 Characterizes all of the pixel values of the pattern 11 to produce encoded data 114, which are equivalent to the density code values of the respective pixel values. Depending on the layout described by the characterization pattern, both the characterization pattern and the video data pixels may reside or co-locate in one or more of the encoded data 114 or the graphics and video data pixels may occupy Separate the frame (for example, in the case where the pattern is prepended as a header). Using the CLUT to encode the characterization pattern or the pixel value of the video material specifically converts the pattern or video data into a corresponding density code value based on a non-linear transformation. Curve U30 of Figure 11 is an example of a cLUT that provides a non-linear mapping or relationship between the video code value and the density code value. In this example, the original pixel code from the various elements in the characterization pattern (eg, neutralization gradient 430, primary or secondary color gradient 431, or particular color ticket 432) is the actual data point on the curve 1130 (dots) ) said. At step 516, the encoded material 1 μ is written to the film 118 by the film recorder 160 to linearize between the density code value (eg, Cine〇n code value) and the film density value. In the case of calibrating the recorder, a latent image is formed on the film by appropriate exposure according to the respective density code values. At step 518, the exposed film is processed or developed using known or conventional techniques to produce a film archive 126 at step 520. Depending on the cLUT 128 used, the printable movie archive 126 can be printed to a movie or converted directly to video in a television movie. A cLUT 128 can be optimized for printing to a particular film or for use on a television movie having a particular calibration. It is predictable that printing on a different film or using it on a different calibrated telecine will have lower fidelity results. The 159347.doc •28- 201230817

πυτ之目的係將該等原始視訊R /㈧碼值映射至最適合 於目標應用中直接使用之-組影片密度值,而仍允許恢復 该等原始Rec. 709碼值。 圖6展示用於自藉由存檔建立過程5〇〇製作之一可印刷影 片存檔(其可為一老化存檔)恢復視訊内容之一過程6〇〇之Γ 實例。在步驟6Π)處,該影片存播(例如,來自圖^之存槽 126)被提供至-影片掃描器,該影片掃摇器藉由讀取該影 片存檔上之密度且將該等密度轉換成對應影片密度碼值 (諸如,Cineon碼)而產生影片資料136。取決於特定存檔及 特徵化圖案’不必要掃描或讀取整個影片存槽,代之,僅 需掃描或讀取至少—或多個資料區域(即含有對應於視 訊内容之資料之部分)。舉例而言,㈣特徵化圖案僅含 有關於該視訊資料之空間及時間資訊(沒有色度資訊),則 在不需要掃描該特徵化圖案自身情況下可識別校正的視訊 資料部分。(類似於該影片記錄器,亦已基於密度碼值與 影片密度值之間之一線性關係校準該掃描器。) 在^驟614中,基於關於該特徵化圖案之先前知識,解 碼器138自影&gt;{冑料136拾取或識別特徵化圖案ιι〇之記 錄在步驟616中,解碼器138使用該特徵化圖案及/或關 於各種7C素(例如,對應於在白色處開始且在十個線性步 階令行進之一灰階梯度之特定色票,或表示一組特定次序 顏色之特疋色票)之組態之其他先前知識,以判定適合於 »亥衫片-貝料136的解碼資訊,包含資料區域之位置及時序 之規範及/或比色法。如先前闡述,因為在此實施例中藉The purpose of π υ τ is to map the original video R / (eight) code values to the group film density values most suitable for direct use in the target application, while still allowing recovery of the original Rec. 709 code values. Figure 6 shows an example of a process for restoring video content from a printable image archive (which can be an aging archive) by the archive creation process. At step 6), the video storage (eg, from slot 126) is provided to a video scanner that reads the density on the video archive and converts the density The movie material 136 is generated in response to a film density code value such as a Cineon code. Depending on the particular archive and the characterization pattern, it is not necessary to scan or read the entire movie slot, instead, only at least one or more data areas (i.e., portions containing data corresponding to the video content) need to be scanned or read. For example, if the (4) characterization pattern contains only spatial and temporal information about the video material (no chrominance information), the corrected portion of the video data can be identified without scanning the characterization pattern itself. (Similar to the film recorder, the scanner has also been calibrated based on a linear relationship between the density code value and the film density value.) In step 614, based on prior knowledge of the characterization pattern, the decoder 138 The shadow 136 picks up or identifies the characterization pattern ιι〇 is recorded in step 616, the decoder 138 uses the characterization pattern and/or with respect to various 7C elements (eg, corresponding to starting at white and at ten A linear step allows for the prior knowledge of the configuration of a particular color ticket that travels one gray scale, or a special color ticket that represents a particular order of colors, to determine the appropriate decoding for the » jersey-bean 136 Information, including the specification and/or colorimetric method of the location and timing of the data area. As previously stated, because in this embodiment

S 159347.doc •29- 201230817 由使用與用於視訊資料相同的cLUT來編碼該特徵化圖 案所以該特徵化圖案含有足夠資訊供該解碼器獲得或建 構倒置cLut作為解碼動作之部分。在步驟618中,解碼 益138使用來自步驟616之解碼資訊來解碼含有視訊資料之 存檔126内之資料區域’轉換該等影片密度碼值以產生視 Λ資料。過程6〇〇在步驟62〇處完成,其中視訊自視訊資料 恢復。 圖7繪不用於在影片上建立一可印刷視訊存檔之另一過 程700。在步驟71〇處,數位視訊資料1〇8被提供至一編碼 斋或由該編碼器接收。在步驟712處,使用該cLUT 128編 碼該視訊資料108之每一像素之值,即,將該視訊資料自 數位視訊格式(例如,Rec 709碼值)轉換為一基於影片 的格式(諸如密度碼值)。再者,圖u之曲線113〇係一 cLUT 之一實例。 在步驟714處’一對應特徵化圖案11〇(即,與該視訊資 料相關聯之一圖案)亦被提供至該編碼器。經編碼資料1 i 4 包含使用該cLUT編碼之視訊資料及不使用cLUT 128編碼 之特徵化圖案。代之’藉由使用一預定關係(諸如一線性 映射)來編碼該特徵化圖案以將該圖案中之色票之視訊碼 值轉換為密度碼值。 在—實施例中,藉由基於由圖11中之線1120表示之一線 性函數將Rec. 709碼值轉換為密度碼值來編碼該圖案之資 料(在此實例中,線1120具有一斜率1,使得該Rec_ 709碼 值與該密度碼值完全相同。 159347.doc 201230817 如上文提到,該特徵化圖案及該視訊 在不同圖框中(例如,如圖4中),戈 °刀別被提供 或該待徵化圊案可被包含 在亦含有影像資料之一圖框中以丨 Τ (例如,在非影像資料區域 中(如圖框内間隙323中))。 在步驟716處,利用影片記錄器⑴將經編碼資料114寫 至膠片m,在步驟7戦處理該膠片118以製作影月存檔 可印刷存檔建立過程7〇〇在步驟72〇處完成。在此實 =例t,在步驟712處未利用撕⑵編碼該特徵化圖 案0 與過程5〇0之產物一樣,可將來自過程7〇〇之存播126印 刷至影片或直接以一電視電影轉換為視訊,而具有類似結 果。 圖晴示用於自藉由存槽建立過程製作之一可印刷影 片存槽126恢復視訊之過程8⑽。在步驟㈣處,該可印刷 ^存樓126(例如,可為—「老化」存槽)被提供至-掃描 盗(諸如圖1B之影片掃描器132)。在步驟812處,藉由將經 掃描漬取自影片密度轉換成密度碼值而產生影片資料 136 °在步驟814處’基於關於該特徵化圓案之先前知識, 解碼138自影片資料136拾取或識別該特徵化圖案。在步 驟816處,使用該特徵化圖案及/或關於該圖案中之各種元 素之先前知識來判定適合於該影片資料136的解碼資訊。 邊解碼資訊包含資料區域之位置及時序之規範'一正規化 比色法,且為完成比色法規範,包含一倒置cLUT 148(其 係在影片存檔建立期間用於編碼該視訊資料之cLUT之倒S 159347.doc • 29- 201230817 The characterization pattern is encoded by using the same cLUT as used for the video material so the characterization pattern contains enough information for the decoder to obtain or construct the inverted cLut as part of the decoding action. In step 618, the decode benefit 138 uses the decoded information from step 616 to decode the data area in the archive 126 containing the video material to convert the film density code values to produce the view data. Process 6 is completed in step 62 where the video is restored from the video material. Figure 7 depicts another process 700 that is not used to create a printable video archive on a movie. At step 71, the digital video material 1 〇 8 is supplied to or received by the encoder. At step 712, the cLUT 128 is used to encode the value of each pixel of the video material 108, ie, convert the video material from a digital video format (eg, a Rec 709 code value) to a movie-based format (such as a density code). value). Furthermore, the curve 113 of Fig. u is an example of a cLUT. At step 714, a corresponding characterization pattern 11 (i.e., a pattern associated with the video material) is also provided to the encoder. The encoded data 1 i 4 contains the video data encoded using the cLUT and the characterization pattern not encoded using cLUT 128. Instead, the characterization pattern is encoded by using a predetermined relationship (such as a linear mapping) to convert the video code values of the color tickets in the pattern to density code values. In an embodiment, the pattern is encoded by converting the Rec. 709 code value to a density code value based on a linear function represented by line 1120 in FIG. 11 (in this example, line 1120 has a slope of 1) So that the Rec_ 709 code value is exactly the same as the density code value. 159347.doc 201230817 As mentioned above, the characterization pattern and the video are in different frames (for example, as shown in FIG. 4) The providing or the to-be-conquered file may be included in a frame that also contains image data (for example, in a non-image material area (in the gap 323 in the figure frame)). At step 716, the utilization is utilized. The film recorder (1) writes the encoded material 114 to the film m, and processes the film 118 in step 7 to make a shadow month archive printable archive creation process 7 which is completed at step 72. Here, the example = t At step 712, the characterization pattern 0 is not encoded by tearing (2). Like the product of process 5 〇 0, the 126 broadcast from process 7 can be printed to a movie or directly converted into a video by a television movie with similar results. The picture is used for self-storage One of the process-making processes can print the video storage slot 126 to resume the video process 8 (10). At step (4), the printable storage building 126 (for example, can be - "aged" storage slot) is provided to - scan theft (such as 1B's film scanner 132). At step 812, the film material 136° is generated by converting the scanned stain from the film density to the density code value at step 814 'based on prior knowledge about the characterization round, The decode 138 picks up or identifies the characterization pattern from the movie material 136. At step 816, the characterization pattern and/or prior knowledge of the various elements in the pattern are used to determine the decoded information suitable for the film material 136. The decoded information contains the specification of the location and timing of the data area 'a normalized colorimetric method, and for the completion of the colorimetric specification, includes an inverted cLUT 148 (which is used to encode the video data during the creation of the video archive)

S 159347.doc •3卜 201230817 置)。在步驟818處,解碼器π 8使用來自步驟816之解碼資 訊來解碼含有視訊資料之存檔126内之資料區域,且轉換 影片密度碼以產生視訊資料。在步驟82〇處自該視訊資料 恢復該視訊》 圖7至圖8之此編碼-解碼方法(其中利用該cLUT(諸如圖 11之曲線1130)編碼僅該視訊資料,且基於一線性變換(諸 如圖11之線1120)編碼該圖案)特徵化該影片之整個密度範 圍如何隨年齡移動或漂移,而圖5至圖6之方法(使用該 cLUT編碼該視訊資料與該等特徵化圖案兩者)不僅特徵化 用於編碼影像資料之影片密度值之子範園如何漂務,而且 亦體現該倒置cLUT使得當解碼時,不再分開需要或應用 該倒置eLUT。在圖7至圖8之方法中,在不保留編碼視訊 資料中使用之原始c L U T以用於一倒置查詢的情況下無法 自該特徵化圖案判定圖!!之曲線! i 3 〇上之&lt;、d一及。 之位置。 A H ’則在該影月在镗φ π赍1由a # 、 解碼該影片存檔。 及密度,則在該| 。舉例而言,若由一標準規定影像之位置S 159347.doc • 3 Bu 201230817 set). At step 818, decoder π 8 uses the decoded information from step 816 to decode the data area in archive 126 containing the video material and converts the film density code to produce video material. Retrieving the video from the video material at step 82 》 the encoding-decoding method of FIGS. 7-8 (where the video material is encoded using the cLUT (such as curve 1130 of FIG. 11) and based on a linear transformation (such as Line 1120 of Figure 11 encodes the pattern) to characterize how the entire density range of the film moves or drifts with age, and the method of Figures 5 through 6 (using the cLUT to encode both the video material and the characterization patterns) Not only does the feature of the film density value used to encode the image data float, but the inverted cLUT is also embodied such that when decoding, the inverted eLUT is no longer needed or applied separately. In the method of FIG. 7 to FIG. 8, the original c LUT used in the encoded video material is not retained for an inverted query, and the curve of the graph cannot be determined from the characterization pattern! ;, d and. The location. A H ' is decoded in the shadow month by a φ π 赍 1 by a # , decoding the movie. And density, then in the |. For example, if a standard specifies the location of the image

159347.doc 以上過程之其他變動可涉及自該影片存檔省略該特徵化 圖案或其之—部分,即使其用於編碼目的且提供在該經編 碼檔案中。在此情況中’一解碼器可需要額外資訊來正確 •32· 201230817 如,色示)。該解碼器可得到與該影片存槽分開的用於解 擇該等色票之額外資訊,用於解碼該存檔。 在闡述用於建立製作本發明之影片存檔中使用之— cLUT之方法x 义刖,下文介紹關於cLUT之額外細節及背 月、在電細圖形及影像處理中使用cLUT係已知的。_ 提供自帛一像素值(源)至一第二像素值(目的地)之 映射。在—實例中,該eLUT將呈Ree· 7〇9碼值之一標量 值映射至呈密度碼之-標量值(例如’圖11中之線1130, - Rec. 7G9碼表示僅—單—顏色分量,諸如該像素之紅 色 '綠^或藍色之一者)。該單一值LUT適合於不存在举擾 或對當前目的來說可忽略串擾之系統。此一 cLut可由一 -維矩陣表示,在此情況下單獨處理個別原色(紅色、綠 色、藍色),例如,可將具有一紅色值1〇之一源像素變換 成具有一紅色值20之一目的地像素,而不考慮該源像素之 綠色值及藍色值。 在另貫例中,該cLUT將表示源值之一像素之—顏色 三元組(例如,R、G&amp;B之三個Rec 7〇9碼值)映射至密度 碼之一對應三元組。在三個顏色軸不真正地正交時此表= 係適當的(例如,由於紅敏影片染料與綠敏影片染料之間 之串擾,右該綠敏染料對於紅光也稍微敏感或若該綠敏染 料在顯影時對於除了,綠光之外的其他光具有非零吸收率木 則可能導致三個顏色軸不真正地正交)。 此cLUT可表示為一二維(3D)矩陣,在此情況中三種原 色可處理為待變換成一目的地像素之一源色立方體中之— 159347.doc ^ 201230817 3D座標。在一3D CLUT中,源像素中之每一原色之值可影 響目的地像素中之原色之任一者、全部或沒有影響。舉例 而言’進一步取決於綠色分量及/或藍色分量之值,可將 具有一紅色值10之一源像素變換成具有一紅色值20、〇、 50等等之一目的地像素。 通常’尤其在具有大量(例如,10個或更多)的表示每一 顏色分量之位元之系統中,一 cLUT可為稀疏的,即,在 該LUT中僅提供少量值,内插其他值用於其之使用(如需 要)。此節省記憶體及存取時間。舉例而言,具有1〇位元 原色值之一密集3D cLUT可需要(2Μ0)Λ3(其中2Λ1〇指代2 的1 〇冪次),或微大於十億個項目以提供用於每一可能源 像素值之一映射。對於良性(即,沒有極度彎曲或不連續) 之一 cLUT,可建立一稀疏cLUT且藉由習知方法内插目的 地像素之值’習知方法包含基於最近相鄰者之對應源像素 與感興趣源像素之相對距離分派最近相鄰者(或最近的相 鄰者,及其等之相鄰者)6 Rec. 7〇9值之一稀疏cLUT之一 通常可行密度係17八3(即,沿顏色立方體之每一軸,每一 原色為17個值),其引起該111丁中稍小於5000個目的地像 素項目。 圖9繪示用於建立本發明中使用之一適當(^11丁(例如,圖 1A中之cLUT 128)之一過程900。在此實例中,意圖建立將 視訊碼值變換成適合於曝光影片記錄器丨16中之膠片118之 影片密度碼值之一 cLUT且所得影片存檔126最適用於製作 一影片印刷品166使得檢查來自投射系統168及顯示器ι52 159347.doc -34- 201230817 及154之任一者之輸出之一操作者可感知一大體匹配。 過程900在步驟91〇處以原始視訊碼空間(在此實例中, Rec. 709 ’ #曰疋為場景參考 在步驟911處,將該視訊資料自其之原始顏色空間(例 如,Rec. 709)轉換至一觀察者參考(〇bserver_referred)顏色 . 空間(諸如XYZ,其係1931 CIE色度圖之座標系統)。此係 藉由施加一指數至該等Rec. 7〇9碼值來完成(例如,厶”或 2_4,作為適合於被看作表示用於看電視之一典型客廳或 書房之一「昏暗環繞」觀看環境之丫值)。轉換至一觀察者 參考顏色空間之原因係因為cLUT之目的係使一影片在被 呈現給觀察者時看起來儘可能像該視訊。此在將觀察者看 作參考點(因此得出術語「觀察者參考」)之一顏色空間中 最易於實現。 注意熟習此項技術者已知之術語「場景參考」或「輸出 參考(〇lltpUt-referred)」用於指定一碼值在一給定顏色空間 中實際上定義何物。在Rec. 7〇9之情況中,「場景參考」意 .才曰參考%景中之某物,特定言之,參考在攝影機之視野中 反射離開一校準卡(在其上具有經特定印刷、特定為不光 滑之色票之一實體紙板薄片)之光的量(該卡之白色應係碼 值940,該卡之黑色應係碼值64,亦定義一特定灰色色 票,其設定一指數曲線之參數)。「輸出參考」意指一碼值 應在一監視器或投射螢幕上產生一特定光量。舉例而言, 對於一碼,一螢幕應發射多少英尺-朗伯(f〇〇t-Lambe⑴ 光。Rec. 709指定應使用何種原色且何種顏色對應於白 I59347.doc -35- 201230817 色’且因此該標準令存在笨絲「a ^在某種「輸出參考」,但碼值之鍵 疋義㈣definitiGn)係「場景參考」。)「觀察者參考」與 人類如何感知光及顏色關聯。χγζ顏色空間係基於人類如 何感知顏色之測量,且盆又典 .也、 其不受诸如一系統使用何種原色來 捕獲或顯不一影像之事物的影響。χυζ空間中定義之一顏 色將看起來相同’而無需考慮其如何產生。因此,對岸於 相同聊值之兩種呈現(例如’影片及視訊)將看起來相 同。存在其他觀察者參考顏色空間(例如,Yuv Yx# 等),其等全部源於1931 CIEf料或其之更現代改良(盆略 有改變某些細節)a 在步驟912處,做出一檢查或查詢以判定所得色域(即, 轉換至該觀察者參考顏色空間(識別為χγΖι)之後之影像資 料之色域)是否明顯超過影片中可表示之色域(可構成「顯 著要素」的是策略’尤其可能關於超過影片色域之度及持 續時間)。若判定未明顯超過該影片色域,則觀察者參考 碼(色域χγζΑ)被傳遞至步驟914。該影片色域指可在影 片媒體上表示之全部顏色之一軌跡。當需要有無法在影片 中表達之顏色時’「超過」—影片色域。影片《色域在一 些地方(例如’飽和青色 '黃色、品紅色)超過視訊之色域 且視訊之色域在其他地方(例如,飽和紅色、綠色及藍色) 超過影片之色域。 否則,若在步驟912處擔心XYZl*之色域明顯超過一影 片印刷品166之色域,則在步驟913處再映射該色域以在— 重定形的色域中產生碼(仍在該χγζ顏色空間中,但現在 159347.doc -36· 201230817 識別為ΧΥΖ2)。注意色域並非顏色空間而是—顏色空間中 之值之-軌跡。影片之色域係可在影片中表達之全部可能 顏色視Λ之色域係可在視訊中表達之全部可能顏色,且 -特定視訊資料(例如,視訊資料副)之色域係此視訊資料 之整體中實際使用的獨特顏色之集合。藉由在ΧΥΖ顏色空 間中表連色域,可比較其他與影像不同之色域(影片係一 吸收性媒體,視訊顯示係發射性)。 已知用於色域再映射之許多技術,且最成功的是在色域 之不同區域中組合來自不同技術之結果之混合。一般而 5,取好在一感知上均勻之顏色空間(觀察者參考顏色空 間之一特殊子集)中實施色域再映射,cIE 1976 (L' a' b )顏色空間(CIELAB)特別適合。因此,在色域再映射步 驟913之—實施例中,使用Rec. 709白點(發光物)將該ΧΥΖι 色域中之碼轉換成CIELAB,再映射所得碼大體不超過影 片色域,且接著再轉換回至該χγζ顏色空間以產生經修改 的色域ΧΥΖ2,現在具有不明顯超過可用影片色域之性 質。 在CIELAB而不是χΥΖ顏色空間中執行色域之再映射之 價值或優點係使得對某些顏色做出的一特定比例之改變在 感知的改變度上類似於對該色域中之其他地方(即,其他 顏色)做出的相同比例之改變(此係CIELAB之一性質,此係 因為其在感知上係均勻的)。換言之,在CIELab空間中, 在任何方向上沿該顏色空間中之任一轴之某一量的相同改 變被人類感知為「相同大小」之改變。此有助於提供不產159347.doc Other variations to the above process may involve omitting the characterization pattern or portions thereof from the movie archive, even if it is used for encoding purposes and provided in the encoded file. In this case, a decoder can require additional information to be correct. • 32· 201230817 eg, color). The decoder can obtain additional information separate from the movie slot for decoding the color tickets for decoding the archive. In describing the method x used to create the video archive for making the present invention, the following describes the additional details about the cLUT and the back month, the use of the cLUT system in electrical thin graphics and image processing. _ Provides a mapping from a pixel value (source) to a second pixel value (destination). In the example, the eLUT maps a scalar value of one of the Ree·7〇9 code values to a scalar value of the density code (eg, 'line 1130 in Figure 11, - Rec. 7G9 code indicates only—single - a color component, such as one of the red 'green' or blue of the pixel). This single value LUT is suitable for systems where there is no interference or negligible crosstalk for current purposes. This cLut can be represented by a one-dimensional matrix, in which case individual primary colors (red, green, blue) are processed separately, for example, one of the source pixels having a red value of 1 变换 can be transformed to have one of the red values 20 The destination pixel, regardless of the green and blue values of the source pixel. In another example, the cLUT maps a color triplet representing one of the source values (e.g., three Rec 7 〇 9 code values of R, G &amp; B) to one of the density codes corresponding to the triplet. This table = is appropriate when the three color axes are not truly orthogonal (eg, due to crosstalk between the red sensitive film dye and the green sensitive film dye, the right green sensitive dye is also slightly sensitive to red light or if the green Sensitive dyes with non-zero absorption of wood other than green light during development may result in three color axes not being truly orthogonal). This cLUT can be represented as a two-dimensional (3D) matrix, in which case the three primary colors can be processed into one of the source color cubes to be transformed into a destination pixel - 159347.doc ^ 201230817 3D coordinates. In a 3D CLUT, the value of each primary color in the source pixel can affect any, all or none of the primary colors in the destination pixel. For example, depending on the value of the green component and/or the blue component, a source pixel having a red value of 10 can be transformed into a destination pixel having a red value of 20, 〇, 50, and the like. Usually, 'in particular in a system with a large number (eg, 10 or more) of bits representing each color component, a cLUT can be sparse, ie, only a small number of values are provided in the LUT, interpolating other values Used for its use (if needed). This saves memory and access time. For example, a dense 3D cLUT with one 原 primary color value may require (2Μ0) Λ 3 (where 2 Λ 1 〇 refers to 2 〇 powers), or slightly greater than one billion items to provide for each possible One of the source pixel values is mapped. For a benign (ie, without extreme bending or discontinuity) cLUT, a sparse cLUT can be established and the value of the destination pixel is interpolated by conventional methods. The conventional method includes the corresponding source pixel and sense based on the nearest neighbor. The relative distance of the source pixel of interest assigns the nearest neighbor (or nearest neighbor, and its neighbors) 6 Rec. One of the values of one of the sparse cLUTs is usually a density of 17 8 3 (ie, Along each axis of the color cube, each primary color is 17 values, which causes slightly less than 5000 destination pixel items in the 111-square. Figure 9 illustrates a process 900 for establishing one of the uses in the present invention (e.g., cLUT 128 in Figure 1A). In this example, it is intended to establish a conversion of video code values suitable for exposure to a movie. One of the film density code values of the film 118 in the recorder c16 is cLUT and the resulting movie archive 126 is most suitable for making a film print 166 such that inspection is performed from either the projection system 168 and the display ι 52 159347.doc -34 - 201230817 and 154 One of the output of the operator can perceive a large match. Process 900 takes the original video code space at step 91 (in this example, Rec. 709 ' #曰疋 is the scene reference at step 911, the video data is self-contained The original color space (eg, Rec. 709) is converted to an observer reference (〇bserver_referred) color. Space (such as XYZ, which is the coordinate system of the 1931 CIE chromaticity diagram). This is done by applying an index to the Wait for the Rec. 7〇9 code value to complete (for example, 厶” or 2_4, as a value suitable for a “dark surround” viewing environment that is considered to be one of the typical living rooms or study rooms for watching TV. The reason the observer refers to the color space is because the purpose of the cLUT is to make a film look as if it is as possible when presented to the viewer. This is to treat the observer as a reference point (thus the term "observer reference" is derived One of the color spaces is the easiest to implement. Note that the term "scene reference" or "output reference (〇lltpUt-referred)" is known to those skilled in the art to specify that a code value is actually defined in a given color space. In the case of Rec. 7〇9, the “scene reference” means to refer to something in the % scene. In particular, the reference is reflected in the field of view of the camera away from a calibration card (on which there is a The amount of light of a particular printed, one of the physical cardboard sheets that is specifically a matte color ticket (the white color of the card should be a code value of 940, the black of the card should be a code value of 64, also defining a specific gray color ticket, Set the parameter of an exponential curve.) "Output reference" means that a code value should produce a specific amount of light on a monitor or projection screen. For example, for a code, how many feet should be emitted on a screen - Lambert (f 〇t-Lambe(1) Light. Rec. 709 specifies which primary color should be used and which color corresponds to white I59347.doc -35- 201230817 color 'and therefore the standard has a stupid "a ^ in some sort of "output reference", However, the key value of the code value (4) definitiGn) is the "scene reference".) "Observer reference" and how humans perceive light and color correlation. χ ζ ζ color space is based on how humans perceive color measurement, and the basin is also code. Also, its It is not affected by things such as the primary color used by a system to capture or display an image. One of the colors defined in the space will look the same 'without thinking about how it is produced. Therefore, two presentations (such as 'video and video') that look at the same chat value will look the same. There are other observer reference color spaces (eg, Yuv Yx#, etc.), all of which are derived from the 1931 CIEf material or its more modern improvements (the basin slightly changes some details) a at step 912, making an inspection or The query determines whether the resulting color gamut (ie, the color gamut of the image data after conversion to the observer reference color space (identified as χγΖι)) significantly exceeds the color gamut that can be represented in the movie (a strategy that can constitute a "significant element") 'Especially about the degree and duration of the film's color gamut. If it is determined that the film gamut is not significantly exceeded, the viewer reference code (gamut χ ζΑ ζΑ) is passed to step 914. The film gamut refers to one of the trajectories of all colors that can be represented on the film media. When you need a color that cannot be expressed in the movie, 'Exceeded' - the color gamut of the movie. The color gamut of the film (such as 'saturated cyan 'yellow, magenta) exceeds the color gamut of the video and the color gamut of the video is elsewhere (for example, saturated red, green, and blue) than the color gamut of the movie. Otherwise, if at step 912 it is feared that the color gamut of XYZl* significantly exceeds the color gamut of a film print 166, then the color gamut is remapped at step 913 to produce a code in the -reshaped color gamut (still in the χ ζ ζ color) In space, but now 159347.doc -36· 201230817 is identified as ΧΥΖ 2). Note that the color gamut is not the color space but the value of the trajectory in the color space. The color gamut of a film is the color gamut that can be expressed in the film. The color gamut is the full color that can be expressed in the video, and the color gamut of the specific video material (for example, the video data pair) is the video data. A collection of unique colors that are actually used throughout. By splicing the color gamut in the ΧΥΖ color space, it is possible to compare other gamuts that are different from the image (the film is an absorptive medium, and the video display is emissive). Many techniques for gamut remapping are known, and the most successful is the combination of combining results from different techniques in different regions of the gamut. In general, 5, a color-domain remapping is implemented in a perceptually uniform color space (a special subset of the observer's reference color space), and the cIE 1976 (L' a' b ) color space (CIELAB) is particularly suitable. Therefore, in the embodiment of the gamut remapping step 913, the Rec. 709 white point (light illuminant) is used to convert the code in the ΧΥΖι color field into CIELAB, and the mapped code substantially does not exceed the film gamut, and then It is then converted back to the χγζ color space to produce a modified color gamut ,2, now having properties that do not significantly exceed the available film gamut. The value or advantage of performing gamut remapping in CIELAB rather than χΥΖ color space is such that a particular ratio change made to certain colors is similar in perception to the rest of the gamut (ie, , other colors) make the same proportion change (this is one of the properties of CIELAB, this is because it is perceptually uniform). In other words, in the CIELab space, a certain change in any amount along any axis in the color space in any direction is perceived by humans as a "same size" change. This helps to provide no birth

S 159347.doc •37· 201230817 生令人不安或過量假像之—辛敁 色域再映射’此係因為在該色 域之一些區域中在一個方向上 上u改顏色且在該色域之其他 區域中在一不同方向上(或抱太、、Λ 士 u飞很本,又有)修改顏色。(因為一視 訊顯示具有不同於一影片色域 巴埤之色域,所以在視訊色域 中將具有影片色域中不存在之宜此拓么 子隹之某些顏色。因此,若在該影 片色域中無法找到該視訊色域中夕 ^ 巴场1〒之一明亮飽和綠色,則可 藉由在該XYZ空間中(一般而色+^ 、奴而。)在負y方向上移動該綠色來S 159347.doc •37· 201230817 The disturbing or excessive illusion of life—the gamut gamut remapping' is because the color is changed in one direction and in the gamut in some areas of the gamut. In other areas, the color is modified in a different direction (or hug, Λ u u flying very, and there are). (Because a video display has a color gamut different from that of a movie gamut, there will be some colors in the video gamut that do not exist in the color gamut of the movie. Therefore, if the video is in the film If the color gamut cannot be found in the video color gamut, one of the bright spots in the field, you can move the green in the negative y direction by using the XYZ space (generally +^, slave). Come

再映射此綠色。此具有使此特定终A i此将疋綠色傾向於更不飽和(朝 向XYZ空間之一 CIE圖表之白耷F托你去「 衣又臼色區域移動「白色區(white- ward)」)之效應。然而,當該声诚 田必巴域中之綠色再映射至一淺 綠色時,亦可能需要在一類似方南 貝似万向上(但可能以一不同量) 移動或修改原始視訊色域中盆jUf ih. /Jr 匕4 T I具他綠色,以便保持該色域 中局部化之某種效應。) 舉例而言,若在視訊資料1〇8中需要某些飽和綠色,但 此等綠色在由影片印刷品166可重現之色域之外,則可在 再映射步驟913期間使視訊資料1〇8中之此等飽和綠色較不 飽和及/或較不明亮。然而,對於其他鄰近值(其等可能未 超過可用影片色域),一再映射將係必要的以避免與必須 再映射之此等值重疊。此外,除了避免一重疊之外,'應努 力使再映射儘可能平滑(在感知顏色空間中)以便最小化可 見假像(例如,馬赫帶(Mach band))之概度。 在步驟914處,透過一倒置影片印刷品模擬(iFpE)處理 自然色域(XYZ!)或再映射色域(ΧγΖ2)内之碼。該11?1^可表 示為函數或表示該函數之cLUT ’正如構建其他clut(儘 159347.doc -38- 201230817 S出於一不同原因且利用一不同經驗基礎)^在此情況 中,表示該iFPE之該CLUT將XYZ顏色值轉換成影片密度 碼且可實施為一 3D cLUT。一影片印刷品模擬(FpE)係膠片 11 8與162及投射系統168之發光物(投射器燈及反射器光學 器件)之一特徵,其將提供至一影片記錄器丨16之一組密度 值(例如’ Cineon碼)轉譯成當觀看投射系統ι68時期望測量 之顏色值。FPE在動畫產業之數位中間製作工作中係熟知 的,因為其等允許一操作者自一數位監視器工作以對一畫 面做出顏色校正且期望該校正在電影之數位發行與基於影 片的發行兩者中看起來正確。 如在上文稀疏cLUT之描述中,一 ρρΕ可充分表示為一 17乂17乂17稀疏吐1;1',具有極佳結果。倒置一171^以產生 iFPE係一簡單的數學練習(一般技術者熟知)。然而在許多 例項中,一 17x1 7x17 cLUT之倒置不可提供足夠平滑性質 及/或良性邊界效應。在此等情況中,可以一較不稀疏矩 陣(例如,34x34x34)或在展現較高改變率之區域中使用具 有較岔集取樣之一不均勻矩陣來模型化待倒置的FpE。 在步驟914處之iFPE之結果將產生對應於所提供色域 (即’ Rec. 709之色域)之χΥΖ值之影片密度碼(例如, Cineon碼)。因此,聚合變換915將視訊碼值(例如, 7〇9)轉譯成經編碼檔案U4中可使用之密度碼用於製作一 底片,如在印刷品166中,該底片被印刷時將產生影片上 之原始視訊内容102之可理解近似值。在步驟91〇處對應於 初始視訊碼之影片密度碼在步驟916處儲存gcLUT 128。 201230817 cLUT建立過程900在步驟917處結束,已產生CLUT 128。 該cLUT可為ID或3D。 圖10展示另一cLUT建立過程1〇〇〇 ’其在步驟ι〇1〇處以 視訊碼開始(再次使用Rec. 709作為一實例)。在步驟1015 處’使用聚合函數915之一較簡單近似值來表示自視訊碼 空間至影片密度資料(再次使用Cineon碼作為一實例)之變 換。一簡化實例係跳過步驟912及913。另一簡化係將該 Rec· 709對XYZ對密度資料組合為一單一 γ指數及3x3矩 陣’可能包含足夠縮放以確保不超過影片色域。然而,注 意當印刷該存檔時此等簡化將引起影像品質降低。此等簡 化可改變或不改變所恢復視訊資料之品質。在步驟1〇16 處,將值填入一簡化CLUT中,該CLUT可與步驟916中一樣 後集或可經更簡單模型化(例如,對於該等原色之每一者 模型化為一 1維(ID)LUT)。在步驟1〇17處,此簡化的cLUT 可供用作cLUT 128。 圖11展不表示自Rec. 709碼值mi至Cineon密度碼值 1112之一例示性轉換之一圖形111 〇。 可使用線性映射或函數112〇來製作非意欲被印刷之視訊 内谷之一影片存檔,因為線性映射或函數丨丨2〇之性質意欲 最佳化以最佳或幾乎敢佳雜訊分佈(即,寫入的每一碼值 由影片上相同大小範圍的密度值表示)寫入及恢復碼值(透 過影片記錄器116及影片掃描器132)之能力。在此實例 中,線性映射1120將Rec. 709碼值之範圍(64至94〇)映射至 相同值的(且「合法」’即,遵循Rec 7〇9)Cine〇n碼值(64至 159347.doc 201230817 940)。併入此一做法之一方法在Kutcka等人的題為「將視 訊存檔至影片中之方法及系統」(「Method and System ofMap this green again. This has the specific end A i which will tend to be more unsaturated than the green (the head of the CIE chart towards the XYZ space, you can go to "the white area of the clothing and the white area (white-ward)") effect. However, when the green color of the sound in the Chengtianba area is remapped to a light green color, it may also be necessary to move or modify the original video color basin in a similar direction (but possibly in a different amount). jUf ih. /Jr 匕4 TI has his green color in order to maintain some effect of localization in this color gamut. For example, if some saturated green is required in the video material 1〇8, but the green color is outside the color gamut reproducible by the film print 166, the video data may be made during the remapping step 913. These saturated greens in 8 are less saturated and/or less bright. However, for other neighboring values (which may not exceed the available movie gamut), a re-mapping will be necessary to avoid overlapping with such values that must be remapped. Furthermore, in addition to avoiding an overlap, 'should try to make the remapping as smooth as possible (in the perceived color space) in order to minimize the approximation of visible artifacts (eg, Mach bands). At step 914, the code within the natural color gamut (XYZ!) or the remapped gamut (Χ γ Ζ 2) is processed through an inverted film print simulation (iFpE). The 11?1^ can be expressed as a function or a cLUT representing the function 'as in constructing other cluts (by 159347.doc -38-201230817 S for a different reason and using a different empirical basis) ^ In this case, indicating the The CLUT of the iFPE converts the XYZ color value into a film density code and can be implemented as a 3D cLUT. A feature of a film print simulation (FpE) film 11 8 and 162 and projection system 168 illuminator (projector lamp and reflector optics) that will provide a density value to a film recorder 丨 16 ( For example, the 'Cineon code' translates to the color value that is expected to be measured when viewing the projection system ι68. FPE is well known in the digital mid-production work of the animation industry because it allows an operator to work from a digital monitor to color correct a picture and expects that the correction is in the digital distribution of the movie and the release based on the film. It looks correct. As described in the above description of the sparse cLUT, a ρρΕ can be fully expressed as a 17乂17乂17 sparse spit 1;1' with excellent results. Invert a 171^ to produce a simple mathematical exercise for the iFPE (known to those of ordinary skill). However, in many cases, an inverted 17x1 7x17 cLUT does not provide sufficient smoothing properties and/or benign boundary effects. In such cases, the FpE to be inverted may be modeled using a less heterogeneous matrix (e.g., 34x34x34) or in an area exhibiting a higher rate of change using a non-uniform matrix having a larger set of samples. The result of the iFPE at step 914 will produce a film density code (e.g., Cineon code) corresponding to the threshold of the provided color gamut (i.e., the color gamut of Rec. 709). Thus, the aggregation transform 915 translates the video code value (e.g., 7〇9) into a density code usable in the encoded file U4 for making a negative film, such as in print 166, which will be produced on the film when printed. An appreciable approximation of the original video content 102. The film density code corresponding to the initial video code at step 91A stores the gcLUT 128 at step 916. 201230817 The cLUT establishment process 900 ends at step 917, and the CLUT 128 has been generated. The cLUT can be ID or 3D. Figure 10 shows another cLUT setup process 1 ’ ' which begins with a video code at step ι〇1〇 (using Rec. 709 again as an example). At step 1015, a simpler approximation of one of the aggregate functions 915 is used to represent the transformation from the video code space to the film density data (using the Cineon code again as an example). A simplified example skips steps 912 and 913. Another simplification is to combine the Rec·709 versus XYZ pair density data into a single gamma index and a 3x3 matrix&apos; that may contain sufficient scaling to ensure that the film gamut is not exceeded. However, it is noted that such simplification will result in reduced image quality when printing the archive. Such simplification may or may not alter the quality of the recovered video material. At step 1 〇 16, the values are populated into a simplified CLUT, which may be the same as in step 916 or may be modeled more simply (eg, for each of the primary colors, modeled as a 1 dimension) (ID) LUT). At step 1〇17, this simplified cLUT is available for use as cLUT 128. Figure 11 does not show one of the exemplary conversions from the Rec. 709 code value mi to the Cineon density code value 1112. A linear mapping or function 112 can be used to create a video archive of a valley of video that is not intended to be printed, since the nature of the linear mapping or function is intended to be optimized for optimal or almost ambiguous noise distribution (ie The ability of each of the written code values to be written and recovered by the density value of the same size range on the film (through the movie recorder 116 and the film scanner 132). In this example, linear map 1120 maps the range of Rec. 709 code values (64 to 94 〇) to the same value (and "legal"', ie, Rec 7〇9) Cine〇n code value (64 to 159347) .doc 201230817 940). One of the ways to incorporate this approach is in Kutcka et al., entitled "Methods and Systems for Archiving Video to Film" ("Method and System of

Archiving Video to Film」)之美國臨時專利申請案第 61/393,858號中教示。然而,線性映射112〇不適用於期望 自其實現一影片印刷品166或電視電影轉換之一影片存 樓’此係因為暗色將顯得太暗(若非黑色),且亮色將顯得 太明亮(若非經修剪白色)。 如可由cLUT 128(為清楚起見此處展示為一 ID cLUT)描 述之非線性映射或函數113〇係過程9〇〇之結果,呈一單一 維度(而非3D)。在此實例中,即應用於Rec 7〇9視訊碼值 範圍(64…940) ’正規化為標準之線性光值(Hnear Hght value),k升至一指數yv〖de〇=2.35(用於一「昏暗環繞」觀 看環境之一適宜γ,但另一常見選擇係2.4〇),其產生如下 列方程式中展示的線性光值r丨(ν)」之範圍: 方程式1 :Archiving Video to Film" is taught in U.S. Provisional Patent Application Serial No. 61/393,858. However, the linear map 112〇 is not suitable for a movie store that is expected to implement a movie print 166 or a telecine from it. This is because the dark color will appear too dark (if not black) and the bright color will appear too bright (if not trimmed) white). As a result of the non-linear mapping or function 113 described by cLUT 128 (shown here as an ID cLUT for clarity), it is a single dimension (rather than 3D). In this example, it is applied to the Rec 7〇9 video code value range (64...940) 'normalized to the standard Hnear Hght value, k is raised to an index yv 〖de〇=2.35 (for One of the "dark surround" viewing environments is suitable for gamma, but another common choice is 2.4"), which produces a range of linear light values r 丨 (ν) as shown in the following equation: Equation 1:

fv _ V Λ \yVIDEO /(v) = --LQwl_^ χ Λ mYvideo _ I rviDEo Λ 其中vL〇w=64及vHIGH=940係下限碼值及上限碼值,各分別 對應於線性光值1L〇W=90%及1high= 1 %。此源於Rec. 709中 之規範:值64應係指派至一黑色(1%反射比)測試色票之碼 值,且值940應係指派至一白色(90%反射比)測試色票之碼 值’因此早先陳述Rec. 709係「場景參考」。注意對於使用 其他視訊資料碼之實施例,可使用不同值或方程式。 159347.doc -41 - 201230817 為轉換至影片密度碼,判定一中點視訊碼VMID,其對應 於可對應於一灰色(18%反射比)測試色票之視訊碼值, 即,滿足方程式: 方程式 2 : = 針對vmid對方程式1及方程式2進行求解可提供約431之 一值。在該等Cineon影片密度碼中,亦對應於一灰色(丨8〇/〇 反射比)測試色票之一影片密度碼值dMID係445。一常見影 片γ係Yfilm=〇.60 ’但取決於使用的膠片118,可選擇其他 值。Cineon影片密度碼每增量提供密度之一線性改變,且 畨度係透射率之倒數之log!〇,因此一額外常數s=5〇〇指定 每十進位級的步進數(number of steps per decade)。利用建 立的此等值’自視訊碼值至影片密度值之轉譯在此方程式 中表達: 方程式3 :咖)=,應 i.(l〇giQ(/⑺)_1〇g丨。(/(v廳)))+£/娜 圖开&gt; 1110中之非線性映射113〇係64至940之範圍中之視 訊碼之d(V)之一圖形。舉例而言,dL〇w=d(VL〇w=64)=68, dMm=d(vM1D=431)=445,且 dHIGH=d(VHIGH=940)=655。注意 雄、度碼值可四捨五入至最接近的整數值。 因為曲線1130之非線性特性,對於小於約256之視訊碼 值(v) ’增量視訊碼「v」可引起不連續影片密度碼「d」, 此係因為在此區域中,曲線113〇之斜率大於1。(舉例而 。,代替具有對應於連續或增量視訊碼之連續影片密度 碼,諸如1、2及3,序列中的密度碼可為當藉由 掃描該影片存檔進行密度讀取時(可能具有少量雜訊),3、 159347.doc -42· 201230817 4或5之密度讀取可全部映射至對應於密度碼4之視訊碼。 因此,此等密度讀取具有一定程度的雜訊抗擾性。)對於 大於約256之視訊碼值,曲線;π30之斜率小於】,且增量視 訊碼當四捨五入為整數時可引起重複密度碼(即,可存在 具有相同密度碼值之大於256之兩個不同視訊碼值)。(作為 一實例,對於一密度碼701,可存在對應於此密度碼之兩 個不同視訊碼。若一密度碼回讀為在密度上具有一個計數 之誤差,則此可引起相差若干計數之一視訊碼。因此,在 此區域中,讀取及往回轉換係額外雜訊。)結果,當自影 片存檔126恢復視訊碼時,影像之較明亮部分將稍微更具 雜讯且該影像之黑暗部分相比於使用丨:丨線性轉換丨12〇自 影片上之一視訊存檔恢復之視訊碼具有稍微較少雜訊。然 而,當需要將該存㈣刷i影片或利用—電視電影掃描之Fv _ V Λ \yVIDEO /(v) = --LQwl_^ χ Λ mYvideo _ I rviDEo Λ where vL〇w=64 and vHIGH=940 are the lower limit code value and the upper limit code value, each corresponding to a linear light value of 1L〇 W = 90% and 1 high = 1%. This is derived from the specification in Rec. 709: the value 64 should be assigned to a black (1% reflectance) test color ticket code value, and the value 940 should be assigned to a white (90% reflectance) test color ticket. The code value 'so earlier stated Rec. 709 is the "scene reference". Note that for embodiments using other video data codes, different values or equations can be used. 159347.doc -41 - 201230817 To convert to the film density code, determine a midpoint video code VMID, which corresponds to a video code value that can correspond to a gray (18% reflectance) test color ticket, ie, satisfy the equation: 2 : = Solving the vmid equation 1 and Equation 2 provides a value of approximately 431. In the Cineon film density codes, one of the color density code values dMID is 445 corresponding to a gray (丨8〇/〇 reflectance) test color ticket. A common movie gamma system Yfilm = 〇.60 ′, but depending on the film 118 used, other values can be selected. The Cineon film density code provides a linear change in density per increment, and the twist is the logarithm of the reciprocal of the transmittance; therefore, an additional constant s = 5 〇〇 specifies the number of steps per decade (number of steps per Decade). The translation of the value of the self-visual code value to the film density value is expressed in this equation: Equation 3: coffee) =, should be i. (l〇giQ(/(7))_1〇g丨. (/(v Hall))) +£/Natukai&gt; The non-linear mapping 113 in 1110 is one of the d(V) of the video code in the range of 64 to 940. For example, dL〇w=d(VL〇w=64)=68, dMm=d(vM1D=431)=445, and dHIGH=d(VHIGH=940)=655. Note that the male and female code values can be rounded to the nearest integer value. Because of the non-linear nature of curve 1130, the incremental video code "v" can cause a discontinuous video density code "d" for a video code value (v) less than about 256, because in this region, curve 113 The slope is greater than 1. (Examples. Instead of having a continuous film density code corresponding to a continuous or incremental video code, such as 1, 2, and 3, the density code in the sequence may be when density reading is performed by scanning the movie archive (possibly A small amount of noise), 3, 159347.doc -42· 201230817 4 or 5 density readings can all be mapped to the video code corresponding to density code 4. Therefore, these density readings have a certain degree of noise immunity. For a video code value greater than about 256, the curve; the slope of π30 is less than], and the incremental video code may cause a repetition density code when rounded to an integer (ie, there may be more than 256 of the same density code value) Different video code values). (As an example, for a density code 701, there may be two different video codes corresponding to the density code. If a density code is read back as having a count error in density, this may cause one of the counts to differ. Video code. Therefore, in this area, reading and back conversion are additional noise.) As a result, when the video code is restored from the video archive 126, the brighter portion of the image will be slightly more noisy and the darkness of the image. In part, compared to the use of 丨: 丨 linear conversion 丨 12 视 video code recovered from one of the video archives on the video has slightly less noise. However, when you need to save the (four) brush i video or use - TV movie scan

月b力時,此權衡係值得的。(注意因為線性轉換函數1 HQ 相比於曲線1130具有—較大最大密度,所以來自此線性轉 換方法之一影片存檔將導致亮色爆裂(即,過亮)之一影片 P刷。α類似地,該影片印刷品之暗色將比使用曲線丨13〇 製作之β片印刷品之對於暗色更暗。效應係自使用線性 轉換1120製作之-影片存稽之印刷可產生具有過高對比度 ⑴如達到該衫像之大部分太暗或太亮的程度)之一影片 印刷品。) 使用一LUT作為一有效計算工具或方 在上文實例中 法,作為用以涵蓋一争—μ μ _ 更—奴變換之一「速記」,可視情況 將該變換模型化為一可士 +曾π虹 J什算函數。右需要,可判定表示該 159347.doc -43· 201230817 變換之實際方程式,且重複進行計算以為待轉譯或變換之 每一像素或值獲得對應碼值。該cLUT,無論呈1〇或3d, 稀疏或為其他方式,均是用於處理該變換之可行實施方 案。使用一 cLUT係有利的,此係因為其使用於每圖框發 生數百萬次的計算中一般並不昂貴。然而,建立不同 cLUT可需要不同計算量(或不同數目及種類的測量,假若 因為貫際變換係未知的,難以計算或難以獲得參數必須 按經驗構建該cLUT)。 雖然前述關於本發明之各種實施例,但在不背離本發明 之基本範圍情況下可設計本發明之其他實施例。舉例而 言,上文實例中描述之一或多個特徵可被修改、省略及/ 或以不同組合使用。因此,根據隨附申請專利範圍判定本 發明之適當範圍。 【圖式簡單說明】 圖1A繪示用於將視訊存檔至適合於用在一電視電影中或 用於印刷之影片之一系統; 圖1B繪示用於恢復先前存檔至影片之視訊之一系統及用 於自存檔建立一影片印刷品之一系統; 圖2繪示存檔至影片之視訊之一序列漸進式圖框; 圖3繪示存檔至影片之視訊之一序列圖場交錯圖樞; 圖4A繪示在影片上之一漸進式圖框視訊存檔之最前面使 用之一特徵化圖案; 圖4B係圖4A之一部分之一擴展圖; 圖5繪示使用關於視訊資料及特徵化圖案之一顏色查詢 159347.doc 44- 201230817 表(cLUT)建立視訊之一影片存檔之一過程; 影片存檔恢復視 圖6緣示用於自藉由圖5之過程建立之一 訊之一過程; 圖7繪示僅使用關於視訊資料之— cLUT建立視訊之一影 片存檔之一過程; 圖崎示用於自藉由圖7之過程建立之一影片存槽恢復視 訊之一過程; 圖增示用於建立仰了之—第—實例之_過程,似了之 該第-實例供使用於產生適合於製作一影片印刷品之一影 片存樓之一方法中; 圖10繪示用於建立cLUT之另一督也丨々 力貫例之-過程,cLUT之 該另一實例適合在產生適合於製作— 衫片印刷品之一影片 存檔之一方法中使用; 圖11係表示一例示性cLUT之一圖形;及 圖12A至圖12B繪示一些膠片之特性曲線。 【主要元件符號說明】 100 影片存檔系統/存檔製作李 102 原始視訊内容 104 視訊源 106 視訊數位化器 108 視訊内容/視訊資料 no 特徵化圖案 112 編碼器 114 經編碼槽案/經編碼資料 I59347.doc 201230817 116 影片記錄器 118 膠片 120 校準資料 122 影片輸出 124 影片處理器 126 影片存檔 128 顏色查詢表(CLUT) 130 存檔讀取或擷取系統/存檔讀取系統 132 影片掃描器 134 校準資料 136 影片資料 138 解碼器 140 恢復的視訊資料 142 視訊輸出器件 144 視訊記錄益 146 經再生視訊内容 148 倒置cLUT 150 視訊比較系統 152 顯示器 154 顯示器 160 影片印刷輸出系統 162 影片印刷膠片 164 影片印刷機 166 影片印刷品 159347.doc -46- 201230817 168 投射系統 200 影片存槽 202 膠片 204 穿孔 206 選用之聲軌 210 貢料區域 211 貧料區域 212 貧料區域 220 影片圖框 221 垂直間隔 222 垂直高度 224 水平間隔 225 水平間隔 300 影片存檔/影片 302 膠片 304 穿孔 306 選用之聲軌 310 資料區域 311 資料區域 312 資料區域/圖場 313 資料區域/圖場 314 貧料區域 315 資料區域 320 影片圖框 159347.doc -47- 201230817 321 垂直間隔 322 垂直高度 323 圖場間間隔/圖框内間隙 324 水平間隔 325 水平間隔 400 標頭/影片/特徵化圖案 404 穿孔 412 資料區域/第一圖場 412T 第一圖場之頂部 413 資料區域/第二圖場 420 影片圖框高度/特徵化圖案圖框/影片圖框 421 頭部間隙/頂部間隙 423 圖場間間隙 424 側間隙 425 侧間隙 426 尾部間隙 430 色度元素/中和梯度 431 色度元素/原色或輔色梯度 432 色度元素/色票 450 角落區域 451 矩形/元素 452 矩形 453 矩形 FI 圖框 159347.doc -48- 201230817This balance is worthwhile when the monthly b force. (Note that because the linear transfer function 1 HQ has a larger maximum density than the curve 1130, one of the film archives from this linear conversion method will cause a bright color to burst (ie, too bright) one of the film P brushes. α Similarly, The darkness of the film print will be darker than the darkness of the beta print made using the curve 。13〇. The effect is made from the use of linear conversion 1120 - the print of the film can produce excessive contrast (1) if the shirt is reached One of the film prints that is mostly too dark or too bright. Using a LUT as a valid calculation tool or in the above example, as a "shorthand" to cover a competition - μ μ _ more - slave transformation, the transformation can be modeled as a sin + Zeng hong Hong J calculation function. Right, the actual equation representing the transformation of 159347.doc -43· 201230817 can be determined, and the calculation is repeated to obtain the corresponding code value for each pixel or value to be translated or transformed. The cLUT, whether 1 or 3d, sparse or otherwise, is a viable implementation for handling this transformation. The use of a cLUT is advantageous because it is generally not expensive because it is used in millions of calculations per frame. However, establishing different cLUTs may require different amounts of computation (or different numbers and types of measurements, and if the continuous transformation is unknown, it is difficult to calculate or difficult to obtain parameters, the cLUT must be constructed empirically). While the foregoing is a description of various embodiments of the present invention, other embodiments of the present invention can be devised without departing from the scope of the invention. By way of example, one or more of the features described in the above examples can be modified, omitted, and/or used in various combinations. Accordingly, the proper scope of the invention is determined by the scope of the appended claims. BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1A illustrates a system for archiving video to a movie suitable for use in a television movie or for printing; FIG. 1B illustrates a system for restoring a video previously archived to a video. And a system for creating a movie print from a self-archiving; FIG. 2 is a progressive sequence of one of the video archived to the video; FIG. 3 is a cross-sectional view of the sequence of the video archived to the video; A characterization pattern is used at the forefront of one of the progressive frame video archives on the film; FIG. 4B is an expanded view of one of the portions of FIG. 4A; FIG. 5 illustrates the use of one of the video data and the characterization pattern. Query 159347.doc 44-201230817 Table (cLUT) is a process of establishing one of the video archives of the video; the movie archive recovery view 6 is used for the process of establishing one of the messages by the process of FIG. 5; Using the video data - cLUT to establish one of the video archives of a video; Tuqi shows a process for recovering video from a video storage slot created by the process of Figure 7; - - the example of the process, it seems that the first instance is used in a method for producing a film store suitable for making a film print; Figure 10 shows another supervisor for establishing a cLUT For example, the other example of the cLUT is suitable for use in a method of producing a film archive suitable for making a print of a shirt; FIG. 11 is a diagram showing one of the exemplary cLUTs; and FIGS. 12A-12B Show some characteristic curves of the film. [Main component symbol description] 100 video archive system / archive production Lee 102 original video content 104 video source 106 video digitizer 108 video content / video data no characterization pattern 112 encoder 114 encoded slot / encoded data I59347. Doc 201230817 116 Film Recorder 118 Film 120 Calibration Data 122 Movie Output 124 Film Processor 126 Movie Archive 128 Color Lookup Table (CLUT) 130 Archive Read or Capture System/Archive Read System 132 Film Scanner 134 Calibration Data 136 Video Data 138 decoder 140 recovered video data 142 video output device 144 video recording benefit 146 regenerated video content 148 inverted cLUT 150 video comparison system 152 display 154 display 160 film print output system 162 film print film 164 film printer 166 film print 159347 .doc -46- 201230817 168 Projection System 200 Film Storage 202 Film 204 Perforation 206 Optional Sound Track 210 Digestion Area 211 Poor Area 212 Poor Area 220 Film Frame 221 Vertical Interval 222 Vertical Height 22 4 Horizontal interval 225 Horizontal interval 300 Movie archive/film 302 Film 304 Perforation 306 Selected sound track 310 Data area 311 Data area 312 Data area/field 313 Data area/field 314 Poor area 315 Data area 320 Film frame 159347 .doc -47- 201230817 321 Vertical spacing 322 Vertical height 323 Inter-field spacing/in-frame gap 324 Horizontal spacing 325 Horizontal spacing 400 Header/film/characterization pattern 404 Perforation 412 Data area/first field 412T First Top of the field 413 Data area / second field 420 Film frame height / characterization pattern frame / film frame 421 Head gap / top gap 423 Inter-field gap 424 Side gap 425 Side gap 426 Trail gap 430 color Degree Element / Neutral Gradient 431 Chroma Element / Primary Color or Auxiliary Color Gradient 432 Chroma Element / Color Ticket 450 Corner Area 451 Rectangular / Element 452 Rectangular 453 Rectangular FI Frame 159347.doc -48- 201230817

Fl-fl 第 一 圖 場 Fl-f2 第 二 圖 場 F2 圖 框 F2-fl 第 一 圖 場 F2-f2 第 二 圖 場 F3 圖 框 F3-fl 第 一 圖 場 F3-f2 第 _ _ 圖 場 159347.doc -49- {-1Fl-fl First map field Fl-f2 Second map field F2 Frame F2-fl First map field F2-f2 Second map field F3 Frame F3-fl First map field F3-f2 No. _ _ Field 159347 .doc -49- {-1

Claims (1)

201230817 七、申請專利範圍: 1. 一種用於將視訊内容存檔至影片上之方法,該方法包 括: 藉由基於一非線性變換至少將數位視訊資料轉換成影 片密度碼來編碼該數位視訊資料; 提供包含該經編碼數位視訊資料及與該數位視訊資料 相關聯之一特徵化圖案之經編碼資料; 根據該等影片密度碼將該經編碼資料記錄在影片上;及 自具有該記錄的經編碼資料之該影片製作一影片存 標。 2. 如π求項1之方法,其中藉由基於該非線性變換將該特 徵化圖案之像素值轉換成影片密度碼來編碼該經編碼資 料中之該特徵化圖案。 求項1之方法,其中藉由基於一線性變換將該特徵 化圖案之像素值轉換成影片密度碼來編碼該經編碼資料 令之該特徵化圖案。 3·如明求項丨之方法,其中使用表示該非線性變換之一顏 色查詢表執行該編碼。 4. 如μ求項i之方法,其中該特徵化圖案提供關於該數位 視訊資料之時間、空間及色度資訊之至少一者。 5. 如請求項1 9之方去,其中该特徵化圖案包含視訊圖框之 時間碼、指示該影片存檔上視訊資料之位置之元素及表 示預弋像素碼值之色票之至少一者。 6 · 如清求J§ 1 &gt; 1丄 之方法,其中該特徵化圖案包含資料、文字 159347.doc 201230817 及圖形元素之至少一者。 7. 如請求項1之方法,其中該特徵化圖案進一步包括: 一密度梯度及表示不同顏色分量之色票之至少一者。 8. 如請求項丨之方法,其中藉由以下建立該非線性變換: 將該數位視訊資料自一原始顏色空間轉換至具有不超 過該影片之一色域之一色域之一觀察者參考顏色空間; 使用一倒置影片印刷模擬變換將該觀察者參考顏色空 間中之該數位視訊資料之碼值轉換成影片密度碼; 儲存該等經轉換的影片密度碼以用作為該非線性變 換。 種用於自衫片存槽恢復視訊内容之方法,該方法包 含: 掃描含有編碼為基於影片的資料之數位視訊資料及與 °亥數位視訊資料相關聯之一特徵化圖案之該影片存檔之 至夕-部分;纟中已藉由一非線性變換將該數位視訊資 料編碼為基於影片的資料;及 〇 影片 的資 基於該特徵化圖案中含有的資訊解碼該影片存檔 2求項9之方法,其中已藉由該非線性變換將該 °中之該特徵化圖案之像素值編碼為基於影片 料。 11 ·如清求項9夕士·. 方法’其中該特徵化圖案提供關於該數位 視訊資料之办P彳 ^ gg 二間、時間及色度資訊之至少一者。 12·如清求項9之太土 ’,、中该特徵化圖案包含資料、文字 及圖形元素之至少一者。 J59347.doc 201230817 月求項9之方法,其t基於關於該非線性變換之資訊 執行該解碼。 14. -種用於將視訊内容存檔至影片上之系統,該系統包 括: -編碼器’其料產生含有對應於數位視訊資料之基 於〜片的資料及與該視訊資料相關聯之一特徵化圖案之 經編碼資料’其中該數位視訊資料及該特徵化圖案之像 &gt;、值係藉由非線性變換而編碼為該基於影片的資料; 一影片記錄器,其用於將該經編碼資料記錄在一影片 上;及 ’ 一影片處理器,其用於處理該影片以產生一影片存 檀。 15. -種用於自一影片存檀恢復視訊内容之系統,該系統包 括: 一影片掃描器,其用於掃描該影片存檔以產生基於影 片的資料; 、〜 一解碼益,其用於識別來自該基於影片的資料之—特 徵化圖案且用於基於該特徵化圖案解碼該基於影片的資 料以產生用於在恢復該視訊内容中使用之視訊資料;其 中該基於影片的資料係藉由一非線性變換而與該視訊資 料相關。 159347.doc 、201230817 VII. Patent application scope: 1. A method for archiving video content onto a movie, the method comprising: encoding the digital video data by converting at least digital video data into a film density code based on a non-linear transformation; Providing encoded data comprising the encoded digital video material and a characterization pattern associated with the digital video material; recording the encoded data on the video based on the film density codes; and encoding from the record having the record The film of the material is used to make a video. 2. The method of claim 1, wherein the characterization pattern in the encoded material is encoded by converting the pixel values of the feature pattern to a film density code based on the nonlinear transformation. The method of claim 1, wherein the characterized pattern of the encoded data is encoded by converting the pixel values of the characterization pattern to a film density code based on a linear transformation. 3. The method of claim </ RTI> wherein the encoding is performed using a color lookup table representing the nonlinear transformation. 4. The method of claim i, wherein the characterization pattern provides at least one of time, space, and chrominance information for the digital video material. 5. The method of claim 19, wherein the characterization pattern includes at least one of a time code of the video frame, an element indicating a location of the video material on the video archive, and a color ticket indicating a pre-pitch value. 6 · A method of seeking J§ 1 &gt; 1丄, wherein the characterization pattern comprises at least one of data, text 159347.doc 201230817 and graphic elements. 7. The method of claim 1, wherein the characterization pattern further comprises: a density gradient and at least one of color tickets representing different color components. 8. The method of claim 1, wherein the non-linear transformation is established by: converting the digital video material from an original color space to an observer reference color space having a color gamut that does not exceed one of the color gamuts of the movie; An inverted film print simulation transform converts the code value of the digital video material in the viewer reference color space into a film density code; and stores the converted film density code for use as the nonlinear transform. A method for recovering video content from a tablet storage slot, the method comprising: scanning a video archive containing digital video data encoded as video-based data and a characterization pattern associated with the digital video data to The eve-part; the digital video has been encoded into a video-based material by a non-linear transformation; and the video is decoded based on the information contained in the characterization pattern to decode the video archive 2 item 9 The pixel value of the characterization pattern in the ° has been encoded by the nonlinear transformation as a film based material. 11. The method of claim 9 wherein the characterization pattern provides at least one of P, ^ gg, time and chrominance information for the digital video material. 12. The characterization pattern includes at least one of a material, a text, and a graphic element. J59347.doc 201230817 The method of claim 9, wherein t performs the decoding based on information about the nonlinear transformation. 14. A system for archiving video content onto a video, the system comprising: - an encoder that produces a material based on the slice-based data corresponding to the digital video material and associated with the video material The coded data of the pattern 'where the digital video data and the image of the characterization pattern>, the value is encoded as the film-based material by nonlinear transformation; a film recorder for the encoded material Recorded on a movie; and 'a film processor that processes the movie to produce a movie. 15. A system for recovering video content from a video storage system, the system comprising: a video scanner for scanning the video archive to generate a video based material; , a decoding benefit, for identifying a characterization pattern from the film-based material and for decoding the film-based material based on the characterization pattern to generate video material for use in restoring the video content; wherein the film-based data is by A nonlinear transformation is associated with the video material. 159347.doc,
TW100137382A 2010-10-15 2011-10-14 Method and system for producing video archive on film TW201230817A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US39385810P 2010-10-15 2010-10-15
US39386510P 2010-10-15 2010-10-15

Publications (1)

Publication Number Publication Date
TW201230817A true TW201230817A (en) 2012-07-16

Family

ID=44860564

Family Applications (2)

Application Number Title Priority Date Filing Date
TW100137381A TW201230803A (en) 2010-10-15 2011-10-14 Method and system of archiving video to film
TW100137382A TW201230817A (en) 2010-10-15 2011-10-14 Method and system for producing video archive on film

Family Applications Before (1)

Application Number Title Priority Date Filing Date
TW100137381A TW201230803A (en) 2010-10-15 2011-10-14 Method and system of archiving video to film

Country Status (11)

Country Link
US (2) US20130194492A1 (en)
EP (2) EP2628295A2 (en)
JP (2) JP2013543182A (en)
KR (2) KR20130138267A (en)
CN (2) CN103155546A (en)
BR (2) BR112013008742A2 (en)
CA (2) CA2813774A1 (en)
MX (2) MX2013004152A (en)
RU (2) RU2013122105A (en)
TW (2) TW201230803A (en)
WO (2) WO2012051486A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10785496B2 (en) * 2015-12-23 2020-09-22 Sony Corporation Video encoding and decoding apparatus, system and method
JP2017198913A (en) * 2016-04-28 2017-11-02 キヤノン株式会社 Image forming apparatus and method for controlling image forming apparatus
RU169308U1 (en) * 2016-11-07 2017-03-14 Федеральное государственное бюджетное образовательное учреждение высшего образования "Юго-Западный государственный университет" (ЮЗГУ) Device for operative restoration of video signal of RGB-model
US11425313B1 (en) * 2021-11-29 2022-08-23 Unity Technologies Sf Increasing dynamic range of a virtual production display

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5086310A (en) * 1988-05-09 1992-02-04 Canon Kabushiki Kaisha Print control apparatus for effective multiple printing of images onto a common printing frame
DE69114083T2 (en) * 1990-08-29 1996-04-04 Sony Uk Ltd Method and device for converting a film into video signals.
US5430489A (en) * 1991-07-24 1995-07-04 Sony United Kingdom, Ltd. Video to film conversion
EP0648399A1 (en) * 1992-07-01 1995-04-19 Avid Technology, Inc. Electronic film editing system using both film and videotape format
US5667944A (en) * 1995-10-25 1997-09-16 Eastman Kodak Company Digital process sensitivity correction
JPH11164245A (en) * 1997-12-01 1999-06-18 Sony Corp Video recording device, video reproducing device and video recording and reproducing device
US6697519B1 (en) * 1998-10-29 2004-02-24 Pixar Color management system for converting computer graphic images to film images
EP1037459A3 (en) * 1999-03-16 2001-11-21 Cintel International Limited Telecine
US6866199B1 (en) * 2000-08-09 2005-03-15 Eastman Kodak Company Method of locating a calibration patch in a reference calibration target
US7167280B2 (en) * 2001-10-29 2007-01-23 Eastman Kodak Company Full content film scanning on a film to data transfer device
US20030081118A1 (en) * 2001-10-29 2003-05-01 Cirulli Robert J. Calibration of a telecine transfer device for a best light video setup
CN100474907C (en) * 2003-06-18 2009-04-01 汤姆森特许公司 Apparatus for recording data on motion picture film
DE102004001295A1 (en) * 2004-01-08 2005-08-11 Thomson Broadcast And Media Solutions Gmbh Adjustment device and method for color correction of digital image data
JP2005215212A (en) * 2004-01-28 2005-08-11 Fuji Photo Film Co Ltd Film archive system
US20080158351A1 (en) * 2004-06-16 2008-07-03 Rodriguez Nestor M Wide gamut film system for motion image capture
US7221383B2 (en) * 2004-06-21 2007-05-22 Eastman Kodak Company Printer for recording on a moving medium
US7298451B2 (en) * 2005-06-10 2007-11-20 Thomson Licensing Method for preservation of motion picture film
US7636469B2 (en) * 2005-11-01 2009-12-22 Adobe Systems Incorporated Motion picture content editing
JP4863767B2 (en) * 2006-05-22 2012-01-25 ソニー株式会社 Video signal processing apparatus and image display apparatus

Also Published As

Publication number Publication date
KR20130122621A (en) 2013-11-07
JP2013543182A (en) 2013-11-28
US20130194416A1 (en) 2013-08-01
WO2012051486A1 (en) 2012-04-19
MX2013004154A (en) 2013-10-25
JP2013543181A (en) 2013-11-28
CA2813777A1 (en) 2012-04-19
CN103155546A (en) 2013-06-12
EP2628295A2 (en) 2013-08-21
MX2013004152A (en) 2013-05-14
BR112013008742A2 (en) 2016-06-28
RU2013122105A (en) 2014-11-20
BR112013008741A2 (en) 2016-06-28
KR20130138267A (en) 2013-12-18
US20130194492A1 (en) 2013-08-01
RU2013122104A (en) 2014-11-20
EP2628294A1 (en) 2013-08-21
WO2012051483A2 (en) 2012-04-19
WO2012051483A3 (en) 2012-08-02
CN103155545A (en) 2013-06-12
CA2813774A1 (en) 2012-04-19
TW201230803A (en) 2012-07-16

Similar Documents

Publication Publication Date Title
David Stump Digital cinematography: fundamentals, tools, techniques, and workflows
US6285784B1 (en) Method of applying manipulations to an extended color gamut digital image
JP5314029B2 (en) Color signal conversion apparatus and color signal conversion method
US6282313B1 (en) Using a set of residual images to represent an extended color gamut digital image
US6748106B1 (en) Method for representing an extended color gamut digital image on a hard-copy output medium
KR20100043191A (en) Method of color mapping from non-convex source gamut into non-convex target gamut
Spaulding et al. Using a residual image to extend the color gamut and dynamic range of an sRGB image
JP2008507864A (en) Wide gamut system for video
TW201230817A (en) Method and system for producing video archive on film
JP2009514483A (en) Editing motion picture content
EP0991019B1 (en) Method of applying manipulations to a color digital image
Selan Cinematic color: From your monitor to the big screen
JP2004007410A (en) Decoding method of data encoded in monochrome medium
US20080158351A1 (en) Wide gamut film system for motion image capture
JP2002262125A (en) Processing film image for digital cinema
US6937362B1 (en) Method for providing access to an extended color gamut digital image and providing payment therefor
Hill (R) evolution of color imaging systems
WO2001078368A2 (en) Film and video bi-directional color matching system and method
Perez The color management handbook for visual effects artists: digital color principles, color management fundamentals & aces workflows
Giorgianni et al. Color Encoding in the Photo CD System
Krasser Post-Production/Image Manipulation
Madden et al. Retaining Color Fidelity in Photo CD Image Migration
MADDEN COLOR ENCODING| N THE PHOTO CD SYSTEM
Stump What Is Digital
Alvira BASIC VIDEO AND DIGITAL IMAGING THEORY IN PATHOLOGY