TW201137788A - Computerized method and system for pulling keys from a plurality of color segmented images, computerized method and system for auto-stereoscopic interpolation, and computer program product - Google Patents


Info

Publication number
TW201137788A
Authority
TW
Taiwan
Prior art keywords
image
pixel
dimensional image
color
dimensional
Prior art date
Application number
TW099143101A
Other languages
Chinese (zh)
Inventor
Kuniaki Izumi
Original Assignee
Deluxe 3D Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US12/634,379 (US8638329B2)
Priority claimed from US12/634,368 (US8538135B2)
Application filed by Deluxe 3D Llc
Publication of TW201137788A

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T7/00 Image analysis
                    • G06T7/10 Segmentation; Edge detection
                        • G06T7/11 Region-based segmentation
                    • G06T7/50 Depth or shape recovery
                        • G06T7/55 Depth or shape recovery from multiple images
                            • G06T7/593 Depth or shape recovery from multiple images from stereo images
                    • G06T7/97 Determining parameters from multiple pictures
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10004 Still image; Photographic image
                            • G06T2207/10012 Stereo images
                        • G06T2207/10016 Video; Image sequence
                            • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence
                        • G06T2207/10024 Color image
    • H ELECTRICITY
        • H04 ELECTRIC COMMUNICATION TECHNIQUE
            • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
                • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
                    • H04N13/20 Image signal generators
                        • H04N13/261 Image signal generators with monoscopic-to-stereoscopic image conversion

Abstract

Described are computer-based methods and apparatuses, including computer program products, for three dimensional image rendering. In some embodiments, data indicative of a two dimensional image is stored in a data storage device, the two dimensional image comprising a plurality of pixels. A plurality of color segmented frames are generated based on the two dimensional image, wherein each color segmented frame comprises one or more objects. For each of the color segmented frames, a key is generated based on the one or more objects. A depth map is calculated for the two dimensional image based on the keys, wherein the depth map comprises data indicative of three dimensional information for each pixel of the two dimensional image.

In some embodiments, a first two dimensional image and a second two dimensional image are received. A reduced pixel image is generated for each of the first and second two dimensional images, wherein each reduced pixel image comprises a reduced pixel size that is less than the original pixel size. Boundary information is calculated for each of the first and second two dimensional images. A depth map is calculated for the first and second reduced pixel images, wherein the depth map comprises data indicative of three dimensional information for one or more objects in the first and second reduced pixel images. A depth map is calculated for the first and second two dimensional images based on the boundary information for each of the first and second two dimensional images and the depth map of the first and second reduced pixel images.

Description

VI. Description of the Invention:

[Technical Field of the Invention]

The present invention relates generally to computer-based methods and apparatuses, including computer program products, for three dimensional rendering, and particularly to pulling keys from color segmented images and to auto-stereoscopic interpolation.

[Prior Art]

Three dimensional (3D) imaging is a technique for creating the illusion of depth in an image so that the depth is perceived by a viewer. With stereoscopic imaging, the illusion of depth can be created by presenting a slightly different image of the depicted scene to each eye (e.g., for two dimensional (2D) images, photographs, or movies). Typically, for the viewer to perceive the depth of the media, the user must view the stereoscopic images through some type of special viewing apparatus, such as special headgear or glasses. In contrast to stereoscopic viewing, auto-stereoscopic imaging is a technique for displaying 3D images that can be viewed without the use of any special viewing apparatus.

Although the media industry has made advances in 3D imaging, many challenges remain in effectively and accurately extracting objects from images and in correctly generating depth information for those objects. Color segmentation is a derivation process that can be used to extract large homogeneous regions based on color and/or texture. Color segmentation takes an original 2D image, which may contain hundreds or thousands of colors, and reduces the number of colors in the 2D image to a smaller subset of distinct colors. The resulting color segmented image can be used to generate a depth image that represents depth information for each pixel or object within the image.

Additionally, one solution for speeding up the computation time required to generate a 3D image from a 2D image is automatic rotoscoping. Rotoscoping refers to the process of tracing objects within an image. In its most traditional form, rotoscoping refers to generating a matte for an element on a live-action plate (the matte is used to combine two or more image elements into a single final image) so that the element can be composited over another background. Bezier curves can be used to automatically define lines that render the 2D contour of an object, by evaluating the object at a number of differently spaced points and then connecting the points in approximate order with linear segments. However, when using only Bezier curves, the object is usually only "loosely" outlined, because of the desire to limit the number of Bezier points in order to increase 3D rendering efficiency (e.g., the fewer points that need to be transposed from frame to frame, the faster the processing time, since fewer points need to be manipulated). It is therefore advantageous to use as few Bezier points as possible, which results in only a rough trace of the contour.

A depth map can be generated to indicate which object regions within a 2D image are closer to the viewer and which are farther from the viewer. Although a depth map can be generated from a single 2D image, depth maps are usually generated from a stereoscopic pair (a pair of images captured by a pair of corresponding cameras, where the cameras are configured so that each camera captures a different vantage point and there is a known relationship between the cameras). To generate the depth map, common points between the images usually must be correctly associated with the vantage point information. For example, segments (e.g., a predetermined square of pixels) can be compared between the images to determine common points. This process, however, is extremely dependent on the accuracy with which the common points are selected within the two images, and that selection is an important and time-consuming choice. Thus, although various techniques have been adapted to increase the efficiency and speed of generating 3D images, the process remains time-consuming, complex, and expensive.

[Summary of the Invention]

In one aspect, the invention features a computerized method for pulling keys from a plurality of color segmented images. The method includes storing data indicative of a two dimensional image in a data storage device, the two dimensional image comprising a plurality of pixels. The method further includes generating, by a color segmentation unit of a computer, a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects. The method further includes generating, by the color segmentation unit, a key for each of the color segmented frames based on the one or more objects, and calculating, by a depth map unit of the computer, a depth map for the two dimensional image based on the keys, wherein the depth map comprises data indicative of three dimensional information for each pixel of the two dimensional image.

In another aspect, the invention features a system for pulling keys from a plurality of color segmented images. The system includes a data storage device configured to store data indicative of a two dimensional image, the two dimensional image comprising a plurality of pixels. The system includes a color segmentation unit, in communication with the data storage device, configured to generate a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects, and to generate a key for each of the color segmented frames based on the one or more objects. The system includes a depth map unit, in communication with the color segmentation unit and the data storage device, configured to calculate a depth map for the two dimensional image based on the keys, wherein the depth map comprises data indicative of three dimensional information for each pixel of the two dimensional image.

In another aspect, the invention features a computer program product, tangibly embodied in a computer readable storage medium. The computer program product includes instructions operable to cause a data processing apparatus to: store data indicative of a two dimensional image in a data storage device, the two dimensional image comprising a plurality of pixels; generate, by a color segmentation unit of a computer, a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects; generate, by the color segmentation unit, a key for each of the color segmented frames based on the one or more objects; and calculate, by a depth map unit of the computer, a depth map for the two dimensional image based on the keys, wherein the depth map comprises data indicative of three dimensional information for each pixel of the two dimensional image.

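As an editorial sketch only (no code appears in the original disclosure), the key-pulling aspects above might look roughly as follows in Python. The two basic colors and their RGB value ranges are the illustrative values used later in the detailed description, and a pixel is allowed to fall into several overlapping ranges, as the text permits.

```python
# Sketch: split a 2D image into color segmented frames and pull a binary
# "key" (mask) from each frame. Colors and ranges are illustrative assumptions.
BASIC_COLORS = {
    "light red": ((128, 255), (0, 95), (0, 95)),   # (R, G, B) value ranges
    "dark red":  ((64, 127),  (0, 95), (0, 95)),
}

def in_range(pixel, color_range):
    """True if the (r, g, b) pixel lies inside a basic color's value range."""
    return all(lo <= c <= hi for c, (lo, hi) in zip(pixel, color_range))

def pull_keys(image):
    """image: 2D list of (r, g, b) tuples -> one boolean mask per basic color."""
    return {name: [[in_range(px, rng) for px in row] for row in image]
            for name, rng in BASIC_COLORS.items()}

demo = [[(200, 10, 10), (90, 20, 20)], [(130, 50, 50), (250, 250, 250)]]
for name, key in pull_keys(demo).items():
    print(name, key)
```

Note that the pixel (130, 50, 50) matches only "light red" here, while a range such as R=100-150 would, per the description, associate a region with both basic colors of the pair.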

In another aspect, the invention features a computerized method for auto-stereoscopic interpolation. The method includes receiving, by an input unit of a computer, a first two dimensional image and a second two dimensional image, each two dimensional image comprising a pixel size, and generating, by a pre-processing unit of the computer, a reduced pixel image for each of the first and second two dimensional images, wherein each reduced pixel image comprises a reduced pixel size that is less than the pixel size. The method also includes calculating, by the pre-processing unit, boundary information for each of the first and second two dimensional images; calculating, by a depth map unit of the computer, a depth map for the first and second reduced pixel images, wherein the depth map comprises data indicative of three dimensional information for one or more objects in the first and second reduced pixel images; and calculating, by the depth map unit, a depth map for the first and second two dimensional images based on the boundary information for each of the first and second two dimensional images and the depth map of the first and second reduced pixel images.

In another aspect, the invention features a system for auto-stereoscopic interpolation. The system includes an input unit configured to receive a first two dimensional image and a second two dimensional image, each two dimensional image comprising a pixel size. The system also includes a pre-processing unit, in communication with the input unit, configured to generate a reduced pixel image for each of the first and second two dimensional images, wherein each reduced pixel image comprises a reduced pixel size that is less than the pixel size, and to calculate boundary information for each of the first and second two dimensional images. The system includes a depth map unit, in communication with the pre-processing unit, configured to calculate a depth map for the first and second reduced pixel images, wherein the depth map comprises data indicative of three dimensional information for one or more objects in the first and second reduced pixel images, and to calculate a depth map for the first and second two dimensional images based on the boundary information for each of the first and second two dimensional images and the depth map of the first and second reduced pixel images.

In another aspect, the invention features a computer program product, tangibly embodied in a computer readable storage medium. The computer program product includes instructions operable to cause a data processing apparatus to: receive a first two dimensional image and a second two dimensional image, each two dimensional image comprising a pixel size; generate a reduced pixel image for each of the first and second two dimensional images, wherein each reduced pixel image comprises a reduced pixel size that is less than the pixel size; calculate boundary information for each of the first and second two dimensional images; calculate a depth map for the first and second reduced pixel images, wherein the depth map comprises data indicative of three dimensional information for one or more objects in the first and second reduced pixel images; and calculate a depth map for the first and second two dimensional images based on the boundary information for each of the first and second two dimensional images and the depth map of the first and second reduced pixel images.

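The auto-stereoscopic flow can likewise be sketched. The following is a deliberately simplified editorial illustration, not the patented implementation: plain Python lists stand in for images, and the block-matching and nearest-neighbour upsampling strategies are assumptions the text does not specify.

```python
# Sketch: reduce each image of a stereo pair, estimate a coarse depth map on
# the reduced pair, then spread that depth back over the full pixel size.
def reduce_image(img, factor):
    """Average non-overlapping factor x factor blocks (reduced pixel image)."""
    h, w = len(img), len(img[0])
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)]
            for y in range(h // factor)]

def coarse_depth(left, right, max_disp=4):
    """Per-pixel disparity on the reduced pair; larger disparity = nearer."""
    h, w = len(left), len(left[0])
    depth = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best, best_d = float("inf"), 0
            for d in range(min(max_disp, x) + 1):
                cost = abs(left[y][x] - right[y][x - d])
                if cost < best:
                    best, best_d = cost, d
            depth[y][x] = best_d
    return depth

def full_depth(coarse, factor):
    """Nearest-neighbour upsampling back to the full pixel size; the patent
    instead refines this with boundary pixels computed at full resolution."""
    return [[coarse[y // factor][x // factor]
             for x in range(len(coarse[0]) * factor)]
            for y in range(len(coarse) * factor)]

left = [[10, 10, 80, 80], [10, 10, 80, 80]]     # toy grayscale pair
right = [[10, 80, 80, 10], [10, 80, 80, 10]]
print(full_depth(coarse_depth(reduce_image(left, 2), reduce_image(right, 2)), 2))
```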

In other implementations, any of the above aspects can include one or more of the following features. In some examples, the systems and methods allow the Bezier points of an object's Bezier map to be adjusted (e.g., when the initial color segmentation is not accurate enough or does not produce the desired effect). In some examples, for each color segmented frame, an edge of each of the one or more objects can be defined by: automatically calculating a Bezier map of the object, the Bezier map comprising a plurality of Bezier points about the object and a plurality of Bezier curves connecting the plurality of Bezier points; receiving data indicative of an adjustment to the Bezier map; and generating a more detailed map of the object based on the data, wherein the more detailed map comprises a plurality of additional points combined with the plurality of Bezier points.

In some examples, generating the color segmented frames can include defining a plurality of basic colors, wherein each basic color is used to generate a color segmented frame. Each of the plurality of basic colors can comprise a predefined color value range, wherein a pixel having a color value within the predefined color value range is associated with the color segmented frame generated based on that basic color. Data indicative of a new color value range for one or more of the plurality of basic colors can be received, and the predefined color value range can be adjusted for each of the one or more colors based on the data. For each color of the plurality of basic colors, the plurality of basic colors can comprise a color pair, the color pair comprising a light color and a dark color. The plurality of basic colors can include brown and beige.

In other examples, calculating the depth map can include determining that a three dimensional representation of an object in the two dimensional images causes a portion of the object that was not within view in the two dimensional image to come into view, and stretching the background behind the object, a side of the object, or both to fill in the portion of the object coming into view. Calculating the depth map can include determining that a three dimensional representation of an object in the two dimensional images causes a portion of the object that was within view in the two dimensional image to disappear from view, and shrinking a side of the object to hide the portion of the object disappearing from view.

In some examples, calculating the depth map can include calculating three dimensional information for each pixel based on a key factor of the plurality of color segmented frames. The key factor can comprise the HLS (hue, lightness, and saturation) color space of the pixel. The calculating can include determining whether the level of saturation, the level of lightness, or both is a high level or a low level, and assigning a near depth value to the pixel if the saturation or lightness level is a high level, or assigning a far depth value to the pixel if the saturation or lightness level is a low level. The key factor can comprise a value change amount among the pixel and a neighboring group of pixels. The calculating can include determining, based on the value change amount, whether the pixel is part of a plane, and assigning a far depth value to the pixel if the pixel is part of the plane.

In other examples, the key factor can include a position of each of the one or more objects in each color segmented frame. The calculating can include, for each object: assigning a near depth value to each pixel within the object if the position of the object is a lower position of the color segmented frame; assigning a far depth value to each pixel within the object if the position of the object is an upper position of the color segmented frame; assigning a near depth value to each pixel within the object if the position of the object is within a corresponding object of another color segmented frame; or assigning a far depth value to each pixel within the object if the position of the object is at an edge of the color segmented frame.

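A minimal sketch of the HLS key factor just described, assuming the example thresholds and the 0 to 255 depth range quoted in the detailed description below:

```python
# Sketch: assign a near depth value to pixels with high saturation or
# lightness, and a far value otherwise. Thresholds follow the text's examples.
import colorsys

NEAR, FAR = 200, 50  # example depth values for a 0-255 depth range

def depth_from_hls(r, g, b):
    """r, g, b in 0-255. Returns an example depth value for the pixel."""
    h, l, s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    if s * 100.0 >= 60.0 or l * 100.0 >= 30.0:  # high saturation or lightness
        return NEAR
    return FAR                                   # low saturation and lightness

print(depth_from_hls(255, 40, 40))  # saturated red -> 200 (near)
print(depth_from_hls(20, 20, 25))   # dark, dull pixel -> 50 (far)
```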

In some examples, the key factor can include a ratio of the size of each of the one or more objects in each color segmented frame to the size of that color segmented frame, and the calculating can include assigning, for each object, a depth value to each pixel within the object based on the ratio. The key factor can include information indicative of a position of an object in the two dimensional image and a position of a corresponding object in a consecutive two dimensional image. The calculating can include determining that the position of the object in the two dimensional image differs from the position of the corresponding object in the consecutive two dimensional image, and assigning a near depth value to each pixel within the object.

In some examples, data indicative of a previously generated depth map can be stored in the data storage device, the depth map comprising data indicative of three dimensional information for each pixel of a corresponding two dimensional image, and a new depth map can be generated based on the previously generated depth map, wherein the new depth map comprises data indicative of a larger range of three dimensional information for each pixel of the corresponding two dimensional image. Calculating the depth map can include applying one or more experience rules to adjust the depth map, each rule configured to adjust the depth map based on human perception of one or more objects in the two dimensional image.

In other examples, the system can include an input unit configured to receive data indicative of the two dimensional image and store the data in the data storage device, and to receive data indicative of a previously generated depth map and store that data in the data storage device. The system can include an edge generation unit configured to generate edge information for each object in the two dimensional image. The system can include a three dimensional experience unit configured to apply one or more experience rules to adjust the depth map, each rule configured to adjust the depth map based on human perception of one or more objects in the two dimensional image.

In some examples, a third two dimensional image can be generated based on the depth maps of the first and second two dimensional images, wherein the third two dimensional image comprises a vantage point between a first vantage point of the first two dimensional image and a second vantage point of the second two dimensional image. The three dimensional information for the objects in the reduced pixel images can be generated based on a difference between the pixel positions of the objects in the first reduced pixel image and the second reduced pixel image.

In other examples, the method includes comparing two corresponding pixels, wherein a first pixel of the two corresponding pixels is located within the first reduced pixel image and a second pixel is located within the second reduced pixel image; calculating data indicative of how far apart the first pixel and the second pixel are; and assigning, in the depth map, three dimensional information to the two corresponding pixels indicating that an object containing them is far from a viewer of a three dimensional view containing the object. The method can further include comparing two corresponding pixels located within the first and second reduced pixel images, calculating data indicative of an offset between the first pixel and the second pixel, and assigning three dimensional information to the two corresponding pixels indicating that the object is close to a viewer of a three dimensional view containing the object.

In some examples, the pixel size can be calculated as the product of a pixel length and a pixel width of each corresponding two dimensional image. Calculating the depth map for the first and second two dimensional images can include calculating a depth map value for each boundary pixel of the boundary information based on the first and second two dimensional images, and determining the depth information of the remaining pixels of the depth map from the data of the corresponding boundary pixels of the first and second reduced pixel images.

In other examples, calculating the boundary information includes generating a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects, and setting, for each pixel of the first and second reduced pixel images, a boundary point indicator based on the color segmented frames, wherein the boundary point indicator comprises data indicative of whether the pixel is a boundary point. Each of the plurality of color segmented frames can be verified to ensure that it contains one or more cohesive objects, each cohesive object comprising an identifiable boundary line.

In some examples, calculating the depth map includes identifying a hidden pixel of the first reduced pixel image by identifying a visible pixel within the first reduced pixel image that does not have a corresponding pixel within the second reduced pixel image. A third two dimensional image can be generated based on the depth maps of the first and second two dimensional images, wherein the third two dimensional image comprises a vantage point between the first vantage point of the first two dimensional image and the second vantage point of the second two dimensional image, and wherein, based on the identified hidden pixels, the third two dimensional image includes a region that comes into view or a region that disappears from view.

In other examples, the system can include a color segmentation unit, in communication with the pre-processing unit, configured to generate a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects. The system can include a conversion unit configured to generate a third two dimensional image based on the depth map, wherein the third two dimensional image comprises a vantage point between the first vantage point of the first two dimensional image and the second vantage point of the second two dimensional image.

The techniques described herein, which include both methods and apparatuses, can provide one or more of the following advantages. The techniques provide faster, more efficient, and more accurate three dimensional conversion than existing conversion tools by generating an accurate depth map from two dimensional images and then converting the depth map into three dimensional images (e.g., via a stereoscopic pair). By color segmenting the two dimensional images and identifying object boundaries based on the color segmented frames, the systems and methods achieve significant time savings over generating boundary information by other means. Additionally, the saturation of each color used to generate the color segmented images can initially be generated and adjusted by a user to customize the techniques for a particular set of images (or frames). Further, although the system automatically generates a Bezier map for the objects within a color segmented frame, the Bezier points can also be adjusted manually (e.g., when the initial color segmentation is not accurate enough or does not produce the desired three dimensional effect).

The systems and methods disclosed herein can operate not only on two dimensional images but also on a previously generated depth map (e.g., by using a larger custom depth range, so that the depth information contains more information for determining and/or distinguishing nearer and farther objects, which results in a depth map more accurate than a coarsely-grained one). Further, human experience data can be used to convert two dimensional images more accurately (e.g., human experience data about facial features). Additionally, the first and last two dimensional images (or frames) of a frame sequence can be converted, and all two dimensional images between them can then be converted automatically based on the first and last images. The last two dimensional image of the sequence can then be set as the first two dimensional image of a second sequence, and the process repeated recursively. If desired, the systems and methods disclosed herein can also edit an automatically converted two dimensional image of the sequence.

Other aspects and advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, which illustrate the principles of the invention by way of example only.

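The object-position key factor described above can be illustrated as follows; this is an editorial sketch in which the boolean-mask representation and the order in which the rules are tested are assumptions, since the text does not rank them.

```python
# Sketch: assign one depth value to every pixel of an object mask using the
# position rules (edge -> far, nested in another frame's object -> near,
# lower half -> near, upper half -> far). Values follow the 0-255 examples.
NEAR, FAR = 200, 50

def bounding_box(mask):
    pts = [(y, x) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    ys = [y for y, _ in pts]
    xs = [x for _, x in pts]
    return min(ys), max(ys), min(xs), max(xs)

def object_depth(mask, other_masks=()):
    """Depth value for the object in `mask`, given masks from other frames."""
    h, w = len(mask), len(mask[0])
    top, bottom, left, right = bounding_box(mask)
    if top == 0 or bottom == h - 1 or left == 0 or right == w - 1:
        return FAR   # object sits at an edge of the color segmented frame
    for other in other_masks:
        if all(other[y][x]
               for y in range(h) for x in range(w) if mask[y][x]):
            return NEAR  # object lies inside a corresponding object
    mid = (top + bottom) / 2.0
    return NEAR if mid >= h / 2.0 else FAR  # lower half near, upper half far

# A 4x4 frame with a small object in the lower half -> 200 (near).
mask = [[False, False, False, False],
        [False, False, False, False],
        [False, True,  True,  False],
        [False, False, False, False]]
print(object_depth(mask))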

[Embodiments]

The above and other objects, features, and advantages of the present invention, as well as the invention itself, will be more fully understood from the following description of the various embodiments, when read together with the accompanying drawings.

In general, the techniques disclosed herein provide automatic color segmentation of two dimensional images (e.g., the frames of an animation), identification of object boundaries based on the color segmented frames (e.g., via a Bezier map that can be adjusted automatically and then successively fine-tuned), generation of an accurate depth map from the two dimensional images (e.g., with a custom depth range), and conversion of the depth map into three dimensional images (e.g., via a stereoscopic pair). The techniques can operate on any type of two dimensional image, and can also operate on a previously generated depth map (e.g., a depth map lacking the fine-tuned detail used by the present techniques).

FIG. 1 illustrates an exemplary 3D rendering system 100 according to the invention. The 3D rendering system 100 includes a computer 102. The computer 102 includes a depth generator 104, an input unit 106, and a conversion unit 108. The input unit 106 is in communication with the depth generator 104. The depth generator 104 includes a pre-processing unit 110 and a database 112 (e.g., a data storage device). The depth generator 104 also includes a color segmentation unit 114 and a depth map creation unit 116. The color segmentation unit 114 includes an edge generation unit 118. The depth map creation unit 116 includes a 3D experience unit 120. The various components of the depth generator 104 (e.g., the pre-processing unit 110 through the 3D experience unit 120 of the depth map creation unit 116) are in communication with one another.

The computer 102 is a computing system (e.g., a programmable, processor-based system) specially configured to generate 3D images. The computer 102 can include, for example, a microprocessor, a hard drive (e.g., the database 112), random access memory (RAM), read only memory (ROM), input/output (I/O) circuitry, and any other necessary computer components. The computer 102 is preferably adapted for use with various types of storage devices (persistent and removable), such as, for example, a portable drive, magnetic storage (e.g., a floppy disk), solid state storage (e.g., a flash memory card), optical storage (e.g., a compact disc or CD), and/or network/Internet storage. The computer 102 can comprise one or more computers, including, for example, a personal computer (e.g., an IBM/PC compatible computer) operating under a Windows, UNIX, or other suitable operating system, or a workstation (e.g., a SUN or Silicon Graphics workstation), preferably including a graphical user interface (GUI).

The depth generator 104 can generate depth maps for 2D images and/or process and refine previously generated depth maps. For example, the depth generator 104 can import (e.g., via the input unit 106, the database 112, or an Internet or Ethernet connection (not shown)) a previous depth image, and recalculate and fine-tune the parameters of that depth map more accurately. The depth generator 104 can also receive a 2D image that has no associated depth map, and calculate a depth map for the 2D image without any other additional data. The color segmentation unit 114 can generate a plurality of color segmented images (e.g., for a 2D image), and the depth map creation unit 116 can use the color segmented images to generate the depth map.

The input unit 106 enables information to be communicated to the depth generator 104. For example, the input unit 106 provides an interface that enables a user to communicate with the 3D rendering system 100 via an input device 122 (e.g., the input device 122 can send data 124 to the input unit 106). The terms user and operator both refer to a person using the 3D rendering system 100 and are used interchangeably. The input device 122 can comprise any device that enables a user to provide input to a computer. For example, the input device 122 can comprise a conventional input device such as a keyboard, a mouse, a trackball, a touch screen, a touch pad, voice recognition hardware, a dial, a switch, a button, a foot pedal, a remote control, a scanner, a camera, a microphone, and/or a joystick. The input unit 106 can be configured, for example, to receive two dimensional images from the input device 122.

The conversion unit 108 can convert a depth map of a 2D image into a 3D image. In some embodiments, the conversion unit 108 converts a sequence of frames (e.g., the sequential frames of an animation). For example, the depth generator 104 can convert the first frame and the last frame of any given segment of an animation, and then automatically convert all frames between the first frame and the last frame. The last frame can then be set as the first frame of a second frame sequence. Additionally, if desired, the depth generator 104 can edit a frame between the first and last frames of a frame sequence.

The edge generation unit 118 is configured to generate edge information for each object in a two dimensional image (e.g., the edge generation unit 118 rotoscopes the objects within the image, or rotoscopes the color segmented frames). Although the edge generation unit 118 is shown as a unit within the color segmentation unit 114, this is for exemplary purposes only, and the edge generation unit can also be a unit separate from the color segmentation unit 114.

The three dimensional experience unit 120 is configured to apply one or more experience rules for use when generating and/or adjusting a depth map (e.g., a depth map generated by the depth map creation unit 116). Each rule is configured to adjust the depth map based on human perception. For example, the rules can be configured to adjust the depth map based on human perception of one or more objects in the two dimensional image. In some embodiments, when the depth map creation unit 116 calculates a depth map, the depth map creation unit 116 is configured to apply one or more experience rules to adjust the depth map.

Advantageously, the 3D experience unit 120 can be configured to incorporate human experience data. Most 3D rendering programs apply the same rules to a particular image, generating a depth map without considering image attributes (e.g., shadows, light angles, unexpected colors) or attributes of the objects within the image itself (e.g., faces, celestial bodies, and the like). Consequently, when light cast at a certain angle in a particular image projects a strange shadow, such programs typically interpret the object literally based on the colors, and may render the object incorrectly because of the strange shadow. This may not always produce the "correct" 3D image, that is, the 3D image a human viewer would perceive when viewing the same object. In contrast to the direct application of pixel-color-based rules, people do not always perceive an image or object based on color alone. Unlike a program, the human eye works with illusions: a person can understand depth even when shadows or colors are "wrong" (e.g., a nose always protrudes from a face, regardless of whether a shadow cast on the nose makes the pixel information otherwise indicate that the nose caves inward). Because of such human illusions, automatic 2D-3D conversion does not always result in a correct 3D image.

In some examples, the depth generator 104 can be configured with a "facial recognition" illusion that incorporates information about how humans perceive faces. For example, as described above, the face illusion verifies that a nose always protrudes from a face rather than extending inward into the face (e.g., even when shadows indicate otherwise). Rules can be defined in the 3D experience unit 120 to implement the facial recognition illusion. In some examples, additional colors can be added to the color segmentation unit 114 for use in segmenting the image. For example, the colors beige and brown can be added because of the number of color shades present in human faces. Light and dark pairs can also be used for beige and brown to further fine-tune the colors of human anatomy. Advantageously, the saturation of each color can initially be generated and adjusted by the user to tune the depth generator 104 to a particular set of one or more images. This and other illusions can be built into the system to define the 2D-3D conversion process more accurately.

The input device 122 is in operative communication with the computer 102. For example, the input device 122 can be coupled to the computer 102 via an interface (not shown). The interface can comprise a physical interface and/or a software interface. The physical interface can be any conventional interface, such as, for example, a wired interface (e.g., serial, USB, Ethernet, CAN bus, and/or another cabled communication interface) and/or a wireless interface (e.g., wireless Ethernet, wireless serial, infrared, and/or another wireless communication system). The software interface can reside on the computer 102 (e.g., in the input unit 106).

The display 126 is a visual interface between the computer 102 and the user. The display 126 is connected to the computer 102 and can be any device suitable for displaying text, images, graphics, 3D images, and/or other visual output. For example, the display 126 can include a standard display screen (e.g., LCD, CRT, plasma, etc.), a touch screen, a wearable display (e.g., eyewear such as glasses or goggles), a projection display, a head-mounted display, a holographic display, and/or any other visual output device. The display 126 can be disposed on or near the computer 102 (e.g., mounted within a cabinet that also contains the computer 102) or can be remote from the computer 102 (e.g., mounted on a wall or in another location suitable for viewing by the user). The display 126 can be used to display any information useful for 3D rendering, such as, for example, depth maps, color segmented images, stereoscopic images, auto-stereoscopic images, and the like.

FIG. 2 illustrates an exemplary diagram 200 showing color segmentation derived from an image according to the invention. A two dimensional image 202 includes a background 204, a foreground 206 (e.g., a table or cabinet), a first object 208, and a second object 210. The two dimensional image 202 contains simplified objects for illustrative purposes; the methods and systems described herein can process any image, regardless of the complexity of the objects within the image. The diagram 200 includes four color segmented frames 212A through 212D (collectively, the color segmented frames 212). Color segmented frame 212A includes a color segmented object 214 for the background 204. For example, the color used to generate color segmented image 212A comprises a range of color values (e.g., in the RGB or HLS color space) that captures the entire background 204 of the two dimensional image 202, which is represented as the single color segmented object 214. Similarly, color segmented frame 212B includes a color segmented object 216 for the foreground 206, color segmented frame 212C includes a color segmented object 218 for the first object 208, and color segmented frame 212D includes a color segmented object 220 for the second object 210.

As explained below, the depth generator 104 can automatically rotoscope the two dimensional image 202 based on the color segmented frames 212. A key can be pulled from each of the color segmented frames 212. A key represents one color used during the color segmentation process. For example, a green key can represent a particular range of color values in the green spectrum (e.g., for a tree with green leaves, depending on the color value range associated with the green key, the green key captures the particular portions of the green leaves whose color falls within that range of green values). For example, the key of color segmented frame 212D can represent the contour of the color segmented object 220, which is automatically rotoscoped by the depth generator 104 to derive the key. As explained in more detail below, the color segmented object 220 can be rotoscoped by employing a combination of Bezier points on lines about the color segmented object 220 and then fine-tuning the Bezier maps to fit the color segmented object 220 more accurately (if necessary). In some examples, within the rotoscoped regions, the user can adjust the depth information so that the depth map of the image can be calculated individually for each object in the image (e.g., for the first object 208 and the second object 210). Advantageously, the Bezier points of a Bezier map can be adjusted manually after the depth generator 104 automatically rotoscopes the two dimensional image 202 when, for example, the accuracy of the initial color segmentation is insufficient or does not produce a desired effect (e.g., an incorrect 3D effect).

FIG. 3 illustrates an exemplary method 300 for pulling keys from color segmented images (e.g., the two dimensional image 202 of FIG. 2, color segmented into the color segmented frames 212) according to the invention. At step 302, the depth generator 104 stores data indicative of a two dimensional image (e.g., the two dimensional image 202) in a data storage device (e.g., the database 112), the two dimensional image comprising a plurality of pixels. At step 304, the depth generator 104 receives information defining a plurality of basic colors, wherein each basic color is used to generate a color segmented frame (e.g., one of the color segmented frames 212). At step 306, the depth generator 104 generates (e.g., by the color segmentation unit 114) a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects (e.g., the color segmented frames 212).

At step 308, the depth generator 104 selects a color segmented frame from the plurality of color segmented frames. At step 310, the depth generator 104 generates (e.g., by the color segmentation unit) a key for the selected color segmented frame based on the one or more objects. At step 312, the depth generator determines whether any color segmented frames remain. If color segmented frames remain, the method returns to step 308. If no color segmented frames remain, the method 300 continues to step 314. At step 314, the depth generator 104 calculates three dimensional information for each pixel based on a key factor of the plurality of color segmented frames. At step 316, the depth generator 104 calculates (e.g., by the depth map creation unit 116) a depth map for the two dimensional image based on the keys, wherein the depth map comprises data indicative of three dimensional information for each pixel of the two dimensional image.

Regarding step 302, in some embodiments the depth generator 104 receives (e.g., via the input unit 106) data indicative of the two dimensional image and stores the data in the data storage device (e.g., the database 112). In some embodiments, the depth generator 104 can be configured to receive data indicative of a previously generated depth map and store that data in the data storage device.

Regarding step 304, the color segmentation unit can be configured to store a predetermined number of basic colors, each of which is used to generate a color segmented frame. As described above with reference to the 3D experience unit 120 of FIG. 1, the basic colors can include colors such as yellow, cyan, magenta, red, green, beige, brown, blue, and so on. Additionally, in some embodiments, color pairs comprising a light color and a dark color can be used for each basic color (e.g., dark red and light red, dark beige and light beige, etc.). Each basic color, or each color of a color pair (collectively, colors), can comprise a predefined color value range (e.g., in the RGB or HLS color space). The color ranges can be used when analyzing pixels to determine whether a pixel's color falls within a given color range. If a pixel of the image has a color value within the predefined color value range, that pixel is associated with the color segmented frame generated based on that basic color. For example, assume the basic color "light red" has a range of R=128-255, G=0-95, B=0-95, and the basic color "dark red" has a range of R=64-127, G=0-95, B=0-95. A region with color values in the range R=128-255, G=0-95, B=0-95 is associated with the basic color "light red." If a region has color values in the range R=100-150, G=0-95, B=0-95, the region is associated with the set of both the "light red" and "dark red" basic colors. In some embodiments, the range values of the basic colors are adjusted automatically to cover the entire region corresponding to an object of similar color.

The depth generator 104 can be configured to allow the color value range associated with each basic color to be adjusted (e.g., based on data received from a user). The color segmentation unit 114 can be configured to receive data indicative of a new color value range for one or more of the plurality of basic colors, and to adjust the predefined color value range for each of the one or more colors based on the data. Advantageously, when a particular image consists mostly of one color (e.g., various shades of green), the color values can be adjusted to differentiate among the various shades of green so that the color segmentation unit 114 generates multiple color segmented frames. Otherwise, because the image consists mostly of shades of green, the color segmentation unit 114 would generate fewer color segmented frames using the default values, since a color value range that is too large makes it difficult to distinguish finely among the various shades of green.

Regarding step 306, and referring to FIG. 2, the depth generator 104 generates the color segmented frames 212 based on the two dimensional image 202. Each color segmented frame 212 comprises one or more objects (e.g., color segmented image 212D contains the color segmented object 220).

Regarding step 310, the depth generator 104 generates a key for each color segmented frame based on the one or more objects within that color segmented frame. As mentioned above, the key of a color segmented frame can represent the objects within that particular color segmented frame. FIG. 4 illustrates an exemplary method 400 for defining the edge of an object within an image according to the invention. At step 402, the edge generation unit 118 selects a color segmented frame from the plurality of color segmented frames 212. At step 404, the edge generation unit selects an object from the one or more objects within the color segmented frame (e.g., the color segmented object 218 within color segmented frame 212C). At step 406, the edge generation unit 118 automatically calculates a Bezier map of the selected object. The Bezier map comprises a plurality of Bezier points about the object and a plurality of Bezier curves connecting the plurality of Bezier points. The Bezier map can serve as an initial starting point for generating the edge of each object. At step 408, the edge generation unit 118 receives data indicative of an adjustment to the Bezier map. For example, a user may wish to add additional points about an object to define the curves around the object more accurately. At step 410, the edge generation unit 118 generates a more detailed map of the object based on the data, wherein the more detailed map comprises a plurality of additional points combined with the plurality of Bezier points.

At step 412, the edge generation unit 118 determines whether any objects remain within the selected color segmented frame. If the edge generation unit 118 determines that additional objects exist, the method 400 returns to step 404. If the edge generation unit 118 determines that no additional objects exist within the selected color segmented frame, the method 400 continues to step 414. At step 414, the edge generation unit finishes defining the edges of each object within the selected image. The method 400 is performed for each color segmented frame of the plurality of color segmented frames. Advantageously, in some embodiments, an initial Bezier map is used for efficiency and to provide the user with a starting point for defining the boundary of an object, and the user can then successively fine-tune the edge map of the object.

Regarding step 314, the depth generator 104 calculates three dimensional information for each pixel based on a key factor of the plurality of color segmented frames. In general, depth maps are treated as black-and-white images; the present systems and methods, however, carry more data for each pixel by using a custom depth range. For example, a custom depth range of 0 to 65,280 can be used for each pixel to make the range of depth information data points fine-grained (e.g., a pixel with a depth range value of 0 is at the farthest depth value from the viewer, while a pixel with a depth range value of 65,280 is at the nearest depth value to the viewer). For example, conventional RGB values (e.g., from 0 to 255) can be converted to the custom depth range by using a predetermined equation (e.g., configured through the depth map creation unit 116). Advantageously, by using the larger custom depth range, the depth information contains more information for determining and/or distinguishing nearer and farther objects, which results in a depth map more accurate than a coarsely-grained depth map with a 0 to 255 range. In some embodiments, the conversion to the custom depth range can be achieved with the equation: depth = G x B + R. In these embodiments, the maximum depth is 65,280 = 255 x 255 + 255. Advantageously, by using this conversion, the systems and methods can easily compare custom depth values with the original pixel data.

depth map and to store the data indicative of the previously generated depth map in the data storage device.

Regarding step 304, the color segmentation unit can be configured to store a predetermined number of colors, wherein each of the predetermined colors is used to generate a color segmentation frame. As described above with reference to the 3D experience unit 102 of FIG. 1, the basic colors can include colors such as yellow, cyan, magenta, red, green, beige, brown, blue, and so on. Additionally, in some embodiments, color pairs can be used, comprising a light color and a dark color for each basic color (e.g., dark red and light red, dark beige and light beige, etc.). For each basic color, or each color from a color pair (collectively, a color), the color can include a predefined color value range (e.g., in the RGB or HLS color space). The color ranges can be used when analyzing pixels to determine whether a pixel's color falls within a color range. If a pixel of the image has a color value within the predefined color value range, that pixel is associated with the color segmentation frame generated based on that basic color. For example, assume the basic color "light red" has a range of R=128-255, G=0-95, B=0-95, and the basic color "dark red" has a range of R=64-127, G=0-95, B=0-95. A region having color values within the range R=128-255, G=0-95, B=0-95 is then associated with the basic color "light red." Where a region has color values within the range R=100-150, G=0-95, B=0-95, the region is associated with the set of both the "light red" and "dark red" basic colors. In some embodiments, the range values of a basic color are automatically adjusted to cover the entire region corresponding to an object having a similar color.

The depth generator 104 can be configured to allow adjustment of the color value range associated with each basic color (e.g., based on data received from a user). The color segmentation unit 114 can be configured to receive data indicative of a new color value range for one or more of the plurality of basic colors, and to adjust the predefined color value range for each of the one or more colors based on that data.
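The range test of step 304 can be illustrated with a short sketch using the light red and dark red ranges from the example above. The dictionary layout and function names are hypothetical; only the two numeric ranges come from the text.

```python
from typing import Dict, List, Tuple

RGBRange = Tuple[Tuple[int, int], Tuple[int, int], Tuple[int, int]]

# The two example ranges given above; a real system would hold one entry
# per basic color (yellow, cyan, magenta, and so on).
COLOR_RANGES: Dict[str, RGBRange] = {
    "light_red": ((128, 255), (0, 95), (0, 95)),
    "dark_red": ((64, 127), (0, 95), (0, 95)),
}

def matching_basic_colors(pixel: Tuple[int, int, int]) -> List[str]:
    """Return every basic color whose predefined range contains the pixel;
    a pixel (or region) can be associated with a set of basic colors."""
    r, g, b = pixel
    matches = []
    for name, ((r0, r1), (g0, g1), (b0, b1)) in COLOR_RANGES.items():
        if r0 <= r <= r1 and g0 <= g <= g1 and b0 <= b <= b1:
            matches.append(name)
    return matches

def adjust_range(name: str, new_range: RGBRange) -> None:
    """Sketch of the adjustment path: the color segmentation unit 114
    receives a new color value range for a basic color and replaces the
    predefined range."""
    COLOR_RANGES[name] = new_range

print(matching_basic_colors((130, 40, 40)))  # ['light_red']
print(matching_basic_colors((100, 40, 40)))  # ['dark_red']
```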
Advantageously, where a particular image primarily contains one color (e.g., various shades of green), the color values can be adjusted to distinguish between the various shades of green, causing the color segmentation unit 114 to generate multiple color segmentation frames. Otherwise, because the image is composed primarily of shades of green, the color segmentation unit 114 using default values may generate fewer color segmentation frames, since a color value range that is too large makes it difficult to distinguish finely between the various shades of green.

Regarding step 306 and referring to FIG. 2, the depth generator 104 generates the color segmentation frames 212 based on the two-dimensional image 202. Each color segmentation frame 212 comprises one or more objects (e.g., color segmentation frame 212D comprises color segmentation object 220).

Regarding step 310, the depth generator 104 generates a key for each color segmentation frame based on the one or more objects within the color segmentation frame. As mentioned above, the key of a color segmentation frame can represent the objects within that particular color segmentation frame. FIG. 4 illustrates an exemplary method 400 for defining the edges of an object within an image in accordance with the present invention. At step 402, the edge generation unit 118 selects a color segmentation frame from the plurality of color segmentation frames 212. At step 404, the edge generation unit selects an object from the one or more objects within the color segmentation frame (e.g., color segmentation object 218 within color segmentation frame 212C). At step 406, the edge generation unit 118 automatically calculates a Bézier map for the selected object. The Bézier map comprises a plurality of Bézier points about the object and a plurality of Bézier curves connecting the plurality of Bézier points. The Bézier map can be used as an initial starting point for generating the edges of each object. At step 408, the edge generation unit 118 receives data indicative of an adjustment to the Bézier map. For example, a user may wish to add additional points about an object to more accurately define the curves around that object. At step 410, the edge generation unit 118 generates a more detailed map of the object based on that data, wherein the more detailed map comprises a plurality of additional points combined with the plurality of Bézier points.

At step 412, the edge generation unit 118 determines whether any objects remain within the selected color segmentation frame. If the edge generation unit 118 determines that additional objects exist, the method 400 returns to step 404. If the edge generation unit 118 determines that no additional objects exist within the selected color segmentation frame, the method 400 continues to step 414. At step 414, the edge generation unit finishes defining the edges of each object within the selected image. Method 400 is performed for each color segmentation frame from the plurality of color segmentation frames. Advantageously, in some embodiments, an initial Bézier map is used for efficiency and to provide a user with a starting point for defining an object's boundary, and the user can then successively fine-tune the object's edge map.
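A minimal sketch of the Bézier bookkeeping behind method 400 follows, assuming cubic Bézier segments between successive points and a naive choice of control points; the data layout and refinement routine are illustrative only, since the disclosure leaves the exact representation open.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

def cubic_bezier(p0: Point, c0: Point, c1: Point, p1: Point, t: float) -> Point:
    """Evaluate one cubic Bézier segment at parameter t in [0, 1]."""
    u = 1.0 - t
    x = u**3 * p0[0] + 3 * u**2 * t * c0[0] + 3 * u * t**2 * c1[0] + t**3 * p1[0]
    y = u**3 * p0[1] + 3 * u**2 * t * c0[1] + 3 * u * t**2 * c1[1] + t**3 * p1[1]
    return (x, y)

@dataclass
class BezierMap:
    """Step 406 (sketch): an initial edge map as a closed loop of points."""
    points: List[Point] = field(default_factory=list)

    def refine(self, index: int, extra: Point) -> None:
        """Steps 408-410: fold a user-supplied point into the loop so the
        curve hugs the object more closely (the 'more detailed map')."""
        self.points.insert(index, extra)

    def sample(self, samples_per_segment: int = 8) -> List[Point]:
        """Trace the closed outline segment by segment. Control points are
        chosen naively at the segment ends here, purely for illustration,
        so each segment degenerates to a straight line."""
        out: List[Point] = []
        n = len(self.points)
        for i in range(n):
            p0, p1 = self.points[i], self.points[(i + 1) % n]
            for s in range(samples_per_segment):
                out.append(cubic_bezier(p0, p0, p1, p1, s / samples_per_segment))
        return out

outline = BezierMap(points=[(0, 0), (10, 0), (10, 10), (0, 10)])
outline.refine(2, (12, 5))  # a user nudges the right edge outward
print(len(outline.sample()))  # 40 sampled edge points
```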
Regarding step 314, the depth generator 104 calculates three-dimensional information for each pixel based on one or more key factors of the plurality of color segmentation frames. In general, a depth map is treated as a black-and-white image; the present systems and methods, however, carry more data for each pixel by using a custom depth range. For example, a custom depth range of 0 to 65,280 can be used for each pixel to make the depth information fine-grained (e.g., a pixel with a depth value of 0 is at the farthest depth from the viewer, while a pixel with a depth value of 65,280 is at the nearest depth to the viewer). For example, conventional RGB values (e.g., from 0 to 255 per channel) can be converted to the custom depth range by using a predetermined equation (e.g., configured through the depth map generation unit 116). Advantageously, by using the larger custom depth range, the depth information carries more information for determining and/or distinguishing nearer and farther objects, which results in a more accurate depth map than a coarsely grained depth map over the 0 to 255 range. In some embodiments, the conversion to the custom depth range is achieved with the equation: depth = G×B + R. In these embodiments, the maximum depth is 65,280 = 255×255 + 255. Advantageously, by using this conversion, the present systems and methods can easily compare custom depth values with the original pixel data.

In some embodiments, a key factor comprises the pixel's HSL color space values (hue (H), saturation (S), and lightness (L)). The three-dimensional information for each pixel can be calculated by determining the level of saturation, the level of lightness, and/or the like, and assigning a depth value to the pixel based on that determination. For example, where the ranges are H=0-360, S=0-100, L=0-100, if the pixel has a high level of saturation or lightness (e.g., S=60-100, L=30-100), the depth generator 104 assigns a near depth value (e.g., 200, if the depth range is set for 0 to 255) to the pixel. Similarly, for example, if the pixel has a low level of saturation or lightness (e.g., S=0-60, L=0-30), the depth generator 104 assigns a far depth value (e.g., 50, if the depth range is set for 0 to 255) to the pixel.

In some embodiments, a key factor comprises an amount of value change among the pixel and a neighboring group of pixels. For example, the amount of value change can be a change in saturation between the pixel and the surrounding pixels. For example, based on the value change, the depth generator 104 determines whether the pixel is part of a plane (e.g., a wall or a tabletop). Where the depth generator 104 determines that the pixel is part of a plane, the depth generator 104 assigns a far depth value to the pixel.

In some embodiments, a key factor comprises a position of each of the one or more objects in each color segmentation frame. The position of the object can be an absolute position (e.g., the object is located in the lower half of the color segmentation frame), or the position can be a relative position (e.g., the object is located below another object within the color segmentation frame). For example, referring to FIG. 2, the position of color segmentation object 220 can be described as being located in the lower half of color segmentation frame 212D, the position of color segmentation object 220 can be described as being located below another object (not shown) in color segmentation frame 212D, and/or the like. The depth generator 104 can assign depth values to the pixels of an object based on the associated position data.
For example, where the position of the object is a lower position in the color segmentation frame (e.g., on the lower half of color segmentation frame 212D, or below another object), the depth generator 104 assigns a near depth value (e.g., 200, if the depth range is set for 0 to 255) to each pixel within the object (e.g., each pixel of color segmentation object 220). Where the position of the object is an upper position in the color segmentation frame (e.g., color segmentation object 218 is on the upper half of color segmentation frame 212C, or color segmentation object 218 is above a second object (not shown)), the depth generator 104 assigns a far depth value (e.g., 50, if the depth range is set for 0 to 255) to each pixel within the object. Any range can be assigned to the depth values. Where the range is set for 0 to 255, the farthest depth value is set to 0 and the nearest depth value is set to 255. Where the range is set for 0 to 65,280, the farthest depth value is set to 0 and the nearest depth value is set to 65,280. For example, where the position of the object is within a corresponding object of another color segmentation frame (e.g., if, when color segmentation frames 212C and 212D are superimposed, color segmentation object 220 in color segmentation frame 212D is superimposed on a color segmentation object (not shown) in color segmentation frame 212C that is larger than color segmentation object 220 and extends around color segmentation object 220), the depth generator 104 assigns a near depth value to each pixel within the object (i.e., color segmentation object 220). For example, where the position of the object is at an edge position of the color segmentation frame (e.g., color segmentation object 216 is at an edge portion of color segmentation frame 212B), the depth generator 104 assigns a far depth value to each pixel within the object (e.g., color segmentation object 216).

In some embodiments, a key factor comprises a ratio of the size of each of the one or more objects in each color segmentation frame to the size of the color segmentation frame. For example, a ratio can be calculated as the pixel size of color segmentation object 220 over the total pixel size of color segmentation frame 212D. For example, if the pixel size of color segmentation object 220 is 62,500 pixels and the total pixel size of color segmentation frame 212D is 2,073,600 pixels (i.e., 1080×1920), the ratio is 62,500/2,073,600 = 0.030. For each object, the depth generator 104 can assign a depth value to each pixel within the object based on the ratio (e.g., with the depth range set for 0 to 255, a depth value of 100 for a ratio between 0.01 and 0.1, 150 for a ratio between 0.1 and 0.2, 200 for a ratio between 0.2 and 0.5, or 255 for a ratio between 0.5 and 1.0).
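The conversion to the custom depth range and the ratio-based assignment can be expressed directly from the equation and breakpoints above. The following Python sketch assumes only those stated values; the function names are illustrative.

```python
import numpy as np

def to_custom_depth(rgb: np.ndarray) -> np.ndarray:
    """Convert 0-255 channel data to the custom 0-65,280 depth range using
    the equation given above: depth = G x B + R (maximum 255*255 + 255)."""
    r = rgb[..., 0].astype(np.int64)
    g = rgb[..., 1].astype(np.int64)
    b = rgb[..., 2].astype(np.int64)
    return g * b + r

def depth_from_ratio(object_pixels: int, frame_pixels: int) -> int:
    """Assign a 0-255 depth value from the object-to-frame size ratio,
    using the example breakpoints listed above."""
    ratio = object_pixels / frame_pixels
    if ratio < 0.1:
        return 100
    if ratio < 0.2:
        return 150
    if ratio < 0.5:
        return 200
    return 255

pixels = np.array([[[255, 255, 255]]], dtype=np.uint8)
print(int(to_custom_depth(pixels)[0, 0]))     # 65280
print(depth_from_ratio(62_500, 1080 * 1920))  # ratio ~0.030 -> 100
```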

In some embodiments, a key factor comprises information indicative of a position of an object in the two-dimensional image and a position of a corresponding object in a consecutive two-dimensional image. For example, this position information can be used to determine the motion of objects between consecutive two-dimensional images (e.g., if a car is moving from left to right, then when two consecutive images are compared, the car will be located further to the right in the second consecutive image, which is the image captured at the later time).
The depth generator 104 can determine that the position of the object in the two-dimensional image differs from the position of the corresponding object in the consecutive two-dimensional image (e.g., based on the relative positions of the corresponding color segmentation objects in the color segmentation frames containing the color segmentation objects for each two-dimensional image), and assign a near depth value (e.g., 200, where the depth range is set for 0 to 255) to each pixel within the color segmentation object of the first two-dimensional frame. It should be appreciated that the various embodiments described above can be combined and need not be used separately (e.g., the hue, saturation, and lightness of a pixel can be used in conjunction with the position of a color segmentation object, and so on).

Regarding step 316, the depth generator 104 calculates (e.g., by the depth map generation unit 116) a depth map for the two-dimensional image based on the keys. The depth map comprises data indicative of three-dimensional information for each pixel in the two-dimensional image. As described above, the depth map carries increased depth information for the objects within the two-dimensional image (i.e., because the depth map uses a custom depth range for each pixel, the accuracy of the depth map is increased).

In some instances, when the depth generator 104 processes the two-dimensional image, a three-dimensional representation of an object in the two-dimensional image (e.g., via a stereo pair) can cause parts of the object to come into view and/or to disappear from view (e.g., where the left image of the stereo pair remains the same, because the right image is rendered from a vantage point different from that of the left image, portions of an object in the right image can come into view and/or disappear from view). The depth generator 104 may need to adjust the depth map to account for these situations. FIG. 5 illustrates an exemplary method 500 for manipulating objects in accordance with the present invention. At step 502, the depth generator 104 selects a three-dimensional representation of an object in the two-dimensional image (e.g., a three-dimensional representation of the first object 208 of the two-dimensional image 202 of FIG. 2).

At step 504, the depth generator 104 determines whether the three-dimensional representation of an object in the two-dimensional image causes a portion of the object that is not in view in the two-dimensional image to come into view. For example, the depth generator 104 can determine that a portion of the side of the first object 208 that is not visible in the two-dimensional image 202 will come into view when the three-dimensional image (or an image of a stereo pair) is rendered. If the depth generator 104 makes this determination, the method 500 proceeds to step 506, in which the depth generator 104 stretches the background behind the object, stretches the side of the object that comes into view, or a combination of both, to fill in the portion of the object that comes into view. For example, where the right side of the first object 208 comes into view, the depth generator 104 can stretch a portion of the right side of the first object 208 (and/or stretch the portions of the background 204 and the foreground 206 adjacent to the right side of the first object 208) to fill in the gap that would otherwise exist at the right side of the first object 208.
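The gap filling of step 506 can be pictured on a single scanline, as in the toy sketch below; the nearest-neighbor repetition used here is an assumed simplification of the stretching the disclosure describes.

```python
import numpy as np

def fill_exposed_region(row: np.ndarray, start: int, width: int) -> np.ndarray:
    """Toy stand-in for step 506 on one scanline: cover a newly exposed
    span row[start:start+width] by repeating (stretching) the sample just
    to its left, i.e., the side of the object coming into view. A real
    system could stretch the object side, the background, or both."""
    out = row.copy()
    source = out[start - 1] if start > 0 else out[start + width]
    out[start:start + width] = source
    return out

# Object samples (value 9) on the left, background (value 1) on the right;
# a vantage shift has exposed two previously hidden columns at index 3-4.
scanline = np.array([9, 9, 9, 0, 0, 1, 1, 1])
print(fill_exposed_region(scanline, start=3, width=2))  # [9 9 9 9 9 1 1 1]
```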
If the depth generator 104 does not make this determination at step 504, the method 500 proceeds to step 508 and determines whether the three-dimensional representation of an object in the two-dimensional image causes a portion of the object that is in view in the two-dimensional image to disappear from view. If the depth generator 104 makes this determination, the method 500 proceeds to step 510 and the depth generator 104 shrinks a side of the object to hide the portion of the object that disappears from view. For example, where the left side of the first object 208 disappears from view, the depth generator 104 can shrink a portion of the left side of the first object 208 to compensate for the lost portion of the left side of the first object 208.

If the method does not make this determination at step 508, the method proceeds to step 512, in which the depth generator 104 determines whether any three-dimensional objects remain to be analyzed. Where objects remain, the method returns to step 502 by selecting one of the remaining objects. Otherwise, the method proceeds to step 514 and terminates, since all three-dimensional objects have been analyzed and processed as necessary (per steps 504 through 510).

When the depth generator 104 calculates a depth map for a two-dimensional image based on the keys, the depth generator 104 can fine-tune a previously generated depth map. The depth generator 104 stores (e.g., via the database 112) data indicative of a previously generated depth map. The previously generated depth map comprises data indicative of three-dimensional information for each pixel of the two-dimensional image (e.g., the depth map uses a depth range of 0 to 255 for each pixel). The depth generator 104 can generate a new depth map based on the previously generated depth map, wherein the new depth map comprises data indicative of three-dimensional information over a larger range (e.g., between 0 and 65,280 as described above) for each pixel of the corresponding two-dimensional image. Advantageously, the systems and methods described herein can quickly and efficiently fine-tune a previously generated depth map based only on that depth map (e.g., via the equation depth = G×B + R, as described above).

FIG. 6 illustrates an exemplary diagram 600 showing image manipulation in accordance with the present invention. The two-dimensional image 602 comprises a plurality of pixels 604A, 604B, 604C (collectively, pixels 604). The pixel size of the two-dimensional image 602 is the product of the length, in pixels, of the image along a vertical side of the two-dimensional image 602 (e.g., the left side of the two-dimensional image 602) and the width, in pixels, of the image along a horizontal side of the two-dimensional image 602 (e.g., the bottom side of the two-dimensional image 602).
The two-dimensional image 602 comprises object pixels 606A, 606B, and 606C (collectively, object pixels 606) representing an object in the two-dimensional image 602. The two-dimensional image 602 comprises boundary pixels 608A, 608B, and 608C (collectively, boundary pixels 608) along the boundary of the object represented by the object pixels 606. The reduced pixel image 610 comprises a plurality of pixels 612A, 612B (collectively, pixels 612). The reduced pixel image 610 comprises object pixels 614A, 614B (collectively, object pixels 614) representing, in the reduced pixel image 610, an object corresponding to the object represented by the object pixels 606 in the two-dimensional image 602. The reduced pixel image 610 comprises boundary pixels 616A, 616B (collectively, boundary pixels 616) along the boundary of the object represented by the object pixels 614.

FIG. 7 illustrates an exemplary method 700 for auto-stereoscopic interpolation in accordance with the present invention. At step 702, the input unit 106 receives a first two-dimensional image and a second two-dimensional image, each two-dimensional image having a pixel size. At step 704, the preprocessing unit 110 generates a reduced pixel image for each of the first and second two-dimensional images, wherein each reduced pixel image has a reduced pixel size that is less than the pixel size. At step 706, the preprocessing unit 110 calculates boundary information for each of the first and second two-dimensional images. At step 708, the depth map generation unit 116 calculates a depth map for the first reduced pixel image and the second reduced pixel image, wherein the depth map comprises data indicative of three-dimensional information for one or more objects in the first and second reduced pixel images. At step 710, the depth map generation unit 116 calculates a depth map for the first and second two-dimensional images based on the boundary information for each of the first and second two-dimensional images and the depth map of the first and second reduced pixel images. At step 712, the conversion unit 108 generates a third two-dimensional image by interpolating a frame between the first image and the second image based on the depth maps of the first and second two-dimensional images, wherein the third two-dimensional image has a third vantage point between a first vantage point of the first two-dimensional image and a second vantage point of the second two-dimensional image.
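Under heavy simplifying assumptions, the stages of method 700 can be sketched end to end as follows. The subsampling, the gradient-based boundary test, and the difference-based reduced depth map are all stand-ins for the techniques the disclosure actually uses (main-pixel selection, color-segmentation boundaries, and the separated/crossed comparison of FIG. 8).

```python
import numpy as np

def downscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Step 704 (simplified): keep one pixel per factor x factor block;
    the disclosure instead picks the block's 'main' pixel."""
    return img[::factor, ::factor]

def boundary_info(img: np.ndarray) -> np.ndarray:
    """Step 706 (simplified): flag pixels with a strong horizontal
    gradient; the disclosure derives boundaries from color segmentation
    frames rather than from a raw gradient."""
    prev = img[:, :1].astype(np.int64)
    diff = np.abs(np.diff(img.astype(np.int64), axis=1, prepend=prev))
    return diff > 16

def reduced_depth_map(left_s: np.ndarray, right_s: np.ndarray) -> np.ndarray:
    """Step 708 (placeholder): depth for the reduced pair; FIG. 8 derives
    it from separated/crossed pixel positions, approximated here by a
    plain per-pixel difference."""
    return np.abs(left_s.astype(np.int64) - right_s.astype(np.int64))

def full_depth_map(left: np.ndarray, right: np.ndarray, factor: int = 2) -> np.ndarray:
    # Step 710: lift the reduced depth map back to full resolution and use
    # the full-size boundary information to decide where to keep it sharp.
    small = reduced_depth_map(downscale(left, factor), downscale(right, factor))
    up = np.kron(small, np.ones((factor, factor), dtype=small.dtype))
    up = up[:left.shape[0], :left.shape[1]]
    edges = boundary_info(left) | boundary_info(right)
    smoothed = (up + np.roll(up, 1, axis=1) + np.roll(up, -1, axis=1)) // 3
    return np.where(edges, up, smoothed)

left = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
right = np.roll(left, 1, axis=1)  # crude horizontal-shift stand-in
print(full_depth_map(left, right).shape)  # (8, 8)
```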
Referring to step 702, the input unit 106 receives a left image (the first two-dimensional image) and a right image (the second two-dimensional image). The left image and the right image are two different perspective views of the same scene (e.g., objects, scenery, and so on). The input unit 106 receives each two-dimensional image at its original pixel size.

Referring to step 704, the preprocessing unit 110 generates a reduced pixel image for each of the left and right images, wherein each reduced pixel image has a reduced pixel size that is less than the original pixel size of the left and right images. For example, as shown in FIG. 6, the preprocessing unit 110 generates the reduced pixel image 610 based on the two-dimensional image 602. The reduced pixel image 610 is a representation of the two-dimensional image 602 with fewer pixels. For example, the arrows 650A, 650B (collectively, arrows 650) overlaid on the two-dimensional image 602 divide the two-dimensional image 602 into squares of four pixels each. When the preprocessing unit 110 translates the two-dimensional image 602 into the reduced pixel image 610, each pixel of the reduced pixel image 610 is set to the main pixel within the corresponding four-pixel square of the two-dimensional image 602. For example, because three of the four pixels in the square containing edge pixel 608B of the two-dimensional image 602 are edge pixels, that square is translated into edge pixel 616B of the reduced pixel image 610. The two-dimensional image 602 and the reduced pixel image 610 are presented for illustrative purposes only; the translation into a reduced pixel image can be performed using different translation techniques (e.g., mapping more or fewer pixels of one image to each pixel of the other image, or, conversely, mapping each pixel of the two-dimensional image to multiple pixels of the reduced pixel image). Additionally, the size of the reduced pixel image generated by the preprocessing unit 110 can be adjusted (e.g., 100 pixels by 100 pixels, 100 pixels by 200 pixels, and so on). Because each image can have pixels that the other image may not have (e.g., due to the viewpoint of the image), each of the left and right images can be reduced. For example, as described above, only one side of an object in the left and right images may be visible from a given angle, there may be hidden points, and/or the like.

Referring to step 706, the preprocessing unit 110 calculates boundary information for each of the left and right images. During this step, the depth generator 104 determines which points (e.g., one or more points) of the left and right images are more likely to be the important points (i.e., object boundaries) for ultimately generating a depth map between the left and right images (e.g., at steps 708 and 710). The preprocessing unit 110 can calculate edge information for the left and right images, and use the edge information to determine a common point of the stereoscopic images. A common point provides a reference point for comparing the left and right images. For example, the preprocessing unit 110 can determine that a common point between the left image and the right image is a pixel located on the side of an object in the image (e.g., pixel 608C of the two-dimensional image 602). Advantageously, by finding a common point between the left and right images, the preprocessing unit 110 does not need to match multiple points across the left and right images.

In some embodiments, the edge points are selected based on color segmentation frames generated from the left and right images (rather than from the original left and right images) (e.g., the color segmentation unit 114 generates multiple color segmentation frames for the left and right images, one for each color used in the color segmentation process as described above). The color segmentation images can therefore be used to determine object boundaries. For each pixel, the edge generation unit 118 determines whether the pixel is a boundary point, and generates a set of pixels comprising the boundary information (e.g., boundary pixels 608).
Advantageously, by performing color segmentation and identifying boundaries based on the color segmentation frames, the present systems and methods achieve significant time savings over generating boundary information by other means.

Referring to step 708, the depth map generation unit 116 calculates a depth map for the first reduced pixel image and the second reduced pixel image. As described below with reference to FIG. 8, the depth map generation unit 116 can calculate information about differences in pixel positions in the reduced pixel images when generating the depth map. Referring to step 710, the depth map for the left and right images is calculated based on the boundary information for each of the two images (the boundary information generated at step 706) and the depth map of the reduced pixel images (the depth map generated at step 708). For example, in some embodiments, the depth value of each boundary pixel (e.g., the boundary pixels 608 of the two-dimensional image 602) is calculated based on the depth values of the corresponding pixels of the left and right reduced pixel images (e.g., based on the depth values of the boundary pixels 616 of the reduced pixel images). In some embodiments, the depth information of the depth map for the remaining pixels (i.e., pixels that are not boundary pixels) is determined based on averages over the depth map of the reduced pixel images (e.g., based on the data near the corresponding boundary pixels of the left and right reduced pixel images).

Referring to steps 708 and 710, the depth generator 104 (i.e., the depth map generation unit 116) can identify pixels of an object that are visible in one image but not visible in the other image. For example, where an object in the left image includes a pixel on the outermost left side of the object, that pixel may not be visible in the right image. This is because, due to the difference in vantage angle between the two images, the outermost pixel on the left side of the object moves out of view when the object is viewed from the vantage point of the right image. Similarly, for example, a pixel on the right side of the object that is not visible when the object is viewed from the vantage point of the left image can come into view (i.e., become visible) in the right image. Accordingly, the conversion unit 108 can generate (or interpolate) a third two-dimensional image based on the depth maps of the left and right two-dimensional images, with the identified hidden pixels comprising either a region that comes into view (i.e., a pixel not visible in the left image but visible in the right image) or a region that disappears from view (i.e., a pixel visible in the left image but not visible in the right image).

Referring to step 712, the conversion unit 108 interpolates a frame between the left image and the right image. For illustrative purposes, assume the left image is captured at a vantage point with a reference angle of 0° and the right image is captured at a vantage point of 2° (in practice, any suitable range of angles can be used).
Accordingly, by generating the depth maps of the left and right two-dimensional images, the conversion unit 108 can now generate an image from a vantage point between the vantage points of the left and right images (e.g., between 0° and 2°). For example, the conversion unit 108 can generate a third two-dimensional image based on the depth maps of the left and right two-dimensional images for a vantage point of 0.5°, 0.53°, 0.1°, and so on.

FIG. 8 illustrates an exemplary method 800 for assigning depth information in accordance with the present invention, in which the data of a depth map (e.g., the depth map of the left and right reduced pixel images) is calculated based on differences in pixel positions (e.g., differences in the pixel positions of each of the objects in the reduced pixel images). At step 802, the depth generator 104 (e.g., via the depth map generation unit 116) compares two corresponding pixels, wherein a first pixel of the two corresponding pixels is located within the first reduced pixel image (e.g., the reduced pixel image of the left image), and a second pixel of the two corresponding pixels is located within the second reduced pixel image (e.g., the reduced pixel image of the right image). At step 804, the depth generator 104 determines whether the two pixels are separated or crossed. If the pixels are separated (e.g., when the left and right images are superimposed, the corresponding pixel of the right image is located at a distance to the right of the corresponding pixel of the left image), the method 800 proceeds to step 806, in which the depth generator 104 calculates a separation value indicating how far apart the first pixel of the left image and the second pixel of the right image are. At step 808, the depth generator 104 assigns data indicative of the depth information for the two corresponding pixels in the depth map to indicate that the object is far from a viewer of a three-dimensional view containing the object.

If at step 804 the depth generator 104 determines that the pixels are crossed (e.g., when the left and right images are superimposed, the corresponding pixel of the left image is located on the right, and the corresponding pixel of the right image is located at a distance to the left of the pixel of the left image), the method 800 proceeds to step 810 and the depth generator 104 calculates a crossing value indicating the degree to which the left and right pixels cross (e.g., the distance between the left pixel and the right pixel). At step 812, the depth generator 104 assigns data indicative of the depth information for the two corresponding pixels in the depth map to indicate that the object is near a viewer of a three-dimensional view containing the object. Both step 808 and step 812 proceed to step 814, in which the depth generator 104 determines whether any pixels remain to be calculated (e.g., any pixels of the left and right reduced pixel images). If pixels remain, the method 800 returns to step 802. Otherwise, the method proceeds to step 816 and the depth generator 104 finishes calculating the depth map.
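The separated/crossed decision of method 800 reduces to a comparison of horizontal positions, as in the sketch below; the 'screen' label returned for coincident pixels is an added assumption not stated in the text.

```python
from typing import Tuple

def classify_pair(left_x: float, right_x: float) -> Tuple[str, float]:
    """Steps 802-812 on one pair of corresponding pixels, with the reduced
    images superimposed. left_x < right_x: the pixels are separated, so
    the point is far from the viewer (steps 806-808). left_x > right_x:
    the pixels are crossed, so the point is near the viewer (steps
    810-812)."""
    d = left_x - right_x
    if d < 0:
        return ("far", -d)   # separation distance
    if d > 0:
        return ("near", d)   # crossing amount
    return ("screen", 0.0)   # coincident pixels (assumed screen depth)

print(classify_pair(10.0, 14.0))  # ('far', 4.0)
print(classify_pair(14.0, 10.0))  # ('near', 4.0)
```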
FIG. 9 illustrates an exemplary method 900 for manipulating objects in accordance with the present invention. At step 902, the depth generator 104 (i.e., the color segmentation unit 114 and/or the edge generation unit 118) generates a plurality of color segmentation frames based on the two-dimensional image, wherein each color segmentation frame comprises one or more objects. The depth generator 104 can be configured to generate any number of color segmentation frames. For example, the depth generator 104 can be configured to generate ten color segmentation frames, each corresponding to a different color (e.g., dark red, light red, and so on). At step 904, the depth generator 104 selects a color segmentation frame from the plurality of color segmentation frames. At step 906, the depth generator verifies that the color segmentation frame contains one or more cohesive objects, each cohesive object having identifiable boundary lines. If at step 906 the depth generator 104 determines that the color segmentation frame does not contain one or more cohesive objects, the depth generator 104 discards the color segmentation frame (i.e., the frame is not used to calculate a depth map). If at step 906 the depth generator 104 determines that the color segmentation frame contains one or more cohesive objects, then for each pixel of the first reduced pixel image and the second reduced pixel image, the depth generator 104 sets a boundary point indicator for the pixel based on the color segmentation frames, wherein the boundary point indicator comprises data indicating whether the pixel is a boundary point.

Referring to step 906, each color segmentation frame should be closed in order to make a depth map. In general, this means that the color segmentation frame should contain a cohesive picture with defined boundaries. Where the color segmentation frame has multiple objects (e.g., if the original image is of a human, and a particular color segmentation frame represents only parts of the human as separate objects, such as separate objects for the hands and feet and an object for the face, without showing the other parts of the body in that particular segmentation frame), such a color segmentation frame is still considered closed for the purposes of step 906 as long as the objects have defined boundaries. An example of a color segmentation frame that is not closed is a frame consisting mainly of a spray or scattering of pixels, with no definable subject of interest within the color segmentation frame.
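The cohesiveness check of step 906 and the boundary point indicator can be sketched as follows; the minimum-pixel test standing in for boundary-line verification and the 4-neighbor boundary rule are illustrative assumptions.

```python
import numpy as np

def is_cohesive(frame: np.ndarray, min_pixels: int = 4) -> bool:
    """Step 906 (simplified): reject frames that are only a sparse spray
    of pixels; the disclosure checks for identifiable boundary lines."""
    return int(frame.sum()) >= min_pixels

def boundary_indicator(frame: np.ndarray) -> np.ndarray:
    """Per-pixel boundary flag: a masked pixel with at least one unmasked
    4-neighbor is treated as lying on an object boundary."""
    f = frame.astype(bool)
    pad = np.pad(f, 1, constant_values=False)
    interior = pad[:-2, 1:-1] & pad[2:, 1:-1] & pad[1:-1, :-2] & pad[1:-1, 2:]
    return f & ~interior

sparse = np.zeros((5, 5), dtype=bool)
sparse[0, 0] = True                  # a lone speck: not cohesive
solid = np.ones((5, 5), dtype=bool)  # a filled block: cohesive
kept = [f for f in (sparse, solid) if is_cohesive(f)]
print(len(kept))  # 1 (the sparse frame is discarded)
print(boundary_indicator(kept[0]).astype(int))  # ring of boundary pixels
```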
In some embodiments, the depth generator 104 can include configurable parameters associated with controlling the depth maps from which three-dimensional objects are generated (e.g., when generated by the conversion unit 108). For example, the configurable parameters can be set or adjusted (e.g., via the input unit 106) to configure the conversion unit 108 to render the images as two-dimensional images, as three-dimensional images, as images formatted for display on a cellular phone, and so on. Similarly, in some instances, the configurable parameters can be adjusted to control the amount of parallax of the three-dimensional images so as to maintain the correct dimensions of the objects in the three-dimensional scene, and to adjust the three-dimensional material for various screen sizes and room sizes (e.g., rather than computing the three-dimensional display once for a particular room size, the three-dimensional images can be adjusted on the fly for various display applications). The configurable parameters can also be set or adjusted based on, for example, image security.

FIG. 10 illustrates an exemplary diagram 1000 of the relationship between virtual distance and real distance in accordance with the present invention. The vertical axis shows virtual distance 1002 (e.g., measured in meters (m)), and the horizontal axis shows real distance 1004 (e.g., measured in meters (m)). Band 1006 is a representation of the virtual distance 1002 of the thickness of a first object (e.g., a person) compared with the real distance 1004. Band 1008 is a representation of the virtual distance 1002 of the thickness of a second object (e.g., a tree) compared with the real distance 1004. As line 1010 shows, the relationship between the virtual distance 1002 and the real distance 1004 for the first and second objects is a 1:1 relationship, which correctly maintains the real distances and thicknesses when the objects in the two-dimensional image are rendered as a three-dimensional drawing.

FIG. 11 illustrates an exemplary diagram 1100 of the relationship between virtual distance and real distance in accordance with the present invention. The vertical axis shows virtual distance 1102, and the horizontal axis shows real distance 1104, as described with reference to FIG. 10. Band 1106 is a representation of the virtual distance 1102 of the thickness of the first object compared with the real distance 1104. Band 1108 is a representation of the virtual distance 1102 of the thickness of the second object compared with the real distance 1104. Line 1110 shows what a 1:1 ratio of virtual distance 1102 to real distance 1104 would look like if correctly maintained. Curve 1112, however, shows how the ratio for the first and second objects can depend on the camera parameters and the screen size (e.g., a "cardboard effect" can occur where the camera convergence is narrow, the lens angle of the field of view is wide, or the screen size is small). As curve 1112 and band 1108 show, the virtual thickness of band 1108 is thinner than the real thickness, which results in the cardboard effect (e.g., the tree looks thinner than it actually does in real life).

Similar to FIG. 11, FIG. 12 illustrates an exemplary diagram 1200 of the relationship between virtual distance and real distance in accordance with the present invention. The vertical axis shows virtual distance 1202, and the horizontal axis shows real distance 1204, as described with reference to FIGS. 10 and 11. Line 1206 is a representation of the virtual distance 1202 of the first object compared with the real distance 1204. Line 1208 is a representation of the virtual distance 1202 of the second object compared with the real distance 1204. Line 1210 shows what a 1:1 ratio of virtual distance 1202 to real distance 1204 would look like if correctly maintained. Similar to curve 1112 of FIG. 11, curve 1212 shows how the ratio for the first and second objects can depend on the camera parameters and the screen size (e.g., a midget effect can occur where the camera convergence is wide, the lens angle of the field of view is narrow, or the screen size is large). As curve 1212 and line 1206 show, the virtual distance 1202 is shorter than the real distance 1204, which results in an effect commonly referred to as the "midget effect" (e.g., a person looks shorter than he or she actually does in real life).
Advantageously, the systems and methods can be configured to take the cardboard effect and the midget effect described above into account when determining the depth information of a depth map. For example, the depth information can be configured to maintain a 1:1 ratio by multiplying by a calibration curve function.

In some embodiments, for stereoscopic imaging, warped images can be used for both the left and right images. For example, when converting a single two-dimensional image into a stereo pair, the original image can be used as the "center point" of the viewpoints. Accordingly, when the stereo pair is generated for the original image, the left image of the stereo pair is rendered for a vantage point slightly to the left of the vantage point of the original image, and the right image of the stereo pair is rendered for a vantage point slightly to the right of the vantage point of the original image. In this embodiment, the original image used to generate the stereo pair is not itself used as the left or right image of the pair.

The above described systems and methods can be implemented in digital electronic circuitry, in computer hardware, firmware, and/or software. An implementation can be as a computer program product (i.e., a computer program tangibly embodied in an information carrier). An implementation can be, for example, in a machine-readable storage device, for execution by, or to control the operation of, data processing apparatus. The data processing apparatus can be, for example, a programmable processor, a computer, and/or multiple computers. A computer program can be written in any form of programming language, including compiled and/or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a subroutine, element, and/or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site.

Method steps can be performed by one or more programmable processors executing a computer program to perform functions of the invention by operating on input data and generating output. Method steps can also be performed by, and an apparatus can be implemented as, special purpose logic circuitry. For example, the circuitry can be an FPGA (field programmable gate array) and/or an ASIC (application-specific integrated circuit). Modules, subroutines, and software agents can refer to portions of the computer program, the processor, the special circuitry, software, and/or hardware that implement that functionality.

Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor receives instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer can include, and/or be operatively coupled to receive data from and/or transfer data to, one or more mass storage devices for storing data (e.g., magnetic disks, magneto-optical disks, or optical disks).

Data transmission and instructions can also occur over a communications network.
Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices. The information carriers can, for example, be EPROM, EEPROM, flash memory devices, magnetic disks, internal hard disks, removable disks, magneto-optical disks, CD-ROM, and/or DVD-ROM disks. The processor and the memory can be supplemented by, and/or incorporated in, special purpose logic circuitry. To provide for interaction with a user, the above described techniques can be implemented on a computer having a display device. The display device can, for example, be a cathode ray tube (CRT) and/or a liquid crystal display (LCD) monitor. The interaction with a user can, for example, be a display of information to the user and a keyboard and a pointing device (for example, a mouse or a trackball) by which the user can provide input to the computer (for example, interact with a user interface element). Other kinds of devices can be used to provide for interaction with a user.

Feedback provided to the user can, for example, be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback). Input from the user can, for example, be received in any form, including acoustic, speech, and/or tactile input.

The above described techniques can be implemented in a distributed computing system that includes a back-end component. The back-end component can, for example, be a data server, a middleware component, and/or an application server. The above described techniques can also be implemented in a distributed computing system that includes a front-end component. The front-end component can, for example, be a client computer having a graphical user interface, a Web browser through which a user can interact with an example implementation, and/or other graphical user interfaces for a transmitting device. The components of the computing system can be interconnected by any form or medium of digital data communication (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, wired networks, and/or wireless networks.

The system can include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Packet-based networks can include, for example, the Internet, a carrier internet protocol (IP) network (for example, a local area network (LAN), a wide area network (WAN), a campus area network (CAN), a metropolitan area network (MAN), a home area network (HAN)),

a private IP network, an IP private branch exchange (IPBX), a wireless network (for example, a radio access network (RAN), an 802.11 network, an 802.16 network, a general packet radio service (GPRS) network, HiperLAN), and/or other packet-based networks. Circuit-based networks can include, for example, the public switched telephone network (PSTN), a private branch exchange (PBX), a wireless network (for example, a RAN, Bluetooth, a code-division multiple access (CDMA) network, a time division multiple access (TDMA) network, a global system for mobile communications (GSM) network), and/or other circuit-based networks.

The transmitting device can include, for example, a computer, a computer with a browser device, a telephone, an IP phone, a mobile device (for example, a cellular phone, a personal digital assistant (PDA) device, a laptop computer, an electronic mail device), and/or other communication devices. The browser device includes, for example, a computer (for example, a desktop computer or a laptop computer) with a World Wide Web browser (for example,

Microsoft® Internet Explorer available from Microsoft Corporation, Mozilla® Firefox available from Mozilla Corporation). The mobile computing device includes, for example, a personal digital assistant (PDA).

Comprise, include, and/or the plural form of each are open ended and include the listed parts and can include additional parts that are not listed. And/or is open ended and includes one or more of the listed parts and combinations of the listed parts.

One skilled in the art will realize that the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting of the invention described herein. The scope of the invention is thus indicated by the appended claims, rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

[Brief description of the drawings]

FIG. 1 illustrates an exemplary 3D rendering system according to the invention;
FIG. 2 illustrates an exemplary diagram showing color segmented images derived from an image according to the invention;
FIG. 3 illustrates an exemplary method according to the invention for pulling keys from color segmented images;
FIG. 4 illustrates an exemplary method according to the invention for defining the edges of an object within an image;
FIG. 5 illustrates an exemplary method according to the invention for manipulating objects;
FIG. 6 illustrates an exemplary diagram showing image manipulation according to the invention;
FIG. 7 illustrates an exemplary method according to the invention for auto-stereoscopic interpolation;
FIG. 8 illustrates an exemplary method according to the invention for assigning depth information;
FIG. 9 illustrates an exemplary method according to the invention for manipulating objects;
FIG. 10 illustrates an exemplary diagram of the relationship between virtual distance and real distance according to the invention;
FIG. 11 illustrates an exemplary diagram of the relationship between virtual distance and real distance according to the invention; and
FIG. 12 illustrates an exemplary diagram of the relationship between virtual distance and real distance according to the invention.

[Main element symbol description]

100 3D rendering system
102 computer
104 depth generator
106 input unit
108 conversion unit
110 pre-processing unit
112 database
114 color segmentation unit
116 depth map creation unit
118 edge generation unit
120 three dimensional experience unit
122 input device
124 data
126 display
202 two dimensional image
204 background
206 foreground
208 first object
210 second object
212 color segmented frame
214 color segmented object
216 color segmented object
218 color segmented object
220 color segmented object
602 two dimensional image
604 pixel
606 object pixel
608 boundary pixel
610 reduced pixel image
612 pixel
614 object pixel
616 boundary pixel
650 arrow
1002 virtual distance
1004 real distance
1006 band
1008 band
1010 line
1102 virtual distance
1104 real distance
1106 band
1108 band
1110 line
1112 curve
1202 virtual distance
1204 real distance
1206 line
1208 line
1210 line
1212 curve

Claims (1)

VII. Patent application scope:

1. A computerized method for pulling keys from a plurality of color segmented images, the method comprising: storing data indicative of a two dimensional image in a data storage device, the two dimensional image comprising a plurality of pixels; generating, by a color segmentation unit of a computer, a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects; generating, by the color segmentation unit, a key for each of the color segmented frames based on the one or more objects; and calculating, by a depth map unit of the computer, a depth map for the two dimensional image based on the keys, wherein the depth map comprises data indicative of three dimensional information for each pixel of the two dimensional image.

2. The method of claim 1, further comprising allowing adjustment of the Bezier points of a Bezier map of an object.

3. The method of claim 2, further comprising defining, for each color segmented frame, an edge of each of the one or more objects, comprising: automatically calculating a Bezier map of the object, the Bezier map comprising a plurality of Bezier points about the object and a plurality of Bezier curves connecting the plurality of Bezier points; receiving data indicative of an adjustment to the Bezier map; and generating a more detailed map of the object based on the data, wherein the more detailed map comprises a plurality of additional points in combination with the plurality of Bezier points.

4. The method of claim 1, wherein generating the color segmented frames comprises defining a plurality of basic colors, wherein each basic color corresponds to a color segmented frame.

5. The method of claim 4, wherein each basic color from the plurality of basic colors comprises a predefined color value range, wherein a pixel comprising a color value within the predefined color value range is associated with the color segmented frame generated based on the basic color.

6. The method of claim 5, comprising: receiving data indicative of a color value range for one or more colors of the plurality of basic colors; and adjusting the predefined color value range for each of the one or more colors based on the data.

7. The method of claim 4, wherein the plurality of basic colors comprises, for each color of the plurality of basic colors, a color pair comprising a light color and a dark color.

8. The method of claim 4, wherein the plurality of basic colors comprises brown and beige.

9. The method of claim 1, wherein calculating the depth map comprises: determining that a three dimensional representation of an object in the two dimensional image causes a portion of the object that is not within view in the two dimensional image to come into view; and stretching a background behind the object, a side of the object, or both to fill in the portion of the object that comes into view.

10. The method of claim 1, wherein calculating the depth map comprises: determining that a three dimensional representation of an object in the two dimensional image causes a portion of the object that is within view in the two dimensional image to fall out of view; and shrinking a side of the object to hide the portion of the object that falls out of view.
11. The method of claim 1, wherein calculating the depth map comprises calculating the three dimensional information for each pixel based on one or more key factors of the plurality of color segmented frames.

12. The method of claim 11, wherein the key factor comprises the hue, saturation, and brightness of the pixel, and calculating comprises: determining whether a level of saturation, a level of brightness, or both is a high level or a low level; and assigning a near depth value to the pixel if the level of saturation or brightness is a high level, or assigning a far depth value to the pixel if the level of saturation or brightness is a low level.

13. The method of claim 11, wherein the key factor comprises an amount of value change among the pixel and a group of pixels, and calculating comprises: determining whether the pixel is part of a plane based on the value change; and assigning a far depth value to the pixel if the pixel is part of the plane.

14. The method of claim 11, wherein the key factor comprises a position of each of the one or more objects in each color segmented frame, and calculating comprises, for each object: assigning a near depth value to each pixel within the object if the position of the object is at a bottom of the color segmented frame; assigning a far depth value to each pixel within the object if the position of the object is at an upper portion of the color segmented frame; assigning a near depth value to each pixel within the object if the position of the object is within a corresponding object in another color segmented frame; and assigning a far depth value to each pixel within the object if the position of the object is at an edge of the color segmented frame.

15. The method of claim 11, wherein the key factor comprises a ratio of a size of each of the one or more objects in each color segmented frame to a size of the color segmented frame, and calculating comprises assigning, for each object, a depth value to each pixel within the object based on the ratio.

16. The method of claim 11, wherein the key factor comprises information indicative of a position of an object in the two dimensional image and a position of a corresponding object in a preceding two dimensional image, and calculating comprises: determining that the position of the object in the two dimensional image is different from the position of the corresponding object in the preceding two dimensional image; and assigning a near depth value to each pixel within the object.

17. The method of claim 1, further comprising: storing data indicative of a previously generated depth map in the data storage device, the previously generated depth map comprising data indicative of three dimensional information for each pixel of a corresponding two dimensional image; and generating a new depth map based on the previously generated depth map, wherein the new depth map comprises data indicative of a range of three dimensional information for each pixel of a corresponding two dimensional image.

18. The method of claim 1, wherein calculating the depth map comprises applying one or more experience rules to adjust the depth map, each rule being configured to adjust the depth map based on a human perception of one or more objects in the two dimensional image.
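As an illustration of the hue/saturation/brightness key factor recited in claim 12 (and not as part of the claims themselves), the heuristic could be sketched as follows; the threshold, the depth encoding, and the function name are assumptions chosen for this example.

    import colorsys

    NEAR, FAR = 1.0, 0.0  # assumed encoding: larger values are closer to the viewer

    def depth_from_hsb(r, g, b, high_level=0.6):
        # r, g, b in [0, 1]; returns a near or far depth value per claim 12.
        hue, saturation, value = colorsys.rgb_to_hsv(r, g, b)  # value ~ brightness
        if saturation >= high_level or value >= high_level:
            return NEAR  # high saturation or brightness -> near depth value
        return FAR       # low saturation and brightness -> far depth value

    print(depth_from_hsb(0.9, 0.2, 0.1))  # a vivid red pixel -> 1.0 (near)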
19. A system for pulling keys from a plurality of color segmented images, the system comprising: a data storage device configured to store data indicative of a two dimensional image, the two dimensional image comprising a plurality of pixels; a color segmentation unit in communication with the data storage device, the color segmentation unit being configured to: generate a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects; and generate a key for each of the color segmented frames based on the one or more objects; and a depth map unit in communication with the color segmentation unit and the data storage device, the depth map unit being configured to calculate a depth map for the two dimensional image based on the keys, wherein the depth map comprises data indicative of three dimensional information for each pixel of the two dimensional image.

20. The system of claim 19, further comprising an input unit configured to: receive the data indicative of the two dimensional image and store the data in the data storage device; and receive data indicative of a previously generated depth map and store the data indicative of the previously generated depth map in the data storage device.

21. The system of claim 19, further comprising an edge generation unit configured to generate edge information for each object in the two dimensional image.

22. The system of claim 19, further comprising a three dimensional experience unit configured to apply one or more experience rules to adjust the depth map, each rule being configured to adjust the depth map based on a human perception of one or more objects in the two dimensional image.

23. A computer program product, tangibly embodied in a computer readable storage medium, the computer program product comprising instructions operable to cause a data processing apparatus to: store data indicative of a two dimensional image in a data storage device, the two dimensional image comprising a plurality of pixels; generate, by a color segmentation unit of a computer, a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects; generate, by the color segmentation unit, a key for each of the color segmented frames based on the one or more objects; and calculate, by a depth map unit of the computer, a depth map for the two dimensional image based on the keys, wherein the depth map comprises data indicative of three dimensional information for each pixel of the two dimensional image.
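Purely as an illustration of the pipeline recited in claims 1, 19, and 23, the overall flow can be sketched as below. The helper functions stand in for the color segmentation unit and the depth map unit; their names and signatures are assumptions, not the patent's code.

    def pull_keys_and_depth(image, basic_colors, segment, pull_key, combine_keys):
        # segment(image, color)     -> color segmented frame for one basic color
        # pull_key(frame)           -> key for the one or more objects in a frame
        # combine_keys(image, keys) -> per-pixel depth map for the whole image
        frames = [segment(image, color) for color in basic_colors]  # generate frames
        keys = [pull_key(frame) for frame in frames]                # pull a key per frame
        return combine_keys(image, keys)                            # calculate the depth map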
24. A computerized method for auto-stereoscopic interpolation, the method comprising: receiving, by an input unit of a computer, a first two dimensional image and a second two dimensional image, each two dimensional image comprising a pixel size; generating, by a pre-processing unit of the computer, a reduced pixel image for each of the first two dimensional image and the second two dimensional image, wherein each reduced pixel image comprises a reduced pixel size that is less than the pixel size; calculating, by the pre-processing unit, boundary information for each of the first two dimensional image and the second two dimensional image; calculating, by a depth map unit of the computer, a depth map for the first reduced pixel image and the second reduced pixel image, wherein the depth map comprises data indicative of three dimensional information for one or more objects in the first and second reduced pixel images; and calculating, by the depth map unit, a depth map for the first and second two dimensional images based on the boundary information for each of the first and second two dimensional images and the depth map of the first and second reduced pixel images.

25. The method of claim 24, further comprising generating a third two dimensional image based on the depth map of the first and second two dimensional images, wherein the third two dimensional image comprises a vantage point between a first vantage point of the first two dimensional image and a second vantage point of the second two dimensional image.

26. The method of claim 24, wherein calculating the depth map for the first and second reduced pixel images comprises generating the data based on a pixel position of each of the objects in the reduced pixel images.

27. The method of claim 26, further comprising: comparing two corresponding pixels, wherein a first pixel of the two corresponding pixels is located within the first reduced pixel image and a second pixel of the two corresponding pixels is located within the second reduced pixel image; calculating a distance value indicative of how far apart the first pixel and the second pixel are; and assigning data indicative of depth information for the two corresponding pixels in the depth map to indicate that the object is far from a viewer of a three dimensional view comprising the object.

28. The method of claim 26, further comprising: comparing two corresponding pixels, wherein a first pixel of the two corresponding pixels is located within the first reduced pixel image and a second pixel of the two corresponding pixels is located within the second reduced pixel image; calculating a cross value indicative of a degree of crossing of the first pixel and the second pixel; and assigning data indicative of depth information for the two corresponding pixels in the depth map to indicate that the object is close to a viewer of a three dimensional view comprising the object.

29. The method of claim 24, wherein the pixel size is calculated as a product of a pixel length and a pixel width of each corresponding two dimensional image.
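The near/far assignments of claims 27 and 28 can be illustrated with a sign convention on the disparity between corresponding pixels; the convention below (positive crossing = near) is an assumption chosen for illustration only.

    def depth_cue(x_left, x_right):
        # x_left / x_right: column of the same feature in the left / right
        # reduced pixel images.
        disparity = x_left - x_right
        if disparity > 0:
            # Crossed correspondence (claim 28): the object appears in
            # front of the screen plane, close to the viewer.
            return ("near", disparity)
        # Separated correspondence (claim 27): the object lies behind the
        # screen plane, far from the viewer; |disparity| is the distance value.
        return ("far", -disparity)

    print(depth_cue(52, 48))  # -> ("near", 4)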
30. The method of claim 24, wherein calculating a depth map for the first and second two dimensional images comprises: calculating the depth map for each boundary pixel of the boundary information based on the first and second two dimensional images; and interpolating depth information of the depth map for the remaining pixels based on data for proximate corresponding boundary pixels of the first and second reduced pixel images.

31. The method of claim 24, wherein calculating the boundary information comprises: generating a plurality of color segmented frames based on the two dimensional image, wherein each color segmented frame comprises one or more objects; and setting, for each pixel of the first and second reduced pixel images, a boundary point indicator for the pixel based on the color segmented frames, wherein the boundary point indicator comprises data indicative of whether the pixel is a boundary point.

32. The method of claim 31, comprising verifying that each of the plurality of color segmented frames comprises one or more cohesive objects, each cohesive object comprising an identifiable boundary line.

33. The method of claim 31, wherein calculating the depth map comprises identifying a hidden pixel of the first reduced pixel image by identifying a visible pixel within the first reduced pixel image, wherein the visible pixel does not have a corresponding pixel within the second reduced pixel image.

34. The method of claim 33, further comprising generating a third two dimensional image based on the depth map of the first and second two dimensional images, wherein the third two dimensional image comprises a vantage point between a first vantage point of the first two dimensional image and a second vantage point of the second two dimensional image, and wherein the third two dimensional image comprises an area that comes into view or an area that falls out of view based on the identified hidden pixel.
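One way to picture the two-step calculation of claim 30 is the scan line sketch below: exact depths are computed at boundary pixels, and the pixels in between are filled in by interpolation. The linear scheme and all names are assumptions for illustration only.

    def fill_scanline_depth(boundary_depths, width):
        # boundary_depths: {x: depth} at the boundary pixels of one row.
        # Returns a full row of depths, linearly interpolated between
        # boundary pixels; the row ends clamp to the nearest known value.
        xs = sorted(boundary_depths)
        row = [0.0] * width
        for x in range(width):
            left = max((b for b in xs if b <= x), default=xs[0])
            right = min((b for b in xs if b >= x), default=xs[-1])
            if left == right:
                row[x] = boundary_depths[left]
            else:
                t = (x - left) / (right - left)
                row[x] = (1 - t) * boundary_depths[left] + t * boundary_depths[right]
        return row

    print(fill_scanline_depth({0: 0.2, 9: 0.8}, 10))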
35. A system for auto-stereoscopic interpolation, the system comprising: an input unit configured to receive a first two dimensional image and a second two dimensional image, each two dimensional image comprising a pixel size; a pre-processing unit in communication with the input unit, the pre-processing unit being configured to: generate a reduced pixel image for each of the first and second two dimensional images, wherein each reduced pixel image comprises a reduced pixel size that is less than the pixel size; and calculate boundary information for each of the first and second two dimensional images; and a depth map unit in communication with the pre-processing unit, the depth map unit being configured to: calculate a depth map for the first reduced pixel image and the second reduced pixel image, wherein the depth map comprises data indicative of three dimensional information for one or more objects in the first and second reduced pixel images; and calculate a depth map for the first and second two dimensional images based on the boundary information for each of the first and second two dimensional images and the depth map of the first and second reduced pixel images.

36. The system of claim 35, further comprising a color segmentation unit in communication with the pre-processing unit, the color segmentation unit being configured to generate a plurality of color segmented frames based on the two dimensional images, wherein each color segmented frame comprises one or more objects.

37. The system of claim 35, further comprising a conversion unit configured to generate a third two dimensional image based on the depth map, wherein the third two dimensional image comprises a vantage point between a first vantage point of the first two dimensional image and a second vantage point of the second two dimensional image.

38. A computer program product, tangibly embodied in a computer readable storage medium, the computer program product comprising instructions operable to cause a data processing apparatus to: receive a first two dimensional image and a second two dimensional image, each two dimensional image comprising a pixel size; generate a reduced pixel image for each of the first and second two dimensional images, wherein each reduced pixel image comprises a reduced pixel size that is less than the pixel size; calculate boundary information for each of the first and second two dimensional images; calculate a depth map for the first and second reduced pixel images, wherein the depth map comprises data indicative of three dimensional information for one or more objects in the first and second reduced pixel images; and calculate a depth map for the first and second two dimensional images based on the boundary information for each of the first and second two dimensional images and the depth map of the first and second reduced pixel images.
TW099143101A 2009-12-09 2010-12-09 Computerized method and system for pulling keys from a plurality of color segmented images, computerized method and system for auto-stereoscopic interpolation, and computer program product TW201137788A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/634,379 US8638329B2 (en) 2009-12-09 2009-12-09 Auto-stereoscopic interpolation
US12/634,368 US8538135B2 (en) 2009-12-09 2009-12-09 Pulling keys from color segmented images

Publications (1)

Publication Number Publication Date
TW201137788A true TW201137788A (en) 2011-11-01

Family

ID=43558368

Family Applications (1)

Application Number Title Priority Date Filing Date
TW099143101A TW201137788A (en) 2009-12-09 2010-12-09 Computerized method and system for pulling keys from a plurality of color segmented images, computerized method and system for auto-stereoscopic interpolation, and computer program product

Country Status (2)

Country Link
TW (1) TW201137788A (en)
WO (1) WO2011071978A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI478102B (en) * 2012-01-20 2015-03-21 Realtek Semiconductor Corp Image depth generation device and method thereof
TWI485652B (en) * 2012-01-20 2015-05-21 Realtek Semiconductor Corp Image processing device and method thereof
CN106097273A (en) * 2016-06-14 2016-11-09 十二维度(北京)科技有限公司 The automatic complement method of 3D is turned for video 2D

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9876953B2 (en) 2010-10-29 2018-01-23 Ecole Polytechnique Federale De Lausanne (Epfl) Omnidirectional sensor array system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ID27878A (en) * 1997-12-05 2001-05-03 Dynamic Digital Depth Res Pty IMAGE IMPROVED IMAGE CONVERSION AND ENCODING ENGINEERING


Also Published As

Publication number Publication date
WO2011071978A1 (en) 2011-06-16

Similar Documents

Publication Publication Date Title
US8638329B2 (en) Auto-stereoscopic interpolation
US8977039B2 (en) Pulling keys from color segmented images
US11423556B2 (en) Methods and systems to modify two dimensional facial images in a video to generate, in real-time, facial images that appear three dimensional
EP3139589B1 (en) Image processing apparatus and image processing method
KR102120046B1 (en) How to display objects
US20200302688A1 (en) Method and system for generating an image
JP4548840B2 (en) Image processing method, image processing apparatus, program for image processing method, and program recording medium
EP2775452B1 (en) Moving-image processing device, moving-image processing method, and information recording medium
US9443338B2 (en) Techniques for producing baseline stereo parameters for stereoscopic computer animation
TW201243763A (en) Method for 3D video content generation
JP2006325165A (en) Device, program and method for generating telop
TWI531212B (en) System and method of rendering stereoscopic images
JP2013235537A (en) Image creation device, image creation program and recording medium
JP5238767B2 (en) Parallax image generation method and apparatus
TW201417041A (en) System, method, and computer program product for extruding a model through a two-dimensional scene
WO2019050038A1 (en) Image generation method and image generation device
TW201137788A (en) Computerized method and system for pulling keys from a plurality of color segmented images, computerized method and system for auto-stereoscopic interpolation, and computer program product
Díaz Iriberri et al. Depth-enhanced maximum intensity projection
JP2013172214A (en) Image processing device and image processing method and program
CN108900825A (en) A kind of conversion method of 2D image to 3D rendering
Aksoy et al. Interactive 2D-3D image conversion for mobile devices
Engel et al. Evaluating the Perceptual Impact of Rendering Techniques on Thematic Color Mappings in 3D Virtual Environments.
WO2012096065A1 (en) Parallax image display device and parallax image display method
Chappuis et al. Subjective evaluation of an active crosstalk reduction system for mobile autostereoscopic displays
Maia et al. A real-time x-ray mobile application using augmented reality and google street view