TW588289B - 3-D digital image processor and method for visibility processing for use in the same - Google Patents

3-D digital image processor and method for visibility processing for use in the same

Info

Publication number
TW588289B
TW588289B (application TW91123147A)
Authority
TW
Taiwan
Prior art keywords
depth
value
depth value
mapping function
Prior art date
Application number
TW91123147A
Other languages
Chinese (zh)
Inventor
Chien-Chung Hsiao
Kuo-Wei Yeh
Original Assignee
Silicon Integrated Sys Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Silicon Integrated Sys Corp filed Critical Silicon Integrated Sys Corp
Priority to TW91123147A priority Critical patent/TW588289B/en
Application granted granted Critical
Publication of TW588289B publication Critical patent/TW588289B/en

Links

Abstract

A three-dimensional (3-D) digital image processor and a method for visibility processing in the display of a 3-D digital image are disclosed. The 3-D digital image processor includes a depth map generator, a memory device and a rendering engine. The method includes the steps of pre-building a depth map according to a plurality of received pixels, the depth map storing the pixels and their corresponding reference depths, and receiving a pixel datum and performing a visibility test with reference to the depth map, thereby determining whether to perform a rendering operation on the 3-D digital image with that pixel datum.

Description

Field of the Invention

The present invention relates to a three-dimensional (3-D) digital image processor and a visibility processing method applied thereto, and more particularly to a 3-D digital image processor in a personal computer system and the visibility processing method applied thereto.

Background of the Invention

In a 3-D graphics application, an object in a scene is represented by a 3-D graphics model. For example, using a polygon mesh, the surface of an object can be modeled by a number of interconnected polygons. The rendering process usually transforms the vertices of geometric primitives to provide the model data required by the rasterizing process. Rasterization generally refers to the process of computing, for each pixel, its value in screen space according to the geometric primitives that project onto or cover that pixel.

Please refer to the first figure, which is a functional block diagram of a conventional 3-D graphics engine. The 3-D graphics engine mainly comprises a transform-lighting engine 11 for performing geometric calculations; a setup engine 12 for initializing the primitives; a scan converter 13 for deriving pixel coordinates; a color calculator 14 for producing smooth colors; a texture unit 15 for handling textures; an alpha blending unit 16 for producing transparency and translucency effects; a depth test unit 17 for removing hidden surfaces composed of pixels; a display controller 18 for accurately displaying images on the display 21; and so on. The 3-D graphics engine receives and executes the instructions stored in the instruction queue 10, while the memory controller 19 accesses the graphics memory 20 through the memory bus. The instruction queue 10 is a FIFO unit for storing instruction data, which it receives from the controller 1 through the system bus.

In a given 3-D graphics scene, several polygons may be projected onto the same region of the projection plane at the same time, so that some primitives in the scene cannot be seen; the depth test unit 17 mentioned above is used to remove such hidden surfaces composed of pixels. Many hidden surface removal algorithms have therefore been developed, the best known of which is the Z-buffer algorithm, which uses a Z-buffer to store the depth value of every drawing point. The core of this algorithm is a depth comparison mechanism between the depth value of each received point and the depth value previously stored in the Z-buffer. For a point (x, y) on a facet, its depth value can be interpolated from the depth values of the facet's vertices; the depth value corresponding to coordinates (x, y) is fetched from the Z-buffer, and a depth test is performed by comparing the two depth values to decide which one is closer to the viewer. The Z-buffer is then updated with the closer depth value. The Z-buffer therefore reflects, for every point of the projection plane, the closest depth value seen so far. For example, if the viewer is located at the origin of the Z coordinate and the viewing direction is toward the positive Z-axis, the Z-buffer retains the smallest Z value seen so far for every drawing point.

The Z-buffer algorithm is also the simplest method used in modern computer graphics systems to remove hidden surfaces. The pseudocode of the Z-buffer algorithm is shown below.

For (each polygon) {
    For (each pixel in polygon's projection) {
        Calculate pixel's z value (source-z) at coordinates (x, y);
        Read destination-z from Z-buffer(x, y);
        If (source-z is closer to the viewer)
            Write source-z to Z-buffer(x, y);
    }
}
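For illustration only, a minimal C sketch of this per-pixel depth comparison might look as follows; the buffer sizes, the smaller-z-is-closer convention and all function names are assumptions of this sketch, not part of the patent.

    #include <float.h>
    #include <stdbool.h>

    #define WIDTH  640
    #define HEIGHT 480

    static float    z_buffer[HEIGHT][WIDTH];     /* destination-z per pixel */
    static unsigned frame_buffer[HEIGHT][WIDTH]; /* color of shaded pixels  */

    /* Reset every entry to the farthest possible depth before each frame. */
    static void clear_z_buffer(void)
    {
        for (int y = 0; y < HEIGHT; ++y)
            for (int x = 0; x < WIDTH; ++x)
                z_buffer[y][x] = FLT_MAX;
    }

    /* Classic Z-buffer test: a smaller z is closer to the viewer here.    */
    /* The pixel is written only when it survives the depth comparison.    */
    static bool z_buffer_test(int x, int y, float source_z, unsigned color)
    {
        if (source_z < z_buffer[y][x]) {  /* source-z closer than destination-z */
            z_buffer[y][x] = source_z;    /* keep the closest depth seen so far */
            frame_buffer[y][x] = color;   /* shade (or re-shade) the pixel      */
            return true;
        }
        return false;                     /* hidden: the pixel is discarded     */
    }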

One of the main problems of modern 3-D applications is known to be overdraw. Most graphics processors have no way of knowing which parts of a scene are visible and which will be covered before the rendering process begins. The core of the Z-buffer algorithm is the depth comparison mechanism between the depth value of each incoming pixel and the depth value stored in the Z-buffer, and during this comparison process many pixels are written into the frame buffer and then overwritten by new pixels that are closer to the viewer. Overdraw refers to exactly this repeated writing of pixels into the frame buffer. The amount of overdraw in a scene is called its depth complexity, which represents the ratio of the total number of shaded pixels to the number of visible pixels. For example, if a scene has a depth complexity of 4, four times as many pixels are shaded as are actually visible in the scene. In a complex 3-D scene a large number of objects overlap one another. From the point of view of the depth comparison mechanism, a front-to-back order of the polygons (primitives) is preferable: a pixel with a larger depth value (farther from the viewer) is discarded after the depth comparison, because an overlapping pixel with a smaller depth value (closer to the viewer) has already been drawn. Otherwise the new pixel is shaded, and the current depth value and color value are rewritten into the corresponding pixel locations of the depth buffer and the frame buffer, respectively. If pixels that will not be visible are not discarded at an early stage of the drawing pipeline, the shading process clearly consumes a large amount of processing and memory resources. The second figure is an example of a drawing scene viewed from above. The viewer's field of view is indicated by dashed lines, and the visible objects in the scene are drawn with black dashed lines. As shown in the second figure, most of the objects in this example are hidden, and the resulting overdraw greatly reduces the efficiency of the graphics rendering system.

Conventional graphics hardware attempts to solve this problem with Z-sorting, in order to eliminate some of the redundant information. Although this approach avoids the memory bandwidth required to test the visibility of every single pixel, it cannot overcome the overdraw problem and still leaves a considerable amount of unnecessary computation and memory traffic. For example, if the geometric primitives are drawn in back-to-front (far-to-near) order, a group of pixels will pass the visibility test, and undesired overdraw occurs.
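As a side note on the depth-complexity figure used above, the Z-buffer sketch given earlier could be instrumented to measure it per frame; the counters and the FLT_MAX "not yet shaded" convention are assumptions of this sketch.

    static unsigned long shaded_pixels  = 0; /* every pixel write that passed the depth test */
    static unsigned long visible_pixels = 0; /* screen locations covered at least once       */

    /* Call in place of z_buffer_test(); FLT_MAX marks a not-yet-shaded location. */
    static bool counted_z_buffer_test(int x, int y, float source_z, unsigned color)
    {
        if (z_buffer[y][x] == FLT_MAX)
            ++visible_pixels;              /* first write to this screen location  */
        if (!z_buffer_test(x, y, source_z, color))
            return false;
        ++shaded_pixels;                   /* includes later overwrites (overdraw) */
        return true;
    }

    /* depth complexity ~ (float)shaded_pixels / (float)visible_pixels at frame end */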

Thus, although the Z-buffer algorithm is easy to implement in either software or hardware, requires no pre-sorting, and reflects for every point on the projection plane the closest depth value encountered so far, it cannot, as described above, avoid the overdraw problem when the objects are shaded in back-to-front order. How to overcome this shortcoming of the conventional techniques is therefore the main objective of the present invention.

Summary of the Invention

The present invention provides a visibility processing method for use in the display process of a three-dimensional (3-D) digital image. The method comprises the following steps: pre-building a depth map according to a plurality of received pixels, the depth map storing those pixels and their corresponding reference depth values; and receiving a pixel datum and performing a visibility test with reference to the completed depth map, so as to determine whether the corresponding pixel of the 3-D digital image should be surface-shaded with that pixel datum.

According to the above concept, in the visibility processing method of the present invention the visibility test comprises the following steps: taking out a two-dimensional coordinate and a depth value under test contained in the pixel datum; inputting the two-dimensional coordinate into the depth map to obtain a corresponding reference depth value; and comparing which of the reference depth value and the depth value under test is closer to the viewer, wherein when the reference depth value is closer to the viewer, the surface shading operation is not performed with that pixel datum.
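A minimal C sketch of such a depth-map-based visibility test might look as follows, reusing the screen dimensions from the earlier sketch; the array layout, the smaller-z-is-closer convention and the function name are assumptions rather than the patent's actual implementation.

    /* Reference depth Zr for every screen location, built in a pre-pass. */
    static float depth_map[HEIGHT][WIDTH];

    /* Returns true when the incoming pixel may be shaded, false when the */
    /* stored reference depth is already closer to the viewer.            */
    static bool visibility_test(int x, int y, float z_under_test)
    {
        float reference_z = depth_map[y][x];   /* look up Zr by (x, y)          */
        if (reference_z < z_under_test)        /* Zr is closer to the viewer    */
            return false;                      /* skip the surface shading      */
        return true;                           /* proceed with surface shading  */
    }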

According to the above concept, in the visibility processing method of the present invention the step of pre-building the depth map comprises the following steps: inputting the two-dimensional coordinates of the pixels into the depth map to obtain the corresponding original reference depth values; and performing a comparison and update operation between these preset reference depth values and the depth values of the pixels themselves, so as to determine whether to update the original reference depth values stored in the depth map.

According to the above concept, in the visibility processing method of the present invention the comparison and update operation comprises the following steps: comparing which of the original reference depth value and the depth value of the pixel itself is closer to the viewer; when the depth value of the pixel itself is closer to the viewer, replacing the original reference depth value in the depth map with the depth value of the pixel itself, which becomes a new reference depth value; and when the reference depth value is closer to the viewer, keeping the original reference depth value unchanged.
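A hedged C sketch of this comparison and update operation during the pre-pass could look as follows, reusing the depth_map array from the previous sketch; the initialization to FLT_MAX and the function name are assumptions.

    /* Pre-pass: fold one pixel's depth into the depth map.                 */
    /* depth_map[y][x] starts at FLT_MAX, meaning "no reference depth yet". */
    static void depth_map_update(int x, int y, float pixel_z)
    {
        float original_reference_z = depth_map[y][x];
        if (pixel_z < original_reference_z)   /* pixel is closer to the viewer   */
            depth_map[y][x] = pixel_z;        /* it becomes the new reference Zr */
        /* otherwise the original reference depth value is kept unchanged */
    }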

According to the above concept, in the visibility processing method of the present invention the step of pre-building the depth map further comprises the following step: confirming that the pixel does not require another visibility test before the comparison and update operation is performed.

According to the above concept, in the visibility processing method of the present invention, the other visibility test is an alpha blending test.

Another aspect of the present invention is a three-dimensional (3-D) digital image processor, which comprises: a depth map generator, which pre-builds a depth map according to a plurality of received pixels, the depth map storing the correspondence between the two-dimensional coordinates of those pixels and their depth values; a memory device, signal-connected to the depth map generator, in which the depth map is stored; and a rendering engine, which performs a surface shading operation on the corresponding pixel of the 3-D digital image with the received pixel data, and which uses the depth map stored in the memory device to perform a visibility test that determines whether the pixel of the 3-D digital image is surface-shaded with that pixel datum.

According to the above concept, in the 3-D digital image processor of the present invention the visibility test comprises the following steps: taking out a two-dimensional coordinate and a depth value under test contained in the pixel datum; inputting the two-dimensional coordinate into the depth map to obtain a corresponding reference depth value; and comparing which of the reference depth value and the depth value under test is closer to the viewer, wherein when the reference depth value is closer to the viewer, the rendering engine is controlled not to perform the surface shading operation with that pixel datum.

According to the above concept, in the 3-D digital image processor of the present invention the depth map generator inputs the two-dimensional coordinates of the pixels into the depth map to obtain the corresponding original reference depth values, and then performs a comparison and update operation between these preset reference depth values and the depth values of the pixels themselves, so as to determine whether to update the original reference depth values stored in the depth map.

According to the above concept, in the 3-D digital image processor of the present invention the comparison and update operation performed by the depth map generator comprises the following steps: comparing which of the original reference depth value and the depth value of the pixel itself is closer to the viewer; when the depth value of the pixel itself is closer to the viewer, replacing the original reference depth value in the depth map with the depth value of the pixel itself, which becomes a new reference depth value; and when the reference depth value is closer to the viewer, keeping the original reference depth value unchanged.
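Purely as an illustration of how these three components relate, the data paths might be organized as in the C sketch below; the struct layout and field names are assumptions, not the patent's hardware design.

    /* Memory device holding the pre-built depth map (reference depth Zr per pixel). */
    struct memory_device {
        float depth_map[HEIGHT][WIDTH];
    };

    /* Depth map generator: consumes transformed pixels and fills the memory device. */
    struct depth_map_generator {
        struct memory_device *mem;            /* signal connection to the memory device */
    };

    /* Rendering engine: consults the same memory device during surface shading. */
    struct rendering_engine {
        const struct memory_device *mem;      /* read-only use of the depth map */
        unsigned frame_buffer[HEIGHT][WIDTH];
    };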

According to the above concept, in the 3-D digital image processor of the present invention the depth map generator further performs the following step: confirming that the pixel does not require another visibility test before the comparison and update operation is performed.

According to the above concept, in the 3-D digital image processor of the present invention, the other visibility test is an alpha blending test.

According to the above concept, the 3-D digital image processor of the present invention further comprises a frame buffer, signal-connected to the rendering engine, into which the pixel data are written when the rendering engine performs the surface shading operation.

Description of the Preferred Embodiment

Please refer to the third figure, which is a functional block diagram of a preferred embodiment developed by the present invention for a 3-D graphics engine, in which a transform-lighting engine 31 performs the geometric calculations; a setup engine 32 initializes the primitives; a scan converter 33 derives the pixel coordinates; a color calculator 34 produces smooth colors; a texture unit 35 handles textures; an alpha blending unit 36 produces transparency and translucency effects; a depth test unit 37 removes hidden surfaces composed of pixels; and a display controller 38 accurately displays images on the display 41. The rendering engine 44 is composed of the color calculator 34, the texture unit 35, the alpha blending unit 36 and the depth test unit 37. The 3-D graphics engine receives and executes the instructions stored in the instruction queue 40, and the memory controller 39 accesses the graphics memory 30 through the memory bus; this instruction queue is a FIFO unit for storing instruction data, which it receives from the controller 3 through the system bus.

The main feature of the present invention is that a depth map generator 42 is added between the transform-lighting engine 31 and the setup engine 32. For every pixel datum processed by the transform-lighting engine 31, the depth map generator 42 takes out the two-dimensional coordinate (X, Y) and the depth value Z that represent its position in 3-D space and uses them to build a depth map, which mainly stores, for every pixel of the display screen, the correspondence between its two-dimensional coordinate (X, Y) and its corresponding reference depth value Zr.

However, since a 3-D image scene is mostly composed of a plurality of objects overlapping one another from front to back (as shown in the second figure), the depth map generator 42 must, in order to obtain the correct depth distribution of the whole 3-D scene, handle the case in which it later receives pixel data having the same two-dimensional coordinate (X, Y) but a different depth. Whenever this happens, it performs a comparison and update operation between the previously stored reference depth value Zr and the depth value Z of the incoming pixel, so as to decide whether the original reference depth value in the depth map should be updated: (a) it is determined which of the original reference depth value and the depth value of the pixel itself is closer to the viewer; (b) when the depth value of the pixel itself is closer to the viewer, the depth value of the pixel itself replaces the original reference depth value in the depth map; and (c) when the reference depth value is closer to the viewer, the original reference depth value is kept unchanged.

In this way, after all pixels have passed through the depth map generator 42, a completed depth map is obtained. The depth map may be stored in a separately provided memory device 43, as shown in the third figure, or in a temporary memory inside the graphics memory 30. When the subsequent surface shading operation (render) is performed, unnecessary overdraw can then be avoided by referring to this depth map. In other words, for every pixel datum received during the surface shading operation, a visibility test is first performed with the already completed depth map, so as to determine whether the corresponding pixel of the 3-D digital image should be surface-shaded with that pixel datum. The visibility test mainly comprises the following steps: (a) taking out the two-dimensional coordinate and the depth value under test contained in the pixel datum; (b) inputting the two-dimensional coordinate into the depth map to obtain a corresponding reference depth value; and (c) comparing which of the reference depth value and the depth value under test is closer to the viewer, wherein when the reference depth value is closer to the viewer, the surface shading operation is not performed with that pixel datum.

Because the comparison and update operation of the depth map described above considers only the two-dimensional coordinate and the depth value under test contained in the pixel datum, while other data such as texture and color are ignored, the consumption of system computing power and the occupation of memory bandwidth can be greatly reduced. However, the test that decides whether a pixel datum is drawn or discarded is not only the single visibility test described above; there are also other visibility tests such as the alpha blending test or the operation of transparency. The alpha blending test compares the alpha value contained in the received pixel datum with a reference alpha value, and if the pixel fails this alpha blending test, the pixel datum is likewise discarded without updating the data in the frame buffer and the Z-buffer defined in the graphics memory 30.

The problem is that the alpha value is produced by operations such as texture mapping and alpha blending. Texture mapping requires access to a large amount of texture data in the texture memory, while alpha blending requires the destination frame buffer data in order to blend the source color with the destination color; for the transparency blending of a 3-D drawing scene, a foreground object must be blended with the background objects already drawn. Since the shading of each pixel is therefore not determined by its depth value alone, the preliminarily completed depth map described above cannot satisfy the actual requirements. To improve on this, a preferred embodiment of the comparison and update operation of the depth map is the step flowchart shown in the fourth figure.
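For illustration, a minimal sketch of such an alpha blending test might look as follows; the pass condition (greater than or equal to the reference alpha value) is an assumption, since the text only states that the two values are compared.

    /* Alpha blending test: the pixel's alpha value is compared with a     */
    /* reference alpha value; a pixel that fails is discarded, and neither */
    /* the frame buffer nor the Z-buffer is updated for it.                */
    static bool alpha_test(float alpha_value, float reference_alpha)
    {
        return alpha_value >= reference_alpha;   /* assumed pass condition */
    }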


For a point (x, y) on a facet, its depth value under test can be interpolated from the depth values of the facet's vertices, while the reference depth for the coordinate (x, y) can be obtained from the depth map. A depth test is performed by comparing the two depth values to decide which one is closer to the viewer, and the depth map is then updated with the closer depth value. However, if whether a pixel is drawn or discarded depends not only on the depth test but also on another visibility test, for example the alpha blending test mentioned above, then the reference depth value for the coordinate (x, y) in the depth map is not modified; in that case the visibility decision is left to the surface shading stage.

Please refer to the fifth figure, which is a step flowchart of a preferred embodiment of the comparison and update operation that the present invention performs, during the surface shading stage, on the preliminarily completed depth map. It is aimed at the pixels that still require other visibility operations such as the alpha blending test; only after those visibility operations have been completed is the comparison and update operation of the depth test performed once more for them. Most of the data in the originally completed depth map does not need to be updated, so a large amount of system resources and memory bandwidth can be saved compared with the conventional techniques.

In summary, the depth map that the present invention pre-builds from a small amount of information and temporarily stores in memory can be consulted by the rendering engine while it performs the shading operations. Many unnecessary overdraw operations can thus be omitted, a large amount of system resources and memory bandwidth is saved, and the speed at which a scene is built is increased. The invention may be modified in various ways by those skilled in the art without departing from the scope of protection sought in the appended claims.
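To tie the two flows together, the hedged C sketch below combines the pre-pass of the fourth figure with the deferred update of the fifth figure, reusing the helper routines sketched earlier; the needs_alpha_test flag and all names are assumptions of this sketch, not the patent's implementation.

    /* Pre-pass (fourth figure, as sketched): pixels that will still face   */
    /* another visibility test leave the depth map untouched for now.       */
    static void depth_map_prepass(int x, int y, float pixel_z, bool needs_alpha_test)
    {
        if (!needs_alpha_test)
            depth_map_update(x, y, pixel_z);   /* plain closer-wins update */
    }

    /* Shading stage (fifth figure, as sketched): consult the depth map,    */
    /* run the remaining visibility test, and only then perform the         */
    /* deferred depth comparison and update for such pixels.                */
    static void shade_pixel(int x, int y, float z, float alpha,
                            float reference_alpha, unsigned color,
                            bool needs_alpha_test)
    {
        if (!visibility_test(x, y, z))          /* reference Zr is closer    */
            return;                             /* skip shading entirely     */
        if (needs_alpha_test) {
            if (!alpha_test(alpha, reference_alpha))
                return;                         /* discarded, no updates     */
            depth_map_update(x, y, z);          /* deferred compare/update   */
        }
        z_buffer_test(x, y, z, color);          /* final depth test + write  */
    }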

Brief Description of the Drawings

The present invention may be more fully understood from the following drawings and detailed description:

The first figure: a functional block diagram of a conventional 3-D graphics engine.
The second figure: a schematic top-view example of a 3-D scene.
The third figure: a functional block diagram of a preferred embodiment developed by the present invention for a 3-D graphics engine.
The fourth figure: a step flowchart of a preferred embodiment of the preliminary comparison and update operation of the depth map according to the present invention.
The fifth figure: a step flowchart of a preferred embodiment, developed by the present invention, of the comparison and update operation of the depth map performed during the surface shading stage.

The elements contained in the drawings of the present invention are listed as follows:

Controller 1
Transform-lighting engine 11
Setup engine 12
Scan converter 13
Color calculator 14
Texture unit 15
Alpha blending unit 16
Depth test unit 17
Display controller 18
Memory controller 19
Graphics memory 20
Display 21
Instruction queue 10
Depth map generator 42
Memory device 43
Rendering engine 44
Controller 3
Transform-lighting engine 31
Setup engine 32
Scan converter 33
Color calculator 34
Texture unit 35
Alpha blending unit 36
Depth test unit 37
Display controller 38
Memory controller 39
Graphics memory 40
Display 41
Instruction queue 30
Depth map generator 42


Claims (1)

The visibility processing method as described in item 3 of the scope of patent application, wherein the comparison and update action includes the following steps: compare the original reference depth value with the depth value of the pixel itself, which is closer to the observer's depth value ; When the depth value of the pixel point is closer to the depth value of the observer, replacing the original reference depth value of the depth mapping function with the depth value of the pixel point itself to become a new reference depth value; and when the reference depth value is When the value is closer to the depth value of the observer, the original reference depth value is maintained unchanged. 5. The visibility processing method described in item 3 of the scope of the patent application, wherein the method of establishing the depth mapping function in advance further includes the following steps: the pixel point is not required to perform another visibility test before the comparison and update operation is performed. 6. The visibility treatment method described in item 5 of the scope of patent application, wherein the other visibility test is a transparency blending test. 7. A three-dimensional spatial digital image processor, comprising: a depth mapping function generating device, which is based on a plurality of pixels received to establish a depth mapping function in advance, and the depth mapping function is j號9也3147 六、申請專利範圍 存有該等像素點之 588289 1Φ 曰 修正 儲 吖恃壯罢 維座標與深度值之對瘅關铉 8己L衣置,信號連接至 Μ關係; 其係供該深度映射函數存放;以及又、、函數生成裝置, 一表面著色弓丨擎’其係將所接收 〜 i度::ί:?像中該相對應之像素點進行二?:料對該 函數來,能見度測試度映射 對m間數位影像中該像素點進行表面‘色::資料 8.如申請專利範圍第7項所述之三空 器,其中所執行之該能見度測試係包含1 下數列位/像處理 取出該像素點資料中所包含之一 乂 丨度值; I深度ΪΠΓ座標輸入該深度映射函數而對應出-參考 比較該參考深度值與該待測深度值中何者較接近 者之深度值,而當該參考深度值較接近觀察者之深度值糸 時’便控制該表面著色引擎不以該像素點資料進行2 |色動作。 又者 9 ·如申請專利範圍第7項所述之三度空間數位影像處理 1§ ’其中該深度映射函數生成裝置係將該等像素點之-維 |座標輸入該深度映射函數而對應出相對之原始參考深户值 後,再將該等預設參考深度值與該等像素點本身之深^值 維座標與—待測深 第22頁 588289 安 Ikt 案號 91123U7、 修正 六 分 申請專利範圍 __ 別進行/比較與更新動作,藉以決〜θ 函數中之該等原始參考深度值。’、疋疋否更新該深度映 1 〇 .如申請專利範圍第9項所述之= 器,其中該深度映射函數生成裳二=間數位影像處理 動作係包含下列步驟: 、斤執行之該比較與更新 =較該原始參考深度值與該像素點 者較,近觀察者之深度值; 不身之,衣度值,何 將該像素本身之深度值較接近觀察者之深度值時, 考深度值而^身之深度值取代該深度映射函數之該原始參 卷今表成為一新參考深度值;以及 始參i ΐ ί考深度值較接近觀察者之深度值時,維持該原 巧’衣度值不變。 ,其申。^專利範圍第9項所述之三度空間數位影像處理 該像^ f該深度映射函數生成裝置更執行下列步驟:確認 動作。、點不需進行另一能見度測試後才進行該比較與更新 器·/申凊專利範圍第11項所述之三度空間數位影像處理 其中另一能見度測試係為透明度混色測試。 器·,如申清專利範圍第7項所述之三度空間數位影像處理 ° 其中更包含一晝面緩衝器,信號連接至該表面著色引 第23頁 588289 案號 91123147 1當正替換頁 尺羌V%· 曰 修正 六、申請專利範圍 擎,其係供該表面著色引擎進行表面著色動作時將該像素 點資料寫入。j number 9 also 3147 VI. The scope of the patent application contains 588289 1Φ of these pixels, which means that the relationship between the coordinate and depth of the storage area and the depth value is adjusted, and the signal is connected to the M relationship; For the depth mapping function to be stored; and, a function generating device, a surface coloring bow, which will receive ~ i degrees ::?:? The corresponding pixel points in the image are two? : It is expected that the function, the visibility test mapping maps the surface of the pixel in the digital image between m:: Data 8. 
The three-dimensional device as described in item 7 of the scope of patent application, the visibility test performed It contains 1 sequence of numbers / image processing to extract one of the 乂 degree values contained in the pixel data; I depth ΪΠΓ coordinates are entered into the depth mapping function to correspond-reference to compare the reference depth value with the measured depth value Which is closer to the depth value of the person, and when the reference depth value is closer to the depth value of the observer, then the surface shading engine is controlled not to perform a 2 | color action with the pixel data. Furthermore 9 · The three-dimensional digital image processing as described in item 7 of the scope of patent application 1§ 'wherein the depth mapping function generating device inputs the -dimensional | coordinates of these pixels into the depth mapping function and corresponds to the relative After the original reference depth value, the preset reference depth values and the depth dimension of the pixels themselves and the dimensional coordinate and — depth to be measured Page 22 588289 An Ikt case number 91123U7, amended by six points to apply for patent scope __ Do not perform / compare and update actions to determine the original reference depth values in the ~ θ function. ', Whether to update the depth mapping 1 10. As described in item 9 of the scope of the patent application, wherein the depth mapping function generates a second image processing operation including the following steps: The comparison performed by Jin And update = compare the original reference depth value with the pixel point, the depth value closer to the observer; if not, the clothing value, why the depth value of the pixel itself is closer to the depth value of the observer, the depth And the depth value of the body replaces the original reference list of the depth mapping function as a new reference depth value; and when the reference depth is closer to the observer's depth value, the original value is maintained. The degree value does not change. , Its application. ^ Three-dimensional space digital image processing described in item 9 of the patent scope. The image ^ f The depth mapping function generating device further performs the following steps: confirm the action. The point and point do not need to perform another visibility test before performing the comparison and update. The third-degree spatial digital image processing described in item 11 of the patent scope of the patent application. The other visibility test is a transparency blending test. Device, as described in item 7 of the scope of the patent application for digital image processing in three degrees. It also includes a daylight buffer, and the signal is connected to the surface coloring. Page 23 588289 Case No. 91123147羌 V% · Revision 6. The scope of the patent application engine is used to write the pixel data when the surface shading engine performs the surface shading action. 第24頁Page 24
TW91123147A 2002-10-08 2002-10-08 3-D digital image processor and method for visibility processing for use in the same TW588289B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW91123147A TW588289B (en) 2002-10-08 2002-10-08 3-D digital image processor and method for visibility processing for use in the same

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW91123147A TW588289B (en) 2002-10-08 2002-10-08 3-D digital image processor and method for visibility processing for use in the same

Publications (1)

Publication Number Publication Date
TW588289B true TW588289B (en) 2004-05-21

Family

ID=34057889

Family Applications (1)

Application Number Title Priority Date Filing Date
TW91123147A TW588289B (en) 2002-10-08 2002-10-08 3-D digital image processor and method for visibility processing for use in the same

Country Status (1)

Country Link
TW (1) TW588289B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9741154B2 (en) 2012-11-21 2017-08-22 Intel Corporation Recording the results of visibility tests at the input geometry object granularity



Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees