TW525078B - Image processing device, method and providing media - Google Patents
Image processing device, method and providing media
- Publication number
- TW525078B (application TW088108144A)
- Authority
- TW
- Taiwan
- Prior art keywords
- image data
- memory
- aforementioned
- image
- processing device
- Prior art date
Links
- 238000012545 processing Methods 0.000 title claims abstract description 200
- 238000000034 method Methods 0.000 title claims abstract description 95
- 230000008569 process Effects 0.000 claims abstract description 86
- 230000007246 mechanism Effects 0.000 claims description 71
- 238000009877 rendering Methods 0.000 claims description 24
- 238000001914 filtration Methods 0.000 claims description 23
- 238000004364 calculation method Methods 0.000 claims description 19
- 230000009471 action Effects 0.000 claims description 12
- 238000003672 processing method Methods 0.000 claims description 11
- 239000000463 material Substances 0.000 claims description 5
- 230000004044 response Effects 0.000 abstract description 2
- 238000010586 diagram Methods 0.000 description 28
- 230000002079 cooperative effect Effects 0.000 description 18
- 230000000875 corresponding effect Effects 0.000 description 8
- 238000000605 extraction Methods 0.000 description 6
- 230000009467 reduction Effects 0.000 description 4
- 238000006243 chemical reaction Methods 0.000 description 3
- 238000001514 detection method Methods 0.000 description 3
- 239000000284 extract Substances 0.000 description 3
- 230000008520 organization Effects 0.000 description 3
- 230000006870 function Effects 0.000 description 2
- 238000009434 installation Methods 0.000 description 2
- 238000002156 mixing Methods 0.000 description 2
- 230000009466 transformation Effects 0.000 description 2
- 230000008901 benefit Effects 0.000 description 1
- 230000002457 bidirectional effect Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000004891 communication Methods 0.000 description 1
- 238000004590 computer program Methods 0.000 description 1
- 230000001186 cumulative effect Effects 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 230000005484 gravity Effects 0.000 description 1
- 238000003702 image correction Methods 0.000 description 1
- 238000003780 insertion Methods 0.000 description 1
- 230000037431 insertion Effects 0.000 description 1
- 238000012856 packing Methods 0.000 description 1
- 238000009738 saturating Methods 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Graphics (AREA)
- Image Processing (AREA)
- Image Generation (AREA)
- Studio Circuits (AREA)
Abstract
Description
[Technical Field of the Invention]
The present invention relates to an image processing device, an image processing method, and a providing medium, and more particularly to an image processing device, method, and providing medium that perform image processing with a simple structure and at low cost.

[Background Art]
Fig. 1 shows an example of the structure of a conventional image processing device. A video camera 1 photographs a subject (not shown) and outputs the image data of the subject to an image processing unit 2 and a CPU 5. The image processing unit 2 stores the image data input from the video camera 1 in a memory 11. The CPU 5 instructs an operation unit 12 of the image processing unit 2 to perform a predetermined operation; in response, the operation unit 12 applies the predetermined operation to the image data stored in the memory 11 and outputs the resulting data to a graphics processing unit 3.

The graphics processing unit 3 stores the operation data input from the image processing unit 2 in a memory 21. The CPU 5 controls a display data generating unit 22 of the graphics processing unit 3, which generates display data from the operation data stored in the memory 21. The display data generated by the graphics processing unit 3 is output to a CRT 4 and displayed. The CPU 5 also receives the data output from the video camera 1 or the image processing unit 2, applies predetermined processing as needed, and outputs the result to the image processing unit 2 or the graphics processing unit 3.

In such an image processing device, for example, the following processing is performed: convolution filtering is applied to the image data output from the video camera 1, and the result is output to the CRT 4 for display. The operation in that case, shown for example in Fig. 2, is basically executed by the CPU 5.

In step S1, the CPU 5 receives one frame of image data from the video camera 1. This image data, taken as the image data of the source frame, consists of HMAX x VMAX pixel values Csp, as shown in Fig. 3. In step S1 the CPU 5 also sets the convolution filter coefficients Cv[m][n]. In the example of Fig. 4, the filter consists of 3 x 3 coefficients.

Next, in step S2, the CPU 5 assumes HMAX x VMAX pixel values Cdp for the target frame, as shown in Fig. 5, and initializes the pixel value Cdp[1][1] at coordinates (1, 1) to 0. In step S3 the variable j is initialized to 1, and in step S4 the variable i is initialized to 1. As shown in Figs. 3 and 5, i and j denote the horizontal (i) and vertical (j) coordinates of the source-frame image (Fig. 3) and the target-frame image (Fig. 5); i takes values from 0 to HMAX-1 and j takes values from 0 to VMAX-1.

In steps S5 and S6, the variables n and m are each initialized to 0. As shown in Fig. 4, m and n denote the horizontal (m) and vertical (n) coordinates of the convolution filter coefficients; in this example both take values from 0 to 2.

Next, in step S7, the CPU 5 executes the operation

Cdp[i][j] = Cdp[i][j] + Cv[m][n] * Csp[i+m-1][j+n-1]   (1)

Here j = 1, i = 1, n = 0, m = 0, and the initial value of Cdp[i][j] is 0, so the following is obtained:

Cdp[1][1] = Cv[0][0] * Csp[0][0]   (2)

Next, in step S8, the CPU 5 determines whether the variable m is smaller than 2. Since m = 0 at this point, the process proceeds to step S9, where m is incremented by 1 so that m = 1. The process then returns to step S7 and the operation of expression (1) is performed again, giving:

Cdp[1][1] = Cdp[1][1] + Cv[1][0] * Csp[1][0]   (3)

Here the value Cdp[1][1] on the right-hand side of expression (3) is the value obtained in expression (2).

The process then proceeds to step S8 again, where it is determined whether m is smaller than 2. Since m = 1, the process proceeds to step S9, m is incremented to 2, and the process returns to step S7, where expression (1) yields:

Cdp[1][1] = Cdp[1][1] + Cv[2][0] * Csp[2][0]   (4)

By the above processing, the source pixel values Csp[0][0], Csp[1][0], and Csp[2][0] are multiplied by the convolution filter coefficients Cv[0][0], Cv[1][0], and Cv[2][0], respectively, and the products are accumulated.

Next, in step S8, it is again determined whether m is smaller than 2. Since m = 2, the determination is NO and the process proceeds to step S10, where it is determined whether n is smaller than 2. Since n = 0, the determination is YES and the process proceeds to step S11, where n is incremented by 1 so that n = 1.

The process then returns to step S6, m is re-initialized to 0, and in step S7 the operation of expression (1) is performed again:

Cdp[1][1] = Cdp[1][1] + Cv[0][1] * Csp[0][1]   (5)

Next, in step S8 it is determined whether m is smaller than 2; since m = 0 the determination is YES, m is set to 1 in step S9, the process returns to step S7, and the following is computed:

Cdp[1][1] = Cdp[1][1] + Cv[1][1] * Csp[1][1]   (6)

Repeating the above processing yields expressions (7) through (10):

Cdp[1][1] = Cdp[1][1] + Cv[2][1] * Csp[2][1]   (7)
Cdp[1][1] = Cdp[1][1] + Cv[0][2] * Csp[0][2]   (8)
Cdp[1][1] = Cdp[1][1] + Cv[1][2] * Csp[1][2]   (9)
Cdp[1][1] = Cdp[1][1] + Cv[2][2] * Csp[2][2]   (10)

With this, the convolution filtering that takes the single pixel value Csp[1][1] as the pixel of interest is complete.
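The per-pixel accumulation of expression (1) can be written directly in C. The sketch below is illustrative only and is not part of the patent disclosure: it assumes the source and target frames are held in float arrays indexed in the document's [i][j] order, with HMAX and VMAX fixed to the 6 x 6 example of Figs. 3 and 5.

    #define HMAX 6   /* width of the example frames in Figs. 3 and 5 */
    #define VMAX 6   /* height of the example frames */

    /* Direct CPU implementation of expression (1): each interior target pixel
     * Cdp[i][j] accumulates the 3 x 3 neighbourhood of the source frame Csp,
     * weighted by the filter coefficients Cv (steps S2 to S15 of Fig. 2). */
    void convolve_3x3(const float Csp[HMAX][VMAX], float Cdp[HMAX][VMAX],
                      const float Cv[3][3])
    {
        for (int j = 1; j <= VMAX - 2; j++) {
            for (int i = 1; i <= HMAX - 2; i++) {
                float acc = 0.0f;
                for (int n = 0; n < 3; n++)
                    for (int m = 0; m < 3; m++)
                        acc += Cv[m][n] * Csp[i + m - 1][j + n - 1];
                Cdp[i][j] = acc;
            }
        }
    }

A pass of this kind over every frame is the work that the device of Fig. 1 assigns to the CPU 5, the image processing unit 2, or dedicated hardware; this is the load that the invention is intended to remove.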
Returning to the flow of Fig. 2: at this point, in step S10, n = 2, so the determination is NO and the process proceeds to step S12. In step S12 it is determined whether the variable i is smaller than HMAX-2 (in this example HMAX = 6, so HMAX-2 = 4). Since i = 1, which is smaller than HMAX-2, the determination is YES, the process proceeds to step S13, i is incremented by 1 so that i = 2, and the process returns to step S5 to execute the subsequent steps. That is, the convolution filtering operation that takes the pixel value Csp[2][1] as the pixel of interest is performed in the same way as the operation that took Csp[1][1] as the pixel of interest.

When the convolution filtering of the pixels in row j = 1 has advanced to the pixel value Csp[HMAX-2][1] (Csp[4][1] in the example of Fig. 3), the determination in step S12 becomes NO and the process proceeds to step S14. In step S14 it is determined whether the variable j is smaller than VMAX-2 (in this example VMAX = 6, so VMAX-2 = 4). Since j = 1, the determination is YES, the process proceeds to step S15, j is incremented by 1 so that j = 2, and the process returns to step S4 to execute the subsequent steps. That is, the convolution filtering of the pixels in row j = 2 is performed in the same way as described above.

When the convolution filtering of the pixel value Csp[HMAX-2][VMAX-2] in row j = VMAX-2 and column i = HMAX-2 is finished, the determination in step S14 becomes NO and the processing ends. The CPU 5 then supplies the data of the operation results thus obtained to the graphics processing unit 3. The graphics processing unit 3 stores the one frame of image data input from the CPU 5 in the memory 21, and the display data generating unit 22 converts this image data into display data and outputs it to the CRT 4 for display.

In the above, the convolution filtering operation is executed by the CPU 5, but it is sometimes executed by the image processing unit 2 instead; in other cases dedicated hardware for executing such processing is separately prepared and made to execute it.

[Problems to Be Solved by the Invention]
In the image processing device described above, the predetermined operation is carried out by the CPU 5, by the image processing unit 2, or by dedicated hardware. As a result, the load on the CPU 5 or the image processing unit 2 becomes large, or dedicated hardware is required, so that not only does the structure become complicated but the cost also increases.

The present invention has been made in view of such circumstances, and its object is to make image processing executable with a simple structure and at low cost.

[Means for Solving the Problems]
The image processing device set forth in claim 1 is characterized by comprising: first memory means for storing source image data in pixel units; second memory means for storing target image data in pixel units; and drawing means for repeatedly performing, until a predetermined operation result is obtained, an operation of applying a predetermined operation in pixel units to the source image data stored in the first memory means and drawing the result in polygon units into the second memory means as the target image data.
The image processing method set forth in claim 8 is an image processing method for an image processing device comprising first memory means for storing the image data of a source image in pixel units and second memory means for storing target image data in pixel units, and is characterized by including a drawing step of repeatedly performing, until a predetermined operation result is obtained, an operation of applying a predetermined operation in pixel units to the source image data stored in the first memory means and drawing the result in polygon units into the second memory means as the target image data.

The providing medium set forth in claim 9 is characterized by providing a program that causes an image processing device comprising such first and second memory means to execute processing including the drawing step described above.

The image processing device set forth in claim 10 is characterized by comprising memory means, generating means, and executing means. The memory means comprises a first memory section for storing source image data in pixel units and a second memory section for storing target image data in pixel units. The generating means generates drawing commands that cause the following operation to be repeated until a predetermined operation result is obtained: applying a predetermined operation in pixel units to the source image data stored in the first memory section and drawing the result in polygon units into the second memory section as the target image data. The executing means executes the drawing commands generated by the generating means.

The image processing method set forth in claim 17 is characterized by including: a memory step of storing source image data in pixel units in a first memory section and storing target image data in pixel units in a second memory section; and a generating step of generating drawing commands that cause the following operation to be repeated until a predetermined operation result is obtained: applying a predetermined operation in pixel units to the source image data stored in the first memory section in the memory step and drawing the result in polygon units into the second memory section as the target image data.

The providing medium set forth in claim 18 is characterized by providing a program that causes processing including such a memory step and such a generating step to be executed.

The image processing device set forth in claim 19 is characterized by comprising: first memory means for storing source image data in pixel units; second memory means for storing target image data in pixel units; first drawing means for applying, in pixel units, a part of a predetermined operation to the source image data stored in the first memory means and drawing the result in polygon units into the second memory means as the target image data; and second drawing means for applying, in pixel units, another part of the predetermined operation to the source image data stored in the first memory means and drawing the result in polygon units into the second memory means as the target image data while adding it to or subtracting it from the image data already drawn by the first drawing means.

The image processing method set forth in claim 26 is characterized by including: a first drawing step of applying, in pixel units, a part of a predetermined operation to the source image data stored in the first memory means and drawing the result in polygon units into the second memory means as the target image data; and a second drawing step of applying, in pixel units, another part of the predetermined operation to the source image data stored in the first memory means and drawing the result in polygon units into the second memory means as the target image data while adding it to or subtracting it from the image data already drawn in the first drawing step.

The providing medium set forth in claim 27 is characterized by providing a program that causes processing including such a first drawing step and such a second drawing step to be executed.

In the image processing device of claim 1, the image processing method of claim 8, and the providing medium of claim 9, the following operation is repeated until a predetermined operation result is obtained: a predetermined operation is applied in pixel units to the source image data stored in the first memory means, and the result is drawn in polygon units into the second memory means as the target image data.

In the image processing device of claim 10, the image processing method of claim 17, and the providing medium of claim 18, drawing commands are generated that cause the following operation to be repeated until a predetermined operation result is obtained: a predetermined operation is applied in pixel units to the source image data stored in the first memory section, and the result is drawn in polygon units into the second memory section as the target image data.

In the image processing device of claim 19, the image processing method of claim 26, and the providing medium of claim 27, a part of a predetermined operation is applied in pixel units to the source image data stored in the first memory means and the result is drawn in polygon units into the second memory means as the target image data; another part of the predetermined operation is then applied in pixel units to the source image data and the result is drawn in polygon units into the second memory means as the target image data while being added to or subtracted from the image data already drawn.

In the embodiment described below, the generating means generates drawing commands that cause the following operation to be repeated until a predetermined operation result is obtained: a predetermined operation is applied in pixel units to the source image data stored in the first memory section and the result is drawn in polygon units into the second memory section as the target image data; the executing means (for example, the drawing engine 41 of Fig. 6) executes the drawing commands generated by the generating means.

The image processing device set forth in claim 19 comprises, for example: first memory means (for example, the texture area 51 of Fig. 7) for storing source image data in pixel units; second memory means (for example, the drawing area 52 of Fig. 7) for storing target image data in pixel units; first drawing means (for example, steps S38 and S39) for applying, in pixel units, a part of a predetermined operation to the source image data stored in the first memory means and drawing the result in polygon units into the second memory means as the target image data; and second drawing means (for example, step S37 of Fig. 12) for applying, in pixel units, another part of the predetermined operation to the source image data and drawing the result in polygon units into the second memory means as the target image data while adding it to or subtracting it from the image data already drawn by the first drawing means.

The image processing device set forth in claim 21 is characterized by comprising designating means (for example, step S34 of Fig. 12) for designating the operation mode between the source image data and the target image data.

Fig. 6 is a block diagram showing the configuration of a computer entertainment device to which the image processing device of the present invention is applied. A main memory 32 and an image processing chip 33 are connected to a main CPU 31 via a bus 34. The main CPU 31 generates drawing commands and controls the operation of the image processing chip 33.
Programs, data, and the like that the main CPU 31 needs for executing various kinds of processing are stored in the main memory 32 as appropriate.

A drawing engine 41 of the image processing chip 33 executes, in response to drawing commands supplied from the main CPU 31, operations that draw predetermined image data into an image memory 43 through a memory interface 42. A bus 45 is connected between the memory interface 42 and the drawing engine 41, and a bus 46 is connected between the memory interface 42 and the image memory 43. The bus 46 has a bus width of, for example, 128 bits, so that the drawing engine 41 can execute drawing into the image memory 43 at high speed. The drawing engine 41 has, for example, the capability of drawing image data of 320 x 240 pixels or 640 x 480 pixels of the NTSC system, the PAL system, or the like, more than ten to several tens of times in real time, within 1/30 to 1/60 of a second.

The image processing chip 33 also has a programmable CRT controller (PCRTC) 44, which has the function of controlling in real time the position, size, resolution, and so on of the image data input from a video camera 35. The PCRTC 44 writes the image data input from the video camera 35 into a texture area 51 (Fig. 7) of the image memory 43 through the memory interface 42. The PCRTC 44 also reads, through the memory interface 42, the image data drawn in a drawing area 52 (Fig. 7) of the image memory 43 and outputs it to a CRT 36 for display.

As shown in Fig. 7, the image memory 43 has a unified memory structure in which the texture area 51 and the drawing area 52 can be specified within the same area.

When drawing a predetermined texture of the texture area 51 into the drawing area 52, the main CPU 31 generates a Flat Texture Polygon command and outputs it to the drawing engine 41. For example, the command for drawing a triangular polygon is the following:
Flat_Texture_Triangle(Dx0, Dy0, Dx1, Dy1, Dx2, Dy2, Sx0, Sy0, Sx1, Sy1, Sx2, Sy2, L)

Here Dx0, Dy0, Dx1, Dy1, Dx2, Dy2 are the vertex coordinates of the triangle drawn on the destination (the drawing area 52), and Sx0, Sy0, Sx1, Sy1, Sx2, Sy2 are the vertex coordinates of the triangle on the source (the texture area 51). L is a luminance value that is multiplied with the texture pixel values inside the polygon (triangle) expressed by the point sequence (Sxn, Syn).

Similarly, the command for drawing a quadrilateral is as follows:
Flat_Texture_Rectangle(Dx0, Dy0, Dx1, Dy1, Dx2, Dy2, Dx3, Dy3, Sx0, Sy0, Sx1, Sy1, Sx2, Sy2, Sx3, Sy3, L)

Here too, Dx0, Dy0, Dx1, Dy1, Dx2, Dy2, Dx3, Dy3 are the vertex coordinates of the quadrilateral drawn on the destination (the drawing area 52), and Sx0, Sy0, Sx1, Sy1, Sx2, Sy2, Sx3, Sy3 are the vertex coordinates of the quadrilateral on the source (the texture area 51).
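As an illustration of how such a command might be issued, the fragment below assumes a C-style prototype matching the parameter order given above; the prototype itself and the concrete coordinate values are assumptions made for the example, not part of the patent text.

    /* Assumed prototype for the quadrilateral drawing command described above. */
    void Flat_Texture_Rectangle(int Dx0, int Dy0, int Dx1, int Dy1,
                                int Dx2, int Dy2, int Dx3, int Dy3,
                                int Sx0, int Sy0, int Sx1, int Sy1,
                                int Sx2, int Sy2, int Sx3, int Sy3,
                                double L);

    /* Copy the 64 x 64 texel block with corners (0,0)-(64,64) in the texture
     * area 51 onto the square (100,100)-(164,164) in the drawing area 52,
     * multiplying each texel by a luminance of 1.0 (an unscaled copy). */
    void draw_example(void)
    {
        Flat_Texture_Rectangle(100, 100, 164, 100, 164, 164, 100, 164,
                                 0,   0,  64,   0,  64,  64,   0,  64,
                               1.0);
    }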
When the drawing engine 41 receives, for example, a triangle drawing command from the main CPU 31 over the bus 34, it multiplies the pixel value of each point (Sxn, Syn) inside the polygon defined by the three vertex coordinates (Sx0, Sy0), (Sx1, Sy1), (Sx2, Sy2) in the texture area 51 by the value L, and draws the result at the corresponding point (Dxn, Dyn) inside the polygon defined by the vertex coordinates (Dx0, Dy0), (Dx1, Dy1), (Dx2, Dy2) in the drawing area 52. The coordinates (Dxn, Dyn) in the drawing area 52 that correspond to the coordinates (Sxn, Syn) in the texture area 51 can be obtained by the affine transformation of expression (11):

X = a * x + b * y + OFX
Y = c * x + d * y + OFY   (11)

where the coefficients a to d are coefficients for rotation and OFX and OFY are coefficients for parallel movement.

In practice, drawing into the drawing area 52 is performed in pixel units, and for each drawn pixel the pixel value used is that of the coordinates in the texture area 51 obtained from the drawing coordinates by the inverse affine transformation of expression (12):

(x + h, y + v) = [a b; c d]^-1 * (X - OFX, Y - OFY)   (12)

Here h and v are coefficients that are 0 or more and less than 1.0, and x, y, X, and Y are integers.

The coordinates obtained in this way generally have a fractional part and, as shown in Fig. 8, fall between pixels. The pixel value SP(X, Y) is therefore obtained by bilinear interpolation according to the following expression:

SP(X, Y) = (1 - h) * (1 - v) * TP(x, y) + h * (1 - v) * TP(x+1, y)
         + (1 - h) * v * TP(x, y+1) + h * v * TP(x+1, y+1)   (13)

That is, for the point (x+h, y+v), which is separated from the point TP(x, y) in the texture area 51 by +h in the x-axis direction and by +v in the y-axis direction, a pixel value is used that is weighted according to the distances from the surrounding points TP(x, y), TP(x+1, y), TP(x, y+1), and TP(x+1, y+1).

Furthermore, the drawing engine 41 performs blending between the destination pixel value DP(X, Y) in the drawing area 52 and the source pixel value SP(X, Y) in the texture area 51 according to the mode designated by the blend mode setting function Set_Mode(MODE) issued from the main CPU 31.

The blend modes executed by the drawing engine 41 are mode 0 to mode 3, and in each mode the following blending is performed:

MODE0: SP(X, Y)
MODE1: DP(X, Y) + SP(X, Y)
MODE2: DP(X, Y) - SP(X, Y)
MODE3: (1 - αSP(X, Y)) * DP(X, Y) + αSP(X, Y) * SP(X, Y)

where αSP(X, Y) denotes the α value of the source pixel value.

That is, in mode 0 the source pixel value is drawn onto the destination as it is; in mode 1 the source pixel value is added to the destination pixel value and the sum is drawn; and in mode 2 the source pixel value is subtracted from the destination pixel value and the difference is drawn. In mode 3 the source pixel value and the destination pixel value are combined with weighting according to the source α value.
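The per-pixel path just described — inverse mapping into the texture area, bilinear interpolation per expression (13), and blending per the selected mode — can be modelled in C as follows. This is only a software illustration of the arithmetic (the buffer layout and function names are assumptions), not a description of the drawing engine 41 itself.

    /* Bilinear sample of the texture at the non-integer point (x+h, y+v),
     * following expression (13); tex is W pixels wide, row-major, and edge
     * clamping is omitted for brevity. */
    static double sample_bilinear(const double *tex, int W, int x, int y,
                                  double h, double v)
    {
        double p00 = tex[y * W + x];
        double p10 = tex[y * W + (x + 1)];
        double p01 = tex[(y + 1) * W + x];
        double p11 = tex[(y + 1) * W + (x + 1)];
        return (1 - h) * (1 - v) * p00 + h * (1 - v) * p10
             + (1 - h) * v * p01 + h * v * p11;
    }

    /* The four blend modes selected by Set_Mode(): SP is the interpolated source
     * pixel, DP the current destination pixel, a_sp the source alpha (mode 3). */
    static double blend(int mode, double DP, double SP, double a_sp)
    {
        switch (mode) {
        case 0:  return SP;                           /* MODE0: overwrite   */
        case 1:  return DP + SP;                      /* MODE1: add         */
        case 2:  return DP - SP;                      /* MODE2: subtract    */
        default: return (1 - a_sp) * DP + a_sp * SP;  /* MODE3: alpha blend */
        }
    }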
The image data drawn in the drawing area 52 of the image memory 43 is read out to the PCRTC 44 through the memory interface 42 and output from there to the CRT 36 for display.

Next, the processing performed when convolution filtering is applied to the image data input from the video camera 35 and the result is output to the CRT 36 will be described. When the 3 x 3 convolution filter coefficients Cv[m][n] are all positive, the drawing commands output from the main CPU 31 to the drawing engine 41 include, for each value of n, the following loop over m:

    for (m = 0; m < 3; m++) {
        Flat_Texture_Rectangle(1, 1, HMAX-1, 1, HMAX-1, VMAX-1, 1, VMAX-1,
                               m, n, HMAX-2+m, n, HMAX-2+m, VMAX-2+n,
                               m, VMAX-2+n, Cv[m][n]);
    }
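Only this inner m loop of the positive-coefficient command sequence survives on the page. Judging from the flowchart of Fig. 12 and the signed-coefficient listing given later, the complete sequence presumably looks roughly like the following; treat it as an inferred sketch, not the original listing.

    Set_Mode(0);                                                    /* step S32: overwrite mode        */
    Flat_Rectangle(1, 1, HMAX-1, 1, HMAX-1, VMAX-1, 1, VMAX-1, 0);  /* step S33: clear the target      */
    Set_Mode(1);                                                    /* step S34: additive blending     */
    for (n = 0; n < 3; n++) {
        for (m = 0; m < 3; m++) {
            Flat_Texture_Rectangle(1, 1, HMAX-1, 1, HMAX-1, VMAX-1, 1, VMAX-1,
                                   m, n, HMAX-2+m, n, HMAX-2+m, VMAX-2+n,
                                   m, VMAX-2+n, Cv[m][n]);
        }
    }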
The above processing, expressed as a flowchart, is shown in Fig. 12.

First, in step S31, the main CPU 31 has the drawing engine 41 clear the drawing area 52 (the output area) serving as the destination, and then sets the convolution filter coefficients Cv[m][n] to predetermined values. In the present case, 3 x 3 convolution filter coefficients are prepared as shown in Fig. 11.

Next, in step S32, the main CPU 31 outputs to the drawing engine 41 a command that sets mode 0 as the blend mode. The drawing engine 41 sets the blend mode to mode 0 in response to this command.

Next, in step S33, the main CPU 31 generates a command that draws the pixel value 0 into the quadrilateral (1, 1, HMAX-1, 1, HMAX-1, VMAX-1, 1, VMAX-1) of the destination consisting of HMAX x VMAX pixels, and outputs it to the drawing engine 41. In response, as shown in Fig. 13, the drawing engine 41 draws the value 0 into every pixel of the HMAX x VMAX destination except those in the row j = 0 and the column i = 0.

Next, in step S34, the drawing engine 41 sets mode 1 as the blend mode in response to a command from the main CPU 31; that is, subsequent drawing is fixed to the additive mode in which DP(X, Y) + SP(X, Y) is drawn.
When the drawing passes for the first row of coefficients have been completed and the determination that m is smaller than 2 becomes NO, the process proceeds to step S40. In step S40 it is determined whether n is smaller than 2. Since n = 0 at this point, the determination is YES, n is incremented by 1 so that n = 1, and the process returns to step S36.

In step S36 the variable m is re-initialized to 0, and in step S37 the following processing is performed: the source pixel values Csp in the corresponding rectangular range are multiplied by the convolution filter coefficient Cv[0][1] (= C01), and the products are drawn added onto the target values already drawn.

As a result, for example, as shown in Fig. 17, Cdp11, Cdp21, Cdp12, and Cdp22 become the following values:

Cdp11 = Csp00 x C00 + Csp10 x C10 + Csp20 x C20 + Csp01 x C01
Cdp21 = Csp10 x C00 + Csp20 x C10 + Csp30 x C20 + Csp11 x C01
Cdp12 = Csp01 x C00 + Csp11 x C10 + Csp21 x C20 + Csp02 x C01
Cdp22 = Csp11 x C00 + Csp21 x C10 + Csp31 x C20 + Csp12 x C01

Thereafter, the same processing is repeated until it is determined in step S40 that n is not smaller than 2. As a result, drawing as shown in Fig. 18 is performed in the destination area. As shown in that figure, Cdp11, Cdp21, Cdp12, and Cdp22, for example, become the following values:

Cdp11 = Csp00 x C00 + Csp10 x C10 + Csp20 x C20
      + Csp01 x C01 + Csp11 x C11 + Csp21 x C21
      + Csp02 x C02 + Csp12 x C12 + Csp22 x C22

Cdp21 = Csp10 x C00 + Csp20 x C10 + Csp30 x C20
      + Csp11 x C01 + Csp21 x C11 + Csp31 x C21
      + Csp12 x C02 + Csp22 x C12 + Csp32 x C22

Cdp12 = Csp01 x C00 + Csp11 x C10 + Csp21 x C20
      + Csp02 x C01 + Csp12 x C11 + Csp22 x C21
      + Csp03 x C02 + Csp13 x C12 + Csp23 x C22

Cdp22 = Csp11 x C00 + Csp21 x C10 + Csp31 x C20
      + Csp12 x C01 + Csp22 x C11 + Csp32 x C21
      + Csp13 x C02 + Csp23 x C12 + Csp33 x C22

In this case, source pixel values such as Csp[1][6] would be required, but such image data does not actually exist; therefore, as shown in Fig. 16, the pixel data Cdp in the row j = 5 (= VMAX-1) is regarded as invalid pixel data.

As described above, pixel data to which the convolution filtering has been applied is obtained in the range specified by i = 1 to HMAX-2 and j = 1 to VMAX-2.
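To see why these accumulated polygon draws reproduce expression (1), the nine additive passes can be emulated in plain C: each pass adds one shifted, weighted copy of the source rectangle, and after all nine passes every interior target pixel holds the full 3 x 3 sum. The emulation below is illustrative only (float buffers, the 6 x 6 example dimensions, and the target assumed pre-cleared as in step S33); it is not part of the patent text.

    #define HMAX 6
    #define VMAX 6

    /* Software model of one Flat_Texture_Rectangle call issued with MODE1: the
     * w x h source block whose top-left texel is (sx, sy) is scaled by the
     * luminance value L and added onto the destination block at (dx, dy). */
    static void additive_rect(const float *src, float *dst, int pitch,
                              int dx, int dy, int sx, int sy,
                              int w, int h, float L)
    {
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[(dy + y) * pitch + (dx + x)] += L * src[(sy + y) * pitch + (sx + x)];
    }

    /* The nine passes of the listing: the destination rectangle stays at (1,1)
     * with size (HMAX-2) x (VMAX-2), while the source rectangle shifts to (m, n).
     * Destination pixel (i, j) thus accumulates Cv[m][n] * Csp(i+m-1, j+n-1),
     * which is exactly expression (1). */
    void emulate_convolution_passes(const float *Csp, float *Cdp,
                                    const float Cv[3][3])
    {
        for (int n = 0; n < 3; n++)
            for (int m = 0; m < 3; m++)
                additive_rect(Csp, Cdp, HMAX,
                              1, 1, m, n, HMAX - 2, VMAX - 2, Cv[m][n]);
    }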
The drawing-command sequence described above applies when the convolution filter coefficients are all positive. When negative coefficients are present, the drawing commands output from the main CPU 31 to the drawing engine 41 are, for example, as follows:

    Set_Mode(0);
    Flat_Rectangle(1, 1, HMAX-1, 1, HMAX-1, VMAX-1, 1, VMAX-1, CMAX/2);
    for (n = 0; n < 3; n++) {
        for (m = 0; m < 3; m++) {
            if (Cv[m][n] > 0) {
                Set_Mode(1);
                Flat_Texture_Rectangle(1, 1, HMAX-1, 1, HMAX-1, VMAX-1, 1, VMAX-1,
                                       m, n, HMAX-2+m, n, HMAX-2+m, VMAX-2+n,
                                       m, VMAX-2+n, Cv[m][n]);
            }
            if (Cv[m][n] < 0) {
                Set_Mode(2);
                Flat_Texture_Rectangle(1, 1, HMAX-1, 1, HMAX-1, VMAX-1, 1, VMAX-1,
                                       m, n, HMAX-2+m, n, HMAX-2+m, VMAX-2+n,
                                       m, VMAX-2+n, -Cv[m][n]);
            }
        }
    }
    Set_Mode(2);
    Flat_Rectangle(1, 1, HMAX-1, 1, HMAX-1, VMAX-1, 1, VMAX-1, CMAX/2);

This processing, expressed as a flowchart, is shown in Figs. 19 and 20.

First, in step S51, the convolution filter coefficients Cv[m][n] are set to predetermined values. Next, in step S52, the drawing engine 41 sets mode 0 as the blend mode in response to a command from the main CPU 31. Then, in step S53, the drawing engine 41 draws the value CMAX/2 into the destination quadrilateral (1, 1, HMAX-1, 1, HMAX-1, VMAX-1, 1, VMAX-1) in the drawing area 52 of the image memory 43. CMAX denotes the maximum pixel value. That is, in this case, as shown in Fig. 21, one half of the maximum pixel value is drawn into the range of the destination area specified by i = 1 to HMAX-1 and j = 1 to VMAX-1.
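The CMAX/2 that is drawn first and subtracted again at the end acts as a bias: the frame buffer presumably holds only non-negative pixel values, so without it any intermediate result that dips below zero during a subtractive (mode 2) pass would be clipped. A small illustration with hypothetical 8-bit saturating pixels (CMAX = 255) is sketched below; the coefficient and pixel values are made up for the example.

    #include <stdint.h>

    #define CMAX 255   /* hypothetical 8-bit pixel range */

    /* Saturating add/subtract, standing in for the frame-buffer arithmetic. */
    static uint8_t sat_add(uint8_t dp, int sp)
    {
        int r = (int)dp + sp;
        return (uint8_t)(r < 0 ? 0 : r > CMAX ? CMAX : r);
    }

    /* Coefficients -0.5 and +0.5 applied to source pixels 100 and 160:
     * the true result is -50 + 80 = 30. */
    static uint8_t filter_one_pixel(void)
    {
        uint8_t dp = CMAX / 2;                 /* step S53: pre-load the bias      */
        dp = sat_add(dp, -(int)(0.5f * 100));  /* negative coefficient, MODE2 pass */
        dp = sat_add(dp,  (int)(0.5f * 160));  /* positive coefficient, MODE1 pass */
        dp = sat_add(dp, -(CMAX / 2));         /* steps S63-S64: remove the bias   */
        return dp;  /* 30; without the bias the first pass would clamp at 0
                       and the final value would come out as 80 instead */
    }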
五、發明說明(23) 經濟部智慧財產局員工消費合作社印製 其次,在步驟S54和步驟S55,在變數η和變數m分別 起始設定0,在步驟S56,判斷捲積過濾係數Cv[mnn]是 否是正。此過濾係數是正時,進入步驟s 5 7,作爲混合模 態,設定模態1。其次,進入步驟S58,進行描繪處理。 此描繪處理係和上述圖1 2之步骤S 3 7的情況同樣的處理, 所以其説明省略。 步骤S 5 8之處理的其次,進入步驟S 5 9,判斷變數m是 否比2小,判斷m比2小時,進入步驟S 6 0,變數m只增 加1,回到步驟S56。然後,反覆執行其以後的處理。 在步驟S59,判斷變數m不比2小時,即m和2相等時 ,進入步驟S 6 1,判斷變數η是否比2小。變數η比2小時 ’進入步驟S62,變數η只增加1後’回到步驟S55,反覆 -執行其以後的處理。 在步驟S6 1,判斷變數η不比2小時(變數η和2相等), 進入步驟S63,作爲混合模態,設定模態2。然後,在步 驟S64,執行以下處理:描繪引擎4 1從目標四角形(丨,1 ’ HMAX-1 ’ 1 ’ HMAX-1,VMAX-1,1,vmax_i)的 圖素値減去在步驟S53描繪的CMAX/2之値而描输。 另一方面,在步驟S56,判斷捲積過濾係數cv[m][n]不 是正時,進入步驟S65,判斷此係數是否是負。判斷此係 數是負時,進入步驟S66,作爲混合模態,設定模態2。 然後,在步驟S67,執行和在步驟S58(即圖12之步驟S37) 的情況同樣的描纟會處理。 在步驟S65,判斷係數Cv[m][n]不是負時,即判斷此係 ____- 26 - 本紙張尺度適用中國國家標準(CNS)A4規格(21〇 X 297公釐) ' - • 裝--------訂--------- (請先閱讀背面之注咅本頁) 525078 Α7 Β7 經濟部智慧財產局員工消費合作社印製 五、發明說明(24 ) 數是0時,跳過步驟S66和步驟S67的處理。 然後,步驟S67之處理的其次,進入步驟S59,執行和 上述情況同樣的處理。 關於以上’可以説藉由以多邊形單位描綠圖素資料,執 行捲積過濾運算處理,但以圖素單位描繪亦可執行同樣的 處理。圖22顯示這種情況的處理別。 即,最初在步驟S71,設定捲積過濾係數Cv[m][n]。在 步驟S72,作爲混合模態,設定模態i。然後,在步驟§73 和步驟S74,將變數j和變數i分別起始設定成i,在步驟 S75,作、爲目標圖素資料Cdp[i]Lj]之値,起始設定〇。 其次,在步驟S76和步驟S77,將變數n和變數m起始 没定成0 。在步驟S 7 8 ,執行以下處理:根源圖素資料 CSP[i + m-l][j + n-l]乘以捲積過濾係數Cv[m][n],加在目標 圖素資料Cdp[i][j]上描繪。 其次,在步驟S79,判斷變數m是否比2小。現在的情 況,變數m = 0,所以進行是的判斷,在步驟s8〇,變數m 只增加1,設定成m = 1後,回到步驟$ 7 8。 在步驟S78,執行以下處理:在χ軸方向只i圖素分右 鄰的根源圖素資料+ + 乘以χ軸方向右鄰的 捲和、過/慮係數C v [ m ] [ η ],加在同一目標圖素資料匚d p [丨][j ] 上描缯^。 在步驟S79,反覆執行同樣的處理到判斷爪不比2小(和 2相等)。然後,進入步驟S81,判斷變數n是否比2小, 比2小時,進入步驟S82,變數η只增加1後,回到 --- C請先間讀背面之江意事本頁) 27- 525078 A7 B7 五、發明說明(25 ) S 7 7,反覆執行其以後的處理。 在步驟S 81 ,反覆執行以上的處理到判斷變數η不比2 小時(和2相等)。藉此,反覆描繪、加上同一目標圖素資料 Cdp[i][j]乘以3 X 3捲積過濾係數的結果。即,藉此,1個 對象圖素的捲積過濾運算處理完畢。 其次’進入步驟S83,判斷丨是否比HMAX-2小,判斷 小時,進入步驟S84,變數i只增加1後,回到步驟S75, 反覆執行其以後的處理。即,在χ軸方向逐一移動目標圖 素資料,執行同樣的處理。 在步驟、S83,判斷變數i不比ΗΜΑχ-2小時(判斷相等時) ,進入步驟S85,判斷變數j是否比vMaX-2小。判斷變 數j比VMAX-2小時,進入步驟S86,變數』只增加i後, 回到步驟S74,反覆執行其以後的處理。即,一面在y方 向移動目標圖素資料,—面依次反覆執行同樣的處理。然 後,在步驟S85,判斷>νΜΑχ·2時,、结束捲積過遽處理。 如此亦可得到和目i 2或圖j 9和圖2〇所示的情況同樣的 結果H如此不〇邊形單位,而是以圖素單位進行 描•處理,位址的產生處理需要時間,結果描績處理需要 經濟部智慧財產局員工消費合作社印製 ,間。因此,最好如圖12或圖19和圖20所示,以多邊形 單位執行描績處理。 ’ 其次,就錐形㈣處理加以説明。在此錐形過攄處理, 如圖23所示,係反覆以下處理:求出處理圖像互相鄰接的 4個圖素値的平均値,將其圖素配置於4個圖素中心。即, 利用雙線性插値法(b i | j n e · inter polation)執行運算附近4 297公釐) I I — 111 I — — — — — — · I I (請先閱讀背面之注意事本頁) _尺度適用中國國 525078 A7 經濟部智慧財產局員工消費合作社印製 五、發明說明(26 ) 點的平均圖素値的處理,就可從η X n個(n係2的乘方値) 處理圖像得到(n/2)x (n/2)圖像資料。反覆執行此處理,最 後錐形頂點的1圖素資料就成爲表示錐形底面全部圖素平 均値的圖素資料。 執行這種錐形過濾處理時,主CPU 3 1對於描_引擎4 i 輸出如下的描緣命令:int L ; / *根源區一邊的長度氺/ int offset ;L = ; / *起始圖像一邊的長度* offset^O ;while (L>1){Set-Texture—Base (0,offset) ; /紋理區的基點設定氺 / offset + = L ; Set—Drawing一Rectangle (0,offset) ; / * 描繪區的基點設定 * / Flat—Texture一Rectangle (0,〇,L/2,0,L/2,乙/2 ,0,L/2,0.5,0.5,L + 0.5,〇·5,L + 0.5,L + 〇.5 ,0.5,L + 0.5,1.0);L = L/2 ; i 將此描緣命令表示成流程圖,則如圖2 4所示,最初’在 步驟S91 ,在變數offset起始設定〇。其次,在步驟S92 ,執行將紋理區5 1的基點設定成(〇,〇ffset)的處理。即’ 如圖25所示,設定基點T (0,〇)。其次,進入步蹀s93 ,將變數offset只增加L。然後,在步驟S94,設定描繪 29- 表纸張H適丐中國國家標準(CNS)A4規格(210 X 297公爱)V. Description of the invention (23) Printed by the Consumer Cooperatives of the Intellectual Property Bureau of the Ministry of Economic Affairs. Next, at step S54 and step S55, the variables η and m are initially set to 0. In step S56, the convolution filter coefficient Cv [mnn is determined. ] Is it positive. When the filter coefficient is positive, proceed to step s 5 7 and set the mode 1 as the mixed mode. Next, it progresses to step S58 and performs drawing processing. This drawing process is the same process as in the case of step S 3 7 in FIG. 12 described above, so its description is omitted. The processing of step S 58 is followed by step S 59. 
It is judged whether the variable m is smaller than 2 and it is judged that m is longer than 2 hours. Then it proceeds to step S 60 and the variable m is only increased by 1 and the process returns to step S56. Then, the subsequent processing is executed repeatedly. In step S59, it is judged that the variable m is not longer than 2 hours, that is, when m and 2 are equal to each other, it proceeds to step S61 to determine whether the variable η is smaller than 2. The variable η is longer than 2 hours 'and the process proceeds to step S62. After the variable η is incremented by 1', the process returns to step S55, and the process thereafter is repeated. In step S61 1, it is determined that the variable η is not longer than 2 hours (the variables η and 2 are equal), and the process proceeds to step S63, and as the mixed mode, the mode 2 is set. Then, in step S64, the following processing is performed: the rendering engine 41 is subtracted from the pixel 値 of the target quadrangle (丨, 1'HMAX-1'1'HMAX-1, VMAX-1,1, vmax_i), which is depicted in step S53. CMAX / 2 has been described. On the other hand, in step S56, when it is judged that the convolution filter coefficient cv [m] [n] is not positive, it proceeds to step S65 to judge whether the coefficient is negative. When it is judged that the coefficient is negative, the process proceeds to step S66, and as the mixed mode, mode 2 is set. Then, in step S67, the same description processing as in the case of step S58 (ie, step S37 in FIG. 12) is performed. In step S65, when the judgment coefficient Cv [m] [n] is not negative, it is judged that this is ____- 26-This paper size applies the Chinese National Standard (CNS) A4 specification (21〇X 297 mm) '- -------- Order --------- (Please read the note on the back of this page first) 525078 Α7 Β7 Printed by the Consumer Cooperatives of the Intellectual Property Bureau of the Ministry of Economic Affairs V. Invention Description (24) Number If it is 0, the processes of steps S66 and S67 are skipped. Then, the processing of step S67 is followed by the processing of step S59, and the same processing as that described above is executed. Concerning the above, it can be said that by drawing the green pixel data in a polygon unit, a convolution filtering operation process is performed, but the same processing may be performed in a pixel unit drawing. Fig. 22 shows the processing of this case. That is, first, in step S71, the convolution filter coefficient Cv [m] [n] is set. In step S72, as the mixed mode, the mode i is set. Then, in step §73 and step S74, the variable j and the variable i are initially set to i, respectively, and in step S75, the target pixel data Cdp [i] Lj] is set to 0, and is initially set to zero. Next, in steps S76 and S77, the variable n and the variable m are initially set to 0. In step S 7 8, the following processing is performed: the root pixel data CSP [i + ml] [j + nl] is multiplied by the convolution filter coefficient Cv [m] [n] and added to the target pixel data Cdp [i] [ j]. Next, in step S79, it is determined whether the variable m is smaller than two. In the present case, the variable m = 0, so a yes judgment is made. At step s80, the variable m is only increased by 1, and after setting m = 1, it returns to step $ 78. 
Next, pyramid filtering (cone filtering) processing will be described. In this pyramid filtering processing, as shown in Fig. 23, the following processing is repeated: the average value of four mutually adjacent pixels of the image being processed is obtained, and that pixel is placed at the center of the four pixels. That is, by executing processing that calculates the average pixel value of the four neighboring points by bilinear interpolation, (n/2) x (n/2) image data can be obtained from an n x n image being processed (where n is a power of 2). When this processing is executed repeatedly, the one-pixel data at the apex of the pyramid finally becomes pixel data representing the average value of all the pixels on the bottom surface of the pyramid.
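One level of this reduction is simply a 2 x 2 box average. The sketch below shows a single CPU-side pass, assuming a square image whose side n is a power of two; it is given only for comparison with the rendering-engine command that follows.

```c
/* Sketch: one pyramid level, averaging each 2x2 block of an n x n image
 * into one pixel of an (n/2) x (n/2) image (the operation of Fig. 23). */
void pyramid_reduce_once(const float *src, float *dst, int n)
{
    int half = n / 2;
    for (int y = 0; y < half; y++) {
        for (int x = 0; x < half; x++) {
            float sum = src[(2 * y)     * n + (2 * x)]
                      + src[(2 * y)     * n + (2 * x + 1)]
                      + src[(2 * y + 1) * n + (2 * x)]
                      + src[(2 * y + 1) * n + (2 * x + 1)];
            dst[y * half + x] = sum / 4.0f;   /* bilinear average of the 4 neighbours */
        }
    }
}
```

Applying such a pass log2(n) times reduces the image to a single pixel holding the mean of the whole bottom layer, which is what the apex of the pyramid represents.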
When this pyramid filtering processing is executed, the main CPU 31 outputs a drawing command of the following form to the rendering engine 41:

    int L;        /* length of one side of the source area   */
    int offset;

    L = ...;      /* length of one side of the initial image */
    offset = 0;
    while (L > 1) {
        Set_Texture_Base(0, offset);       /* set the base point of the texture area */
        offset += L;
        Set_Drawing_Rectangle(0, offset);  /* set the base point of the drawing area */
        Flat_Texture_Rectangle(0, 0, L/2, 0, L/2, L/2, 0, L/2,
                               0.5, 0.5, L + 0.5, 0.5, L + 0.5, L + 0.5, 0.5, L + 0.5,
                               1.0);
        L = L / 2;
    }

Expressing this drawing command as a flowchart, as shown in Fig. 24: first, in step S91, the variable offset is initialized to 0. Next, in step S92, processing that sets the base point of the texture area 51 to (0, offset) is executed. That is, as shown in Fig. 25, the base point T(0, 0) is set. Next, the processing proceeds to step S93 and the variable offset is incremented by L.
Then, in step S94, the base point of the drawing area 52 is set to (0, offset). In the present case, as shown in Fig. 25, the base point D(0, L) is set.
Next, in step S95, the following processing is executed: the pixel values of the source (texture area) quadrangle (0.5, 0.5, L+0.5, 0.5, L+0.5, L+0.5, 0.5, L+0.5) are multiplied by 1 and drawn added onto the destination quadrangle (0, 0, L/2, 0, L/2, L/2, 0, L/2). That is, the processed image of the layer one level above is thereby obtained from the processed image of the lowermost layer (the bottom surface of the pyramid) shown in Fig. 23.

Next, the processing proceeds to step S96, and the variable L is set to 1/2 of its current value. In step S97, it is judged whether the variable L is larger than 1; when L is larger than 1, the processing returns to step S92 and the subsequent processing is executed repeatedly. That is, the image data of the next higher layer is then obtained from the second layer in the same way.

Thereafter, the same processing is executed repeatedly, and when it is judged in step S97 that the variable L is not larger than 1 (the variable L is equal to 1), the pyramid filtering processing ends.

Next, inter-frame difference processing will be described. In this inter-frame difference processing, as shown in Fig. 26, the difference between the frame image at time t and the frame image at time t+1 is calculated. Thereby, an image area with motion can be extracted.

That is, in this case, the main CPU 31 causes the rendering engine 41 to execute the processing shown in the flowchart of Fig. 27. First, in step S101, the rendering engine 41, in response to a command from the main CPU 31, sets mode 2 as the blending mode. Next, in step S102, the rendering engine 41 takes, of the image data input from the video camera 35, the temporally later frame image data as the destination image and the temporally earlier frame image data as the source image data. Then, in step S103, the rendering engine 41 executes processing that subtracts the pixel values of the source quadrangle from the pixel values of the destination quadrangle and draws the result.
In a still-picture area, the frame pixel data of the destination area and the frame pixel data of the source area have substantially equal values. As a result, when the processing of step S103 is executed, the value of the resulting pixel data becomes approximately 0.

In contrast, the value of the pixel data in an area with motion differs between the destination and the source. Therefore, the image data obtained as the result of the processing of step S103 has, in such an area, values of a certain magnitude other than 0. Accordingly, an area with motion and a still-picture area can be distinguished from the magnitude of each pixel data value of the inter-frame difference image data.

Next, the inter-image distance will be described. The inter-image distance, as shown in Fig. 28, expresses the degree of difference between two frame images, image A and image B. When this inter-image distance is obtained, the processing of obtaining the difference image between image A and image B is executed as processing with clamping. Processing with clamping means processing that saturates values smaller than 0 to 0 and saturates values larger than the maximum value to the maximum value. When the inter-image distance between image A and image B is obtained, the clamped difference image data of image A minus image B and the clamped difference image data of image B minus image A are obtained, and these data are added together to obtain absolute-value difference image data.

For example, if the value of a given pixel of image A is 13 and the value of the corresponding pixel of image B is 20, the value of A - B is -7, but when clamped its value becomes 0. The value of B - A, on the other hand, becomes 7. As a result, the sum of the two becomes 7.

When the absolute-value difference image data has been obtained in this way, pyramid filtering of this absolute-value difference image data is executed next. As described above, when pyramid filtering is executed, the one-pixel value at the apex becomes a value representing the average of all the pixels of the processed image (the absolute-value difference image).
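The clamped two-way subtraction is just a way of computing |A - B| with unsigned saturating arithmetic. A minimal sketch for 8-bit pixel values is shown below; it reproduces the worked example in the text (pixel values 13 and 20 give a contribution of 7) and is independent of the rendering engine's blending hardware.

```c
#include <stdio.h>

/* Sketch: absolute difference of two pixels via two clamped subtractions (Fig. 28). */
static unsigned char sub_clamped(unsigned char a, unsigned char b)
{
    return (a > b) ? (unsigned char)(a - b) : 0;   /* negative results saturate to 0 */
}

static unsigned char abs_diff(unsigned char a, unsigned char b)
{
    return sub_clamped(a, b) + sub_clamped(b, a);  /* one of the two terms is always 0 */
}

int main(void)
{
    printf("%d\n", abs_diff(13, 20));   /* prints 7, as in the example above */
    return 0;
}
```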
In the pattern matching processing shown in Fig. 30, the inter-image distance between the template image and the block at each position of the object image is obtained in turn. When a range for which the inter-image distance has not yet been obtained exists, the processing proceeds to step S139, where the block position is shifted by one pixel; thereafter, the processing returns to step S137 and the same processing is repeated. When it is judged in step S138 that the inter-image distance has been obtained over the entire range, the processing proceeds to step S140, and the block corresponding to the shortest distance is extracted from the many inter-image distances obtained by repeatedly executing step S137.

The above describes pattern matching of a template image against an object image; when the template is much smaller than the object image, the pattern matching processing can be performed as shown in the flowchart of Fig. 31.

In that case, in step S151, a block of the object image is selected. Then, in step S152, the rendering engine 41 arranges a number of the templates to generate a second image. In step S153, the inter-image distance between the object image selected in step S151 and the template image (the second image) is further calculated. This inter-image distance calculation is executed in the same way as in the case of Fig. 29, but because the second image is generated by arranging a number of templates as shown in Fig. 32, the inter-image distance is calculated in template units.

In step S154, it is judged whether the inter-image distance has been obtained over the entire search range; when a range for which the inter-image distance has not yet been obtained exists, the processing proceeds to step S155, the search position is shifted, and the processing returns to step S153 and is repeated. When it is judged in step S154 that the inter-image distance has been obtained over the entire search range, the processing proceeds to step S156, and the block with the shortest inter-image distance is extracted from the many inter-image distances obtained by repeatedly executing the processing of step S153.
Fig. 33 shows a processing example for the case where a motion vector is detected. First, in step S161, the PCRTC 44 causes the previous frame image and the current frame image output from the video camera 35 to be stored in the image memory 43, as shown in Fig. 34. Next, the processing proceeds to step S162, and the main CPU 31 extracts one block of the previous frame image as a template. Then, in step S163, the main CPU 31 executes the following processing, as described with reference to the flowchart of Fig. 29: the inter-image distance between the template extracted in step S162 and the image (block) in the corresponding range of the current frame is obtained.

Next, in step S164, the main CPU 31 judges, as shown in Fig. 35, whether the whole of the search range of the current frame has been searched; when a range that has not yet been searched exists, the processing proceeds to step S165 and the following processing is executed: the position of the image (block) in the corresponding range of the current frame is shifted by one pixel. The processing then returns to step S163 and the following processing is executed: the inter-image distance between the template image and the block image of the current frame is obtained again.

The above processing is repeated until it is judged in step S164 that the search has been performed over the entire search range. When it is judged in step S164 that the search has been performed over the entire search range, the processing proceeds to step S166, and the main CPU 31 obtains the shortest inter-image distance from the multiple inter-image distances obtained by repeatedly executing the processing of step S163, and selects the block on the current frame corresponding to it as the shortest block. Then, in step S167, the main CPU 31 obtains the motion vector between the template and the shortest block selected in step S166.

Next, the processing proceeds to step S168, and the main CPU 31 judges whether the motion vector has been obtained for every block. When blocks for which the motion vector has not yet been obtained remain, the processing returns to step S162, a new block is extracted from the previous frame as the template, and the same processing is executed repeatedly.

When it is judged in step S168 that the motion vectors have been obtained for all of the previous frame, the processing ends.
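The template search of Fig. 33 is, in essence, exhaustive block matching. The sketch below illustrates it for a single block, using a sum of absolute differences as the inter-image distance; the block size, the search radius and the SAD metric itself are illustrative assumptions and are not fixed by the text.

```c
#include <stdlib.h>
#include <limits.h>

#define B 16    /* assumed block size    */
#define R 8     /* assumed search radius */

/* Sketch: inter-image distance between a BxB block of the previous frame at
 * (bx, by) and the block of the current frame displaced by (dx, dy). */
long block_distance(const unsigned char *prev, const unsigned char *cur,
                    int w, int bx, int by, int dx, int dy)
{
    long d = 0;
    for (int y = 0; y < B; y++)
        for (int x = 0; x < B; x++)
            d += labs((long)prev[(by + y) * w + (bx + x)]
                    - (long)cur[(by + y + dy) * w + (bx + x + dx)]);
    return d;
}

/* Sketch of steps S162-S167: scan the whole search range (Fig. 35) and keep
 * the displacement with the shortest distance. The caller must keep the
 * search window inside the image. */
void find_motion_vector(const unsigned char *prev, const unsigned char *cur,
                        int w, int bx, int by, int *mvx, int *mvy)
{
    long best = LONG_MAX;
    for (int dy = -R; dy <= R; dy++) {
        for (int dx = -R; dx <= R; dx++) {
            long d = block_distance(prev, cur, w, bx, by, dx, dy);
            if (d < best) { best = d; *mvx = dx; *mvy = dy; }
        }
    }
}
```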
The motion vector can be obtained as described above, but it can also be obtained, for example, as shown in Fig. 36. In this example, in step S171, the previous frame image and the current frame image from the video camera 35 are stored in the image memory 43. Next, in step S172, as shown in Fig. 37, the inter-image distances of the respective blocks of the previous frame and the current frame are obtained simultaneously.

Further, in step S173, the main CPU 31 judges whether the relative position of the current frame with respect to the previous frame has been moved over the entire search range. When it has not yet been moved over the entire search range, the processing proceeds to step S174, and the main CPU 31 executes the following processing: the relative position of the current frame with respect to the previous frame is shifted by one pixel. Then, in step S172, the following processing is executed again: the inter-image distances of the respective blocks are obtained simultaneously.

The above processing is executed repeatedly until it is judged in step S173 that the current frame has been moved over the entire search range. A number of inter-image distances are obtained by each execution of step S172; therefore, when it is judged in step S173 that the current frame has been moved over the entire search range, inter-image distances equal to the number of blocks multiplied by the number of search positions (the number of pixel shifts in the search range) have been obtained.

In step S175, for each block, the shortest of the inter-image distances over the positions within the search range is selected, and the block of the current frame corresponding to that shortest inter-image distance is selected as the shortest block. The same processing is performed for all of the blocks. Then, in step S176, the motion vector between each block of the previous frame and the shortest block obtained in step S175 is obtained.

By obtaining the motion vectors in this way, they can be obtained more quickly than in the case of Fig. 33.
In addition, by obtaining the motion vector as shown in the flowchart of Fig. 38, it can also be obtained more quickly than in the case shown in Fig. 33.

That is, in the example of Fig. 38, in step S181, the previous frame and current frame images output from the video camera 35 are stored in the image memory 43. Next, in step S182, the main CPU 31 applies pyramid filtering processing to the previous frame and the current frame respectively, as shown in Fig. 39, to create low-resolution images. This pyramid filtering processing is executed as described with reference to the flowchart of Fig. 24.

Then, in step S183, the main CPU 31 executes the following processing: a low-precision motion vector is obtained using the low-resolution images created in step S182, the processing of obtaining that motion vector being executed in the manner described above. When the low-precision motion vector has been obtained in step S183, the processing proceeds to step S184 and the following processing is executed: on the basis of the low-precision motion vector, each block is searched at the original resolution (the image before pyramid filtering), and the processing of obtaining this motion vector is executed as shown in Fig. 36.
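The coarse-to-fine flow of Fig. 38 can be sketched as follows: estimate the displacement on the pyramid-reduced frames, scale it up, and refine it with a small search at the original resolution. The sketch below reuses the block_distance() and find_motion_vector() sketches given earlier; the single reduction level and the one-pixel refinement window are illustrative assumptions.

```c
#include <limits.h>

/* The earlier sketches are assumed to be available. */
long block_distance(const unsigned char *prev, const unsigned char *cur,
                    int w, int bx, int by, int dx, int dy);
void find_motion_vector(const unsigned char *prev, const unsigned char *cur,
                        int w, int bx, int by, int *mvx, int *mvy);

/* Sketch: coarse-to-fine motion estimation. prev_half and cur_half are assumed
 * to be the half-resolution frames produced by pyramid filtering (step S182). */
void coarse_to_fine_mv(const unsigned char *prev, const unsigned char *cur,
                       const unsigned char *prev_half, const unsigned char *cur_half,
                       int w, int bx, int by, int *mvx, int *mvy)
{
    int cx, cy;
    find_motion_vector(prev_half, cur_half, w / 2, bx / 2, by / 2, &cx, &cy);  /* step S183 */

    long best = LONG_MAX;                 /* step S184: refine around the scaled-up vector */
    for (int dy = 2 * cy - 1; dy <= 2 * cy + 1; dy++) {
        for (int dx = 2 * cx - 1; dx <= 2 * cx + 1; dx++) {
            long d = block_distance(prev, cur, w, bx, by, dx, dy);
            if (d < best) { best = d; *mvx = dx; *mvy = dy; }
        }
    }
}
```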
Next, the Hough transform will be described. The Hough transform is processing that transforms a straight line into a point, and is defined as follows:

    Ah(ρ, θ) = ∫∫ A(x, y) δ(ρ - x·cosθ - y·sinθ) dx dy    (14)

Transforming the above expression gives:

    Ah(ρ, θ) = ∫ A(ρ·cosθ - t·sinθ, ρ·sinθ + t·cosθ) dt    (15)

Fig. 40 shows a processing example for the case where the rendering engine 41 performs this Hough transform processing. First, in step S191, θ is initialized to 0. This θ represents the rotation angle of the input image, as shown in Fig. 41. In step S192, image data obtained by an affine transformation of the input image by the angle θ alone is calculated. This angle θ is specified by the coefficients a to d of expression (11) above.

Next, the processing proceeds to step S193, where the following processing is executed: bilinear reduction in the y direction. The details of this bilinear reduction processing in the y direction will be described later with reference to the flowchart of Fig. 42. By this processing, as shown schematically in Fig. 41, the input image rotated by θ alone can be expressed as a single line in the y-axis direction.

Next, the processing proceeds to step S194, where the following processing is executed: the point sequence reduced to a line by the processing of step S193 is written at the position of the angle θ in the drawing area.

Next, the processing proceeds to step S195, where it is judged whether θ is equal to π; when it is not equal, the processing proceeds to step S196, θ is incremented by π/n, the processing returns to step S192, and the subsequent processing is executed repeatedly. When it is judged in step S195 that θ and π are equal, the processing ends.

Thereby, for example, when the angular resolution is π/n, an image of 64 x 64 pixels can be Hough-transformed with at most 6n polygons. Likewise, in the case of image data of 256 x 256 pixels with an angular resolution of π/n, the Hough transform can be performed with at most 8n polygons.
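Expressions (14) and (15) say that each output sample Ah(ρ, θ) is the integral of the image along the straight line at angle θ and distance ρ, which is why rotating the image by θ and then summing each column yields one row of the Hough accumulator. A minimal CPU-side sketch of that reading is given below; the nearest-neighbour sampling and the square image are simplifying assumptions, whereas the rendering engine performs the rotation as an affine draw and the summation by repeated bilinear halving.

```c
#include <math.h>

/* Sketch: Hough transform by rotation and column summation (Figs. 40 and 41).
 * img is an n x n image; acc must hold nsteps * n values, one row per angle. */
void hough_by_rotation(const float *img, int n, int nsteps, float *acc)
{
    const double PI = 3.14159265358979323846;
    for (int k = 0; k < nsteps; k++) {               /* theta = 0 .. pi in steps of pi/nsteps */
        double theta = PI * k / nsteps;
        double c = cos(theta), s = sin(theta);
        for (int col = 0; col < n; col++) {          /* each column corresponds to one rho */
            double rho = col - n / 2.0, sum = 0.0;
            for (int row = 0; row < n; row++) {      /* integrate along the line, expression (15) */
                double t = row - n / 2.0;
                int x = (int)lround(rho * c - t * s + n / 2.0);
                int y = (int)lround(rho * s + t * c + n / 2.0);
                if (x >= 0 && x < n && y >= 0 && y < n)
                    sum += img[y * n + x];
            }
            acc[k * n + col] = (float)sum;           /* step S194: one row per angle theta */
        }
    }
}
```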
Next, the details of the y-direction bilinear reduction processing of step S193 in Fig. 40 will be described. When the rendering engine 41 is caused to execute this processing, the main CPU 31 supplies, for example, a command of the following form to the rendering engine 41:

    int L;        /* length of one side of the source area   */
    int offset;

    L0 = 2^N;     /* length of one side of the initial image */
    L = L0;
    offset = 0;
    while (L > 1) {
        Set_Texture_Base(0, offset);    /* set the base point of the texture area */
        offset += L;
        Set_Drawing_Base(0, offset);    /* set the base point of the drawing area */
        Flat_Texture_Rectangle(0, 0, L0, 0, L0, L/2, 0, L/2,
                               0, 0.5, L0, 0.5, L0, L + 0.5, 0, L + 0.5,
                               1.0);
        L = L / 2;
    }

Expressing this processing as a flowchart, as shown in Fig. 42: first, in step S201, the variable L is initialized to L0, and at the same time the variable offset is initialized to 0. Next, in step S202, the following processing is executed: the base point of the texture area 51 is set to (0, offset). That is, as shown in Fig. 43, the base point T(0, 0) is set. Next, in step S203, the variable offset is incremented by L. Then, in step S204, the base point of the drawing area 52 is set to (0, offset). In the present case, as shown in Fig. 43, the base point D(0, L) is set.

Next, in step S205, the following processing is executed: the pixel values of the source (texture area) quadrangle (0, 0.5, L0, 0.5, L0, L+0.5, 0, L+0.5) are multiplied by 1 and drawn added onto the destination quadrangle (0, 0, L0, 0, L0, L/2, 0, L/2).

Next, the processing proceeds to step S206, and the variable L is set to 1/2 of its current value. In step S207, it is judged whether the variable L is larger than 1; when L is larger than 1, the processing returns to step S202 and the subsequent processing is executed repeatedly. When it is judged in step S207 that L is not larger than 1, the y-direction bilinear reduction processing ends.
Next, motion blur processing will be described (see Figs. 44 to 47). In step S225, the image of the immediately preceding frame is taken as the source image. In step S226, the following processing is executed: the source is multiplied by one convolution coefficient and drawn added onto the destination.

Next, the processing proceeds to step S227, where it is judged whether all n convolution coefficients have been multiplied; when convolution coefficients that have not yet been multiplied remain, the processing proceeds to step S228 and processing that changes the convolution coefficient is executed. Then, in step S229, the variable N representing the number of times the convolution coefficient has been changed is incremented by 1.

In step S230, it is judged whether the number of convolution-coefficient changes is larger than n/2; when the variable N is smaller than n/2, the processing returns to step S225 and the subsequent processing is executed repeatedly.

When it is judged in step S230 that the variable N is larger than n/2, the processing proceeds to step S231, and the image of the current frame is taken as the source frame. The processing then returns to step S226 and the subsequent processing is executed repeatedly.

When it is judged in step S227 that all n convolution coefficients have been multiplied, the processing ends.

That is, as shown in Fig. 47, when there are four convolution coefficients (n = 4), the frames F21 and F22 (the frames generated by the first and second drawing operations), which are close to the immediately preceding frame F1, are generated on the basis of the immediately preceding frame F1, while the frames F23 and F24 (the frames generated by the third and fourth drawing operations), which are close to the current frame, are generated on the basis of the current frame. In this case, the direction of the motion vector is opposite to its original direction, so the drawing processing is executed with this taken into account.

Furthermore, as shown in Fig. 46, the motion vector exists in macroblock units, so this motion blur processing is executed in macroblock units.
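The blur synthesis of Figs. 45 and 47 accumulates several copies of a frame displaced along the motion vector, the first half taken from the previous frame and the second half from the current frame (with the displacement applied in the opposite direction). The sketch below shows that accumulation for one macroblock-sized region with equal weights; the equal weighting, the integer stepping and the absence of boundary handling are illustrative simplifications rather than the coefficients of the text.

```c
/* Sketch: synthesize motion blur for one block by averaging n copies stepped
 * along the motion vector (mvx, mvy); cf. Figs. 45 and 47. The caller must
 * keep all displaced positions inside the image. */
void motion_blur_block(const unsigned char *prev, const unsigned char *cur,
                       unsigned char *out, int w, int bx, int by, int bsize,
                       int mvx, int mvy, int n)
{
    for (int y = 0; y < bsize; y++) {
        for (int x = 0; x < bsize; x++) {
            long acc = 0;
            for (int k = 0; k < n; k++) {
                int dx, dy;
                const unsigned char *f;
                if (k < n / 2) {        /* early copies come from the previous frame (step S225) */
                    f = prev; dx = mvx * k / n;        dy = mvy * k / n;
                } else {                /* later copies come from the current frame (step S231),
                                           displaced against the motion vector                   */
                    f = cur;  dx = -mvx * (n - k) / n; dy = -mvy * (n - k) / n;
                }
                acc += f[(by + y + dy) * w + (bx + x + dx)];   /* step S226: accumulate one copy */
            }
            out[(by + y) * w + (bx + x)] = (unsigned char)(acc / n);
        }
    }
}
```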
When the rendering engine 41 is caused to execute the various kinds of processing described above, the processing results can be displayed on the CRT 36, for example, as shown in Fig. 48. In this case, the main CPU 31 executes the processing shown in the flowchart of Fig. 49.

That is, first, in step S251, the main CPU 31 causes the video image input from the video camera 35 to be stored in the image memory 43 via the PCRTC 44 and the memory interface.

Next, in step S252, the main CPU 31 controls the rendering engine 41 so that the inter-frame difference processing (motion extraction processing) is executed as described with reference to the flowchart of Fig. 27.

Next, in step S253, the main CPU 31 controls the rendering engine 41 so that edge extraction processing is executed. This edge extraction processing is performed by setting the values shown in Fig. 50 as the convolution filter coefficients of the convolution filtering processing described with reference to Fig. 12.

Next, the processing proceeds to step S254, and the main CPU 31 controls the rendering engine 41 so that the Hough transform processing described with reference to the flowchart of Fig. 40 is executed. Furthermore, in step S255, the main CPU 31 controls the rendering engine 41 so that line segment extraction processing is executed on the basis of the Hough transform processing performed in step S254.

In step S256, it is judged whether display of the processing results of steps S252 to S255 has been commanded by the user; when it has not been commanded, the processing returns to step S251 and the subsequent processing is executed repeatedly. When it is judged in step S256 that display of the processing results has been commanded, the processing proceeds to step S257, and the main CPU 31 controls the PCRTC 44 so that the image drawn in the image memory 43 is read out through the memory interface 46 and output to the CRT 36 for display. Thereby, for example, an image such as that shown in Fig. 48 is displayed on the CRT 36.

In the display example of Fig. 48, the image of the inter-frame difference processing (motion extraction processing) of step S252 is displayed at the far left, and the image of the edge extraction processing of step S253 is displayed to its right. Furthermore, to the right of those, the input image captured in step S251 is displayed; above it, the output image of the Hough transform processing of step S254 (the Ah(ρ, θ) image of Fig. 41) is displayed, and the result of the line segment extraction processing of step S255 (the line segments extracted by the Hough transform) is displayed further above.
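The demonstration loop of Fig. 49 can be summarized as the host-side routine below. Every function named here is an assumed stub standing in for the corresponding rendering-engine operation described in the text; only the control flow (repeat the processing chain until the user asks for the results, then read out the image memory) is taken from the flowchart.

```c
/* Sketch: the display pipeline of Fig. 49 as a host-side loop (assumed stubs only). */
extern void capture_to_image_memory(void);    /* step S251 */
extern void interframe_difference(void);      /* step S252 (Fig. 27) */
extern void edge_extraction(void);            /* step S253 (Fig. 12 with the Fig. 50 coefficients) */
extern void hough_transform(void);            /* step S254 (Fig. 40) */
extern void line_segment_extraction(void);    /* step S255 */
extern int  display_requested(void);          /* step S256 */
extern void read_out_and_display(void);       /* step S257 */

void display_pipeline(void)
{
    do {
        capture_to_image_memory();
        interframe_difference();
        edge_extraction();
        hough_transform();
        line_segment_extraction();
    } while (!display_requested());
    read_out_and_display();
}
```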
In the above, the case where the present invention is applied to a computer entertainment apparatus has been described as an example, but the present invention can also be applied to other image processing apparatuses.

In this specification, the term "system" denotes the whole of an apparatus constituted by a plurality of devices.

As the providing medium for providing the user with a computer program that performs the processing described above, communication media such as networks and satellites can be used in addition to recording media such as magnetic disks, CD-ROMs and solid-state memories.
[Effects of the Invention]

As described above, according to the image processing apparatus set forth in claim 1, the image processing method set forth in claim 8 and the providing medium set forth in claim 9, the operation of applying a predetermined calculation in pixel units and drawing the result in polygon units into the second memory mechanism is executed repeatedly until a predetermined calculation result is obtained, so image processing can be executed with a simple structure and with a low-cost apparatus.

According to the image processing apparatus set forth in claim 10, the image processing method set forth in claim 17 and the providing medium set forth in claim 18, a drawing command is generated that causes the operation of applying a predetermined calculation in pixel units and drawing the result in polygon units to be repeated until a predetermined calculation result is obtained, so the predetermined calculation can be executed by means of drawing commands.

According to the image processing apparatus set forth in claim 19, the image processing method set forth in claim 26 and the providing medium set forth in claim 27, one part of the predetermined calculation is applied in pixel units and the result is drawn in polygon units into the second memory mechanism, and another part of the predetermined calculation is further applied in pixel units and the result, added to or subtracted from that image data, is drawn in polygon units into the second memory mechanism, so various kinds of calculation processing can be executed quickly by drawing processing.

[Brief Description of the Drawings]

Fig. 1 is a block diagram showing a configuration example of a conventional image processing apparatus.
Fig. 2 is a flowchart illustrating conventional convolution filtering processing.
Fig. 3 is a diagram illustrating source pixel data.
Fig. 4 is a diagram illustrating convolution filter coefficients.
Fig. 5 is a diagram illustrating destination pixel data.
Fig. 6 is a block diagram showing a configuration example of an image processing apparatus according to the present invention.
Fig. 7 is a diagram illustrating the storage areas inside the image memory 43 of Fig. 6.
Fig. 8 is a diagram illustrating bilinear interpolation processing.
Fig. 9 is a diagram illustrating convolution filtering processing.
Fig. 10 is a diagram illustrating source pixel data.
Fig. 11 is a diagram illustrating convolution filter coefficients.
Fig. 12 is a flowchart illustrating convolution filtering processing.
Fig. 13 is a diagram illustrating destination pixel data.
Fig. 14 is a diagram illustrating destination pixel data.
Fig. 15 is a diagram illustrating destination pixel data.
Fig. 16 is a diagram illustrating destination pixel data.
Fig. 17 is a diagram illustrating destination pixel data.
Fig. 18 is a diagram illustrating destination pixel data.
Fig. 19 is a flowchart illustrating another example of convolution filtering processing.
Fig. 20 is a flowchart illustrating another example of convolution filtering processing.
Fig. 21 is a diagram illustrating the destination pixel data in step S53 of Fig. 19.
Fig. 22 is a flowchart illustrating yet another example of convolution filtering processing.
Fig. 23 is a diagram illustrating pyramid filtering processing.
Fig. 24 is a flowchart illustrating pyramid filtering processing.
Fig. 25 is a diagram illustrating the processing of steps S92 and S94 of Fig. 24.
Fig. 26 is a diagram illustrating inter-frame difference processing.
Fig. 27 is a flowchart illustrating inter-frame difference processing.
Fig. 28 is a diagram illustrating the inter-image distance.
Fig. 29 is a flowchart illustrating inter-image distance calculation processing.
Fig. 30 is a flowchart illustrating pattern matching processing.
Fig. 31 is a flowchart illustrating another example of pattern matching processing.
Fig. 32 is a diagram illustrating pattern matching processing.
Fig. 33 is a flowchart illustrating motion vector detection processing.
Fig. 34 is a diagram illustrating processing for extracting a motion vector.
Fig. 35 is a diagram illustrating processing for obtaining the inter-image distance while moving a block within the search range.
Fig. 36 is a flowchart illustrating other motion vector detection processing.
Fig. 37 is a diagram illustrating the inter-image distance calculation of step S172 of Fig. 36.
Fig. 38 is a flowchart illustrating yet other motion vector detection processing.
Fig. 39 is a diagram illustrating the pyramid filtering processing of step S182 of Fig. 38.
Fig. 40 is a flowchart illustrating Hough transform processing.
Fig. 41 is a diagram illustrating the Hough transform.
Fig. 42 is a flowchart illustrating the y-direction bilinear reduction processing of step S193 of Fig. 40.
Fig. 43 is a diagram illustrating the processing of steps S202 and S204 of Fig. 42.
Fig. 44 is a diagram illustrating motion blur processing.
Fig. 45 is a flowchart illustrating motion blur processing.
Fig. 46 is a diagram illustrating moving-image correction processing for a CCD with an electronic shutter.
Fig. 47 is a diagram illustrating motion blur processing.
Fig. 48 is a diagram showing a display example of an image.
Fig. 49 is a flowchart illustrating image display processing.
Fig. 50 is a diagram showing an example of filter coefficients used when edge extraction processing is performed.

[Description of Reference Numerals]

31: main CPU, 32: main memory, 33: image processing chip, 34: bus, 35: video camera, 36: CRT, 41: rendering engine, 42: memory interface, 43: image memory, 44: programmable CRT controller (PCRTC), 45 and 46: buses, 51: texture area, 52: drawing area.
Claims (1)
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP13804398 | 1998-05-20 | | |
Publications (1)
Publication Number | Publication Date |
---|---|
TW525078B true TW525078B (en) | 2003-03-21 |
Family
ID=15212685
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW088108144A TW525078B (en) | 1998-05-20 | 1999-05-19 | Image processing device, method and providing media |
Country Status (10)
Country | Link |
---|---|
US (1) | US7957612B1 (en) |
EP (1) | EP0997844A4 (en) |
KR (1) | KR100639861B1 (en) |
CN (1) | CN1175377C (en) |
AU (1) | AU752396B2 (en) |
BR (1) | BR9906458A (en) |
CA (1) | CA2297168A1 (en) |
RU (1) | RU2000102896A (en) |
TW (1) | TW525078B (en) |
WO (1) | WO1999060523A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4061289B2 (en) * | 2004-04-27 | 2008-03-12 | 独立行政法人科学技術振興機構 | Image inspection method and apparatus |
JP4487959B2 (en) * | 2006-03-17 | 2010-06-23 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
CN100409259C (en) * | 2006-08-29 | 2008-08-06 | 中国航天时代电子公司第七七一研究所 | Scaleable large-scale 2D convolution circuit |
JP5645450B2 (en) * | 2010-04-16 | 2014-12-24 | キヤノン株式会社 | Image processing apparatus and method |
JP5949319B2 (en) * | 2012-08-21 | 2016-07-06 | 富士通株式会社 | Gaze detection apparatus and gaze detection method |
CN104091977B (en) * | 2014-05-06 | 2016-06-15 | 无锡日联科技股份有限公司 | The detection method of wound lithium-ion battery |
JP2016163131A (en) * | 2015-02-27 | 2016-09-05 | 株式会社ソニー・インタラクティブエンタテインメント | Information processing apparatus and image data distribution method |
US11625908B1 (en) * | 2022-03-30 | 2023-04-11 | Browserstack Limited | Image difference generator |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62251973A (en) * | 1986-04-25 | 1987-11-02 | Sony Corp | Curved face forming device |
JP2951663B2 (en) | 1987-08-05 | 1999-09-20 | ダイキン工業株式会社 | Texture mapping apparatus and method |
JP2629200B2 (en) | 1987-09-14 | 1997-07-09 | ソニー株式会社 | Image processing device |
EP0464907B1 (en) | 1990-06-29 | 1996-10-09 | Philips Electronics Uk Limited | Generating an image |
JP2956213B2 (en) | 1990-11-30 | 1999-10-04 | ソニー株式会社 | Image processing device |
JPH04317184A (en) * | 1991-04-17 | 1992-11-09 | Fujitsu Ltd | Picture processor |
JPH06150017A (en) * | 1992-11-13 | 1994-05-31 | Hitachi Ltd | Three-dimensional graphic display device |
JPH06161876A (en) * | 1992-11-24 | 1994-06-10 | Sony Corp | Image processing method |
US5537224A (en) * | 1992-11-24 | 1996-07-16 | Sony Corporation | Texture mapping image processing method and apparatus |
JPH07160899A (en) * | 1993-12-07 | 1995-06-23 | Fujitsu Ltd | Method and device for drawing divisionally polygon |
US5526471A (en) | 1993-12-15 | 1996-06-11 | International Business Machines Corporation | Rendering of non-opaque media using the p-buffer to account for polarization parameters |
JP4067138B2 (en) | 1994-06-07 | 2008-03-26 | 株式会社セガ | Game device |
JP2673101B2 (en) * | 1994-08-29 | 1997-11-05 | インターナショナル・ビジネス・マシーンズ・コーポレイション | Computer graphics equipment |
JPH08129647A (en) | 1994-10-28 | 1996-05-21 | Yamaha Corp | Graphics device |
GB9422089D0 (en) * | 1994-11-02 | 1994-12-21 | Philips Electronics Uk Ltd | Blurring for computer graphics |
JPH08161511A (en) | 1994-12-02 | 1996-06-21 | Sony Corp | Image generating device |
US5864639A (en) * | 1995-03-27 | 1999-01-26 | Digital Processing Systems, Inc. | Method and apparatus of rendering a video image |
US5630043A (en) * | 1995-05-11 | 1997-05-13 | Cirrus Logic, Inc. | Animated texture map apparatus and method for 3-D image displays |
GB9518695D0 (en) * | 1995-09-13 | 1995-11-15 | Philips Electronics Nv | Graphic image rendering |
JP3553249B2 (en) | 1995-12-15 | 2004-08-11 | 株式会社ソニー・コンピュータエンタテインメント | Image generating apparatus and image generating method |
US6162589A (en) | 1998-03-02 | 2000-12-19 | Hewlett-Packard Company | Direct imaging polymer fluid jet orifice |
JP3495189B2 (en) | 1996-06-19 | 2004-02-09 | 株式会社ソニー・コンピュータエンタテインメント | Drawing apparatus and drawing method |
EP0863497A1 (en) * | 1997-03-06 | 1998-09-09 | Sony Computer Entertainment Inc. | Graphic data generation device with frame buffer regions for normal and size reduced graphics data |
US6172684B1 (en) * | 1997-06-12 | 2001-01-09 | Silicon Engineering, Inc. | Method and apparatus for storing display lists of 3D primitives |
US6031550A (en) * | 1997-11-12 | 2000-02-29 | Cirrus Logic, Inc. | Pixel data X striping in a graphics processor |
JP3179392B2 (en) * | 1997-11-17 | 2001-06-25 | 日本電気アイシーマイコンシステム株式会社 | Image processing apparatus and image processing method |
JP4205573B2 (en) | 2003-12-19 | 2009-01-07 | 株式会社来夢 | Prop support and tent |
JP4317184B2 (en) | 2005-12-14 | 2009-08-19 | 寿子 村田 | Toilet seat |
- 1999
- 1999-05-19 CN CNB998007986A patent/CN1175377C/en not_active Expired - Lifetime
- 1999-05-19 TW TW088108144A patent/TW525078B/en not_active IP Right Cessation
- 1999-05-19 BR BR9906458-8A patent/BR9906458A/en not_active Application Discontinuation
- 1999-05-19 AU AU38492/99A patent/AU752396B2/en not_active Expired
- 1999-05-19 CA CA002297168A patent/CA2297168A1/en not_active Abandoned
- 1999-05-19 WO PCT/JP1999/002622 patent/WO1999060523A1/en not_active Application Discontinuation
- 1999-05-19 RU RU2000102896/09A patent/RU2000102896A/en not_active Application Discontinuation
- 1999-05-19 KR KR1020007000298A patent/KR100639861B1/en not_active IP Right Cessation
- 1999-05-19 EP EP99921180A patent/EP0997844A4/en not_active Ceased
- 1999-05-20 US US09/315,713 patent/US7957612B1/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
US7957612B1 (en) | 2011-06-07 |
AU752396B2 (en) | 2002-09-19 |
KR20010021735A (en) | 2001-03-15 |
RU2000102896A (en) | 2001-12-20 |
CN1272193A (en) | 2000-11-01 |
WO1999060523A1 (en) | 1999-11-25 |
CA2297168A1 (en) | 1999-11-25 |
KR100639861B1 (en) | 2006-10-27 |
CN1175377C (en) | 2004-11-10 |
AU3849299A (en) | 1999-12-06 |
EP0997844A1 (en) | 2000-05-03 |
EP0997844A4 (en) | 2006-12-06 |
BR9906458A (en) | 2000-09-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP3770896B2 (en) | Image processing method and apparatus | |
Guo et al. | Image retargeting using mesh parametrization | |
JP6435740B2 (en) | Data processing system, data processing method, and data processing program | |
JP4799104B2 (en) | Information processing apparatus and control method therefor, computer program, and storage medium | |
JP2007087346A (en) | Information processing device, control method therefor, computer program, and memory medium | |
TW525078B (en) | Image processing device, method and providing media | |
CN109064525A (en) | Picture format conversion method, device, equipment and storage medium | |
CN112070137A (en) | Training data set generation method, target object detection method and related equipment | |
CN113538623A (en) | Method and device for determining target image, electronic equipment and storage medium | |
JP3369734B2 (en) | Three-dimensional computer-aided design apparatus and method | |
US7961945B2 (en) | System and method for on-the-fly segmentations for image deformations | |
CN115993887A (en) | Gesture interaction control method, device, equipment and storage medium | |
CN103795925A (en) | Interactive main-and-auxiliary-picture real-time rendering photographing method | |
JP2714100B2 (en) | How to make a video | |
JP2005286799A (en) | Index image generating apparatus | |
JP4725059B2 (en) | Camera shake image correction apparatus, camera shake image correction method, and computer program | |
TWI439962B (en) | Intuitive image depth generation system and image depth generation method | |
JP2016136325A (en) | Image processing device and program | |
JP2005173940A (en) | Image processing method, image processor and computer program | |
JP2746981B2 (en) | Figure generation method | |
CN115291992A (en) | Auxiliary graphical user interface picture marking method, electronic equipment and storage medium | |
JP2746980B2 (en) | Figure generation method | |
CN117496293A (en) | Construction method of human foot gesture detection virtual data set based on SUPR model | |
JP2006071691A (en) | Camera-shake image correcting device, camera-shake image correcting method and computer program | |
CN118227062A (en) | Image processing method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
GD4A | Issue of patent certificate for granted invention patent | ||
MK4A | Expiration of patent term of an invention patent |