TW201037631A - System and method for comparing images - Google Patents

System and method for comparing images

Info

Publication number
TW201037631A
TW201037631A
Authority
TW
Taiwan
Prior art keywords
image
point
black
bold
backbone
Prior art date
Application number
TW98111932A
Other languages
Chinese (zh)
Other versions
TWI413021B (en)
Inventor
Chung-I Lee
Chien-Fa Yeh
Wei-Qing Xiao
Original Assignee
Hon Hai Prec Ind Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hon Hai Prec Ind Co Ltd filed Critical Hon Hai Prec Ind Co Ltd
Priority to TW98111932A priority Critical patent/TWI413021B/en
Publication of TW201037631A publication Critical patent/TW201037631A/en
Application granted granted Critical
Publication of TWI413021B publication Critical patent/TWI413021B/en

Abstract

The present invention provides a system for comparing images. The system extracts skeletons of a first black and white image and a second black and white image, and thickens the first black and white image and the second black and white image. The system further compares the skeleton of the first black and white image with the thickened second black and white image, and compares the skeleton of the second black and white image with the thickened first black and white image, so as to obtain differences between the two black and white images. A related method is also disclosed.

Description

VI. Description of the Invention

[Technical Field of the Invention]

[0001] The present invention relates to image processing systems and methods, and more particularly to a system and method for comparing images for differences.

[Prior Art]

[0002] The simplest way to compare two images is to compare the pixel values at each position and find the regions where they differ. In general, this method can only detect exact pixel-value differences and is unsuitable for most practical comparison tasks. When the target objects in two images are to be compared, shape features of the images can be extracted and the similarity of the two images determined from those features. Methods for extracting shape features include the Fourier transform and invariant moments. However, extracting shape features is computationally expensive, difficult to carry out when computing power is limited, and not very effective for fuzzy recognition of the target object.

[0003] Furthermore, when an original image is compared with a scanned image generated by scanning the original, various problems that arise during scanning, such as scanner quality defects, may introduce stray error points or deviations into the scanned image. A traditional comparison of the two images may then conclude that they are inconsistent, and such a result can mislead the user. For example, a user may need to compare the original file of a contract with a scan of the printed contract; if the scan contains error points caused by the scanner, a traditional image comparison will report that the two images differ, and the user may wrongly conclude that the contract has been tampered with.

[Summary of the Invention]

[0004] In view of the above, it is necessary to provide an image comparison system and method that compares images by thickening contours and extracting skeletons, so that small errors in an image do not cause the compared images to be wrongly judged inconsistent.
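For concreteness, the naive pixel-by-pixel comparison described in the background can be sketched as follows. This is only an illustration of the prior-art method and its fragility; the function name and the list-of-lists image representation are assumptions, not part of the disclosure.

```python
def pixel_diff(img_a, img_b):
    """Compare two equal-sized grayscale images pixel by pixel and
    return the (row, col) coordinates where they differ."""
    if len(img_a) != len(img_b) or any(len(ra) != len(rb)
                                       for ra, rb in zip(img_a, img_b)):
        raise ValueError("images must have the same dimensions")
    return [(r, c)
            for r, row in enumerate(img_a)
            for c, va in enumerate(row)
            if va != img_b[r][c]]

a = [[0, 255], [255, 0]]
b = [[0, 255], [0, 0]]   # one stray pixel differs, e.g. a scanner artifact
print(pixel_diff(a, b))  # [(1, 0)]
```

A single stray pixel is enough to make this method report a difference, which is exactly the failure mode paragraph [0003] describes.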
。將彩色圖像轉換為灰度圖像的轉換演算法可以為:[Reward to the secret time _, the simplest (9) branch Wei (four) image of each pixel value, find the area of the pixel value is different... In general, this method can only accurately compare the difference in pixel value, not _ For most practical image rhymes. When the target (four) in the two _ images is compared, the shape feature of the image can be extracted, and two image images are determined according to the shape feature. The methods for transferring the transition phase are Fourier transform and invariant moment. However, the amount of computation for extracting shape features tends to be relatively large, which is difficult to achieve when the computational power is insufficient, and the effect of fuzzy recognition of the target object is not ideal. [0003] The deviation is the same as the conventional image comparison method. In addition, when comparing an original image and a scanned image generated by scanning the original image less, due to the bug during scanning There are various reasons for scanner quality problems, which may cause some error points in the scanned image or when comparing, the possible comparison result is that the two images are not - again, and such a result may lead to the user's wrong judgment. . For example, the user needs to compare the original file of the contract with the scanned file after the contract is printed. If the scanning file has some error points due to the scanner problem, the traditional image comparison method will be used to obtain the two. The images are inconsistent, so the user may mistakenly believe that the contract has been tampered with. SUMMARY OF THE INVENTION 0S8III932 | Single Number A0101 Page 3 of 33 201037631 [0004] In view of the above, it is necessary to provide an image comparison system and method for comparing images by contour thickening and extracting the backbone. 
[0005] An image comparison system, running on a computer, compares the target objects of two images. The system includes: a skeleton extraction module for extracting the skeleton of the target object in black-and-white image A2 to obtain skeleton image A3, and the skeleton of the target object in black-and-white image B2 to obtain skeleton image B3; a contour thickening module for thickening the outer contour of the target object in A2 to obtain thickened image A4, and the outer contour of the target object in B2 to obtain thickened image B4; an image overlay module for overlaying thickened image A4 on skeleton image B3 to generate overlay image AB1, which reveals the parts that B2 has in excess of A2, and for overlaying thickened image B4 on skeleton image A3 to generate overlay image AB2, which reveals the parts that B2 lacks relative to A2; and a result output module for outputting the result of comparing black-and-white images A2 and B2.

[0006] An image comparison method for comparing the target objects of two images is also provided.
The method includes the steps of: extracting the skeleton of the target object in black-and-white image B2 to obtain skeleton image B3; thickening the outer contour of the target object in black-and-white image A2 to obtain thickened image A4; overlaying thickened image A4 on skeleton image B3 to generate overlay image AB1, which reveals the parts that B2 has in excess of A2; extracting the skeleton of the target object in black-and-white image A2 to obtain skeleton image A3; thickening the outer contour of the target object in black-and-white image B2 to obtain thickened image B4; overlaying thickened image B4 on skeleton image A3 to generate overlay image AB2, which reveals the parts that B2 lacks relative to A2; and outputting the result of comparing black-and-white images A2 and B2.

[0007] Compared with the prior art, the present invention is easy to implement and compares the target objects of two images accurately, without wrongly judging the images inconsistent because of small errors in one of them.

[Embodiments]

[0008] FIG. 1 is a functional module diagram of a preferred embodiment of the image comparison system 1 of the present invention. The image comparison system 1 runs on a computer and includes an image conversion module 10, a skeleton extraction module 11, a contour thickening module 12, an image overlay module 13, and a result output module 14.

[0009] The image conversion module 10 converts the color images to be compared into black-and-white images, thereby segmenting out the target object. A black-and-white image, also called a binary image, contains only the two gray levels black and white, with no intermediate transitions.
The pixel values of a black-and-white image are usually 0 or 1, where 0 represents black and 1 represents white. For convenience of explanation, the pixel value that represents the target object is called the target-object pixel value, the color of the target object is called the foreground color, and the color of the background is called the background color. FIG. 4 is a schematic diagram of a black-and-white image in which the white portion is the target object and the black portion is the background. It should be understood that in other black-and-white images the black portion may be the target object and the white portion the background.

[0010] In detail, in this embodiment, for color images A and B, the image conversion module 10 first converts A and B into grayscale images A1 and B1 using a conversion algorithm, and then binarizes A1 and B1 into black-and-white images A2 and B2 respectively. In a grayscale image, each pixel is described by a quantized gray value, usually an integer. For example, an 8-bit grayscale image has 256 gray levels, with values ranging from 0 to 255: integers from 0 to 255 describe the shades from black to white, 0 being black and 255 white. The conversion algorithm for converting a color image into a grayscale image can be:

Gray = R*0.3 + G*0.59 + B*0.11

[0011] Binarization sets a threshold: pixels whose gray value is greater than or equal to the threshold take the value 1, and pixels whose gray value is below the threshold take the value 0. Different binarization algorithms may be used depending on the target object in the image; the main ones at present are the global threshold method, the local threshold method, and the dynamic threshold method. The simplest is the global threshold method, which binarizes the whole image with a single threshold, for example 127, the midpoint of 0-255.

[0012] It should be understood that if the images to be compared are already black-and-white, the image conversion module 10 is unnecessary.

[0013] The skeleton extraction module 11 extracts the skeleton of the target object from black-and-white image A2 or B2 to obtain skeleton image A3 or B3. Extraction of the skeleton of the target object in A2 is described below in detail as an example.

[0014] In this embodiment, the skeleton extraction module 11 reads the pixel values of black-and-white image A2 row by row (or column by column). For any row (or column) that contains a run of consecutive target-object pixel values, the run is represented by a single target-object pixel value, for example the middle one, so the extracted skeleton has a width of 1. For example, suppose the target-object pixel value is 1 and the pixel values of one row of the image are 1,1,1,0,0,1,1,1,1,1,0,0,1; after skeleton extraction the row becomes 0,1,0,0,0,0,0,1,0,0,0,0,1. FIG. 5 shows the skeleton image obtained by extracting the skeleton of the target object of the black-and-white image of FIG. 4.

[0015] The contour thickening module 12 thickens the outer contour of the target object in black-and-white image A2 or B2 to obtain thickened image A4 or B4. FIG. 6(A) and FIG. 6(B) respectively show a black-and-white image and the thickened image generated by thickening the outer contour of its target object. The sub-modules of the contour thickening module 12 are shown in FIG. 2.

[0016] The image overlay module 13 overlays thickened image A4 on skeleton image B3 to generate overlay image AB1, which reveals the parts of skeleton image B3 not covered by thickened image A4, and overlays thickened image B4 on skeleton image A3 to generate overlay image AB2, which reveals the parts of skeleton image A3 not covered by thickened image B4. The former are the parts that the target object of B2 has in excess of the target object of A2; the latter are the parts that the target object of B2 lacks relative to the target object of A2. The sub-modules of the image overlay module 13 are shown in FIG. 3.

[0017] The result output module 14 generates and outputs the comparison result for black-and-white images A2 and B2 according to the processing results of the image overlay module 13: the two images are either consistent or inconsistent. Further, when A2 and B2 are inconsistent, the result output module 14 marks the parts that B2 has in excess of A2 in color on B2, and/or marks the parts that B2 lacks relative to A2 in color on A2, and displays the marked images. That is, when the target object of B2 has parts that the target object of A2 lacks, or lacks parts that A2 has, the result is that A2 and B2 are inconsistent; otherwise the result is that A2 and B2 are consistent.

[0018] FIG. 2 is a diagram of the sub-modules of the contour thickening module 12 of FIG. 1. The contour thickening module 12 includes a setting sub-module 120, a first image acquisition sub-module 121, a coordinate reading sub-module 122, a first pixel-value reading sub-module 123, a first judging sub-module 124, a point acquisition sub-module 125, and a first coloring sub-module 126.

[0019] The setting sub-module 120 defines a thickening matrix, which specifies the points to be colored with the foreground color, i.e., the color of the target object. The thickening matrix may be an X-order matrix, such as the 3rd-order matrix shown in FIG. 7(A). The value at the center of the matrix is 1, representing the current point of the thickening operation. The other positions of the matrix hold 0 or 1, where 1 means the corresponding neighbor is to be colored with the foreground color and 0 means it is not. As is well known, every point of a two-dimensional image has eight adjacent points: upper-right, upper, upper-left, right, left, lower-right, lower, and lower-left.
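As an illustration, the effect of a FIG. 7(A)-style matrix, in which the four axial neighbors of the current point are set to 1, is to color the four points above, below, left of, and right of every target pixel. A minimal sketch follows; the function name, the neighbor-offset list, and the use of 1 as the foreground value are illustrative assumptions, not the literal implementation of the embodiment.

```python
# Axial (dx, dy) offsets corresponding to the 1-entries of the
# 3rd-order thickening matrix of FIG. 7(A): right, left, up, down.
NEIGHBORS = [(1, 0), (-1, 0), (0, -1), (0, 1)]

def thicken(image, fg=1):
    """Thicken the target object of a binary image by coloring the
    four axial neighbors of every foreground pixel."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # copy: thickened result
    for y in range(h):
        for x in range(w):
            if image[y][x] != fg:
                continue
            for dx, dy in NEIGHBORS:
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h and image[ny][nx] != fg:
                    out[ny][nx] = fg  # color neighbor with the foreground
    return out

img = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(thicken(img))  # [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```

A single target pixel grows into a cross, which is the behavior the matrix of FIG. 7(A) encodes; a matrix with 1 in all eight neighbor positions would instead grow it into a full 3x3 block.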
In the 3rd-order thickening matrix of FIG. 7(A), the positions above, below, left of, and right of the center hold the value 1 and the four diagonal positions hold 0, meaning that the four neighbors above, below, left of, and right of the current point are colored with the color of the target object.

[0020] The setting sub-module 120 also sets the coordinates of the thickening matrix. In detail, if the point at the center of the matrix has coordinates (x, y), then its eight neighbors upper-right, upper, upper-left, right, left, lower-right, lower, and lower-left have coordinates (x+1, y-1), (x, y-1), (x-1, y-1), (x+1, y), (x-1, y), (x+1, y+1), (x, y+1), and (x-1, y+1) respectively, as shown in FIG. 7(B).

[0021] The first image acquisition sub-module 121 acquires a first image; in this embodiment the first image is black-and-white image A2 or B2.

[0022] The coordinate reading sub-module 122 reads the coordinates of each point of each row of the first image.

[0023] The first pixel-value reading sub-module 123 reads the pixel value of each point of each row of the first image.

[0024] The first judging sub-module 124 judges whether the pixel value of the n-th point of the N-th row of the first image equals the target-object pixel value of the first image. It also judges whether the n-th point is the last point of the N-th row, and whether the N-th row is the last row of the first image.

[0025] When the pixel value of the n-th point of the N-th row of the first image equals the target-object pixel value, the point acquisition sub-module 125 retrieves the Y points adjacent to that point according to the thickening matrix and its coordinates. For example, if the point has coordinates (x, y), then according to the matrix of FIG. 7(A) and the coordinates of FIG. 7(B), the point acquisition sub-module 125 retrieves the four points (x, y-1), (x, y+1), (x-1, y), and (x+1, y).

[0026] The first coloring sub-module 126 judges whether any of the Y retrieved points has a pixel value different from the target-object pixel value of the first image, and colors any such point with the color of the target object, thereby thickening the outer contour of the target object of the first image to generate the thickened image.

[0027] FIG. 3 is a diagram of the sub-modules of the image overlay module 13 of FIG. 1. The image overlay module 13 includes a second image acquisition sub-module 130, a second pixel-value reading sub-module 131, a second judging sub-module 132, an overlay sub-module 133, a second coloring sub-module 134, and an image generation sub-module 135.

[0028] The second image acquisition sub-module 130 acquires the second and third images to be overlaid. In this embodiment both are black-and-white images whose target object is black on a white background. The second and third images are thickened image A4 and skeleton image B3, or thickened image B4 and skeleton image A3, respectively.

[0029] The second pixel-value reading sub-module 131 reads the pixel value of each point of each row of the second and third images. In this embodiment the pixel value is 0 or 1, where 0 represents black and 1 represents white.

[0030] The second judging sub-module 132 judges whether the pixel value of the n-th point of the N-th row of the second image equals that of the corresponding point of the third image. When they differ, it further judges whether the pixel value of that point of the second image is 0, i.e., whether the point is black. It also judges whether the n-th point is the last point of the N-th row, and whether the N-th row is the last row of the second and third images.

[0031] When the pixel values of the corresponding points of the second and third images are the same, or when they differ but the point of the second image is 0, the overlay sub-module 133 covers the point of the third image with the corresponding point of the second image.

[0032] When the pixel values of the corresponding points differ and the point of the second image is not 0, the second coloring sub-module 134 colors the corresponding point of the third image so that the extra point stands out clearly.

[0033] The image generation sub-module 135 generates the overlay image of the second image over the third image, namely overlay image AB1 or overlay image AB2.

[0034] FIG. 8 is a flowchart of a preferred embodiment of the image comparison method of the present invention.

[0035] In step S10, the image conversion module 10 converts the color images A and B to be compared into grayscale images A1 and B1 respectively.
The conversion algorithm may be Gray = R*0.3 + G*0.59 + B*0.11.
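Steps S10 and S11 (grayscale conversion followed by global-threshold binarization) can be sketched together as follows. This is a minimal sketch: the function names, the sample pixel values, and rounding to the nearest integer are illustrative assumptions.

```python
def to_gray(r, g, b):
    # Gray = R*0.3 + G*0.59 + B*0.11, quantized to an integer in 0-255
    return round(r * 0.3 + g * 0.59 + b * 0.11)

def binarize(gray_image, threshold=127):
    # Global threshold method: gray >= threshold -> 1 (white), else 0 (black)
    return [[1 if v >= threshold else 0 for v in row] for row in gray_image]

gray = [[to_gray(255, 255, 255), to_gray(200, 10, 30)],
        [to_gray(0, 0, 0),       to_gray(120, 130, 140)]]
print(gray)            # [[255, 69], [0, 128]]
print(binarize(gray))  # [[1, 0], [0, 1]]
```

With the single global threshold of 127, the whole image is binarized at once; the local and dynamic threshold methods mentioned in [0011] would instead vary the threshold across image regions.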
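The run-collapsing skeleton extraction of paragraph [0014] (and of step S12 below) can be sketched per row as follows. The function name and the use of 1 as the target-object pixel value are illustrative assumptions; the example row is the one given in [0014].

```python
def skeleton_row(row, fg=1):
    """Collapse every run of consecutive foreground pixels in a row
    to its single middle pixel, giving a skeleton of width 1."""
    out = [0] * len(row)
    i = 0
    while i < len(row):
        if row[i] == fg:
            j = i
            while j < len(row) and row[j] == fg:
                j += 1
            out[(i + j - 1) // 2] = fg  # middle pixel of the run [i, j)
            i = j
        else:
            i += 1
    return out

row = [1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1]
print(skeleton_row(row))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
```

Applying this to every row (or every column) of the black-and-white image yields the width-1 skeleton image described in the embodiment.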

[0036] In step S11, the image conversion module 10 binarizes grayscale images A1 and B1 into black-and-white images A2 and B2 respectively. Binarization sets a threshold: pixels whose gray value is greater than or equal to the threshold take the value 1, and pixels whose gray value is below the threshold take the value 0. Different binarization algorithms may be used depending on the target object in the image; the main ones are the global, local, and dynamic threshold methods. The simplest is the global threshold method, which binarizes the whole image with a single threshold, for example 127, the midpoint of 0-255.

[0037] Note that if the images to be compared are already black-and-white, steps S10 and S11 may be omitted.

[0038] In step S12, the skeleton extraction module 11 extracts the skeleton of the target object from black-and-white image B2 to obtain skeleton image B3. In detail, the module reads the pixel values of B2 row by row (or column by column); any run of consecutive target-object pixel values in a row (or column) is represented by a single target-object pixel value, for example the middle one, so the extracted skeleton has a width of 1. For example, if the target-object pixel value is 1 and the pixel values of one row are 1,1,1,0,0,1,1,1,1,1,0,0,1, the row becomes 0,1,0,0,0,0,0,1,0,0,0,0,1 after skeleton extraction.

[0039] In step S13, the contour thickening module 12 thickens the outer contour of the target object in black-and-white image A2 to generate thickened image A4. The details of this step are shown in FIG. 9.

[0040] In step S14, the image overlay module 13 overlays thickened image A4 on skeleton image B3 to generate overlay image AB1, which reveals the parts of skeleton image B3 not covered by thickened image A4. The details of this step are shown in FIG. 10.

[0041] In step S15, the skeleton extraction module 11 extracts the skeleton of the target object from black-and-white image A2 to obtain skeleton image A3, in the same way as in step S12.

[0042] In step S16, the contour thickening module 12 thickens the outer contour of the target object in black-and-white image B2 to generate thickened image B4, as detailed in FIG. 9.

[0043] In step S17, the image overlay module 13 overlays thickened image B4 on skeleton image A3 to generate overlay image AB2, which reveals the parts of skeleton image A3 not covered by thickened image B4, as detailed in FIG. 10.

[0044] In step S18, the result output module 14 outputs the result of comparing black-and-white images A2 and B2. Further, when the comparison result is that A2 and B2 are inconsistent, the result output module 14 marks the parts that B2 has in excess of A2 in color on B2, marks the parts that B2 lacks relative to A2 in color on A2, and displays the marked images.

[0045] FIG. 9 is a detailed flowchart of steps S13 and S16 of FIG. 8.

[0046] In step S100, the setting sub-module 120 defines a thickening matrix and sets its coordinates; the thickening matrix may be an X-order matrix.

[0047] In step S101, the first image acquisition sub-module 121 acquires the first image, which in this embodiment is black-and-white image A2 or B2.

[0048] In step S102, the coordinate reading sub-module 122 reads the coordinates of all points of the N-th row of the first image, with N = 1 initially.

[0049] In step S103, the first pixel-value reading sub-module 123 reads the pixel values of all points of the N-th row of the first image.

[0050] In step S104, the first judging sub-module 124 judges whether the pixel value of the n-th point of the N-th row equals the target-object pixel value of the first image, with n = 1 initially. If the values are equal, the flow proceeds to step S105; otherwise it proceeds to step S108.

[0051] In step S105, the point acquisition sub-module 125 retrieves the Y points adjacent to the n-th point of the N-th row according to the thickening matrix and its coordinates. For example, if that point has coordinates (x, y), then according to FIG. 7(A) and FIG. 7(B) the sub-module retrieves the four points (x, y-1), (x, y+1), (x-1, y), and (x+1, y).

[0052] In step S106, the first coloring sub-module 126 judges whether any of the Y points retrieved in step S105 has a pixel value different from the target-object pixel value of the first image; if such a point exists, the flow proceeds to step S107, and otherwise to step S108.
[0053] In step S107, the first coloring sub-module 126 colors that point with the color of the target object, thereby thickening the outer contour of the target object of the first image.

[0054] In step S108, the first judging sub-module 124 judges whether the n-th point is the last point of the N-th row. If not, the flow returns to step S104 with n = n + 1; if so, the flow proceeds to step S109.

[0055] In step S109, the first judging sub-module 124 judges whether the N-th row is the last row of the first image. If not, the flow returns to step S102 with N = N + 1; if so, the flow ends.

[0056] FIG. 10 is a detailed flowchart of steps S14 and S17 of FIG. 8.

[0057] In step S200, the second image acquisition sub-module 130 acquires the second and third images to be overlaid. In this embodiment both are black-and-white images whose target object is black on a white background; they are thickened image A4 and skeleton image B3, or thickened image B4 and skeleton image A3, respectively.

[0058] In step S201, the second pixel-value reading sub-module 131 reads the pixel values of all points of the N-th row of the second and third images, with N = 1 initially. In this embodiment each pixel value is 0 (black) or 1 (white).

[0059] In step S202, the second judging sub-module 132 judges whether the pixel value of the n-th point of the N-th row of the second image equals that of the third image, with n = 1 initially. If they are equal, the flow proceeds to step S204; otherwise it proceeds to step S203.

[0060] In step S203, the second judging sub-module 132 further judges whether the pixel value of that point of the second image is 0, i.e., black. If so, the flow proceeds to step S204; otherwise it proceeds to step S205.

[0061] In step S204, the overlay sub-module 133 covers the point of the third image with the corresponding point of the second image.

[0062] In step S205, the second coloring sub-module 134 colors the corresponding point of the third image so that the extra point stands out clearly.

[0063] In step S206, the second judging sub-module 132 judges whether the n-th point is the last point of the N-th row. If not, the flow returns to step S202 with n = n + 1; if so, the flow proceeds to step S207.

[0064] In step S207, the second judging sub-module 132 judges whether the N-th row is the last row of the second and third images. If not, the flow returns to step S201 with N = N + 1; if so, the flow proceeds to step S208.

[0065] In step S208, the image generation sub-module 135 generates the overlay image of the second image over the third image, namely overlay image AB1 or overlay image AB2.

[0066] The above are merely preferred embodiments of the present invention, which have already achieved wide practical effect; all equivalent changes or modifications made without departing from the spirit disclosed by the present invention shall be covered by the scope of the following claims.

[Brief Description of the Drawings]

[0067] FIG. 1 is a functional module diagram of a preferred embodiment of the image comparison system of the present invention.

[0068] FIG. 2 is a diagram of the sub-modules of the contour thickening module of FIG. 1.

[0069] FIG. 3 is a diagram of the sub-modules of the image overlay module of FIG. 1.

[0070] FIG. 4 is a schematic diagram of a black-and-white image.

[0071] FIG. 5 is a schematic diagram of the skeleton image obtained from the black-and-white image of FIG. 4.

[0072] FIG. 6(A) and FIG. 6(B) are respectively a black-and-white image and the thickened image generated by thickening the outer contour of the target object of that image.

[0073] FIG. 7(A) and FIG. 7(B) show a 3rd-order thickening matrix and the coordinates of that matrix.

[0074] FIG. 8 is a flowchart of a preferred embodiment of the image comparison method of the present invention.

[0075] FIG. 9 is a detailed flowchart of steps S13 and S16 of FIG. 8.

[0076] FIG. 10 is a detailed flowchart of steps S14 and S17 of FIG. 8.

[Description of Main Element Symbols]

[0077] Image comparison system 1

[0078] Image conversion module 10

[0079] Skeleton extraction module 11

[0080] Contour thickening module 12

[0081] Setting sub-module 120

[0082] First image acquisition sub-module 121

[0083] Coordinate reading sub-module 122
[0084] First pixel-value reading sub-module 123

[0085] First judging sub-module 124

[0086] Point acquisition sub-module 125

[0087] First coloring sub-module 126

[0088] Image overlay module 13

[0089] Second image acquisition sub-module 130

[0090] Second pixel-value reading sub-module 131

[0091] Second judging sub-module 132

[0092] Overlay sub-module 133

[0093] Second coloring sub-module 134

[0094] Image generation sub-module 135

[0095] Result output module 14
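The FIG. 10 overlay logic (steps S200-S208) can be sketched as follows, assuming black (0) marks the target object as stated in paragraph [0028]. The function name and return shape are illustrative assumptions; coloring an extra point is represented here simply by recording its coordinates.

```python
def overlay(second, third, fg=0):
    """Overlay `second` (a thickened image) on `third` (a skeleton
    image), both binary with black (0) as the target object.
    A pixel of `third` is covered by `second` when the two pixels are
    equal or when the `second` pixel is black (S202-S204); otherwise
    the `third` pixel is an uncovered skeleton point and is marked
    (S205).  Returns (overlay_image, marked_points)."""
    marked = []
    out = [row[:] for row in third]
    for y, row in enumerate(second):
        for x, v in enumerate(row):
            if v == third[y][x] or v == fg:
                out[y][x] = v              # S204: cover third with second
            else:
                marked.append((x, y))      # S205: extra point, to be colored
    return out, marked

# The skeleton point at x=2 lies outside the thickened object -> marked.
second = [[0, 0, 1]]   # thickened image: black at x=0,1
third  = [[0, 1, 0]]   # skeleton image: black at x=0,2
_, extra = overlay(second, third)
print(extra)  # [(2, 0)]
```

An empty `marked` list for both overlay directions corresponds to the "consistent" result of step S18; any marked point corresponds to "inconsistent", with the marked coordinates being exactly the points the result output module would highlight in color.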
For example, the plurality of consecutive target object pixel values may be represented by the middle one of the plurality of consecutive target object pixel values; that is, the extracted target object has a width of 1. For example, suppose the pixel value 1 is the target object pixel value in a black and white image, and the pixel values of all the points in one row of that image are 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1. After the image backbone is extracted, the pixel values of that row are 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1.

[0039] In step S13, the outline thickening module 12 thickens the outer contour of the target object in the black and white image A2 to generate a bold image A4. The detailed flow of this step is shown in FIG. 9.

In step S14, the image overlay module 13 overlays the bold image A4 on the backbone image B3 to generate an overlay image AB1, so as to obtain the portions that the black and white image B2 has in excess of the black and white image A2. The detailed flow of this step is shown in FIG. 10.

In step S15, the backbone extraction module 11 extracts the target object backbone from the black and white image A2 to obtain a backbone image A3. The backbone image A3 can be extracted in the same way as the backbone image B3 in step S12.

In step S16, the outline thickening module 12 thickens the outer contour of the target object in the black and white image B2 to generate a bold image B4. The detailed flow of this step is shown in FIG. 9.

In step S17, the image overlay module 13 overlays the bold image B4 on the backbone image A3 to generate an overlay image AB2, so as to obtain the portions that the black and white image B2 lacks compared with the black and white image A2. The detailed flow of this step is shown in FIG. 10.
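As a minimal illustration of the row-wise backbone extraction of step S12, the Python sketch below collapses each run of target pixels in one row to its single middle pixel. The function name is illustrative, and the lower-middle choice for even-length runs is an assumption — the patent only requires "a middle" pixel of the run.

```python
def extract_backbone_row(row, target=1, background=0):
    """Collapse each run of consecutive target pixels in one row
    to its single middle pixel (hypothetical helper; the patent
    does not name its routines)."""
    out = [background] * len(row)
    i = 0
    while i < len(row):
        if row[i] == target:
            j = i
            while j < len(row) and row[j] == target:
                j += 1            # run covers indices i .. j-1
            out[i + (j - 1 - i) // 2] = target  # keep the middle pixel
            i = j
        else:
            i += 1
    return out

print(extract_backbone_row([1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1]))
# → [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
```

Applied to the row given in the text, this reproduces the patent's own example output.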
In step S18, the result output module 14 outputs the comparison result of the black and white images A2 and B2. Further, when the comparison result is that A2 and B2 are inconsistent, the result output module 14 also marks in color, on the black and white image B2, the portions that B2 has in excess of A2, marks in color, on the black and white image A2, the portions that B2 lacks compared with A2, and displays the black and white images A2 and B2 so labeled.

Referring to FIG. 9, a detailed implementation flowchart of steps S13 and S16 in FIG. 8 is shown.

In step S100, the setting sub-module 120 defines a bold matrix and sets the matrix coordinates of the bold matrix. The bold matrix may be an X-order matrix; FIG. 7 shows a 3rd-order example.

In step S101, the first image acquisition sub-module 121 acquires the first image. In this embodiment, the first image is the black and white image A2 or the black and white image B2.

In step S102, the coordinate value reading sub-module 122 reads the coordinate values of all points of the Nth row of the first image, with N=1 initially. In step S103, the first pixel value acquisition sub-module 123 reads the pixel values of all points of the Nth row of the first image.

In step S104, the first judging sub-module 124 determines whether the pixel value of the nth point of the Nth row of the first image equals the target object pixel value of the first image, with n=1 initially. If the pixel values are the same, the flow advances to step S105. Otherwise, the flow proceeds to step S108.

In step S105, the point acquisition sub-module 125 takes, from the first image, the Y points adjacent to the nth point of the Nth row according to the bold matrix defined above and the matrix coordinates of the bold matrix.
For example, suppose the coordinate value of the nth point of the Nth row in the first image is (x, y). According to the bold matrix shown in FIG. 7(A) and the matrix coordinates shown in FIG. 7(B), the point acquisition sub-module 125 obtains the four points whose coordinate values are (x, y-1), (x, y+1), (x-1, y) and (x+1, y).

In step S106, the first coloring sub-module 126 determines whether any of the Y points obtained above has a pixel value different from the target object pixel value of the first image. If such a point exists among the Y points, the flow proceeds to step S107. Otherwise, the flow proceeds to step S108.

In step S107, the first coloring sub-module 126 colors that point with the color of the target object in the first image, so as to thicken the outer contour of the target object of the first image.

In step S108, the first judging sub-module 124 determines whether the nth point is the last point of the Nth row. If it is not the last point, the flow returns to step S104 with n=n+1. If it is the last point, the flow advances to step S109.

In step S109, the first judging sub-module 124 determines whether the Nth row is the last row of the first image. If it is not the last row, the flow returns to step S102 with N=N+1. If it is the last row, the process ends.

Referring to FIG. 10, a detailed implementation flowchart of steps S14 and S17 in FIG. 8 is shown. In step S200, the second image acquisition sub-module 130 acquires the second image and the third image on which the image overlay is to be performed. In this embodiment, the second image and the third image are both black and white images; the target object color is black, and the background color is white.
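The thickening pass of steps S100–S109 behaves like a one-pixel morphological dilation with the 4-neighbourhood given by the 3rd-order bold matrix of FIG. 7: every neighbour of a target pixel that is not itself a target pixel is recoloured with the target colour. A hedged Python sketch — the function name, the border handling, and the colour convention (0 = black target, 1 = white background) are assumptions, not fixed by the patent:

```python
def thicken(image, target=0, background=1):
    """One-pixel contour thickening (steps S100–S109, sketched).
    For every target pixel, its four bold-matrix neighbours
    (x, y-1), (x, y+1), (x-1, y), (x+1, y) are recoloured with
    the target colour if they are not target pixels already."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]       # colour into a copy
    for x in range(h):
        for y in range(w):
            if image[x][y] != target:
                continue
            for nx, ny in ((x, y - 1), (x, y + 1), (x - 1, y), (x + 1, y)):
                if 0 <= nx < h and 0 <= ny < w and image[nx][ny] != target:
                    out[nx][ny] = target  # step S107: colour the neighbour
    return out

grid = [[1, 1, 1],
        [1, 0, 1],
        [1, 1, 1]]
print(thicken(grid))  # → [[1, 0, 1], [0, 0, 0], [1, 0, 1]]
```

A single black pixel thus grows into a cross, which is the outward contour growth the bold matrix is meant to produce.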
The second image and the third image are respectively the bold image A4 and the backbone image B3, or respectively the bold image B4 and the backbone image A3.

In step S201, the second pixel value reading sub-module 131 reads the pixel values of all points of the Nth row of the second image and of the third image, with N=1 initially. In this embodiment, each pixel value is 0 or 1, where 0 represents black and 1 represents white.

In step S202, the second judging sub-module 132 determines whether the pixel value of the nth point of the Nth row of the second image is the same as that of the third image, with n=1 initially. If the pixel values are the same, the flow advances to step S204. Otherwise, the flow proceeds to step S203.

In step S203, the second judging sub-module 132 further determines whether the pixel value of the nth point of the Nth row of the second image is 0, that is, whether that point is black. If the pixel value is 0, the flow advances to step S204. Otherwise, the flow proceeds to step S205.

In step S204, the overlay sub-module 133 covers the nth point of the Nth row of the third image with the nth point of the Nth row of the second image.

In step S205, the coloring sub-module 134 colors the nth point of the Nth row of the third image, so that the extra point is shown more clearly.

In step S206, the second judging sub-module 132 determines whether the nth point is the last point of the Nth row. If it is not the last point, the flow returns to step S202 with n=n+1. If it is the last point, the flow advances to step S207.

In step S207, the second judging sub-module 132 determines whether the Nth row is the last row of the second image and the third image.
If it is not the last row, the flow returns to step S201 with N=N+1. If it is the last row, the flow advances to step S208.

In step S208, the image generation sub-module 135 generates the overlay image in which the second image covers the third image, that is, the overlay image AB1 or the overlay image AB2.

The above are only preferred embodiments of the present invention and have broad practical utility; any equivalent changes or modifications made without departing from the spirit disclosed by the present invention shall fall within the scope of the following claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a functional block diagram of a preferred embodiment of the image comparison system of the present invention.
FIG. 2 is a sub-function module diagram of the outline thickening module of FIG. 1.
FIG. 3 is a sub-function module diagram of the image overlay module of FIG. 1.
FIG. 4 is a schematic diagram of a black and white image.
FIG. 5 is a schematic diagram of a backbone image obtained from the black and white image shown in FIG. 4.
FIG. 6(A) and FIG. 6(B) are schematic diagrams of a black and white image and of the bold image generated after the outer contour of the target object in that black and white image is thickened.
FIG. 7(A) and FIG. 7(B) illustrate a 3rd-order bold matrix and the matrix coordinates of that bold matrix.
FIG. 8 is an implementation flowchart of a preferred embodiment of the image comparison method of the present invention.
FIG. 9 is a detailed implementation flowchart of steps S13 and S16 in FIG. 8.
FIG. 10 is a detailed implementation flowchart of steps S14 and S17 in FIG. 8.
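The overlay flow of steps S200–S208 condenses to a simple per-pixel rule: where the thickened image's pixel matches the backbone's, or is black, it covers the backbone pixel (step S204); where the backbone has a black point that the thickened image lacks, that point is marked in colour (step S205). A sketch assuming 0 = black and 1 = white, with an "X" marker standing in for the highlight colour; all names here are illustrative:

```python
def overlay(bold, backbone, black=0, marker="X"):
    """Overlay a thickened image on a backbone image
    (steps S200–S208, sketched as a per-pixel pass)."""
    out = []
    for row_b, row_k in zip(bold, backbone):
        out_row = []
        for p_b, p_k in zip(row_b, row_k):
            if p_b == p_k or p_b == black:
                out_row.append(p_b)      # step S204: bold pixel covers
            else:
                out_row.append(marker)   # step S205: highlight extra point
        out.append(out_row)
    return out

# Bold image lacks the middle black point but has an extra one at the end:
print(overlay([[0, 0, 1]], [[0, 1, 0]]))  # → [[0, 0, 'X']]
```

The marked points are exactly the black points of the backbone that fall outside the thickened image — the differences the comparison is meant to surface.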
[Main Component Symbol Description]
[0077] Image comparison system 1
[0078] Image conversion module 10
[0079] Backbone extraction module 11
[0080] Outline thickening module 12
[0081] Setting sub-module 120
[0082] First image acquisition sub-module 121
[0083] Coordinate value reading sub-module 122
[0084] First pixel value reading sub-module 123
[0085] First judging sub-module 124
[0086] Point acquisition sub-module 125
[0087] First coloring sub-module 126
[0088] Image overlay module 13
[0089] Second image acquisition sub-module 130
[0090] Second pixel value reading sub-module 131
[0091] Second judging sub-module 132
[0092] Overlay sub-module 133
[0093] Second coloring sub-module 134
[0094] Image generation sub-module 135
[0095] Result output module 14

Claims (1)

1. An image comparison system, run on a computer, for comparing differences between target objects in images, the system comprising:
a backbone extraction module for extracting the backbone of the target object in a black and white image A2 to obtain a backbone image A3, and extracting the backbone of the target object in a black and white image B2 to obtain a backbone image B3;
an outline thickening module for thickening the outer contour of the target object in the black and white image A2 to obtain a bold image A4, and thickening the outer contour of the target object in the black and white image B2 to obtain a bold image B4;
an image overlay module for overlaying the bold image A4 on the backbone image B3 to generate an overlay image AB1, so as to obtain the portions that the black and white image B2 has in excess of the black and white image A2, and overlaying the bold image B4 on the backbone image A3 to generate an overlay image AB2, so as to obtain the portions that the black and white image B2 lacks compared with the black and white image A2; and
a result output module for generating and outputting the comparison result of the black and white images A2 and B2 according to the processing results of the image overlay module.
2. The image comparison system of claim 1, further comprising:
an image conversion module for converting an image to be compared into a black and white image when the image to be compared is not a black and white image.
3.
The image comparison system of claim 1, wherein the outline thickening module comprises:
a setting sub-module for defining a bold matrix and setting the matrix coordinates of the bold matrix, wherein the bold matrix defines the points that need to be colored with the color of the target object in the black and white image A2 or B2;
a first image acquisition sub-module for acquiring the black and white image A2 or B2;
a coordinate value reading sub-module for reading the coordinate value of each point of each row of the black and white image A2 or B2;
a first pixel value acquisition sub-module for reading the pixel value of each point of each row of the black and white image A2 or B2;
a first judging sub-module for determining whether the pixel value of the nth point of the Nth row of the black and white image A2 or B2 equals the target object pixel value of that image, whether the nth point is the last point of the Nth row, and whether the Nth row is the last row of the black and white image A2 or B2;
a point acquisition sub-module for taking, from the black and white image A2 or B2, the Y points adjacent to the nth point of the Nth row according to the bold matrix defined above and the matrix coordinates of the bold matrix, when the pixel value of the nth point of the Nth row equals the target object pixel value of that image; and
a first coloring sub-module for determining whether any of the Y points obtained above has a pixel value different from the target object pixel value of the black and white image A2 or B2, and, when such a point exists, coloring that point with the color of the target object in the black and white image A2 or B2.
4. The image comparison system of claim 1, wherein the image overlay module comprises:
a second image acquisition sub-module for acquiring the bold image and the backbone image on which the image overlay is to be performed, the bold image and the backbone image being the bold image A4 and the backbone image B3, or the bold image B4 and the backbone image A3;
a second pixel value reading sub-module for reading the pixel value of each point of each row of the bold image and of the backbone image;
a second judging sub-module for determining whether the pixel value of the nth point of the Nth row of the bold image is the same as that of the backbone image, whether the nth point is the last point of the Nth row, and whether the Nth row is the last row of the bold image and the backbone image;
an overlay sub-module for covering the nth point of the Nth row of the backbone image with the nth point of the Nth row of the bold image when the two pixel values are the same, or when they differ but the nth point of the Nth row of the bold image is black;
a second coloring sub-module for coloring the nth point of the Nth row of the backbone image when the two pixel values differ and the nth point of the Nth row of the bold image is white; and
an image generation sub-module for generating the overlay image obtained by covering the backbone image with the bold image, wherein the generated overlay image is AB1 if the bold image and the backbone image are the bold image A4 and the backbone image B3, and the generated overlay image is AB2 if the bold image and the backbone image are the bold image B4 and the backbone image A3.
5. An image comparison method for comparing differences between target objects in two images, the method comprising:
a first backbone extraction step of extracting the backbone of the target object in a black and white image B2 to obtain a backbone image B3;
a first outline thickening step of thickening the outer contour of the target object in a black and white image A2 to obtain a bold image A4;
a first image overlay step of overlaying the bold image A4 on the backbone image B3 to generate an overlay image AB1, so as to obtain the portions that the black and white image B2 has in excess of the black and white image A2;
a second backbone extraction step of extracting the backbone of the target object in the black and white image A2 to obtain a backbone image A3;
a second outline thickening step of thickening the outer contour of the target object in the black and white image B2 to obtain a bold image B4;
a second image overlay step of overlaying the bold image B4 on the backbone image A3 to generate an overlay image AB2, so as to obtain the portions that the black and white image B2 lacks compared with the black and white image A2; and
a result output step of outputting the comparison result of the black and white images A2 and B2.
6. The image comparison method of claim 5, wherein, when the images to be compared are color images A and B, the method further comprises:
converting the color images A and B to be compared into grayscale images A1 and B1 respectively by a conversion algorithm;
setting a threshold; and
setting to 1 the value of each pixel of the grayscale images A1 and B1 whose gray value is greater than or equal to the threshold, and setting to 0 the value of each pixel whose gray value is less than the threshold, thereby generating the black and white images A2 and B2.
7. The image comparison method of claim 6, wherein the threshold is the median 127 of the range 0 to 255.
8. The image comparison method of claim 5, wherein the first backbone extraction step or the second backbone extraction step comprises:
extracting the pixel value of each point of the black and white image A2 or B2 row by row or column by column; and
when a row or column contains a plurality of consecutive target object pixel values, representing the plurality of consecutive target object pixel values by a single target object pixel value.
9. The image comparison method of claim 5, wherein the first outline thickening step or the second outline thickening step comprises:
(A) defining a bold matrix and setting the matrix coordinates of the bold matrix, wherein the bold matrix defines the points that need to be colored with the color of the target object in the black and white image A2 or B2;
(B) acquiring the black and white image A2 or B2;
(C) reading the coordinate value of each point of the Nth row of the black and white image A2 or B2, with N=1 initially;
(D) reading the pixel value of each point of the Nth row of the black and white image A2 or B2;
(E) determining whether the pixel value of the nth point of the Nth row of the black and white image A2 or B2 equals the target object pixel value of that image, with n=1 initially;
(F) when the pixel value of the nth point of the Nth row equals the target object pixel value of that image, taking from the black and white image A2 or B2 the Y points adjacent to the nth point of the Nth row according to the bold matrix defined above and the matrix coordinates of the bold matrix;
(G) when any of the Y points obtained above has a pixel value different from the target object pixel value of the black and white image A2 or B2, coloring that point with the color of the target object in the black and white image A2 or B2;
(H) determining whether the nth point is the last point of the Nth row, wherein if the nth point is not the last point the flow returns to step (E) with n=n+1, and if the nth point is the last point the flow proceeds to step (I); and
(I) determining whether the Nth row is the last row of the black and white image A2 or B2, wherein if it is not the last row the flow returns to step (C) with N=N+1.
10. The image comparison method of claim 5, wherein the first image overlay step or the second image overlay step comprises:
(a) acquiring the bold image and the backbone image on which the image overlay is to be performed, the bold image and the backbone image being the bold image A4 and the backbone image B3, or the bold image B4 and the backbone image A3;
(b) reading the pixel value of each point of the Nth row of the bold image and of the backbone image, with N=1 initially;
(c) determining whether the pixel value of the nth point of the Nth row of the bold image is the same as that of the backbone image, with n=1 initially;
(d) when the pixel value of the nth point of the Nth row of the bold image is the same as that of the backbone image, or when the two pixel values differ but the nth point of the Nth row of the bold image is black, covering the nth point of the Nth row of the backbone image with the nth point of the Nth row of the bold image;
(e) when the two pixel values differ and the nth point of the Nth row of the bold image is white, coloring the nth point of the Nth row of the backbone image;
(f) determining whether the nth point is the last point of the Nth row, wherein if it is not the last point the flow returns to step (c) with n=n+1, and if it is the last point the flow proceeds to step (g);
(g) determining whether the Nth row is the last row of the bold image and the backbone image, wherein if it is not the last row the flow returns to step (b) with N=N+1, and if it is the last row the flow proceeds to step (h); and
(h) generating the overlay image in which the bold image covers the backbone image.
TW98111932A 2009-04-10 2009-04-10 System and method for comparing images TWI413021B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98111932A TWI413021B (en) 2009-04-10 2009-04-10 System and method for comparing images

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98111932A TWI413021B (en) 2009-04-10 2009-04-10 System and method for comparing images

Publications (2)

Publication Number Publication Date
TW201037631A true TW201037631A (en) 2010-10-16
TWI413021B TWI413021B (en) 2013-10-21

Family

ID=44856772

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98111932A TWI413021B (en) 2009-04-10 2009-04-10 System and method for comparing images

Country Status (1)

Country Link
TW (1) TWI413021B (en)

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3037818B2 (en) * 1992-02-28 2000-05-08 株式会社ハドソン A method for correcting color unevenness in color images
US5721788A (en) * 1992-07-31 1998-02-24 Corbis Corporation Method and system for digital image signatures
US6636635B2 (en) * 1995-11-01 2003-10-21 Canon Kabushiki Kaisha Object extraction method, and image sensing apparatus using the method
US7187785B2 (en) * 2001-08-28 2007-03-06 Nippon Telegraph And Telephone Corporation Image processing method and apparatus
JP2007149837A (en) * 2005-11-25 2007-06-14 Tokyo Seimitsu Co Ltd Device, system, and method for inspecting image defect
US7630520B2 (en) * 2006-07-31 2009-12-08 Canadian Bank Note Company, Limited Method and system for document comparison using cross plane comparison
JP4362528B2 (en) * 2007-09-10 2009-11-11 シャープ株式会社 Image collation apparatus, image collation method, image data output processing apparatus, program, and recording medium

Also Published As

Publication number Publication date
TWI413021B (en) 2013-10-21

Similar Documents

Publication Publication Date Title
CN110678901B (en) Information processing apparatus, information processing method, and computer-readable storage medium
CN110008969B (en) Method and device for detecting image saliency region
JP5701182B2 (en) Image processing apparatus, image processing method, and computer program
JP5875637B2 (en) Image processing apparatus and image processing method
JP5387193B2 (en) Image processing system, image processing apparatus, and program
JP2020129276A (en) Image processing device, image processing method, and program
JP4993615B2 (en) Image recognition method and apparatus
JP2004166007A (en) Device, method and program for image processing, and storage medium
JP2000181992A (en) Color document image recognition device
JP2016054564A (en) Image processing system and image processing method
US8411940B2 (en) Method for fast up-scaling of color images and method for interpretation of digitally acquired documents
KR100569194B1 (en) Correction method of geometrical distortion for document image by camera
JP2015198385A (en) Image processing apparatus, image processing method and program
CN111445402A (en) Image denoising method and device
JP6006675B2 (en) Marker detection apparatus, marker detection method, and program
TW201037631A (en) System and method for comparing images
US8295630B2 (en) Image processing system and method
JP6101656B2 (en) Marker embedding device, marker detection device, and program
TWI446277B (en) System and method for comparing images and deleting acceptable errors
JP6276504B2 (en) Image detection apparatus, control program, and image detection method
JP3030126B2 (en) Image processing method
JP7362405B2 (en) Image processing device, image processing method, and program
TWI564842B (en) Feature model establishing system and method and image processing system using the feature model establishing system and method
CN101853375B (en) Image comparison fault-tolerance processing system and method
CN108280815B (en) Geometric correction method for monitoring scene structure

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees