TW201126135A - Image detecting method - Google Patents

Image detecting method

Info

Publication number
TW201126135A
Authority
TW
Taiwan
Prior art keywords
image
blocks
comparison
aligned
alignment
Prior art date
Application number
TW99101675A
Other languages
Chinese (zh)
Other versions
TWI416069B (en)
Inventor
Chia-Wei Kang
Tsung-Sheng Kuo
Ying-Chih Hsieh
Chun-Yen Wu
Original Assignee
Tatung Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tatung Co
Priority to TW99101675A
Publication of TW201126135A
Application granted
Publication of TWI416069B

Landscapes

  • Image Analysis (AREA)

Abstract

An image detecting method is provided. First, an auto-correlation operation is performed on a first comparison image, and a preset interval is determined from the result of the auto-correlation operation. Next, a selection area is shifted across an image to be tested, starting from a starting position and advancing by the preset interval at each step; the portions of the image to be tested covered by the selection area at the starting position and at each shifted position are extracted as a plurality of first reference blocks, wherein the selection area, the first comparison image, and the first reference blocks all have the same size. A representative value of each first reference block is then calculated, and a plurality of comparison blocks is selected from the first reference blocks according to these representative values. Finally, each comparison block is compared with the first comparison image to obtain the position of the first comparison image in the image to be tested.

Description

[Technical Field]

The present invention relates to an image detecting method.

[Prior Art]

Image detection is a common technique in industrial applications; for example, the exposure and development steps of semiconductor manufacturing processes require highly accurate image detection. In the past, positioning marks were often located by manual inspection or by mechanical fixtures. However, human vision is limited in its ability to recognize objects, and as electronic components keep shrinking, conventional image detection methods can no longer meet the needs of the manufacturing industry.

FIG. 1 is a flowchart of a conventional image detecting method, and FIG. 2 is a schematic diagram of the conventional method. Referring to FIG. 1 and FIG. 2, first, starting from the top-left corner of an image to be tested 202, a comparison block is selected according to the size of a comparison image 204 (step S102). Next, the selected comparison block is compared with the comparison image 204 (step S104). Then, according to the comparison result, it is determined whether this comparison block is currently the block closest to the comparison image 204 (step S106). If it is, the matching position of the comparison image 204 is updated to the position of this comparison block (step S108), and it is then determined whether the selected comparison block is at the end of the image to be tested 202, that is, at its bottom-right corner (step S110). If step S106 determines that the comparison block is not currently the closest region to the comparison image 204, the method proceeds directly to step S110.

In step S110, if the selected comparison block is at the end of the image to be tested 202, the detection ends (step S112), and the position of the comparison image 204 in the image to be tested is the coordinate recorded as the matching position. If the selected comparison block is not at the end of the image to be tested 202, the selection position is shifted by one pixel (step S114), for example one pixel to the right of the previously selected block, and the method returns to step S104 to compare the next block with the comparison image 204. When the selected block reaches the rightmost edge of the image to be tested, the comparison block is shifted downward by one pixel and the left-to-right comparison with the comparison image 204 continues.

Although the conventional image detecting method removes the need for manual inspection, it must move the comparison block one pixel at a time and compare every block of the image to be tested exhaustively. This not only costs a large amount of time but also requires a powerful processor to handle the heavy computation performed when the blocks are compared with the comparison image.
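For reference, the conventional exhaustive search described above can be summarized in a short sketch. This is not code from the patent; it is a minimal NumPy illustration, assuming grayscale images stored as 2-D arrays and the sum of squared differences as the similarity measure.

```python
import numpy as np

def exhaustive_match(test_image: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Slide the template over every pixel position of the test image and
    return the (row, col) offset with the smallest sum of squared
    differences; this is the conventional one-pixel-at-a-time search."""
    H, W = test_image.shape
    h, w = template.shape
    tpl = template.astype(np.float64)
    best_pos, best_ssd = (0, 0), np.inf
    for r in range(H - h + 1):              # shift down one pixel per row
        for c in range(W - w + 1):          # shift right one pixel per step
            block = test_image[r:r + h, c:c + w].astype(np.float64)
            ssd = np.sum((block - tpl) ** 2)
            if ssd < best_ssd:              # keep the closest block so far
                best_ssd, best_pos = ssd, (r, c)
    return best_pos
```

Every candidate position is visited, so the cost grows with the full resolution of the image to be tested, which is the time and processing burden the method below is designed to reduce.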
[Summary of the Invention]

The present invention provides an image detecting method that increases the speed of image detection and allows the amount of computation to be adjusted according to the computing power of the processor.

In the image detecting method, an autocorrelation operation is first performed on a first comparison image, and a preset interval is determined according to the result of the autocorrelation operation. Next, starting from a starting position on an image to be tested, a selection area on the image to be tested is shifted by the preset interval, and the portions of the image to be tested covered by the selection area at the starting position and at each position reached after each shift of the preset interval are extracted to obtain a plurality of first reference blocks, wherein the selection area, the first comparison image, and each first reference block have the same size. A representative value of each first reference block is then calculated. Next, a plurality of comparison blocks is selected from the first reference blocks according to the representative values of the first reference blocks. Finally, each comparison block is compared with the first comparison image to obtain the position of the first comparison image.

In an embodiment of the invention, before the autocorrelation operation is performed on the first comparison image, a second comparison image and a third comparison image are further selected from the image to be tested, wherein the second and third comparison images have the same size as the first comparison image, and the representative value of the first comparison image lies between the representative values of the second and third comparison images. The image to be tested and the first to third comparison images are then reduced to 1/p of their original size, where p is a positive real number greater than 1.

In an embodiment of the invention, the representative value is the mean or the standard deviation of each first reference block.

In an embodiment of the invention, selecting the comparison blocks according to the representative values of the first reference blocks includes: first selecting, from the first reference blocks, those whose mean lies between the mean of the second comparison image and the mean of the third comparison image to obtain a plurality of second reference blocks; and then selecting, from the second reference blocks, those whose standard deviation lies between the standard deviations of the second and third comparison images to obtain the comparison blocks.

In an embodiment of the invention, comparing the comparison blocks with the first comparison image to obtain the position of the first comparison image includes: first calculating, for each comparison block, the sum of the squared differences between each pixel of the comparison block and the corresponding pixel of the first comparison image; and then comparing the sums calculated for the comparison blocks to find the comparison block with the smallest sum.

In an embodiment of the invention, the starting position is a boundary region of the image to be tested.

Based on the above, the invention uses the autocorrelation of the comparison image to determine the interval by which the selection area is shifted each time, so that image detection is accelerated while the comparison image can still be located, and the interval can be adjusted according to the computing power of the processor to reduce the processor's load during image detection.

In order to make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
[Embodiments]

FIG. 3 is a flowchart of an image detecting method according to an embodiment of the invention, and FIG. 4 is a schematic diagram of the image detecting method. Referring to FIG. 3 and FIG. 4, an autocorrelation operation is first performed on the pixels of a first comparison image 404, and a preset interval is determined according to the relation between pixel offset and autocorrelation (step S302). The preset interval is the number of pixels skipped while searching an image to be tested 402, and the autocorrelation operation may be carried out, for example, by analyzing the autocorrelation matrix of the pixels of the first comparison image 404 with a numerical analysis program such as MATLAB.

For example, if the first comparison image 404 is represented as an M-by-N matrix A, the autocorrelation matrix C of the first comparison image 404 can be expressed as

C(i, j) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} A(m, n) · A(m + i, n + j),    (1)

where 0 ≤ i < 2M - 1, 0 ≤ j < 2N - 1, and M and N are positive integers. An application developed with the libraries provided by MATLAB can then, according to equation (1), plot the autocorrelation percentage against the number of pixels of offset, as shown in FIG. 5.
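A possible way to obtain such a curve, sketched here in NumPy rather than MATLAB: the shift is taken along the diagonal (i = j = d) and the shifted copy is zero-padded, which is one plausible reading of how the autocorrelation percentage in FIG. 5 is reduced to a single offset axis; the patent itself does not spell out this reduction.

```python
import numpy as np

def autocorrelation_percentage(template: np.ndarray, max_offset: int) -> np.ndarray:
    """Autocorrelation of the template with itself shifted diagonally by
    0..max_offset pixels (equation (1) restricted to i = j = d), expressed
    as a percentage of the zero-offset value C(0, 0)."""
    A = template.astype(np.float64)
    M, N = A.shape
    c0 = np.sum(A * A)                        # C(0, 0): template against itself
    percent = np.empty(max_offset + 1)
    for d in range(max_offset + 1):
        if d >= min(M, N):                    # shifted copy no longer overlaps
            percent[d] = 0.0
            continue
        shifted = np.zeros_like(A)
        shifted[:M - d, :N - d] = A[d:, d:]   # A(m + d, n + d), zero-padded
        percent[d] = 100.0 * np.sum(A * shifted) / c0
    return percent
```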
As can be seen from this relation, when the first comparison image 404 is at its original position with no offset (that is, when the offset is zero pixels), the autocorrelation percentage is 100%. When the first comparison image 404 is offset from its original position, the larger the offset (that is, the more pixels of displacement), the lower the autocorrelation percentage between the offset first comparison image 404 and the first comparison image 404 at its original position. For example, when the first comparison image 404 is offset by 6 pixels, the autocorrelation drops to 80%. The user can decide the size of the preset interval (that is, the number of offset pixels) according to the actual situation; for example, 6 or 10 pixels, whose autocorrelations are 80% and 60% respectively, may be chosen as the preset interval. The larger the preset interval, the more pixels are skipped when searching the image to be tested 402, and the faster the search.
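Choosing the interval from the curve then amounts to finding the first offset at which the autocorrelation has fallen to a chosen threshold. A small sketch follows, with an invented example curve whose values merely mimic the 80%-at-6-pixels and 60%-at-10-pixels figures quoted above.

```python
import numpy as np

def choose_preset_interval(corr_percent: np.ndarray, threshold: float = 80.0) -> int:
    """Return the smallest pixel offset whose autocorrelation percentage has
    dropped to `threshold` or below; if it never drops that far, return the
    largest offset that was examined."""
    below = np.nonzero(corr_percent <= threshold)[0]
    return int(below[0]) if below.size else corr_percent.size - 1

# Example values only (not measured data): 100% at offset 0, 80% at 6 pixels,
# 60% at 10 pixels, as in the description of FIG. 5.
curve = np.array([100, 97, 94, 91, 88, 84, 80, 76, 71, 65, 60, 55], dtype=float)
print(choose_preset_interval(curve, threshold=80.0))   # -> 6
print(choose_preset_interval(curve, threshold=60.0))   # -> 10
```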
Next, after the preset interval has been determined (6 pixels in this embodiment), a starting position is selected on the image to be tested 402 and the extraction of first reference blocks begins. Starting from the starting position selected by the user, a selection area 406 is shifted by the preset interval determined in step S302, and the portion of the image to be tested 402 covered by the selection area 406 at the starting position and at each position reached after each shift of the preset interval is extracted as a first reference block (step S304). The selection area 406, the first comparison image 404, and each extracted first reference block have the same size, and the starting position may be a boundary region of the image to be tested 402.

For example, the top-left region of the image to be tested 402 may be set as the starting position. When the selection area 406 is at the starting position, the portion of the image to be tested 402 enclosed by it is extracted as a first reference block. The selection area 406 is then moved to the right by one preset interval to the next position, and the portion of the image to be tested 402 it encloses there is extracted as another first reference block. The extraction and shifting of the selection area 406 are repeated in this way to obtain a plurality of first reference blocks. When the selection area 406 has moved from the top-left of the image to be tested 402 to its top-right, the selection area 406 is moved downward by one preset interval to extract the next first reference block, and the shifting and extraction then continue at the same preset interval until the selection area 406 reaches the leftmost edge of the image to be tested 402. These steps are repeated in the same manner until the entire image to be tested 402 has been covered by the selection area 406.

Besides the winding traversal described above, the selection area 406 may instead, after reaching the rightmost edge of the image to be tested 402, restart from the position one preset interval below the starting position and repeat the rightward shifting and extraction until it again reaches the rightmost edge of the image to be tested 402, then continue from the position two preset intervals below the starting position, and so on, until the entire image to be tested 402 has been covered by the selection area 406.
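Because the set of extracted blocks is the same whichever of the two traversal orders is used, step S304 can be sketched as a plain row-by-row grid walk; the rectangular selection area and the top-left starting position are the example choices of this embodiment, not requirements.

```python
import numpy as np

def extract_reference_blocks(test_image: np.ndarray,
                             block_shape: tuple[int, int],
                             interval: int,
                             start: tuple[int, int] = (0, 0)):
    """Shift a selection area of `block_shape` across `test_image`, advancing
    by `interval` pixels between positions (step S304), and return each
    covered region as a first reference block together with its top-left
    corner."""
    H, W = test_image.shape
    h, w = block_shape
    blocks = []
    for r in range(start[0], H - h + 1, interval):       # one interval down per row
        for c in range(start[1], W - w + 1, interval):   # one interval right per step
            blocks.append(((r, c), test_image[r:r + h, c:c + w]))
    return blocks
```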
It should be noted that, although the top-left region of the image to be tested 402 serves as the starting position in this embodiment, the invention is not limited thereto. In practice the starting position may be set to any position on the image to be tested 402, and the range over which the selection area 406 moves is not required to cover the entire image to be tested 402; in other words, the range that the selection area 406 must cover can be decided by the user. In addition, the shape of the selection area 406 is not limited to the rectangle shown in FIG. 4; the user may adjust the shape of the selection area 406 according to the image to be compared when extracting the reference blocks.

Referring again to FIG. 3 and FIG. 4, the representative value of each first reference block extracted in step S304 is then calculated (step S306). The representative value of a first reference block may be, for example, its mean or its standard deviation, although the invention is not limited thereto. Next, a plurality of comparison blocks is selected from the first reference blocks according to the representative values of the first reference blocks (step S308). For example, the first reference blocks whose representative values are close to the representative value of the first comparison image 404 may be chosen as comparison blocks; for instance, the first reference blocks whose representative values fall within plus or minus 30% of the representative value of the first comparison image 404 may be selected.

Finally, each comparison block is compared with the first comparison image 404 to obtain the position of the first comparison image 404 in the image to be tested 402 (step S310). The comparison may be performed, for example, by computing, for each comparison block, the sum of the squared differences between corresponding pixels of the comparison block and the first comparison image 404, and taking the block with the smallest sum as the one closest to the first comparison image 404, although the invention is not limited thereto.

This embodiment uses the result of the autocorrelation operation on the first comparison image 404 to decide the interval by which the selection area 406 is shifted each time, which accelerates image detection while still ensuring that the correct position of the first comparison image 404 can be found. The amount of computation can also be adjusted according to the computing power of the processor (not shown) that performs the image detection. For example, when the processor's computing power is limited, the preset interval can be increased to reduce the number of extracted reference blocks, or the range used to select the comparison blocks can be narrowed, to lighten the processor's load. Conversely, when the processor's computing power is high, the preset interval can be decreased, or the selection range widened, to increase the number of selected reference blocks.
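Steps S306 to S310 might be sketched as follows, taking the block mean as the representative value and plus or minus 30% of the template mean as the selection band; both the choice of statistic and the 30% figure are only the examples given above, so they appear here as adjustable parameters.

```python
import numpy as np

def locate_template(blocks, template: np.ndarray, band: float = 0.30):
    """Keep the blocks whose mean lies within +/- `band` of the template mean
    (step S308), then return the position of the kept block with the smallest
    sum of squared differences to the template (step S310)."""
    tpl = template.astype(np.float64)
    tpl_mean = tpl.mean()
    best_pos, best_ssd = None, np.inf
    for pos, block in blocks:
        blk = block.astype(np.float64)
        if abs(blk.mean() - tpl_mean) > band * abs(tpl_mean):   # outside the band
            continue                                            # skip this block
        ssd = np.sum((blk - tpl) ** 2)
        if ssd < best_ssd:
            best_ssd, best_pos = ssd, pos
    return best_pos
```

Combined with the extraction sketch above, locate_template(extract_reference_blocks(img, tpl.shape, interval), tpl) would return the best matching grid position, or None if no block falls inside the band.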
In some embodiments, image detection may instead be performed as shown in FIG. 6. FIG. 6 is a flowchart of an image detecting method according to another embodiment of the invention. Referring to FIG. 6, this embodiment differs from the image detecting method of FIG. 3 in that, before the autocorrelation operation is performed on the first comparison image 404 (step S606), two further comparison images are first selected from the image to be tested 402. These two comparison images have the same size as the first comparison image 404, and the representative values of the first comparison image 404 (in this embodiment, its mean and standard deviation) lie between the representative values of the two comparison images (step S602). In the schematic diagram of the image to be tested shown in FIG. 7, the means of the second comparison image 702 and the third comparison image 704 selected from the image to be tested 402 are 87 and 240, and their standard deviations are 917.408420 and 1703.758495, respectively, while the mean and standard deviation of the first comparison image 404 are 217 and 1310.583458; both the mean and the standard deviation of the first comparison image 404 therefore lie between those of the second comparison image 702 and the third comparison image 704.

Next, the image to be tested 402, the first comparison image 404, the second comparison image 702, and the third comparison image 704 are all reduced to 1/p of their original size, where p is a positive real number greater than 1. This reduces the number of pixels that must be processed during image detection and increases the detection speed (step S604). The subsequent steps S606 to S610 are similar to steps S302 to S306 of FIG. 3 and are therefore not repeated here.

In addition, in the image detecting method of FIG. 6, the way in which the comparison blocks are selected from the first reference blocks also differs from step S308 of FIG. 3. Here, the first reference blocks whose mean lies between the mean of the second comparison image 702 and the mean of the third comparison image 704 are first selected to obtain a plurality of second reference blocks (step S612). Then, from the second reference blocks, those whose standard deviation lies between the standard deviation of the second comparison image 702 and the standard deviation of the third comparison image 704 are selected to obtain the comparison blocks (step S614). This reduces the number of comparison blocks that must be compared with the first comparison image 404 in step S616 and thus increases the speed of image detection. Finally, each comparison block is compared with the first comparison image 404 to obtain the position of the first comparison image 404 on the image to be tested 402 (step S616).

In some embodiments, the first reference blocks whose standard deviation lies between the standard deviations of the second comparison image 702 and the third comparison image 704 may be selected first, and the blocks whose mean lies between the means of the second comparison image 702 and the third comparison image 704 may then be selected from them; this likewise reduces the number of comparison blocks that must be compared with the first comparison image 404 in step S616 and increases the speed of image detection.
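The two refinements of this embodiment, the 1/p reduction and the mean-then-standard-deviation pre-filter, might be sketched as follows. Plain subsampling stands in for the unspecified scaling method, an integer p is assumed for that shorthand, and the statistics are computed directly from pixel values, so their scale need not match the example figures quoted above.

```python
import numpy as np

def downscale(image: np.ndarray, p: int) -> np.ndarray:
    """Reduce the image to 1/p of its size by plain subsampling (step S604);
    any other resampling scheme could be substituted here."""
    return image[::p, ::p]

def band_filter(blocks, second_img: np.ndarray, third_img: np.ndarray):
    """Steps S612 and S614: keep the blocks whose mean, and then whose
    standard deviation, lie between the corresponding statistics of the
    second and third comparison images."""
    lo_mean, hi_mean = sorted((second_img.mean(), third_img.mean()))
    lo_std, hi_std = sorted((second_img.std(), third_img.std()))
    second_refs = [(pos, b) for pos, b in blocks
                   if lo_mean <= b.mean() <= hi_mean]            # step S612
    return [(pos, b) for pos, b in second_refs
            if lo_std <= b.std() <= hi_std]                      # step S614
```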
In summary, the invention reduces the sizes of the image to be tested and of the comparison images so as to reduce the number of pixels that must be processed during image detection and thereby increase its speed. It determines the interval by which the selection area is shifted each time from the result of the autocorrelation operation on the comparison image, which accelerates image detection while ensuring that the correct position of the comparison image can still be found, and it allows this interval to be adjusted according to the computing power of the processor that performs the detection, reducing the processor's load and further speeding up detection. In addition, the comparison blocks to be compared with the comparison image are chosen according to their representative values, which reduces the number of blocks that must actually be compared, saves comparison time, and increases the speed of image detection.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone having ordinary knowledge in the art may make modifications and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

[Brief Description of the Drawings]

FIG. 1 is a flowchart of a conventional image detecting method.
FIG. 2 is a schematic diagram of a conventional image detecting method.
FIG. 3 is a flowchart of an image detecting method according to an embodiment of the invention.
FIG. 4 is a schematic diagram of an image detecting method according to an embodiment of the invention.
FIG. 5 is a graph of the autocorrelation percentage of the comparison image against the number of pixels of offset.
FIG. 6 is a flowchart of an image detecting method according to another embodiment of the invention.
FIG. 7 is a schematic diagram of an image to be tested according to an embodiment of the invention.

[Description of Reference Numerals]

202, 402: image to be tested
204, 404, 702, 704: comparison images
406: selection area
S102~S112: steps of the conventional image detection
S302~S310, S602~S614: steps of image detection

Claims (1)

1. An image detecting method, comprising: performing an autocorrelation operation on a first comparison image, and determining a preset interval according to the result of the autocorrelation operation; shifting a selection area on an image to be tested by the preset interval, starting from a starting position on the image to be tested, and extracting the portions of the image to be tested covered by the selection area at the starting position and at each position reached after each shift of the preset interval to obtain a plurality of first reference blocks, wherein the selection area, the first comparison image, and each of the first reference blocks have the same size; calculating a representative value of each of the first reference blocks; selecting a plurality of comparison blocks from the first reference blocks according to the representative values of the first reference blocks; and comparing each of the comparison blocks with the first comparison image to obtain the position of the first comparison image.

2. The image detecting method of claim 1, wherein before the step of performing the autocorrelation operation on the first comparison image, the method further comprises: selecting a second comparison image and a third comparison image from the image to be tested, wherein the second and third comparison images have the same size as the first comparison image, and the representative value of the first comparison image lies between the representative values of the second comparison image and the third comparison image; and reducing the image to be tested and the first to third comparison images to 1/p of their original size, wherein p is a positive real number greater than 1.

3. The image detecting method of claim 2, wherein the representative value is the mean or the standard deviation of each of the first reference blocks.

4. The image detecting method of claim 3, wherein the step of selecting the comparison blocks according to the representative values of the first reference blocks comprises: selecting, from the first reference blocks, the first reference blocks whose mean lies between the mean of the second comparison image and the mean of the third comparison image to obtain a plurality of second reference blocks; and selecting, from the second reference blocks, the second reference blocks whose standard deviation lies between the standard deviation of the second comparison image and the standard deviation of the third comparison image to obtain the comparison blocks.

5. The image detecting method of claim 3, wherein the step of selecting the comparison blocks according to the representative values of the first reference blocks comprises: selecting, from the first reference blocks, the first reference blocks whose standard deviation lies between the standard deviation of the second comparison image and the standard deviation of the third comparison image to obtain a plurality of second reference blocks; and selecting, from the second reference blocks, the second reference blocks whose mean lies between the mean of the second comparison image and the mean of the third comparison image to obtain the comparison blocks.

6. The image detecting method of claim 1, wherein the step of comparing the comparison blocks with the first comparison image to obtain the position of the first comparison image comprises: calculating, for each of the comparison blocks, the sum of the squared differences between each pixel of the comparison block and the corresponding pixel of the first comparison image; and comparing the sums of squared differences calculated for the comparison blocks to find the comparison block with the smallest sum.

7. The image detecting method of claim 1, wherein the starting position is a boundary region of the image to be tested.
TW99101675A 2010-01-21 2010-01-21 Image detecting method TWI416069B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW99101675A TWI416069B (en) 2010-01-21 2010-01-21 Image detecting method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW99101675A TWI416069B (en) 2010-01-21 2010-01-21 Image detecting method

Publications (2)

Publication Number Publication Date
TW201126135A true TW201126135A (en) 2011-08-01
TWI416069B TWI416069B (en) 2013-11-21

Family

ID=45024406

Family Applications (1)

Application Number Title Priority Date Filing Date
TW99101675A TWI416069B (en) 2010-01-21 2010-01-21 Image detecting method

Country Status (1)

Country Link
TW (1) TWI416069B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI496091B (en) * 2012-04-06 2015-08-11 Benq Materials Corp Thin film detecting method and detecting device
TWI571830B (en) * 2016-05-31 2017-02-21 和碩聯合科技股份有限公司 Moving object detecting method
CN113033561A (en) * 2019-12-09 2021-06-25 财团法人资讯工业策进会 Image analysis device and image analysis method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI254581B (en) * 2004-12-27 2006-05-01 Sunplus Technology Co Ltd Method and device for detecting image movements
TWI291668B (en) * 2005-12-29 2007-12-21 Metal Ind Res & Dev Ct Recognition method for pattern matching

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI496091B (en) * 2012-04-06 2015-08-11 Benq Materials Corp Thin film detecting method and detecting device
TWI571830B (en) * 2016-05-31 2017-02-21 和碩聯合科技股份有限公司 Moving object detecting method
CN113033561A (en) * 2019-12-09 2021-06-25 财团法人资讯工业策进会 Image analysis device and image analysis method
CN113033561B (en) * 2019-12-09 2023-07-07 财团法人资讯工业策进会 Image analysis device and image analysis method

Also Published As

Publication number Publication date
TWI416069B (en) 2013-11-21

Similar Documents

Publication Publication Date Title
US20170230577A1 (en) Image processing apparatus and method therefor
TW201214335A (en) Method and arrangement for multi-camera calibration
KR101620933B1 (en) Method and apparatus for providing a mechanism for gesture recognition
TW201131512A (en) Distance evaluation methods and apparatuses, and machine readable medium thereof
CN109523506A (en) The complete of view-based access control model specific image feature enhancing refers to objective evaluation method for quality of stereo images
TW201126135A (en) Image detecting method
JP2008065458A (en) Test device using template matching method utilizing similarity distribution
JP2005037378A (en) Depth measurement method and depth measurement device
TWI300159B (en) Camera system
JP2011081485A (en) Method and program for matching pattern, electronic computer, electronic device inspection device
US8000556B2 (en) Method for estimating noise according to multiresolution model
JP4728795B2 (en) Person object determination apparatus and person object determination program
JPWO2008041518A1 (en) Image processing apparatus, image processing apparatus control method, and image processing apparatus control program
TW201239348A (en) Exterior inspection method and device for same
JP6308962B2 (en) Displacement or strain calculation program and displacement or strain measurement method
JP2003115052A (en) Image processing method and image processor
CN107579028B (en) Method and device for determining edge of incomplete wafer and scribing device
JP2008052598A (en) Image position measuring method, image position measuring apparatus and image position measuring program
JP5904168B2 (en) Feature point extraction method and feature point extraction device for captured image
JP2009157701A (en) Method and unit for image processing
CN104125446A (en) Depth image optimization processing method and device in the 2D-to-3D conversion of video image
TW201820261A (en) Image synthesis method for character images characterized by using a processing module combining a first image with a second image
JP2003179797A (en) Image processor, digital camera and program
Wang et al. Optimization of Corner Detection Algorithm for Video Stream Based on FAST
TWI493977B (en) Image searching module and method thereof

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees