TWI298856B - Google Patents


Publication number
TWI298856B
TWI298856B (application TW95108643A)
Authority
TW
Taiwan
Prior art keywords
image
value
mask
grayscale
new
Prior art date
Application number
TW95108643A
Other languages
Chinese (zh)
Other versions
TW200734967A (en)
Inventor
Chao Wang Hsiung
Priority date
Filing date
Publication date
Application filed
Priority to TW095108643A
Publication of TW200734967A
Application granted
Publication of TWI298856B


Landscapes

  • Image Analysis (AREA)

Description

1298856

IX. Description of the Invention:

[Technical Field]

The present invention relates to a passive and interactive real-time image recognition software method, and in particular to a real-time image recognition software method that is not affected by the ambient light source or by noise. The real-time image recognition software of the invention can be widely applied, for example, to multimedia interactive advertising, teaching, and various entertainment games such as video games.

[Prior Art]

Conventional interactive projection technology mainly uses a single-gun projector (or other display device) to project a multimedia motion-picture image, captures the scene with a camera and a capture interface that digitizes it, and applies recognition techniques to determine the region of the projected image touched by a person's limbs so that a corresponding action can be taken. A common approach stores the pattern of each reactive region as a template and, during recognition, compares the frames continuously captured by the camera against the stored templates one by one. Although this method is simple and requires no heavy computation, it is highly susceptible to changes in background light, which causes recognition errors. Moreover, the hue and saturation of the templates stored in memory change considerably after projection, and because the system is installed at different sites, the light sources also differ from place to place. Such recognition systems must therefore undergo a color-temperature and color-difference calibration after installation, a rather cumbersome process.

Since the above shortcomings urgently needed improvement, the inventor, drawing on many years of industry experience, developed a software recognition method that is unaffected by changes in the ambient light source or by the colors produced by the image projection device. Because a grayscale camera is used, the data throughput is small, which greatly reduces the cost of the hardware equipment. The structure and spirit of the invention can be fully understood by referring to the following description.

[Summary of the Invention]

The present invention is a passive and interactive real-time image recognition software method, namely a real-time image recognition software method unaffected by the ambient light source and by noise, comprising both a passive and an interactive recognition method. An image projection device projects an image; a fixed background image (8-bit grayscale values) is first established as the reference image, and a camera continuously captures real-time (8-bit grayscale) images of the region onto which the image projection device projects. By performing image subtraction, image binarization, and related operations between the real-time image and the reference image, the activity of a moving object can be identified quickly and accurately, so as to detect whether a sensing block of the projected image is occluded and to execute the corresponding action.

Furthermore, since the invention captures images in grayscale, no special high-end image capture card or costly calibration equipment is required, and accurate recognition is achieved while greatly reducing cost. The real-time image recognition software method of the invention can be widely applied, for example, to multimedia interactive advertising, teaching, entertainment games, and video games.

[Embodiments]

The first figure is a simplified schematic diagram of the system architecture of the passive and interactive real-time image recognition software method of the invention. As shown, the system comprises a personal computer 10, an image projection device 11, an image area 11a, a camera 12, and an image capture card 13.

The present invention is a passive and interactive real-time image recognition software method. The recognition targets can be divided into two classes, passive and interactive; the difference between them lies in the position of the sensing block. In the passive case the sensing position is fixed, whereas in the interactive case the opposite holds: the sensing block moves within the area onto which the image projection device projects.

All images captured by the invention are 8-bit grayscale, with grayscale values in the range 0 to 255.

The passive real-time image recognition software method is as follows:

Step 1: The camera 12 captures the image projected by the image projection device 11 onto the image area 11a as the reference image (5×5 grayscale values) (refer to the first and second figures).

Step 2: The camera 12 continuously captures the real-time image (5×5 grayscale values) projected by the image projection device 11 onto the image area 11a (refer to the first and third figures), and checks whether a foreign object touches the sensing area.

The difference between the reference image of Step 1 (refer to the second figure) and the real-time image of Step 2 (refer to the third figure) is given by equation (1):

DIFF(x, y) = | REF(x, y) − NEW(x, y) | ----------(1)

Step 3: Subtracting each grayscale value of the real-time image of Step 2 from the corresponding grayscale value of the reference image of Step 1 yields the distribution of remaining grayscale values (refer to the fourth figure), which indicates the presence of a foreign object.

Step 4: The image obtained by the subtraction of Step 3 usually contains noise, which is eliminated by the binarization of equation (2) (refer to the seventh figure):

BIN(x, y) = 255, if DIFF(x, y) > T*
BIN(x, y) = 0, if DIFF(x, y) ≤ T* ----------(2)

where T* is the threshold; in an 8-bit grayscale image the threshold ranges between 0 and 255. The optimal threshold can be determined statistically: it is the grayscale value at the valley position of the histogram (refer to the fifth figure), and choosing T* divides the image into two intervals C1 and C2 (refer to the sixth figure). The condition for the optimal threshold T* is that the sum of the variance within C1 and the variance within C2 is minimal. Suppose the image size is M×N = 5×5 and the number of grayscale levels of an 8-bit grayscale image is L = 256. The probability of grayscale value i can be expressed as

P(i) = n_i / (M × N) ----------(3)

where n_i denotes the number of times grayscale value i appears in the image, and i ranges over 0 ≤ i ≤ L−1. From the axioms of probability,

Σ_{i=0}^{L−1} P(i) = 1 ----------(4)

Suppose the ratio of the number of pixels in C1 is

W1 = Pr(C1) = Σ_{i=0}^{T*} P(i) ----------(5)

and the ratio of the number of pixels in C2 is

W2 = Pr(C2) = Σ_{i=T*+1}^{L−1} P(i) ----------(6)

which satisfy W1 + W2 = 1. Next, the expected value of C1 can be computed as

μ1 = Σ_{i=0}^{T*} i · P(i) / W1 ----------(7)

and the expected value of C2 is

μ2 = Σ_{i=T*+1}^{L−1} i · P(i) / W2 ----------(8)

Using equations (7) and (8), the variances of C1 and C2 are obtained as

σ1² = Σ_{i=0}^{T*} (i − μ1)² · P(i) / W1 ----------(9)

σ2² = Σ_{i=T*+1}^{L−1} (i − μ2)² · P(i) / W2 ----------(10)

so the variance sum of C1 ∪ C2 is

σw² = W1·σ1² + W2·σ2² ----------(11)

Then, substituting each value between 0 and 255 into equation (11), the value that gives equation (11) its minimum is the optimal threshold T*.

Step 5: Although the noise remaining after the binarization of Step 4 has been eliminated, the moving object may be slightly fragmented. This phenomenon is removed with a four-connected mask (refer to the eighth figure) and its dilation and erosion algorithms.

The dilation algorithm is as follows: when a mask point Mb(i, j) = 255, the masks of its four neighboring positions are set to
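The exhaustive threshold search of equations (3)–(11) can be sketched as follows — trying every candidate T and keeping the one that minimizes the within-class variance, i.e. the minimization form of Otsu's method (the bimodal test image is invented for the example):

```python
import numpy as np

def optimal_threshold(image, levels=256):
    """Return the T in [0, levels-1] that minimizes the within-class
    variance W1*var1 + W2*var2 of equations (5)-(11)."""
    hist = np.bincount(image.ravel(), minlength=levels)
    p = hist / hist.sum()                               # P(i), eq. (3)
    i = np.arange(levels)
    best_t, best_var = 0, np.inf
    for t in range(levels - 1):
        w1, w2 = p[:t + 1].sum(), p[t + 1:].sum()       # eqs. (5), (6)
        if w1 == 0 or w2 == 0:
            continue
        mu1 = (i[:t + 1] * p[:t + 1]).sum() / w1        # eq. (7)
        mu2 = (i[t + 1:] * p[t + 1:]).sum() / w2        # eq. (8)
        var1 = (((i[:t + 1] - mu1) ** 2) * p[:t + 1]).sum() / w1   # eq. (9)
        var2 = (((i[t + 1:] - mu2) ** 2) * p[t + 1:]).sum() / w2   # eq. (10)
        sw = w1 * var1 + w2 * var2                      # eq. (11)
        if sw < best_var:
            best_t, best_var = t, sw
    return best_t

# Bimodal toy image: a dark cluster (10-12) and a bright cluster (198-201).
img = np.array([[10, 12, 11, 200, 201],
                [11, 10, 12, 199, 200],
                [10, 11, 10, 201, 198],
                [12, 10, 11, 200, 200],
                [11, 12, 10, 199, 201]], dtype=np.uint8)
t_star = optimal_threshold(img)
```

Any threshold between the two clusters yields the same minimal within-class variance, so the search returns a T* that cleanly separates the dark pixels from the bright ones.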

Mb(i, j−1) = Mb(i, j+1) = Mb(i−1, j) = Mb(i+1, j) = 255 ----------(12)

The erosion algorithm is as follows: when a mask point Mb(i, j) = 0, the masks of its four neighboring positions are set to

Mb(i, j−1) = Mb(i, j+1) = Mb(i−1, j) = Mb(i+1, j) = 0 ----------(13)

Convolving the above masks with the binarized image eliminates the fragmentation.

Step 6: Next, an edge mask can be used to obtain the contour of the moving object; here a Sobel (image contour operation) mask (refer to the ninth figure) is adopted to obtain the object contour. The Sobel masks are convolved with the real-time image, as shown in equations (14) and (15):

Gx(x, y) = (NEW(x−1, y+1) + 2·NEW(x, y+1) + NEW(x+1, y+1)) − (NEW(x−1, y−1) + 2·NEW(x, y−1) + NEW(x+1, y−1)) ----------(14)
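The four-connected dilation and erosion of equations (12) and (13) can be sketched as below (a minimal illustration on a binary mask; the example array with a one-pixel hole is invented):

```python
import numpy as np

NEIGHBORS4 = ((0, -1), (0, 1), (-1, 0), (1, 0))  # the four-connected mask

def dilate4(mask):
    """Four-connected dilation (eq. 12): every 255 pixel also sets
    its up/down/left/right neighbors to 255."""
    out = mask.copy()
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 255:
                for di, dj in NEIGHBORS4:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        out[ni, nj] = 255
    return out

def erode4(mask):
    """Four-connected erosion (eq. 13): every 0 pixel also clears
    its up/down/left/right neighbors."""
    out = mask.copy()
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] == 0:
                for di, dj in NEIGHBORS4:
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        out[ni, nj] = 0
    return out

# A blob with a one-pixel hole: dilating then eroding fills the hole,
# which is the "fragmentation removal" the description refers to.
m = np.zeros((5, 5), dtype=np.uint8)
m[1:4, 1:4] = 255
m[2, 2] = 0               # fragment inside the moving object
closed = erode4(dilate4(m))
```

Applying dilation first and erosion second repairs the broken interior of the object without permanently growing its silhouette.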

Gy(x, y) = (NEW(x+1, y−1) + 2·NEW(x+1, y) + NEW(x+1, y+1)) − (NEW(x−1, y−1) + 2·NEW(x−1, y) + NEW(x−1, y+1)) ----------(15)

Using equation (16), the edges of the captured image are obtained:

G(x, y) = sqrt( Gx(x, y)² + Gy(x, y)² ) ----------(16)

The above edge image is binarized:

E(x, y) = 255, if G(x, y) > T*
E(x, y) = 0, if G(x, y) ≤ T* ----------(17)

where T* is the optimal threshold, determined by the same method as before. Then, intersecting the binarized contour image E(x, y) of the real-time image with the binarized subtraction image BIN(x, y) yields the outer contour of the moving object.

Step 7: Detect whether the coordinates of the edge points on the outer contour of the moving object touch the sensing area, and execute the corresponding action.

Step 8: Repeat all of the above steps.

The main parts of the interactive real-time image recognition method are image subtraction, binarization, image segmentation, sensing-block pattern feature extraction, and sensing-block pattern matching. The sensing-block pattern features are obtained offline in advance, whereas sensing-block pattern matching is processed in real time. Because the sensing block is a graphic within the projected image, it may undergo rotation or translation; the pattern feature values must therefore not be affected by rotation, translation, or scaling. The pattern feature values adopted here are the invariant moments of the pattern to be recognized, which are unaffected by translation, rotation, and changes in size ratio.

The interactive real-time image recognition software method is as follows:

Step 1: The camera 12 captures the image projected by the image projection device 11 onto the image area 11a as the reference image (refer to the first and tenth figures).

Step 2: The camera 12 continuously captures the real-time image projected by the image projection device 11 onto the image area 11a (refer to the eleventh figure), in which the image contains a moving image 20; check whether a foreign object touches the moving sensing block 21.

The difference between the reference image of Step 1 (refer to the tenth figure) and the real-time image of Step 2 (refer to the eleventh figure) is given by equation (1):
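The Sobel convolution and gradient magnitude of equations (14)–(16) can be sketched as follows (a minimal illustration; the step-edge test image is invented):

```python
import numpy as np

def sobel_magnitude(img):
    """Convolve the two Sobel masks with the image (eqs. 14-15) and
    return the gradient magnitude of eq. (16) for interior pixels."""
    f = img.astype(float)
    h, w = f.shape
    g = np.zeros_like(f)          # border pixels are left at 0
    for x in range(1, h - 1):
        for y in range(1, w - 1):
            gx = (f[x-1, y+1] + 2*f[x, y+1] + f[x+1, y+1]) \
               - (f[x-1, y-1] + 2*f[x, y-1] + f[x+1, y-1])   # eq. (14)
            gy = (f[x+1, y-1] + 2*f[x+1, y] + f[x+1, y+1]) \
               - (f[x-1, y-1] + 2*f[x-1, y] + f[x-1, y+1])   # eq. (15)
            g[x, y] = (gx**2 + gy**2) ** 0.5                 # eq. (16)
    return g

# Vertical step edge between columns 2 and 3: the response peaks at
# the edge and vanishes inside the flat regions.
img = np.zeros((5, 6), dtype=np.uint8)
img[:, 3:] = 200
grad = sobel_magnitude(img)
```

Thresholding `grad` as in equation (17) would then produce the binary contour image E(x, y) that is intersected with BIN(x, y).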

DIFF(x, y) = | REF(x, y) − NEW(x, y) | ----------(1)

Step 3: Subtracting each grayscale value of the real-time image of Step 2 (refer to the eleventh figure) from the corresponding grayscale value of the reference image of Step 1 (refer to the tenth figure) yields the distribution of remaining grayscale values, which usually contains noise; this is eliminated by the binarization of equation (2):

BIN(x, y) = 255, if DIFF(x, y) > T*
BIN(x, y) = 0, if DIFF(x, y) ≤ T* ----------(2)

(refer to the twelfth figure)

Step 4: After binarization, the white parts (refer to the twelfth figure) are the moving image 20 and the moving sensing block 21 in the image, which can be separated by the line-segment coding method (refer to the thirteenth and fourteenth figures). The line-segment coding method stores the data of every point of an object as line segments. When a segment of the segmented image is detected in row 1, it is regarded as the first run of the first object and labeled 1-1. Next, two runs are detected in row 2; the first lies below 1-1 and is therefore labeled 1-2, while the second belongs to a new object and is labeled 2-1. When row 4 is reached, only one run is found, lying below both object 1 and object 2, so the regions originally regarded as two objects are in fact one object; the run is provisionally labeled, and after the whole image has been scanned the merging is performed.

The information stored for each object includes: its area, perimeter, object features, the size and width of the segmented image, and the total number of objects.

Step 5: After the moving image 20 and the moving sensing block 21 have been separated, the feature values of each object are computed; seven invariant moments are used to represent the features of an object, and their derivation is as follows:

The (k + l)-order moment of a binarized image b(m, n) is defined as

M_{k,l} = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} m^k · n^l · b(m, n) ----------(18)
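The row-by-row run labeling and merging described in Step 4 can be sketched as follows (a simplified two-pass connected-component pass over runs; the binary test image is invented):

```python
import numpy as np

def label_runs(binary):
    """Label 255-runs row by row: a run overlapping a run of the row
    above inherits its object label, and labels bridged by one run are
    merged afterwards (the line-segment coding of Step 4, simplified)."""
    parent = {}
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    labels = np.zeros(binary.shape, dtype=int)
    next_label = 1
    prev_runs = []                      # (start, end, label) of row above
    for r, row in enumerate(binary):
        runs, c, w = [], 0, len(row)
        while c < w:                    # extract 255-runs of this row
            if row[c] == 255:
                s = c
                while c < w and row[c] == 255:
                    c += 1
                runs.append((s, c))
            else:
                c += 1
        cur = []
        for s, e in runs:
            hits = [lab for ps, pe, lab in prev_runs if ps < e and s < pe]
            if not hits:                # a new object, e.g. label 2-1
                lab = next_label
                parent[lab] = lab
                next_label += 1
            else:                       # continue object; bridge -> merge
                lab = find(hits[0])
                for other in hits[1:]:
                    parent[find(other)] = lab
            labels[r, s:e] = lab
            cur.append((s, e, lab))
        prev_runs = cur
    for r in range(labels.shape[0]):    # second pass: resolve merges
        for c in range(labels.shape[1]):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels

# Two vertical runs joined by a bridging run in the last row: initially
# labeled as two objects, merged into one after the full scan.
img = np.array([[255,   0,   0,   0, 255],
                [255,   0,   0,   0, 255],
                [255,   0,   0,   0, 255],
                [255, 255, 255, 255, 255]], dtype=np.uint8)
lab = label_runs(img)
```

This reproduces the merge case in the description: the run in the last row lies below both provisional objects, so the whole shape ends up with a single label.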

while the central moment is defined as

μ_{k,l} = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} (m − x̄)^k · (n − ȳ)^l · b(m, n) ----------(19)

where x̄ = M_{1,0} / M_{0,0} and ȳ = M_{0,1} / M_{0,0} represent the center of mass of the object. Next, normalizing the central moments of equation (19) gives

η_{k,l} = μ_{k,l} / μ_{0,0}^{(k+l+2)/2} ----------(20)

Step 6: There are accordingly several sub-reference images, and the techniques of Steps 1 through 8 of the passive real-time image recognition software method can determine whether a foreign object touches a sub-reference image. The execution steps of the recognition can be organized as follows:

1. Train the graphic templates in advance and compute the invariant moments of each class, then compute the feature values of each class to complete the decision criterion of the classifier.

2. Divide the image captured by the camera 12 in the manner of Step 4 into several sub-images, and compute the feature values of each sub-image.

3. Compare the magnitudes of the resulting values and find the largest; if the value for the k-th class is the largest, the graphic is judged to belong to class k.

After the recognition processing, the moving sensing block 21 can be located accurately (refer to the fifteenth figure).

Step 7: Detect whether a foreign object touches the moving sensing block 21, and execute the corresponding action.

Step 8: Repeat all of the above steps.

It can thus be seen that the method of the present invention indeed achieves the intended effects and has not been publicly used; it satisfies the requirements of patentability and novelty, and an application is hereby filed with the Bureau in accordance with the law, with the request that the examiners of the Bureau promptly examine and grant the patent right for this case.

It should be noted that the foregoing is a preferred embodiment of the present invention; changes made in accordance with the concept of the invention, whose resulting functions do not exceed the spirit covered by the specification and drawings, shall all fall within the scope of this invention.

[Brief Description of the Drawings]

The first figure is a simplified schematic diagram of the system architecture of the passive and interactive real-time image recognition software method of the invention.

The second figure is a schematic diagram of the reference image captured in advance by the camera of the passive and interactive real-time image recognition software method of the invention.

The third figure is a schematic diagram of the real-time image captured by the camera.

The fourth figure is a schematic diagram after the reference image captured by the camera and the real-time image have been subtracted.

The fifth figure is a schematic diagram of the grayscale value at the valley position used as the optimal threshold.

The sixth figure is a schematic diagram of the two intervals defined by the optimal threshold.

The seventh figure is a schematic diagram after the reference image and the real-time image have been subtracted and then binarized.

The eighth figure is a schematic diagram of the four-connected mask.

The ninth figure is a schematic diagram of the Sobel masks for (a) the x-axis and (b) the y-axis.

The tenth figure is a schematic diagram of the reference image of the interactive method.

The eleventh figure is a schematic diagram of the real-time image of the interactive method.

The twelfth figure is a schematic diagram of the interactive method after the reference image and the real-time image have been subtracted and binarized.

The thirteenth figure is a schematic diagram of an object line-segment coding fragment of the interactive method.
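The moment computations of equations (18)–(20) can be sketched as follows. As one example of the seven invariant moments (whose full formulas are not reproduced in this text), the first Hu invariant η_{2,0} + η_{0,2} is shown; the test shape is invented:

```python
import numpy as np

def normalized_central_moment(b, k, l):
    """eta_{k,l} of eq. (20) for a binary image b (values 0/1)."""
    m_idx, n_idx = np.indices(b.shape)
    m00 = b.sum()                                   # M_{0,0}, eq. (18)
    xbar = (m_idx * b).sum() / m00                  # M_{1,0} / M_{0,0}
    ybar = (n_idx * b).sum() / m00                  # M_{0,1} / M_{0,0}
    mu = (((m_idx - xbar) ** k)
          * ((n_idx - ybar) ** l) * b).sum()        # eq. (19)
    return mu / m00 ** ((k + l + 2) / 2)            # eq. (20)

def hu1(b):
    """First Hu invariant moment: eta_{2,0} + eta_{0,2}."""
    return (normalized_central_moment(b, 2, 0)
            + normalized_central_moment(b, 0, 2))

# The invariant is unchanged when the shape is translated, which is
# the property the classifier of Step 6 relies on.
shape = np.zeros((12, 12), dtype=float)
shape[2:5, 2:7] = 1.0                        # a 3x5 block
moved = np.roll(shape, (5, 4), axis=(0, 1))  # same block, shifted
```

Because the central moments are taken about the object's own center of mass and normalized by μ_{0,0}, the feature value is identical for the original and the shifted block.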

The fourteenth figure is a schematic diagram of the moving image and the moving sensing block of the interactive method after segmentation.

The fifteenth figure is a schematic diagram of the recognition result of the moving sensing block of the interactive method.

[Description of Main Component Symbols]

Personal computer ------------------ 10
Image projection device ------------ 11
Image area ------------------------- 11a
Camera ----------------------------- 12
Image capture card ----------------- 13
Moving image ----------------------- 20
Moving sensing block --------------- 21

Claims (1)

1298856 、申請專利範圍: -種被動紅科雜_倾妓,其线_方法如下: 步驟-:輯職娜簡裝级槪雜赋找像作為基準參 考影像(5x5灰階值); 步驟二:_雜賴娜影像投錄錄駐鱗輯之即時影像 (⑽灰階值)’檢驗是否有外物接觸感舰域; " 以上步驟-之鲜參考影像與挪二之㈣影像的差異 值可由式子(1)表示: DIFF(x9y) =| REF{x,y) — |_________ (1) 步驟三:將步驟戈基準參考影像各灰階值與麵二之即時影像各灰階 值相減,即可得到剩餘之影像灰階值分佈,即表示有外物; 步驟四··經步驟三相減後之影像,通常會有雜訊存在,即由式子⑵ BIN{x,y) = i255 DIFF{^y) > T* 1 0 DIFF(x9y)<T* -(2) 二值化的方法消除雜點的影響;其中,广為門檻值,在 8bits灰階影像中,門檻值的範圍& 〇〜255之間,·而最佳門檻值 的決定方式可由統計的方式求得,其最佳Η檻值驗谷位置的 灰1¾值,決定r即可將影像分割成二區^,其最佳門檻值广的 條件為Cl内的變異數加上q内的變異數之和為最扣假設影像 的大小#:=5><5,且8bits灰階影像的灰階值個數為1=256,則 灰階值為I的機率可表示為 m N <3) 17 1298856 此處%表示灰階值i在影像中出現的次數,且i的範圍介 於0SKJ-1,依據機率原理可得知 Σ^(〇=ι-----------------------------------------------------------------(4) i=0 假設q内的像素個數佔的比率為 * (5) ⑹ W^VxiCO^Pii)------------------- /=01298856, the scope of application for patents: - a kind of passive red branch _ 妓 妓, its line _ method is as follows: Step -: the collection of the syllabus 简 槪 赋 赋 找 找 找 as the reference reference image (5x5 grayscale value); Step 2: _ The video of the Ryanna image recorded in the scales ((10) grayscale value) 'tests whether there is a foreign contact sense shipfield; " The above steps - the difference between the fresh reference image and the second (4) image can be Sub-(1) means: DIFF(x9y) =| REF{x,y) — |_________ (1) Step 3: Subtract the grayscale values of the step reference image from the grayscale values of the instant image of face 2, The remaining image grayscale value distribution can be obtained, that is, there is a foreign object; Step 4·· After the three-step subtraction of the image, there is usually a noise, that is, by the formula (2) BIN{x, y) = i255 DIFF{^y) > T* 1 0 DIFF(x9y)<T* -(2) Binarization method to eliminate the influence of noise; among them, the threshold is wide, in the 8bits grayscale image, the threshold value The range of & 〇 ~ 255, and the optimal threshold value can be determined by statistical methods, the best Η槛 value of the valley position gray 1 3⁄4 value, the r can be divided into two regions ^, the optimal threshold 
value is the condition that the variation in Cl plus the variation in q is the size of the most hypothetical image #:=5><5, and the number of grayscale values of the 8bits grayscale image is 1=256, then the probability of the grayscale value of I can be expressed as m N < 3) 17 1298856 where % represents the grayscale value i in the image The number of occurrences, and the range of i is between 0SKJ-1, according to the probability principle, you can know Σ^(〇=ι------------------------ -----------------------------------------(4) i=0 Assume q The ratio of the number of pixels is * (5) (6) W^VxiCO^Pii)------------------- /=0 而C2内的像素個數佔的比率為 ^2=Pr(C2)= XP(〇----------------- 本 i=T +1 這裡亦滿足%+%=1, 接下來,也可算出(^的期望值 u4f X i ⑺ 而〇2的期望值為 7-1The ratio of the number of pixels in C2 is ^2=Pr(C2)= XP(〇----------------- this i=T +1 here also satisfies %+ %=1, next, you can also calculate (^ the expected value u4f X i (7) and the expected value of 〇 2 is 7-1 % = Σ i=T*+l (8) 利用式子⑺和式子(8)可求得q和C2的變異數為 /=0 im 0-2= Σ 0-^2) i=T*+l 2m ⑼ (10) 則^和心的變異數和為 a2w=W^+W2a22---------- 18 (11) a298856 接著’只要將〇〜255之間的數值代人式子(n)巾,使式子 (11)有最小值者就是最佳門檻值广; ’、雖、、、工步驟ι值化後所殘留的雜訊已消除,惟,移動物體會有 些許的殘破,此種現象係以四連通遮罩及其雜、侵蚀演算法 來加以去除; 膨脹的演算法如下··當遮罩MWJ·) == 255時,便設定其四鄰 點位置的遮罩 K(U -\)^Mb(ij +1).} = Mb(. 
+ u) = 255------------(12) 侵蝕的演算法如下:當遮罩時= 〇,便設定其四鄰 點位置的遮罩 ub (hj -l) Mb (ij +1) = ^ (/ _ ij) = Mb (/ + U) ^ 0 (13) 將上述的遮罩與二值化後的影像作迴旋積分即可消除破 碎的現象; 乂驟/、·接著,便可利用側邊遮罩來取得移動物艘的輪廓,此處,我們 將採用Sobel(影像輪廓運算遮罩)遮罩來究成物體輪廓的取得; 係將Sobel(影像輪廓運算遮罩)遮罩與即時影像作迴旋積 分,如式子(14)(15)所示: G, (x5^) = (NEW(x -+1) + 2 x NEW{x.y + D + NEW{% + Ιγ + 〇) "-(14) (娜(X — 1,卜 ” + 2 χ 響⑽一 D + + L少- D) Gy(hj) = (NEW(χ + ly-\) + 2xNEW(x + l?y)+ ^EW^X + l'y + ^ -(15) (歷㈣汐-1) + 2 x慶(x - 1沙)+,齡一以+ ”) 19 (16)1298856 利用式子(16)便可得到所擷取影像的邊緣 G(x,少)=Gy(x,y)2------------------ 將上述之邊緣影像二值化 £(^) = ]255 G(x,y)>T* 1° G{x,y)<T: (17)% = Σ i=T*+l (8) Using equation (7) and equation (8), the variance of q and C2 can be found as /=0 im 0-2= Σ 0-^2) i=T* +l 2m (9) (10) Then the sum of the sum of the heart and the heart is a2w=W^+W2a22---------- 18 (11) a298856 Then 'as long as the value between 〇~255 is substituted Formula (n) towel, so that the minimum value of the formula (11) is the optimal threshold value; ', although,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Xu is broken, this phenomenon is removed by the four-connected mask and its miscellaneous and erosion algorithm; the expansion algorithm is as follows: · When the mask MWJ·) == 255, the mask of the four neighbors is set. Cover K(U -\)^Mb(ij +1).} = Mb(. + u) = 255------------(12) The algorithm of erosion is as follows: when masking = 〇, set the mask of its four neighbors ub (hj -l) Mb (ij +1) = ^ (/ _ ij) = Mb (/ + U) ^ 0 (13) The above mask and two The valued image is used as a convolution integral to eliminate the fragmentation phenomenon. Steps/, · Then, you can use the side mask to get the outline of the moving object. Here, we will use Sob. 
The el (image contour calculation mask) mask is used to obtain the contour of the object; the Sobel (image contour calculation mask) mask is rotated and integrated with the instant image, as shown in equation (14) (15): G , (x5^) = (NEW(x -+1) + 2 x NEW{xy + D + NEW{% + Ιγ + 〇) "-(14) (Na (X-1, Bu) + 2 χ (10) One D + + L is less - D) Gy(hj) = (NEW(χ + ly-\) + 2xNEW(x + l?y)+ ^EW^X + l'y + ^ -(15) (4) 汐-1) + 2 x Qing (x - 1 sand) +, age one by + ") 19 (16) 1298856 Use the formula (16) to get the edge of the captured image G (x, less) = Gy(x,y)2------------------ Binarize the above edge image by £(^) = ]255 G(x,y)>T* 1° G{x,y)<T: (17) 其中?;為最佳門檻值,求取最佳門檻值之方法和先前相同丨接 著,將即時影像的二值化輪廓圖別^力與相減後之二值化影像 作交集的動作後,移動物體的外圍輪廓即可求得; 步驟七:感應檢測移動物體之外圍輪麟點的座標是否接觸到感應區域 與執行對應之動作者; 步驟八:重複上述之所有步驟。 2、-種互動式之即時影像辨識軟體方法,其主要辨識方法如下: 步驟一:以攝影機擷取影像投射裝置投射至影像區域之影像作為基準參 考影像; 土夕among them? For the optimal threshold value, the method of obtaining the optimal threshold value is the same as the previous one. Then, after the binarized contour map of the real-time image is intersected with the subtracted binarized image, the moving object is moved. The peripheral contour can be obtained; Step 7: Inductively detecting whether the coordinates of the peripheral wheel of the moving object contact the sensing area and the corresponding actor; Step 8: Repeat all the above steps. 2. 
An interactive real-time image recognition software method, the main identification methods are as follows: Step 1: Using the camera to capture the image projected by the image projection device to the image area as a reference reference image; 步驟二··肩職顿擷取影雜職置㈣至影像區域之g卩時影像, 其中,影像財活_像,檢驗是砰外物接__應區塊; 以上步驟-之基準參考影像與步驟二之㈣f彡像的差異 值可由式子(1)表示: " DIFF(x,y) =| REF(xyy)^NEW(x,y) |----------- 步驟三 (1) 將步驟-之基準參考影像各灰階值與步驟二之即時影像各灰階 值相減,即得卿餘之影像灰階值分佈,通常會麵訊存在, 即由式子(2) 子’ 20 -(2) -(2) BIN{x,y) = J ^55 ^^F(x,y) > T* 0 DIFF{x,y)<rStep 2························································································ The difference value from the (4) f彡 image of step 2 can be expressed by the formula (1): " DIFF(x,y) =| REF(xyy)^NEW(x,y) |---------- - Step 3 (1) Subtract the grayscale values of the reference reference image from the reference image and the grayscale values of the instant image in step 2, that is, the grayscale value distribution of the image of the image is obtained, usually the surface signal exists, that is, Sub(2) child ' 20 -(2) -(2) BIN{x,y) = J ^55 ^^F(x,y) > T* 0 DIFF{x,y)<r 1298856 一值化的方法消除雜點的影響; 步驟四:經二崎,咖峨_軸爾活動感應區 塊,可精由線段編躲將活動影像與活動感應區塊分割出來, ^線段編躲是—種鱗段儲存的綠齡無巾每-點的 資料,在第1行侧到有一列分割影像,就把它視為第-個物 體中的第-列,符號記下Η,接著,在第2行_到有兩列, 第一列因處於14的下方,所以記做1-2 ;而第二列為一新的 物體,所以記做2],如此侧到第4行時魏,只有一列且 位於物體1及物體2之下方,所以原先視為兩個物體之影像原 來為-物體,但,先記做Μ,等待全部影像掃描完成之後, 再作合併的動作者; 其中’每個物體儲存的資訊,包括有:面積區域、周長、 物體特徵、分割之影像大小、寬度以及物體之總數者; 步驟五:當活動影像無域舰塊被分·之後,接著,就要計算每 個物體的特徵值,係採用七個不變矩來表示物體的特徵,其象 解過程如下: 一個二值化影像如㈨的& + /)階矩定義為 ΜΑΝΑ ^ ^ --------------------------------- m=0n=0 而,其中心矩的定義可表示為 21 (18)1298856 One-valued method to eliminate the influence of noise; Step 4: After the two-saki, curry_axis activity-sensing block, you can separate the moving image from the active sensing block by the line segment, and the line segment is hidden. Yes - the data of the green age without a towel stored in the scales. 
When a run of the segmented image is found in row 1, it is regarded as the first run of the first object and labeled 1-1. In row 2 two runs are found: the first lies below run 1-1, so it is labeled 1-2, while the second belongs to a new object, so it is labeled 2-1. When row 4 is reached there is only one run, lying below both object 1 and object 2, so the image originally regarded as two objects is in fact a single object; it is first given a provisional label, and the merging is performed after the whole image has been scanned.

The information stored for each object includes: area, perimeter, object features, size and width of the segmented image, and the total number of objects;

Step 5: After the moving image and the active sensing blocks have been separated, the feature values of each object are computed. Seven invariant moments are used to represent the features of an object, and they are derived as follows: the (p+q)-th order moment of a binarized image f(m,n) of size M×N is defined as

m_pq = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} m^p n^q f(m,n) ------ (18)

and the central moments, taken about the centroid (x̄, ȳ) = (m10/m00, m01/m00), can be expressed as

μ_pq = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} (m − x̄)^p (n − ȳ)^q f(m,n)
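A minimal numpy sketch of the raw moments of equation (18) and of the central moments taken about the centroid; the seven invariant features built from such central moments correspond to the classical Hu moment construction, and the helper names here are illustrative, not from the patent.

```python
import numpy as np

def raw_moment(f, p, q):
    """m_pq: sum over all pixels of m^p * n^q * f(m, n) -- equation (18)."""
    M, N = f.shape
    m = np.arange(M).reshape(-1, 1)   # row coordinate
    n = np.arange(N).reshape(1, -1)   # column coordinate
    return float(np.sum((m ** p) * (n ** q) * f))

def central_moment(f, p, q):
    """mu_pq: the same sum taken about the centroid, which makes the
    resulting features invariant to translation."""
    m00 = raw_moment(f, 0, 0)
    xbar = raw_moment(f, 1, 0) / m00
    ybar = raw_moment(f, 0, 1) / m00
    M, N = f.shape
    m = np.arange(M).reshape(-1, 1) - xbar
    n = np.arange(N).reshape(1, -1) - ybar
    return float(np.sum((m ** p) * (n ** q) * f))
```

For a binary object, m00 is simply its area, and mu10 and mu01 vanish by construction, which is a quick sanity check on any implementation.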
TW095108643A 2006-03-14 2006-03-14 Passive and interactive real-time image recognition software method TW200734967A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW095108643A TW200734967A (en) 2006-03-14 2006-03-14 Passive and interactive real-time image recognition software method

Publications (2)

Publication Number Publication Date
TW200734967A TW200734967A (en) 2007-09-16
TWI298856B true TWI298856B (en) 2008-07-11

Family

ID=45069517

Family Applications (1)

Application Number Title Priority Date Filing Date
TW095108643A TW200734967A (en) 2006-03-14 2006-03-14 Passive and interactive real-time image recognition software method

Country Status (1)

Country Link
TW (1) TW200734967A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110755105A (en) * 2018-07-26 2020-02-07 台达电子工业股份有限公司 Detection method and detection system
CN110755105B (en) * 2018-07-26 2023-12-08 台达电子工业股份有限公司 Detection method and detection system


Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees