TW200836803A - Sphere identification system - Google Patents

Sphere identification system

Info

Publication number
TW200836803A
TW200836803A TW96109009A
Authority
TW
Taiwan
Prior art keywords
sphere
image
target
edge
subject
Prior art date
Application number
TW96109009A
Other languages
Chinese (zh)
Other versions
TWI316863B (en)
Inventor
Guo-Shing Huang
Cheng-Chang Chen
Cen-Yan Zhou
Original Assignee
Nat Univ Chin Yi Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nat Univ Chin Yi Technology filed Critical Nat Univ Chin Yi Technology
Priority to TW96109009A priority Critical patent/TWI316863B/en
Publication of TW200836803A publication Critical patent/TW200836803A/en
Application granted granted Critical
Publication of TWI316863B publication Critical patent/TWI316863B/en

Abstract

A sphere identification system is provided. The method includes capturing an original image of the subject; converting it into gray-level image data; performing edge detection to extract the contour of the subject; dilating the contour; applying a threshold to obtain the rough outline of the subject; applying a convex-hull operation to fill the interior of the outline; removing noise around the object; and trimming the sharp points from the edge. The identification system then determines whether the subject is the predetermined ball from the computed center point and its X and Y coordinates.

Description

DESCRIPTION OF THE INVENTION

TECHNICAL FIELD OF THE INVENTION

The present invention relates to an image processing system, and more particularly to a sphere identification method for detecting tennis balls.

PRIOR ART

With the advance of technology, the leisure and service industries have developed rapidly. If service tasks can be carried out by robots, they can be automated while also providing entertainment value. In recent years the government has systematically supported research and development of robots, and the present work arises in that context.

The use of robots is no longer limited to industry. In service fields such as leisure and entertainment, many intelligent robots already serve the public in daily life, for example floor-cleaning robots and tea-serving robots.

Modern life is busy and stressful, and many people rely on sport to relieve tension, so ball games have become increasingly popular with office workers. Taking tennis as an example, during practice the court becomes covered with balls that a ball boy must pick up one by one. Carrying the load on the back for long periods often causes friction tears and serious spinal injury. To reduce the ball boy's load and prevent such injury, a robot able to play the role of a ball boy is desired to replace manual collection of tennis balls. However, such a robot must be able to judge accurately whether an object to be picked up is in fact the predetermined ball, which leads to the present invention.

SUMMARY OF THE INVENTION

It is therefore an object of the present invention to provide a sphere identification method that captures the shape of a target and confirms whether it is the predetermined sphere, so that a tennis ball can be identified accurately.

According to the above object, a sphere identification method is proposed for capturing the shape of a target and confirming whether it is the predetermined sphere. The method comprises the following steps:

Step (A): capture the original image of the target.
Step (B): convert the original image into gray-level image data.
Step (C): perform edge detection on the gray-level image data to extract the contour of the target.
Step (D): dilate the edge-detected data to expand the contour of the target.
Step (E): perform thresholding (binarization) to obtain the rough outline of the target.
Step (F): perform a convex-hull operation to fill the interior vacancies of the rough outline.
Step (G): remove noise around the image object.
Step (H): trim the sharp parts of the edge.
Step (I): compute the center point of the target together with its X and Y coordinates, from which it can be decided whether the target is the predetermined sphere.

In the sphere identification method of the present invention, applied to a tennis-ball-collecting robot, a color original image of the target is captured and then processed in sequence through gray-level conversion, edge detection, dilation, thresholding, convex-hull filling, noise removal and corner trimming; finally the center coordinates are taken to confirm whether the object is a tennis ball the robot should pick up.

EMBODIMENTS

Referring to Fig. 1, which is a flow chart of the steps of one embodiment of the sphere identification method of the present invention.

Referring to Fig. 2, which shows the color original image captured in flow 110 of Fig. 1. Fig. 2 is reproduced in gray scale; the color version is given in Annex 1.

The sphere identification method of this embodiment captures the shape of a target 100 and confirms whether it is the predetermined sphere 100' (see Fig. 11); here it is used to recognize the shape of a tennis ball. The method comprises the following steps.

Step one: referring to flows 110 and 111 of Fig. 1, Fig. 2 and Annex 1, the color original image of the target 100 is first captured, including the color, shape, size and seam pattern of the tennis ball, and a chroma analysis is performed.

Referring to Fig. 3, which corresponds to the color-to-gray conversion of flow 120 in Fig. 1. Fig. 10 is the histogram obtained after the processing of flow 120.

Step two: referring to flows 120 and 121 of Fig. 1, Fig. 3 and Annex 1, the original image of the target 100 is converted into gray-level image data. Processing a color image takes roughly three times as long as processing a gray-level or black-and-white image, so converting the color image to gray level speeds up program execution. The image contains the three colors R, G and B, so within the Vision gray-level (Gray) functions it can be separated into an RGB-Red Plane, an RGB-Green Plane and an RGB-Blue Plane. Inspection of the rough results after gray morphology shows that the blue plane is rather dark in tone while the green plane is moderate and clear; in this embodiment the gray-level image is therefore taken from the green plane.
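The following is a minimal sketch of the plane separation described in step two, assuming OpenCV (which stores channels in B, G, R order); the file name is hypothetical and not part of the patent.

```python
import cv2

color = cv2.imread("tennis_court.jpg")                 # hypothetical capture from the CCD camera
blue_plane, green_plane, red_plane = cv2.split(color)  # the three gray-level planes

# The embodiment keeps the green plane, which it found to be the most
# moderate and clear of the three.
gray = green_plane
```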
Referring to Fig. 4, which corresponds to the edge-detected and dilated result of flows 130 and 140 in Fig. 1.

Step three: referring to flow 130 of Fig. 1, Fig. 4 and Annex 1, edge detection is performed on the gray-level image data to extract the contour of the target 100.

Step four: referring to flow 140 of Fig. 1, Fig. 4 and Annex 1, the edge-detected data is then dilated to expand the contour of the target.

Edge detection is essentially a feature-extraction description of the image: the amount of data is greatly reduced compared with the original image, which makes describing the image and the later processing stages faster and more convenient.

The present invention locates the edges of the image using the Sobel filter, which consists of the two convolution kernels

$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad G_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix},$$

with the gradient magnitude approximated by $|G| \approx |G_x| + |G_y|$.

$G_x$ and $G_y$ measure the variation of the image object in the X and Y directions respectively. If the eight pixels neighbouring a pixel all have the same value, then $\sum G_x = 0$ and $\sum G_y = 0$, which means there is no edge change at that pixel. If the values to the left and right differ, the $G_x$ output is guaranteed to be non-zero, while $G_y$ remains zero until the values above and below differ.
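A small worked check of this behaviour, using the two kernels above; the patch values are arbitrary and chosen only for illustration.

```python
import numpy as np

GX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]])
GY = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]])

uniform = np.full((3, 3), 7)          # all eight neighbours equal the centre pixel
vertical_edge = np.array([[1, 1, 9],  # left and right columns differ
                          [1, 1, 9],
                          [1, 1, 9]])

print((GX * uniform).sum(), (GY * uniform).sum())              # 0 0   -> no edge
print((GX * vertical_edge).sum(), (GY * vertical_edge).sum())  # 32 0  -> vertical edge
```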

Labelling the pixels of the 3×3 neighbourhood $A_1 \ldots A_9$ row by row (so that $A_5$ is the pixel being processed), and using the kernels above, the approximation can be written as

$$|G| = \left| (A_1 + 2A_2 + A_3) - (A_7 + 2A_8 + A_9) \right| + \left| (A_3 + 2A_6 + A_9) - (A_1 + 2A_4 + A_7) \right|.$$

Applying this kernel yields Fig. 4, in which the edge of the object (the target) is clearly brought out.

Gray morphology has two most basic operations: erosion and dilation. Both operations involve two images, one of which, called the active image, is the image on which the erosion or dilation is carried out. The effect of dilation can also be seen in Fig. 4. Its advantage is that, although edge detection already reveals the edge of the tennis ball, the edge is not prominent enough, so it is dilated to make the subsequent image processing easier to compute.
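A sketch of steps three and four under this approximation, written with OpenCV. cv2.Sobel with a 3-by-3 aperture applies the Gx and Gy kernels given above; the 3-by-3 structuring element and the single dilation pass are assumptions of this sketch rather than parameters taken from the patent.

```python
import cv2
import numpy as np

def edge_detect_and_dilate(gray):
    """Step three: |G| ~= |Gx| + |Gy|; step four: dilate the result."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = cv2.convertScaleAbs(np.abs(gx) + np.abs(gy))   # back to 8-bit
    dilated = cv2.dilate(magnitude, np.ones((3, 3), np.uint8), iterations=1)
    return dilated
```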

Referring to Fig. 5, which corresponds to the binarized result of flow 150 in Fig. 1.

Step five: referring to flow 150 of Fig. 1, Fig. 5 and Annex 1, thresholding (binarization) is carried out to obtain the rough outline of the target 100.

Binarization can also be called gray-level thresholding. Generally, when binarizing a graphic image, the pixel values are concentrated at the two extremes of roughly 0 to 28 and 225 to 255. Thresholding divides the gray scale into two values: where the gray value of the object image is above the set value the pixel is made a bright point, and where it is below the set value the pixel is made a dark point. The problem most often encountered is the distortion noise produced after the image has been filtered; fortunately, after binarization these difficulties in spatial filtering are greatly reduced. Let m be the binarization threshold (thresholding value); the gray-level threshold of the image is set as

$$m = \frac{1}{n} \sum_{i=1}^{n} f(x, y),$$

where f is the input image, n is the total number of pixels, and f(x, y) is the gray value at pixel coordinate (x, y). The pixel values from 0 to 255 are then divided into 0 and 255, so that the image pixels take only the two gray levels 0 and 255. Fig. 5 shows the rough outline after binarization. Experimentally, the original image file size of 75.5 KB dropped to 21.3 KB after binarization. Because a binarized image is easier to store, process and recognize, binary image signal processing occupies a very important place in image recognition and related processing.
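A minimal sketch of this thresholding rule in NumPy. Reading m as the mean gray value of the image (the 1/n factor) is an assumption based on the definitions of n and f(x, y) above.

```python
import numpy as np

def binarize(gray):
    """Split the gray levels into the two values 0 and 255 using m as the threshold."""
    m = gray.sum() / gray.size        # m = (1/n) * sum of f(x, y) over the n pixels
    return np.where(gray > m, 255, 0).astype(np.uint8)
```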

Referring to Fig. 6, which corresponds to the result of the convex-hull processing of flow 160 in Fig. 1.

Step six: referring to flow 160 of Fig. 1, Fig. 6 and Annex 1, a convex-hull operation is performed to fill the interior vacancies of the rough outline of the target 100.

Referring to Fig. 7, which corresponds to the removal of external image noise in flow 170 of Fig. 1.

Step seven: referring to flow 170 of Fig. 1, Fig. 7 and Annex 1, the noise 101 around the image object is removed (see Fig. 6).

Referring to Fig. 8, which corresponds to the corner-trimming of flow 180 in Fig. 1, in which the sharp parts of the edge are cut off.

Step eight: referring to flow 180 of Fig. 1, Fig. 8 and Annex 1, the sharp parts of the edge of the target 100 are cut off.

Referring to Fig. 9, which corresponds to the result obtained by taking the center point, X coordinate and Y coordinate in flow 190 of Fig. 1.

Step nine: referring to flow 190 of Fig. 1, Fig. 9 and Annex 1, the center point 102 of the target 100 and its X and Y coordinates are determined, from which it can be judged whether the target is the predetermined sphere.

Referring to Fig. 10, which is the histogram obtained after the processing of flow 120. The Vision Color Threshold display shows the pixel-intensity distribution of the image, from which the gray levels and contrast of the whole image can be read. The color model used for this histogram is HSI (hue, saturation, intensity); the horizontal axis represents the gray value from 0 to 255 according to brightness, with smaller values darker and larger values brighter. HSI analysis can therefore be used to examine the current gray-level distribution of the image captured by the global CCD camera 230 (see Fig. 11) after edge detection, as a later stage of the binarization processing.
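A combined sketch of steps six to nine with OpenCV. Keeping only the largest contour stands in for the noise removal of step seven, and the morphological opening stands in for the corner trimming of step eight; both choices are assumptions of this sketch rather than the exact operations of the patent.

```python
import cv2
import numpy as np

def hull_and_center(binary):
    """Convex-hull fill, noise removal, corner trim, then the center point (X, Y)."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)        # drop small noise blobs
    hull = cv2.convexHull(blob)                      # step six: fill interior vacancies
    mask = np.zeros_like(binary)
    cv2.drawContours(mask, [hull], -1, 255, cv2.FILLED)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,    # round off sharp protrusions
                            np.ones((5, 5), np.uint8))
    mu = cv2.moments(mask, binaryImage=True)         # step nine: centroid
    if mu["m00"] == 0:
        return None
    return mu["m10"] / mu["m00"], mu["m01"] / mu["m00"]
```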
Referring to Fig. 11, which is an external perspective view of a tennis-ball-collecting robot 200 in which the sphere identification method is applied.

Referring to Figs. 1 and 11, the sphere identification method of the present invention consists of software and hardware; the foregoing describes the operation of the software. The hardware comprises a microcomputer unit 210, an image capture card 220 and a CCD camera 230 mounted on the body of the robot 200.

When the CCD camera 230 captures the image of the target, such as the color, shape, size and seam pattern of the tennis ball 100', the microcomputer unit 210 drives the lens of the CCD camera 230 to follow the movement of the tennis ball 100', so that the ball stays at the center of the field of view (FOV) and its image is acquired correctly.

Summarizing the above, in the tennis-ball study carried out with the sphere identification method of the present invention, the identification parameters obtained from experiment were set to a match similarity of at least 850 points, high color sensitivity and a medium search strategy. Experiments confirm that the preset object (the tennis ball 100') can be identified quickly, taking only 0.640 seconds, far less than the 0.973 seconds the CCD needs to capture one frame.

In addition, pattern matching is the main function used to track objects in this system, and the function provided by LabVIEW 7.1 compares the object to be identified precisely against an image template. Although this gives accurate matching, it places relatively high demands on lighting and on the image of the object: in experiments, factors such as daytime versus night and whether the windows were curtained affected the success rate of the match. In particular, if the identified object is another kind of ball, the match fails and the system loses the object it should lock onto. The sphere identification method of the present invention therefore relies on a robust algorithm to obtain accurate identification results.
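The tracking itself relies on the pattern-match function of LabVIEW 7.1, which is not reproduced here. As a rough stand-in, the sketch below uses OpenCV normalized cross-correlation template matching; the 0.85 acceptance score is only an assumed analogue of the 850-point similarity setting, and the function and argument names are this sketch's own.

```python
import cv2

def find_ball(frame_gray, template_gray, min_score=0.85):
    """Return the (x, y) center of the best template match, or None if the score is too low."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < min_score:
        return None          # match failed; the tracker would report a lost target
    h, w = template_gray.shape
    return max_loc[0] + w // 2, max_loc[1] + h // 2   # center of the matched region
```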

In the present invention, the sphere 100 described above refers to a tennis ball to be picked up; the sphere identification method of the invention may equally be used to pick up other kinds of balls, such as table-tennis balls or golf balls.

Although the present invention has been disclosed above by way of an embodiment, the embodiment is not intended to limit the invention. Anyone skilled in the art may make various changes and modifications without departing from the spirit and scope of the invention, and the scope of protection of the invention is therefore defined by the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

To make the above and other objects, features, advantages and embodiments of the present invention more comprehensible, the accompanying drawings are described as follows:

Fig. 1 is a flow chart of the steps of the sphere identification method according to an embodiment of the present invention.
Fig. 2 is the color original image captured in flow 110 of Fig. 1.
Fig. 3 is the color-to-gray conversion corresponding to flow 120 of Fig. 1.
Fig. 4 is the result of edge detection and dilation corresponding to flows 130 and 140 of Fig. 1.
Fig. 5 is the binarized result corresponding to flow 150 of Fig. 1.
Fig. 6 is the result of convex-hull processing corresponding to flow 160 of Fig. 1.
Fig. 7 shows the removal of external image noise corresponding to flow 170 of Fig. 1.
Fig. 8 shows the corner-trimming corresponding to flow 180 of Fig. 1, in which the sharp parts of the edge are cut off.
Fig. 9 shows the result obtained by taking the center point, X coordinate and Y coordinate in flow 190 of Fig. 1.
Fig. 10 is the histogram obtained after the processing of flow 120.
Fig. 11 is an external perspective view of a tennis-ball-collecting robot in which the sphere identification method is applied.
Annex 1: color versions of Figs. 2 to 10.

DESCRIPTION OF MAIN REFERENCE NUMERALS

100: target; 101: noise; 102: center point; 100': tennis ball; 110, 111, 120, 121, 130, 140, 150, 160, 170, 180, 190: flows; 200: robot; 210: microcomputer unit; 220: image capture card; 230: CCD camera.

Claims (1)

1. A sphere identification method for capturing the shape of a target and confirming whether it is a predetermined sphere, the method comprising the following steps:
Step (A): capturing an original image of the target;
Step (B): converting the original image into gray-level image data;
Step (C): performing edge detection on the gray-level image data to extract a contour of the target;
Step (D): dilating the edge-detected data to expand the contour of the target;
Step (E): performing thresholding to obtain a rough outline of the target;
Step (F): performing a convex-hull operation to fill the interior vacancies of the rough outline;
Step (G): removing noise around the image object;
Step (H): cutting off the sharp parts of the edge; and
Step (I): determining the center point of the target together with its X coordinate and Y coordinate, from which it is judged whether the target is the predetermined sphere.

2. The sphere identification method of claim 1, wherein in step (A) a color original image of the target is captured.

3. The sphere identification method of claim 1, wherein in step (B) the gray-level image is taken from the green plane.

4. The sphere identification method of claim 1, wherein the sphere identification method is used to identify a tennis ball.
TW96109009A 2007-03-15 2007-03-15 Sphere identification system TWI316863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW96109009A TWI316863B (en) 2007-03-15 2007-03-15 Sphere identification system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW96109009A TWI316863B (en) 2007-03-15 2007-03-15 Sphere identification system

Publications (2)

Publication Number Publication Date
TW200836803A true TW200836803A (en) 2008-09-16
TWI316863B TWI316863B (en) 2009-11-11

Family

ID=44819997

Family Applications (1)

Application Number Title Priority Date Filing Date
TW96109009A TWI316863B (en) 2007-03-15 2007-03-15 Sphere identification system

Country Status (1)

Country Link
TW (1) TWI316863B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI394841B (en) * 2009-03-18 2013-05-01 China Steel Corp Methods for monitoring blast furnace tuyere

Also Published As

Publication number Publication date
TWI316863B (en) 2009-11-11

Similar Documents

Publication Publication Date Title
WO2021138995A1 (en) Fully automatic detection method for checkerboard corners
CN105608671B (en) A kind of image split-joint method based on SURF algorithm
US8036458B2 (en) Detecting redeye defects in digital images
CN109271937A (en) Athletic ground Marker Identity method and system based on image procossing
TWI358674B (en)
CN115063421B (en) Pole piece region detection method, system and device, medium and defect detection method
CN110008968B (en) Automatic triggering method for robot settlement based on image vision
KR20210084449A (en) Target object recognition system, method, apparatus, electronic device and recording medium
CN104065872B (en) Moving image extraction element, moving image extracting method and recording medium
CN109951635A (en) It takes pictures processing method, device, mobile terminal and storage medium
WO2017120796A1 (en) Pavement distress detection method and apparatus, and electronic device
CN109584215A (en) A kind of online vision detection system of circuit board
CN106529531A (en) Chinese chess identification system and method based on image processing
JP4691570B2 (en) Image processing apparatus and object estimation program
US20160205283A1 (en) Method and apparatus for inspecting an object employing machine vision
TW200836803A (en) Sphere identification system
CN111476056B (en) Target object identification method, device, terminal equipment and computer storage medium
CN108335308A (en) A kind of orange automatic testing method, system and intelligent robot retail terminal
CN116500052A (en) Edible oil impurity visual detection system and application method thereof
CN114018946B (en) OpenCV-based high-reflectivity bottle cap defect detection method
Wang et al. Deep learning-based human activity analysis for aerial images
JP4775599B2 (en) Eye position detection method
CN114022468A (en) Method for detecting article leaving and losing in security monitoring
CN111563869B (en) Stain test method for quality inspection of camera module
JP4831344B2 (en) Eye position detection method