TW200824636A - Image recognition method for detecting alimentary tract - Google Patents

Image recognition method for detecting alimentary tract

Info

Publication number
TW200824636A
TW200824636A TW95145011A
Authority
TW
Taiwan
Prior art keywords
image data
value
image
input
threshold
Prior art date
Application number
TW95145011A
Other languages
Chinese (zh)
Inventor
Shaou-Gang Miaou
Jenn-Lung Su
Rong-Sheng Liao
Feng-Ling Zhang
Xu-Yao Cai
Original Assignee
Chung Shan Inst Of Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chung Shan Inst Of Science filed Critical Chung Shan Inst Of Science
Priority to TW95145011A priority Critical patent/TW200824636A/en
Publication of TW200824636A publication Critical patent/TW200824636A/en


Abstract

The invention relates to an image recognition method for detecting the alimentary tract. The method first receives a first image data of a sequence of data, determines whether the image data meets a threshold value according to a plurality of determination algorithms and, when it does, stores the first image data and inputs a second image data. Because the several recognition methods share features in some of their stages, the method can analyze multiple diseases at the same time, remove repeated computation to shorten processing time, and integrate the different recognition methods, thereby reducing the computational load of the system and increasing its computational efficiency.

Description

IX. Description of the Invention:

[Technical Field of the Invention]

The present invention relates to an identification method, and more particularly to an image recognition method for detecting the alimentary tract.

[Prior Art]

Medical personnel have been examining the digestive tract since as early as 1795, but the early examination instruments were crude and inconvenient to use, and the examination could only be made from the front end or the rear end of the digestive tract. To improve the convenience of examination the rigid endoscope was devised; it, however, suffered from light-source and handling problems and offered limited visibility. After optical-fiber image transmission matured, the flexible endoscope overcame the insufficient flexibility of the early rigid endoscope.

Although the endoscope removes the destructive character of a surgical examination, an examination through the mouth passes the throat, stomach and duodenum and can reach only about one meter beyond the pylorus, while an examination through the anus passes the rectum and colon and reaches at most the lower end of the small intestine; neither approach can reach the central portion of the small intestine, which is about six meters long. With further technical progress the capsule endoscope was therefore developed.

The capsule endoscope itself is a very precise electronic instrument about the size of a pill; it contains a lens, a wireless transmitter, an image sensor, an antenna and a power supply, among other components. It captures two color images per second, and the transmitter sends the image signal out of the human body over a radio link to a receiving device outside the body.

Because the capsule endoscope captures two color images per second and stays in the human body for about six to eight hours, it produces tens of thousands of photographs in total (two frames per second over six to eight hours amounts to roughly 43,000 to 58,000 frames). If a physician has to examine every one of these photographs to judge the patient's condition, the workload is considerable; and if each recognition system can perform a preliminary screening for only one condition, the screening for every other condition must be run separately, which wastes computation and increases the load on the system.

Therefore, a novel image recognition method for detecting the digestive tract that not only removes the time-consuming drawback of the conventional digestive-tract image recognition methods but can also recognize several conditions at the same time would solve the above problems.

[Summary of the Invention]

One object of the present invention is to provide an image recognition method for detecting the alimentary tract that can analyze several conditions at the same time and remove repeated computation, so as to reduce the processing time.

Another object of the present invention is to provide an image recognition method for detecting the alimentary tract that integrates different recognition methods, so as to reduce the computational load of the system and increase the computation rate.

In the image recognition method for detecting the alimentary tract of the present invention, a first image data of a sequence of data is first received; then, according to a plurality of determination methods, it is judged whether the image data meets a threshold value, and if so the first image data is stored and a second image data is input, so that multiple conditions can be identified.

Furthermore, the image recognition method for detecting the alimentary tract of the present invention can detect intestinal obstruction: it first judges whether the first image data meets a first threshold and, if so, stores the first image data and inputs a second image data for renewed identification; it then distinguishes whether the first image data meets a second threshold and, if so, stores the first image data and inputs the second image data for renewed identification; it then binarizes the first image data, counts the numbers of bright points and dark points of the first image data and, when the numbers of bright and dark points meet a third threshold, inputs the second image data for renewed identification; finally, it combines the different color-gamut values of the first image data to generate a gray-level co-occurrence matrix, inputs an input value derived from that matrix into a neural network to produce an output value and, when the output value meets a fourth threshold, stores the first image data and inputs the second image data for identification.

[Embodiments]

To give the examination committee a better understanding of the purposes, features and advantages of the present invention, preferred embodiments are described in detail below together with the accompanying drawings.

The image recognition method for detecting the alimentary tract of the present invention lets the system perform a preliminary identification of two or more conditions at the same time, with the final judgment made by medical personnel; this avoids running and correcting the identification separately for each single condition, which reduces the processing time, lowers the computational load of the system and increases the computation rate.

Please refer to the first figure, which is a flow chart of a preferred embodiment of the present invention. As shown in the figure, step S10 is executed first to input a sequence of image data; that is, a first image data of the sequence is received and converted into the hue, saturation and intensity color gamut.
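The cascaded screening just summarized can be pictured as a driver loop in which each stage either flags the current frame as abnormal, so that it is stored for the physician, or hands the frame to the next stage. The following minimal Python sketch shows only that control flow; the detector functions passed to it are hypothetical placeholders standing in for the four checks described below, and nothing in it is taken verbatim from the patent.

```python
from typing import Callable, Iterable, List

# A detector takes one frame (e.g. a NumPy RGB array) and returns True
# when the frame looks abnormal for its particular condition.
Detector = Callable[[object], bool]

def screen_sequence(frames: Iterable[object], detectors: List[Detector]) -> List[object]:
    """Run the cascade of preliminary detectors over a capsule-endoscope sequence.

    A frame flagged by any stage is stored for later review by medical
    personnel; otherwise the next frame is fetched, so the shared front-end
    work is not repeated once per disease.
    """
    stored = []
    for frame in frames:
        for detect in detectors:
            if detect(frame):          # this stage judges the frame abnormal
                stored.append(frame)   # keep it for the physician
                break                  # the remaining stages are skipped
        # in either case the loop continues with the next (second) image data
    return stored
```

A driver of this kind would be configured with the obstruction, bleeding, uniformity and white-point detectors sketched after the corresponding paragraphs below.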

In step S10 the first image data is converted from the red-green-blue (RGB) color gamut to the hue-saturation-intensity (Hue Saturation Intensity, HSI) color gamut. In this embodiment, pixels with the same hue and saturation are collected together to form an HS circle; the relationship between hue and saturation can thus be expressed as an HS circle (Hue Saturation circle), a circular color plate in which the hue runs from 0 to 360 degrees, arranged clockwise, and the saturation runs from 0% at the center outward to 100%. Furthermore, before the identification with the several algorithms is carried out, it should be noted that image data usually contains both subject and background information. What is wanted is the subject information; if all of the image content were identified directly, the result would easily be disturbed by the background, so the subject must be separated from the background before identification. Here gray-level binarization combined with a filter is used to frame the ROI (Region of Interest).

Step S12 is then executed to judge whether the first image data meets a first threshold. This step judges whether the digestive tract is in a state of intestinal obstruction; an obstruction appears yellow-green, and the lesion covers a large area with a distinct color, so that in the HSI color gamut in particular a computer can determine the condition more easily. As shown in the second A figure and the second B figure, which are the HS color-gamut diagrams of normal and abnormal intestinal images respectively, the image data of a normal intestine (second A figure) falls within a hue range starting at about 15 degrees with a saturation of 10% to 75%, whereas the image data of an abnormal intestine (second B figure) falls between a hue of 40 and 60 degrees with a saturation of 40% to 100%. When the first image data is input, its pixels are counted; when the proportion of pixels of the first image falling within the abnormal HS gamut range exceeds the first threshold, the first image data is judged to be abnormal image data. When the first image data is abnormal, the recognition system stores the first image data (step S16) for further diagnosis by medical personnel and inputs the second image data, that is, the next image in the sequence.
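A minimal sketch of this first-level check, written in Python with NumPy, is given below: the frame is converted to the HSI representation and the fraction of pixels whose hue lies between 40 and 60 degrees with saturation between 40% and 100% is compared against a ratio. The ratio value used here (0.3) and the per-pixel formulas are illustrative assumptions; the patent states only that the proportion must exceed the first threshold.

```python
import numpy as np

def rgb_to_hsi(rgb: np.ndarray):
    """Convert an (H, W, 3) uint8 RGB image to hue (degrees), saturation and intensity."""
    rgb = rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    minimum = np.minimum(np.minimum(r, g), b)
    saturation = np.where(intensity > 0, 1.0 - minimum / np.maximum(intensity, 1e-12), 0.0)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    hue = np.where(b > g, 360.0 - theta, theta)
    return hue, saturation, intensity

def obstruction_suspected(rgb: np.ndarray, ratio_threshold: float = 0.3) -> bool:
    """First-level check: flag a frame whose yellow-green pixel proportion is too high."""
    hue, sat, _ = rgb_to_hsi(rgb)
    abnormal = (hue >= 40) & (hue <= 60) & (sat >= 0.40) & (sat <= 1.0)
    return float(abnormal.mean()) > ratio_threshold   # proportion of abnormal-gamut pixels
```

In practice the check would be run only on the ROI pixels framed as described above rather than on the whole frame.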

When the first image data has not been judged to be an abnormal image, the next identification method is carried out. Step S14 is executed to distinguish whether the first image data meets a second threshold and, if so, to store the first image data. This embodiment uses the fuzzy C-means clustering algorithm (Fuzzy C-Means Clustering, FCM) to distinguish whether the first image data is red, that is, whether the digestive tract shows the color of bleeding or the color of the intestinal wall. Two points serve as the centers of gravity of the classification: the center of the bleeding-point group and the center of the non-bleeding group. When a pixel of the first image lies closer to one of these two centers, the pixel is assigned to the bleeding group or the non-bleeding group accordingly. If most of the pixels of the first image data belong to the bleeding group, that is, if the proportion of pixels falling into the bleeding group exceeds the second threshold, the first image data is judged to be abnormal image data and the recognition system stores it (step S16) for further diagnosis by medical personnel. In this embodiment the center of gravity of the bleeding group is (108.12, 41.993, 17.215), and the center of gravity of the non-bleeding group is obtained from the image data in the same way. Because, in a large capsule-endoscope image sequence, the range covered by a single red abnormal image cannot by itself approach representative values, the abnormal image files are processed in order: the initial cluster centers are replaced by empirical values and, as more bleeding images are processed, they gradually approach the most appropriate values. This is one possible way of obtaining the cluster centers, but the invention is not limited to it.

Because the first-level and second-level judgments have already excluded the large-area abnormal images, the method then moves on to finer image identification. The yellowish and greenish abnormalities have been picked out by the preceding front-end identification, and most of the remaining images are whitish. Applying the intestinal white-point identification to them directly is prone to error, so a preliminary screening is required. This screening differs from the preceding identification methods in that it mainly picks out normal images.
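A minimal sketch of the defuzzified second-level check is shown below: each ROI pixel joins whichever of the two groups has the nearer center of gravity, and the frame is flagged when the bleeding proportion is too large. The bleeding centroid is the value quoted above; the non-bleeding centroid, the color space of the pixel values and the proportion threshold (0.2) are placeholders for illustration, and the iterative fuzzy C-means update that refines the centroids over many bleeding images is not shown.

```python
import numpy as np

BLEEDING_CENTROID = np.array([108.12, 41.993, 17.215])   # value given in the embodiment
NON_BLEEDING_CENTROID = np.array([150.0, 110.0, 90.0])   # hypothetical placeholder value

def bleeding_suspected(pixels: np.ndarray, ratio_threshold: float = 0.2) -> bool:
    """Second-level check on an (N, 3) array of ROI pixel values.

    Each pixel is assigned to the group whose center of gravity is nearer
    (a hard assignment, i.e. the defuzzified result of the FCM clustering).
    """
    d_bleed = np.linalg.norm(pixels - BLEEDING_CENTROID, axis=1)
    d_wall = np.linalg.norm(pixels - NON_BLEEDING_CENTROID, axis=1)
    bleeding_ratio = float(np.mean(d_bleed < d_wall))
    return bleeding_ratio > ratio_threshold

# Usage sketch: bleeding_suspected(frame.reshape(-1, 3).astype(np.float64))
```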
The first image data is then binarized and the numbers of bright points and dark points of the first image data are counted, where binarizing the first image data divides its pixel values into bright points and dark points, that is, pixel values of 255 and 0. Step S20 is then executed to judge whether the numbers of bright points and dark points meet a third threshold, i.e. whether the proportion of bright points meets the third threshold; if so, the second image data is input for identification. For this step the first image data must first be converted from the red-green-blue (RGB) color gamut to the hue-saturation-intensity color gamut, and the hue channel is binarized according to a threshold value, which in this embodiment is 20. When the numbers of bright points and dark points meet the third threshold, the next image data is input so that the second image data can be identified; in other words, the first image data is treated as normal image data.

Moreover, because the image has already passed the first two identification methods, it is certain that, apart from normal images, no image with a large area of uniform color will reach this stage, which is why the simple hue conversion is sufficient here. The first two identification methods both aim at picking out abnormal images and pass the images judged normal to the next identification method for more accurate confirmation, whereas the third identification method, in order not to let an abnormal image slip through undetected, uses the widest threshold of the three levels: only images that are truly very uniform are picked out as normal and excluded from further identification, and images that may still be abnormal are passed on to the fourth-level identification method, which decides whether they are abnormal and should be displayed on the interface for the physician to examine.
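The third-level screening can be sketched as follows: the hue channel is binarized with the threshold value of 20 mentioned above, the bright (255) and dark (0) points are counted, and the frame is treated as a uniform, normal image when the bright-point proportion is high enough. The direction of the comparison and the bright-ratio value used here (0.95) are assumptions for illustration; the patent states only the hue threshold of 20 and a third threshold on the bright/dark proportion.

```python
import numpy as np

def hue_degrees(rgb: np.ndarray) -> np.ndarray:
    """Hue channel (0-360 degrees) of an (H, W, 3) uint8 RGB image."""
    rgb = rgb.astype(np.float64) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    return np.where(b > g, 360.0 - theta, theta)

def looks_uniform_normal(rgb: np.ndarray,
                         hue_threshold: float = 20.0,
                         bright_ratio_threshold: float = 0.95) -> bool:
    """Third-level screening: binarize the hue channel and count bright/dark points."""
    hue = hue_degrees(rgb)
    # The comparison direction is an assumption; the patent only fixes the value 20.
    binary = np.where(hue < hue_threshold, 255, 0)   # bright point = 255, dark point = 0
    bright_ratio = float(np.mean(binary == 255))
    return bright_ratio >= bright_ratio_threshold    # very uniform frame -> treat as normal
```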

Next, the fourth identification method is carried out. It targets mainly white abnormalities, because such abnormalities appear irregularly, are not necessarily connected, and cover a smaller area than the abnormalities targeted by the first and second levels; for this reason a learning-based identification method is used here, namely the back-propagation neural network (Back-Propagation Neural Network, BPNN) from the family of artificial neural networks.

First, the first image data is converted into the AC1C2 color gamut, and a gray-level co-occurrence matrix is produced from the different color-gamut values of the image data. That is, the color-space values used in this identification method, namely the nine-dimensional color space composed of RGB, HSI and AC1C2, are combined with gray-level co-occurrence matrices, and four statistics are taken from each matrix so that eight statistics are obtained per color axis. The four statistics are obtained from the following expressions:

Contrast = \sum_{i,j} (i - j)^2 \, P_{d,\theta}(i,j)

Energy = \sum_{i,j} P_{d,\theta}(i,j)^2

Entropy = -\sum_{i,j} P_{d,\theta}(i,j) \, \log P_{d,\theta}(i,j)

Homogeneity = \sum_{i,j} \frac{P_{d,\theta}(i,j)}{1 + |i - j|}

where P_{d,\theta} is the gray-level co-occurrence matrix, \theta is the direction of the matrix and d is the distance between the co-occurring pixels. This embodiment uses gray-level co-occurrence matrices in the 0-degree and 90-degree directions with a distance of one pixel (i.e. adjacent pixels). Combining the nine color axes with the 0-degree and 90-degree co-occurrence matrices and taking the four statistics from each matrix yields eight statistics per color axis, so the dimension of the BPNN input vector is 72.

The first image data is first cut into 256 sub-images of 16x16 pixels; after considering which sub-images carry valid information, the edges and the possibly affected portions are removed, leaving 152 sub-images. Training and testing are then performed on each sub-image. The input vector has 72 analysis parameters, and the number of hidden neurons is obtained by adding the number of input neurons to the number of output neurons and dividing by two, giving 37 hidden neurons; once the network has converged, the testing phase can begin. For the final decision on whether a frame is abnormal, an area threshold on the sub-images judged abnormal serves as the fourth threshold, and an empirical value is adopted for it.
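The texture-feature extraction described above can be sketched as follows: for one image channel a normalized gray-level co-occurrence matrix is accumulated for the 0-degree and 90-degree directions at a distance of one pixel, the four statistics are computed from each matrix, and repeating this over the nine color axes yields the 72-dimensional BPNN input. The number of quantization levels (32) and the per-image scaling of each channel are assumptions, since the patent does not state them.

```python
import numpy as np

def glcm(channel: np.ndarray, offset: tuple, levels: int = 32) -> np.ndarray:
    """Normalized gray-level co-occurrence matrix for one channel and one offset."""
    c = channel.astype(np.float64)
    maxv = c.max()
    q = np.zeros(c.shape, dtype=int) if maxv == 0 else np.floor(c / maxv * (levels - 1)).astype(int)
    dy, dx = offset
    a = q[:c.shape[0] - dy, :c.shape[1] - dx]   # reference pixels
    b = q[dy:, dx:]                             # co-occurring neighbours
    matrix = np.zeros((levels, levels), dtype=np.float64)
    np.add.at(matrix, (a.ravel(), b.ravel()), 1.0)
    return matrix / matrix.sum()

def glcm_statistics(p: np.ndarray) -> list:
    """Contrast, energy, entropy and homogeneity of a normalized co-occurrence matrix."""
    i, j = np.indices(p.shape)
    contrast = float(np.sum((i - j) ** 2 * p))
    energy = float(np.sum(p ** 2))
    entropy = float(-np.sum(p[p > 0] * np.log(p[p > 0])))
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    return [contrast, energy, entropy, homogeneity]

def texture_features(channels: list) -> np.ndarray:
    """Eight statistics (0 and 90 degrees) per color axis; nine axes give 72 values."""
    feats = []
    for ch in channels:                      # e.g. the R, G, B, H, S, I, A, C1, C2 planes
        for offset in [(0, 1), (1, 0)]:      # 0-degree and 90-degree neighbours, distance 1
            feats.extend(glcm_statistics(glcm(ch, offset)))
    return np.asarray(feats)
```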

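A back-propagation network matching the sizes given above (72 inputs, 37 hidden units, one output) can be sketched in plain NumPy as below. The sigmoid activation, mean-squared-error gradient, learning rate and epoch count are assumptions made for illustration; the patent fixes only the layer sizes and the meaning of the binary output.

```python
import numpy as np

class TinyBPNN:
    """72-37-1 multilayer perceptron trained with plain back-propagation."""

    def __init__(self, n_in: int = 72, n_hidden: int = 37, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, 1))
        self.b2 = np.zeros(1)

    @staticmethod
    def _sigmoid(x: np.ndarray) -> np.ndarray:
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x: np.ndarray) -> np.ndarray:
        self.h = self._sigmoid(x @ self.w1 + self.b1)
        return self._sigmoid(self.h @ self.w2 + self.b2)

    def train(self, x: np.ndarray, y: np.ndarray, lr: float = 0.1, epochs: int = 500) -> None:
        """x: (N, 72) texture features, y: (N, 1) labels (1 = white points present)."""
        for _ in range(epochs):
            out = self.forward(x)
            delta2 = (out - y) * out * (1.0 - out)                  # output-layer error term
            delta1 = (delta2 @ self.w2.T) * self.h * (1.0 - self.h)
            self.w2 -= lr * self.h.T @ delta2 / len(x)
            self.b2 -= lr * delta2.mean(axis=0)
            self.w1 -= lr * x.T @ delta1 / len(x)
            self.b1 -= lr * delta1.mean(axis=0)

    def predict(self, x: np.ndarray) -> np.ndarray:
        """1 when a sub-image is judged to contain white points, otherwise 0."""
        return (self.forward(x) >= 0.5).astype(int)
```

In the embodiment each frame is cut into sub-images of 16x16 pixels (152 kept after removing edges), the 72 statistics are computed per sub-image, and a frame is flagged when the area of sub-images predicted as containing white points exceeds the empirical fourth threshold.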
Because capsule-endoscope images are not yet widespread domestically and the data are not easy to obtain, the training and testing described above were carried out with a limited number of images; this is therefore a preferred embodiment, and the invention is not limited to this particular verification of the identification method.

In operation, the input value generated from the gray-level co-occurrence matrices is fed into the trained back-propagation neural network to produce an output value, and the output value of the neural network is used to judge whether the digestive tract shows white points: an output value of 1 indicates that white points are present, while an output value of 0 indicates that there are none. When white points are present, the first image data is abnormal, and the recognition system stores the first image data for further diagnosis by medical personnel and inputs the second image data, that is, the next image.

The third figure shows the recognition accuracy obtained in the experiments. In the figure, the false-alarm count is the number of images without a lesion that the system judges to be abnormal, and the miss count is the number of images with a lesion that the system judges to be normal; the correct rate is the proportion of images with a lesion that are still identified as having a lesion after passing through the system, while the miss rate is the proportion of images with a lesion that are identified as lesion-free after passing through the system. These figures serve to evaluate how far the results produced by the system can be trusted.

In summary, in the image recognition method for detecting the alimentary tract of the present invention, a first image data is first received; then, according to a plurality of determination methods, it is judged whether the image data meets a threshold value, and if so the first image data is stored and a second image data is input for identification. In this way multiple conditions can be identified and several conditions analyzed at the same time, and repeated computation is removed so that the processing time is reduced.

The present invention is indeed novel and satisfies the requirements for an invention patent prescribed by the Patent Act; an application for an invention patent is therefore filed in accordance with the law, and the early grant of the patent is respectfully requested.

The foregoing describes only a preferred embodiment of the present invention and is not intended to limit the invention; all equivalent changes and modifications made according to the shapes, structures, features and spirit described in the claims of this application should be included within the scope of the claims of the present invention.
[Brief Description of the Drawings]

The first figure is a flow chart of a preferred embodiment of the present invention; the second A figure is a color-gamut diagram of a normal image; the second B figure is a color-gamut diagram of an abnormal image; and the third figure is a diagram of experimental data of a preferred embodiment of the present invention.

[Description of Main Element Symbols]

The first figure is the flow chart of the preferred embodiment of the present invention; the second A figure is the color-gamut diagram of a normal image; the second B figure is the color-gamut diagram of an abnormal image; and the third figure is the experimental-data diagram of the present invention.

Claims (1)

X. Scope of the Patent Application (Claims):

1. An image recognition method for detecting the alimentary tract, the steps of which comprise:
judging whether a proportion of pixel values of a first image data meets a first threshold and, if so, storing the first image data and inputting a second image data for identification;
distinguishing whether a proportion of pixel values of the first image data meets a second threshold and, if so, storing the first image data and inputting the second image data for identification;
binarizing the first image data, counting the numbers of bright points and dark points of the first image data and judging whether a proportion of the numbers of bright points and dark points meets a third threshold and, if so, inputting the second image data for identification;
combining different color-gamut values of the first image data to generate a gray-level co-occurrence matrix; and
according to the gray-level co-occurrence matrix, inputting an input value into a neural network to generate an output value and, when the output value meets a fourth threshold, storing the first image data and inputting the second image data for identification.

2. The method as claimed in claim 1, wherein before the step of judging whether the proportion of pixel values of the first image data meets the first threshold and, if so, storing the first image data and inputting the second image data for identification, the method further comprises:
converting the first image data into the hue, saturation and intensity color gamut.

3. The method as claimed in claim 2, wherein the first threshold is a hue between 40 and 60 degrees together with a saturation between 40% and 100%.

4. The method as claimed in claim 1, wherein the fuzzy C-means clustering algorithm (Fuzzy C-Means Clustering, FCM) is used in the step of distinguishing whether the proportion of pixel values of the first image data meets the second threshold, storing the first image data and inputting the second image data for identification.

5. The method as claimed in claim 1, wherein the second threshold is (108.12, 41.993, 17.215).

6. The method as claimed in claim 1, wherein before the step of binarizing the first image data, counting the numbers of bright points and dark points of the first image data and judging whether the numbers of bright points and dark points meet the third threshold, the method further comprises:
converting the first image data into the hue, saturation and intensity color gamut.

7. The method as claimed in claim 6, wherein binarizing the first image data binarizes the hue gamut of the first image data according to a threshold value.

8. The method as claimed in claim 7, wherein the threshold value is 20.

9. The method as claimed in claim 1, wherein before the step of combining the different color-gamut values of the first image data to generate the gray-level co-occurrence matrix, the method further comprises:
converting the first image data into the AC1C2 color gamut.

10. The method as claimed in claim 1, wherein the neural network is a back-propagation neural network (Back-Propagation Neural Network, BPNN).

11. An image recognition method for detecting the alimentary tract, the steps of which comprise:
receiving a first image data; and
judging, according to a plurality of determination methods, whether the first image data meets a threshold value and, if so, storing the first image data and inputting a second image data.

12. The method as claimed in claim 11, wherein the step of judging, according to the plurality of determination methods, whether the proportion of pixel values of the image data meets the threshold value and storing the image data further comprises:
judging whether the proportion of pixel values of the first image data meets a first threshold and, if so, storing the first image data and inputting a second image data for renewed identification.

13. The method as claimed in claim 12, wherein before the step of judging whether the first image data meets the first threshold, storing the first image data and inputting the second image data for renewed identification, the method further comprises:
judging whether a first image data meets a first threshold and, if so, storing the first image data and inputting a second image data for identification.

14. The method as claimed in claim 13, wherein the first threshold is a hue between 40 and 60 degrees together with a saturation between 40% and 100%.

15. The method as claimed in claim 11, wherein the step of judging, according to the plurality of determination methods, whether the first image data meets the threshold value and storing the image data further comprises:
distinguishing whether the proportion of pixel values of the first image data meets a second threshold and, if so, storing the first image data and inputting the second image data for identification.

16. The method as claimed in claim 15, wherein the fuzzy C-means clustering algorithm (Fuzzy C-Means Clustering, FCM) is used in the step of distinguishing whether the proportion of pixel values of the first image data meets the second threshold, storing the first image data and inputting the second image data for identification.

17. The method as claimed in claim 15, wherein the second threshold is (108.12, 41.993, 17.215).

18. The method as claimed in claim 11, wherein the step of judging, according to the plurality of determination methods, whether the first image data meets the threshold value and storing the image data further comprises:
binarizing the first image data, counting the numbers of bright points and dark points of the first image data and judging whether a proportion of the numbers of bright points and dark points meets a third threshold and, if so, inputting the second image data for identification.

19. The method as claimed in claim 18, wherein before the step of binarizing the first image data, counting the numbers of bright points and dark points of the first image data and judging whether the proportion of the numbers of bright points and dark points meets the third threshold, the method further comprises:
converting the first image data into the hue, saturation and intensity color gamut.

20. The method as claimed in claim 19, wherein binarizing the first image data binarizes the first image data according to a threshold value.

21. The method as claimed in claim 20, wherein the threshold value is 20.

22. The method as claimed in claim 11, wherein the step of judging, according to the plurality of determination methods, whether the first image data meets the threshold value and storing the image data further comprises:
combining different color-gamut values of the first image data to generate a gray-level co-occurrence matrix; and
according to the gray-level co-occurrence matrix, inputting an input value into a neural network to generate an output value and, when the output value meets a fourth threshold, storing the first image data and inputting the second image data for identification.

23. The method as claimed in claim 22, wherein before the step of combining the different color-gamut values of the first image data to generate the gray-level co-occurrence matrix, the method further comprises:
converting the first image data into the AC1C2 color gamut.

24. The method as claimed in claim 22, wherein the neural network is a back-propagation neural network (Back-Propagation Neural Network, BPNN).
TW95145011A 2006-12-04 2006-12-04 Image recognition method for detecting alimentary tract TW200824636A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW95145011A TW200824636A (en) 2006-12-04 2006-12-04 Image recognition method for detecting alimentary tract

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW95145011A TW200824636A (en) 2006-12-04 2006-12-04 Image recognition method for detecting alimentary tract

Publications (1)

Publication Number Publication Date
TW200824636A true TW200824636A (en) 2008-06-16

Family

ID=44771577

Family Applications (1)

Application Number Title Priority Date Filing Date
TW95145011A TW200824636A (en) 2006-12-04 2006-12-04 Image recognition method for detecting alimentary tract

Country Status (1)

Country Link
TW (1) TW200824636A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI494087B (en) * 2010-06-11 2015-08-01 Univ Nat Cheng Kung Tumor detection apparatus with dark-current operation mode and signal correction method thereof
US9807347B2 (en) 2010-02-02 2017-10-31 Omnivision Technologies, Inc. Encapsulated image acquisition devices having on-board data storage, and systems, kits, and methods therefor
TWI673683B (en) * 2018-03-28 2019-10-01 National Yunlin University Of Science And Technology System and method for identification of symptom image

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9807347B2 (en) 2010-02-02 2017-10-31 Omnivision Technologies, Inc. Encapsulated image acquisition devices having on-board data storage, and systems, kits, and methods therefor
US9819908B2 (en) 2010-02-02 2017-11-14 Omnivision Technologies, Inc. Encapsulated image acquisition devices having on-board data storage, and systems, kits, and methods therefor
US9912913B2 (en) 2010-02-02 2018-03-06 Omnivision Technologies, Inc. Encapsulated image acquisition devices having on-board data storage, and systems, kits, and methods therefor
TWI494087B (en) * 2010-06-11 2015-08-01 Univ Nat Cheng Kung Tumor detection apparatus with dark-current operation mode and signal correction method thereof
TWI673683B (en) * 2018-03-28 2019-10-01 National Yunlin University Of Science And Technology System and method for identification of symptom image

Similar Documents

Publication Publication Date Title
Zhang et al. Detecting diabetes mellitus and nonproliferative diabetic retinopathy using tongue color, texture, and geometry features
Pan et al. Bleeding detection in wireless capsule endoscopy based on probabilistic neural network
CN104363815B (en) Image processing apparatus and image processing method
Li et al. Computer-based detection of bleeding and ulcer in wireless capsule endoscopy images by chromaticity moments
US20200211235A1 (en) Method of modifying a retina fundus image for a deep learning model
CN110189303B (en) NBI image processing method based on deep learning and image enhancement and application thereof
CN103945755B (en) Image processing apparatus
CN109635871A (en) A kind of capsule endoscope image classification method based on multi-feature fusion
CN100563550C (en) Medical image-processing apparatus
CN112102256A (en) Narrow-band endoscopic image-oriented cancer focus detection and diagnosis system for early esophageal squamous carcinoma
US20170178322A1 (en) System and method for detecting anomalies in an image captured in-vivo
Li et al. Development and evaluation of a deep learning system for screening retinal hemorrhage based on ultra-widefield fundus images
Cui et al. Bleeding detection in wireless capsule endoscopy images by support vector classifier
Yuan et al. Automatic bleeding frame detection in the wireless capsule endoscopy images
CN111242920A (en) Biological tissue image detection method, device, equipment and medium
Ghosh et al. Block based histogram feature extraction method for bleeding detection in wireless capsule endoscopy
WO2019073962A1 (en) Image processing device and program
TW200824636A (en) Image recognition method for detecting alimentary tract
Bhaskaranand et al. EyeArt+ EyePACS: automated retinal image analysis for diabetic retinopathy screening in a telemedicine system
US20090080768A1 (en) Recognition method for images by probing alimentary canals
Lee et al. Real-time image analysis of capsule endoscopy for bleeding discrimination in embedded system platform
CN104573723B (en) A kind of feature extraction and classifying method and system of " god " based on tcm inspection
Chen et al. Application of artificial intelligence in tongue diagnosis of traditional Chinese medicine: a review
CN112419246A (en) Depth detection network for quantifying esophageal mucosa IPCLs blood vessel morphological distribution
Ghosh et al. An automatic bleeding detection scheme in wireless capsule endoscopy based on statistical features in hue space