TW201740871A - Method for reconstructing fundus image - Google Patents

Method for reconstructing fundus image

Info

Publication number
TW201740871A
Authority
TW
Taiwan
Prior art keywords
image data
component image
enhancement
component
color
Prior art date
Application number
TW105115464A
Other languages
Chinese (zh)
Other versions
TWI660708B (en)
Inventor
施秉宏
Original Assignee
施秉宏
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 施秉宏
Priority to TW105115464A
Publication of TW201740871A
Application granted
Publication of TWI660708B

Landscapes

  • Eye Examination Apparatus (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a method for reconstructing a fundus image, including: identifying a plurality of component image data as an input of an iteration calculation, and calculating a weight factor for each component image data based on a first ratio. The iteration calculation includes executing an enhancement calculation to generate a plurality of enhanced component image data, calculating a weight factor for each enhanced component image data based on a second ratio, and determining whether to end the iteration calculation; if not, the computed enhanced component image data serve as an input for the next iteration calculation. After the iteration calculation ends, the method reconstructs the fundus image based on the plurality of enhanced component image data.

Description

Fundus image reconstruction method

The present invention relates to an image reconstruction method, and more particularly to a method for reconstructing a fundus image.

An epiretinal membrane is a lesion that forms at the fundus of the eye and can severely impair the patient's vision. The condition can be treated effectively: provided the exact location of the proliferative membrane is diagnosed and the membrane is completely removed, the lost vision can be improved.

An epiretinal membrane can be identified through a fundus examination. An ophthalmoscope (fundus camera) is used to photograph the back of the eye and obtain a fundus image. The fundus image records the appearance of the patient's retina; it is used to examine abnormalities caused by eye disease and to track the progression of the disease. By following a patient's fundus images over time, a clinician can understand the history of the patient's condition and give an appropriate diagnosis and treatment. In particular, changes in detail shown in a fundus image may be associated with a specific disease. For example, in a color fundus image, an abnormal or conspicuously translucent (whitish) region may be related to an epiretinal membrane; that is, the abnormal whitish region is an optical effect caused by a thin membrane formed at the fundus, on the retina. Clinicians can identify the membrane by regularly reviewing the patient's fundus image records. A fundus image containing an epiretinal membrane should contain a brighter image region and a darker image region, where the brighter region corresponds to the area of the membrane. In practice, however, without special image processing it is difficult for clinicians to distinguish the brighter and darker regions of a fundus image captured with an ordinary ophthalmoscope, so an accurate diagnosis and treatment cannot be given.

Therefore, to give such patients a more reliable and efficient fundus diagnosis, a tool is needed that improves the visibility of image detail and contrast.

The present invention uses image processing to improve the visibility of image detail and contrast. It is suitable for fundus images, particularly fundus images related to epiretinal membranes. The invention provides an image reconstruction method that converts an unprocessed original image into a reconstructed image. A fundus image (initial image) captured by an ophthalmoscope is processed by the method of the invention and converted into a reconstructed image with enhanced contrast and/or pattern detail.

The fundus image reconstruction method is executed by at least one processor and comprises: receiving color image data related to a fundus image; performing an iterative calculation to obtain a plurality of enhanced component image data; and reconstructing the color image data based on the enhanced component image data.

Before performing the iterative calculation, the method comprises: obtaining, from the color image data, a plurality of component image data as an input of the iterative calculation. In general, a fundus image contains color. The method identifies, according to the color components of the received color image data, a plurality of component image data related to the image. Each component image data corresponds to one color, and the color of one component image differs from the color of another. For example, one component image may correspond to red and another to blue. These component images represent the contribution of each color to the color image. The invention uses these component images as an input of the iterative calculation, and they are processed separately.

The iterative calculation comprises: calculating a weight for each of the component image data based on a first ratio of the component image data. Each component image data contributes differently to the color image; that is, each component image data may have a different pixel value at a corresponding pixel. This produces a proportional relationship among the component image data of the color image data. Based on this proportional relationship, a weighting operation is performed on each component image data: a plurality of weight values are generated, each weight value corresponding to one component image, and the weighting operation is carried out according to this correspondence.

The iterative calculation comprises: performing an enhancement calculation to compute a plurality of enhanced component image data, wherein each enhanced component image data is the sum of the other component image data each multiplied by its own weight. The other component image data are multiplied by their corresponding weight values, and the sum of the weighted other component image data is the enhanced component image data. The color corresponding to the enhanced component image data differs from the colors corresponding to the other component image data, and the other component image data also correspond to mutually different colors. In other words, the enhanced component image data for one color is obtained by summing the weighted component image data of the other colors.

The enhancement calculation further comprises: calculating an enhancement weight for each of the enhanced component image data based on a second ratio of the enhanced component image data. As before, the enhanced component image data also have a proportional relationship to one another, so a further plurality of weight values (here called enhancement weight values) can be generated from this relationship and applied to the enhanced component image data in subsequent processing.

The iterative calculation comprises: normalizing each enhanced component image data based on its corresponding enhancement weight and the mean of that enhanced component image data. The enhanced component image data obtained may yield an image whose intensity is too high, so a normalization calculation related to image intensity is performed to obtain an appropriate image intensity.

The iterative calculation comprises: determining whether to end the iterative calculation; if not, the normalized enhanced component image data serve as the input of the iterative calculation. The iterative calculation ends when it, or the enhanced component images, satisfy a particular condition. When the condition is not satisfied, the iterative calculation is executed again, with the normalization result replacing the component image data as the input of the subsequent iteration. The flow of each subsequent iteration is essentially the same as that of the previous iteration. After the iterative calculation ends, the color image data is reconstructed from the enhanced component image data; the reconstructed color image data displays an image whose color and detail differ from the initial image.

These and other features and advantages of the present invention are presented in more detail in the following description and in the drawings that illustrate the principles of the invention.

2‧‧‧Image capture unit
4‧‧‧Imaging unit
6‧‧‧Illumination unit
61‧‧‧Light source
62‧‧‧Optical elements
8‧‧‧Camera unit
10‧‧‧Computer device
102‧‧‧Processor
104‧‧‧Storage unit
106‧‧‧Display unit
108‧‧‧Input interface
E‧‧‧Eye
20‧‧‧Input image
22‧‧‧Reconstructed image
24‧‧‧Component image data
24r, 24g, 24b‧‧‧Component image data
26‧‧‧Intermediate reconstructed image
Wr, Wg, Wb‧‧‧Weight values
300 to 320‧‧‧Steps

Figure 1 shows a system suitable for carrying out the fundus image reconstruction method of the present invention.

Figure 2 shows an input image and a reconstructed image produced from it by the method of the present invention.

Figure 3 shows the flow of the method of the present invention.

The invention is described more fully below with reference to the drawings, and specific example embodiments are shown by way of illustration. The claimed subject matter may, however, be embodied in many different forms, and its construction is not limited to any example embodiment disclosed in this specification; the example embodiments are merely illustrative. Likewise, the invention is intended to provide a reasonably broad scope for the claimed subject matter. In addition, the claimed subject matter may, for example, be embodied as a method, an apparatus, or a system. Accordingly, embodiments may take the form of, for example, hardware, software, firmware, or any combination thereof (other than software per se).

The phrase "in one embodiment" as used in this specification does not necessarily refer to the same embodiment, and "in other embodiments" does not necessarily refer to different embodiments. The intention is that the claimed subject matter includes, for example, combinations of all or part of the example embodiments.

Figure 1 illustrates a system for acquiring and processing a fundus image of an eye E. The system comprises an image capture device, an image recording (photographing) device, and a processing and application device. The image capture device may be included in the image recording device and is used to capture a clear fundus image. The processing and application device carries program instructions configured to execute the image reconstruction method of the invention so as to process a received image. The processing and application device receives the initial image data acquired by the image recording device and outputs a reconstructed image through the special weighting process provided by the invention. The image capture device and the image recording device are not necessarily combined into a single photographing device, and the image recording device may also be merged into the processing and application device.

The photographing device may be a fundus camera dedicated to taking fundus images. In the embodiment of Figure 1, the image capture device of the system may comprise an image capture unit 2 and an imaging unit 4. The image capture unit 2 comprises a plurality of optical elements, for example an objective, a semitransparent mirror, a focusing lens, and an aperture. The image capture unit 2 is configured to form a focal surface in object space so that the bottom surface of an eye E coincides with the focal surface. The image capture unit 2 can be used together with an illumination unit 6. The illumination unit 6 comprises a light source 61, a condenser, and other optical elements 62. Part of the illumination unit 6 is contained in the image capture unit 2, for example a shared objective lens. The illumination unit 6 is configured to project light into the eyeball, illuminating the fundus region and providing sufficient light for the image capture unit 2 to capture. Depending on the operation, the light source 61 can be switched on selectively. More illumination units may be included in the invention to meet different viewing requirements.

The imaging unit 4 comprises a plurality of optical elements, for example a mirror, a dichroic mirror, and a relay lens. The imaging unit 4 is connected to the rear end of the image capture unit 2 to receive the light captured by it and project that light onto a camera unit 8, which is contained in the image recording device. Alternatively, the imaging unit 4 and the camera unit 8 may both be contained in the image recording device, which photographs the fundus image and outputs color image data. A captured fundus image is then transmitted to the processing and application device. The processing and application device may be a computer device 10 comprising at least one processor 102, a storage unit 104, a display unit 106, and an input interface 108. The captured fundus image is stored in the storage unit 104 in the form of data. The storage unit 104 also stores a plurality of instructions that are read by the at least one processor 102 to perform various operations, including the fundus image reconstruction operations provided by the invention. The storage unit 104 further stores the result of each iteration as the source for subsequent iterations or for reconstructing an image. The storage unit 104 is a memory that stores the color image data and the intermediate data of the related image processing (such as the component image data and the enhanced component image data); the intermediate data are described later. One or more processed or unprocessed images can be displayed on the display unit 106 for viewing and comparison. The display unit 106 may be a display device, for example a high-resolution LCD screen, that shows the translucent region of the fundus image in the reconstructed color image data (i.e., the whitish part of the image, the region where an epiretinal membrane may have formed). The input interface 108 consists of hardware and software, such as a keyboard, a touch device, and the programs that interact with this hardware. The input interface 108 is provided so that a user or operator can interact with the computer device 10. Through the input interface 108, the computer device 10 can receive control parameters for executing the method of the invention, such as several preset values or conditions for the calculations, described later. According to operations on the input interface 108, the processor 102 can present on the display device the color image data reconstructed at different iteration counts (i.e., the intermediate reconstructed images).

The image reconstruction method of the invention is not necessarily executed in the system of Figure 1; the color image data may also be transmitted by other means, such as a network, to other computers or servers for processing. The method may be executed by one or more processors, which may reside in a single computer device or be distributed over different computing devices.

Figure 2 shows an input image 20 (initial image) converted into a reconstructed image 22 by the method of the invention. The input image 20 is displayed from color image data acquired by an image recording device. The color image data may be the output of that device and is generally unprocessed. Its format may be jpg, jpeg, bmp, or png. The color image data is represented as a matrix; for example, for a 512×512-pixel input image 20 the color image data is likewise a 512×512 matrix, each element of which represents a pixel value. The color image data, together with the component image data and enhanced component image data computed later, all consist of a plurality of pixel values. The color image data is composed of a plurality of component image data. The component image data are defined according to a color model, which may be selected from the RGB color model, the CMYK color model, and the HSI color model.

As shown in Figure 2, the color image data related to the input image 20 (e.g., a fundus image) is first received. The processor obtains a plurality of component image data 24 from the color image data. The component image data 24 exemplified here are obtained from the RGB model, namely the red part 24r, the blue part 24b, and the green part 24g of the color image data; a person of ordinary skill in the art can, however, use the conversions between the RGB model and other color models to obtain component image data based on the CMYK model or the HSI model. The component image data 24 are likewise represented as matrices, e.g., 512×512 matrices, whose elements are pixel values associated with one color. In other words, the color image data contains, or is composed of, the superposition of the component image data 24. The initial image acquired by most image recording devices is generally formed from its three primary-color components, which relates to the configuration of the image sensor. Identifying the three primary-color component image data within the color image data is therefore within the ability of those skilled in the art.
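As a minimal illustration only (not part of the patent), the decomposition into component image data can be sketched in Python with NumPy, assuming the color image is held as an H×W×3 array in RGB channel order; the function and variable names are illustrative:

    import numpy as np

    def split_components(color_image: np.ndarray):
        # Split an H x W x 3 RGB image into three H x W component matrices
        # (cast to float so later arithmetic does not overflow 8-bit values).
        r = color_image[:, :, 0].astype(np.float64)
        g = color_image[:, :, 1].astype(np.float64)
        b = color_image[:, :, 2].astype(np.float64)
        return r, g, b

The later snippets in this description reuse this helper and the same assumptions.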

The method of the invention includes a weighting calculation. The processor generates a plurality of weight values Wr, Wg, Wb (initial weights) from a proportional relationship (first ratio) among the component image data 24r, 24g, 24b; each weight corresponds to one of the previously identified component image data 24r, 24g, 24b. The proportional relationship (first ratio) is described later.

The method of the invention includes an iterative calculation to enhance the translucent region of the fundus image in the color image data, and further iterations that strengthen that region further. The iterative calculation comprises an enhancement calculation and an enhancement weight calculation. The identified component image data 24r, 24g, 24b and the computed weight values Wr, Wg, Wb serve as the input of the iterative calculation (the first iteration). The enhancement calculation is executed to compute a plurality of enhanced component image data (not shown), where each enhanced component image data is the sum of the other component image data each multiplied by its own weight (weight values Wr, Wg, Wb). The enhancement calculation is described in detail later. The data dimension of an enhanced component image is the same as that of the component image data before enhancement, i.e., a 512×512 matrix.

The enhancement weight calculation included in the iterative calculation computes, from a proportional relationship (second ratio) among the enhanced component image data, an enhancement weight for each of them, i.e., it generates a plurality of enhancement weight values. These enhancement weights differ from the initial weights described above: the enhancement weights are obtained from the enhancement calculation within the iteration, whereas the initial weights are obtained from the initial component image data.

The enhanced component image data may be an output of the iterative calculation. This output serves as the input of the next iteration (the second iteration), in which the enhancement calculation and the enhancement weight calculation are repeated to obtain new enhanced component image data, which in turn serve as the input of the iteration after that. This process is repeated until the processor ends the iteration according to a decision result. After the processor ends the iterative calculation, the color image data is reconstructed from the latest enhanced component image data. The finally obtained enhanced component image data are superimposed on the corresponding component image data of the color image data to produce the reconstructed image 22. Comparing the fundus images 20 and 22 before and after reconstruction in Figure 2, the reconstructed image 22 has a higher brightness contrast, which helps to distinguish a bright region from a relatively dark region in image 22. In particular, the pattern details of the reconstructed image 22 are also more distinct because of the enhancement.

Before the processor stops the continuing iteration, the output of one or more iterations, i.e., the enhanced component image data, can be stored in a storage unit such as the storage unit 104 of the system of Figure 1. At the same time, one or more intermediate reconstructed images 26 (reconstructed from the stored enhanced component image data) can be stored as well. An intermediate reconstructed image 26 differs from the reconstructed image 22 of Figure 2 in that it is produced before the iteration has stopped. The intermediate reconstructed images 26 can be retrieved and shown on a display device. Each intermediate reconstructed image 26 can be produced after every predetermined number of iterations from the enhanced component image data of the most recent iteration, for example one intermediate reconstructed image every 1000 iterations. Thus, as long as the iteration has not stopped, at least one intermediate reconstructed image 26 is available. Characteristics such as the contrast and brightness of these intermediate reconstructed images 26 exhibit a continuous change, and this change can be used for other image processing, for example to decide whether to end or stop the continuing iteration.

Figure 3 shows the flow of the method of the invention, comprising steps 300 to 320, executed by at least one processor. For convenience, n in the figure denotes the initial processing before any iteration, n+1 denotes the processing of the first iteration, and so on. The method begins at step 300 by receiving an initial image. The color image data, i.e., the input image 20 of Figure 2, is received by a computing device or computer, and the color image data relates to a fundus image. The initial image is taken from an image recording device, and normally its colors or pixel values have not been processed. In other embodiments, however, the initial image may also have undergone preliminary processing, for example image compression or normalization of pixel values.

In step 302, the at least one processor identifies the color components contained in the color image data, such as the aforementioned three primary colors. The three primary-color components contained in each pixel value of the color image data are identified, and a plurality of component image data are obtained accordingly, as described by the following equation:

X_initial = r_n + g_n + b_n ......(1)

where X_initial is the color image data and r_n, g_n, b_n are the component image data, such as the component image data 24r, 24g, 24b based on the RGB model shown in Figure 2. In other embodiments, other component image data may be identified from the color image data based on the CMYK model or the HSI model. The component image data have the same dimensions, e.g., 512×512, and are also an input of the subsequent iterative calculation.

In step 304, initial weights for the color components are generated. A proportional relationship (first ratio) among the component image data is used to compute a weight for each component image data. The first ratio is the ratio of the means of the component image data; from this first ratio a plurality of initial weight values are generated. The weight of each component image data, i.e., the initial weight value corresponding to that component image data, is the mean of that component image data divided by the sum of the means of all component image data, as described by the following equation:

W_k = avg(k) / Σ_i avg(i) ......(2)

where k denotes one of the component image data, i = {r, g, b} denotes all component image data, avg(k) denotes the mean pixel value of one component image data, Σ_i avg(i) denotes the sum of the mean pixel values of all component image data, and W_k is the weight value associated with the component image data k, such as Wr, Wg, Wb computed in Figure 2. These weight values serve as an input of the subsequent iterative calculation.
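A hedged sketch of the weight calculation of equation (2), reusing the component matrices from the earlier snippet (the dictionary layout is an implementation choice, not taken from the patent):

    def component_weights(r, g, b):
        # Weight of each component = its mean divided by the sum of all component means (eq. 2).
        means = {"r": r.mean(), "g": g.mean(), "b": b.mean()}
        total = sum(means.values())
        return {k: m / total for k, m in means.items()}

The same function can also compute the enhancement weights of step 310, since equation (6) applies the same ratio to the enhanced component image data.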

In step 306, an iterative calculation is executed. The iteration begins with an enhancement calculation that produces a plurality of enhanced component image data (n+1), where each enhanced component image data is the sum of the other component image data each multiplied by its own weight. Each enhanced component image data is computed from at least two of the weighted component image data, where the at least two component image data (n) used in the enhancement calculation correspond to two different colors (e.g., green and blue), while the enhanced component image data (n+1) computed from them corresponds to another color (e.g., red) that differs from those two colors. For the three primary colors, the enhanced component image data (n+1) are computed as described by the following equations:

r_{n+1} = W_g^n × g_n + W_b^n × b_n ......(3)
g_{n+1} = W_r^n × r_n + W_b^n × b_n ......(4)
b_{n+1} = W_r^n × r_n + W_g^n × g_n ......(5)

where r_n, g_n, b_n are the component image data (matrices), W_r^n, W_g^n, W_b^n are the weight values (constants) corresponding to those component image data, and r_{n+1}, g_{n+1}, b_{n+1} are the enhanced component image data (matrices).
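Assuming equations (3) to (5) take the weighted-sum-of-the-other-components form described above, the enhancement calculation could be sketched as follows (a reading of the text, not a verified transcription of the published equations):

    def enhance_components(r, g, b, w):
        # Each enhanced component is the weighted sum of the other two components (eqs. 3-5).
        r_new = w["g"] * g + w["b"] * b
        g_new = w["r"] * r + w["b"] * b
        b_new = w["r"] * r + w["g"] * g
        return r_new, g_new, b_new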

The iterative calculation also includes an enhancement weight calculation, step 310, i.e., a weighting calculation applied to the enhanced image data obtained from the enhancement calculation. The weighting calculation of step 310 follows the same concept as that of step 304, but operates on a different basis: step 304 operates on the initial component image data (n), whereas step 310 operates on the enhanced component image data (n+1) produced by the enhancement calculation of step 308. Based on a proportional relationship (second ratio) among the enhanced component image data (n+1), an enhancement weight is computed for each of them, producing a plurality of enhancement weight values W_k^{n+1}, as described by the following equation:

W_k^{n+1} = avg(k_{n+1}) / Σ_i avg(i_{n+1}) ......(6)

where r_{n+1}, g_{n+1}, b_{n+1} are the enhanced component image data (matrices). The iteration can be completed after the enhanced component image data and the corresponding enhancement weight values have been produced.

The iterative calculation further comprises a normalization calculation, step 312. Each enhanced component image data is normalized based on its corresponding enhancement weight and the mean of that enhanced component image data. The normalization of each enhanced component image data consists of subtracting from the original enhanced component image data the product of its mean and one plus its enhancement weight, and taking the result as the normalized enhanced component image data, as described by the following equations:

r_{n+1}(normalized) = r_{n+1} - avg(r_{n+1}) × (1 + W_r^{n+1}) ......(7)
g_{n+1}(normalized) = g_{n+1} - avg(g_{n+1}) × (1 + W_g^{n+1}) ......(8)
b_{n+1}(normalized) = b_{n+1} - avg(b_{n+1}) × (1 + W_b^{n+1}) ......(9)

The normalized enhanced component image data can serve as an output of the iterative calculation. In some embodiments, step 312 can be omitted, i.e., the enhanced component image data themselves serve as the output of the iteration.
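A sketch of the normalization of equations (7) to (9), using the enhancement weights computed for the current iteration (same assumptions as the earlier snippets):

    def normalize_components(r_new, g_new, b_new, w_new):
        # Subtract mean * (1 + enhancement weight) from each enhanced component (eqs. 7-9).
        r_norm = r_new - r_new.mean() * (1.0 + w_new["r"])
        g_norm = g_new - g_new.mean() * (1.0 + w_new["g"])
        b_norm = b_new - b_new.mean() * (1.0 + w_new["b"])
        return r_norm, g_norm, b_norm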

In step 314, it is determined whether to end or stop the continuing iteration, i.e., whether to keep enhancing the enhanced component image data and computing their enhancement weights. If the iteration is not ended or stopped, the normalized enhanced component image data (step 312) serve as the input of the iteration. The iteration continues by returning to step 306, and the input of the continuing iteration (n+2) comprises the enhanced component image data (n+1) obtained in step 308 and the enhancement weight values (n+1) obtained in step 310.

In one embodiment, determining whether to end the iteration means determining whether a number of iterations, which may be preset, has been reached. When the continuing iteration reaches that number, for example 10000, the iteration ends or stops at (n+9999) and the flow proceeds to step 316. If the number of iterations has not been reached, the normalized enhanced component image data serve as the input for continuing the iteration, and the iteration is executed again. A manual operation is also possible: for example, the operator may decide, from observation of the intermediate reconstructed images 26 shown in Figure 2, whether to execute a few more iterations.
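Putting the helpers together, the fixed-count stopping rule of this embodiment could be sketched as a simple loop (the default of 10000 iterations mirrors the example above; it is a configurable assumption, not a prescribed value):

    def iterate_components(color_image, n_iter=10000):
        # Run the enhancement / enhancement-weight / normalization loop a fixed number of times.
        r, g, b = split_components(color_image)
        w = component_weights(r, g, b)                  # step 304, eq. (2)
        for _ in range(n_iter):
            r, g, b = enhance_components(r, g, b, w)    # enhancement calculation, eqs. (3)-(5)
            w = component_weights(r, g, b)              # step 310, eq. (6)
            r, g, b = normalize_components(r, g, b, w)  # step 312, eqs. (7)-(9)
        return r, g, b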

The iterative calculation further comprises, each time a preset number of iterations or a multiple of it is reached, reconstructing the color image data for that iteration count from the enhanced component image data obtained at that count, i.e., the intermediate reconstructed images described above. For example, it can be preset that when the iteration count reaches 1000, or a multiple of 1000, the color image data is reconstructed from the enhanced component image data of the most recent iteration to obtain one or more intermediate reconstructed images, and the iteration then continues. As mentioned above, the intermediate reconstructed images can show the continuous change of the initial image during processing, and can be provided to the operator or clinician for observation.

In other embodiments of step 314, determining whether to end the iteration means determining whether the difference between the results obtained at different iteration counts is smaller than a threshold condition. In the continuing iteration, when the difference between the (normalized) enhanced component image data of one iteration and the (normalized) enhanced component image data of a previous iteration is smaller than the threshold condition, the iteration ends or stops and the flow proceeds to step 316. For example, the overall pixel-value difference between an intermediate reconstructed image obtained from one iteration and another intermediate reconstructed image obtained from a previous iteration can be used to decide whether the difference is smaller than the threshold condition.
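The convergence-based variant of step 314 could be sketched as a check on the overall pixel-value difference between two successive outputs; the tolerance value here is purely illustrative:

    def converged(prev_components, curr_components, tol=1e-3):
        # True when the summed absolute pixel difference falls below the threshold condition.
        diff = sum(abs(curr - prev).sum() for prev, curr in zip(prev_components, curr_components))
        return diff < tol

In the loop sketched earlier, the previous iteration's (r, g, b) would be kept and compared against the new result on each pass; the iteration ends once converged(...) returns True or the maximum count is reached.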

The threshold condition may be a convergence condition: when the differences between these outputs of different iterations (i.e., several intermediate reconstructed images, or several sets of enhanced component image data) show a converging trend, the convergence condition is satisfied and the iteration ends or stops.

In step 316, based on the decision of step 314, i.e., whether the iteration count has been reached or the iteration results have converged, the iterative calculation stops and the enhancement is terminated. In step 318, the color image data is reconstructed from the enhanced component image data obtained in the last iteration. Reconstructing the color image data comprises superimposing the enhanced component image data from the last iteration on the corresponding component image data of the color image data to produce a reconstructed image, as described by the following equation:

X_final = r_final + g_final + b_final + X_initial ......(10)

where X_final is the reconstructed image data and r_final, g_final, b_final are the enhanced component image data obtained in the last iteration before the iteration stopped, each superimposed on the corresponding component image r_n, g_n, b_n, i.e., r_final + r_n, g_final + g_n, b_final + b_n. In step 320, the processor outputs, from the reconstructed image data X_final, the reconstructed image (e.g., the reconstructed image 22 of Figure 2), which can be shown on the display device. This reconstructed image differs from the intermediate reconstructed images described above in that it is produced after the iteration ends or stops. After a fundus image has been reconstructed through the above steps, the region with an epiretinal membrane (whitish) is more conspicuous.
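A sketch of the final reconstruction of equation (10), superimposing the last enhanced components on the original components channel by channel; the clipping to the 0-255 display range is an added assumption, not stated in the patent:

    import numpy as np

    def reconstruct(color_image, r_final, g_final, b_final):
        # Superimpose the final enhanced components on the original components (eq. 10).
        r0, g0, b0 = split_components(color_image)
        out = np.stack([r_final + r0, g_final + g0, b_final + b0], axis=-1)
        return np.clip(out, 0, 255).astype(np.uint8)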

The above embodiments use the RGB model as an example. In other embodiments, the RGB model may be replaced by the CMYK model or the HSI model. For the CMYK model, the above calculations may further include a complementary color calculation, for example identifying and converting the component image data of an input image, as described by the following equations:

X_initial = c + m + y + k ......(11)
c_n = 255 - c ......(12)
m_n = 255 - m ......(13)
y_n = 255 - y ......(14)
k_n = 255 - k ......(15)

where c, m, y, k are the cyan, magenta, yellow, and black matrices respectively, and c_n, m_n, y_n, k_n, after conversion, can serve as an input of the iterative calculation and the basis of the weight calculation. For the HSI model, a conversion calculation is included in the above procedure based on a linear relationship between the RGB model and the HSI model. Those skilled in the art can appropriately adjust or modify the above procedure on the basis of common knowledge so that the invention applies to other color models.
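For the CMYK case, the complementary conversion of equations (12) to (15) is a simple per-plane subtraction, again sketched under the assumption of 8-bit planes:

    def cmyk_complement(c, m, y, k):
        # Complementary conversion of eqs. (12)-(15): each 8-bit CMYK plane is subtracted from 255.
        return 255 - c, 255 - m, 255 - y, 255 - k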

Although the foregoing invention has been described in some detail for clarity of understanding, it will be appreciated that certain changes and modifications may be practiced within the scope of the claims. The above embodiments are therefore illustrative only and not restrictive, and the invention is not limited to the details described herein but may be modified within the scope of the appended claims and their equivalents.

20‧‧‧Input image
22‧‧‧Reconstructed image
24‧‧‧Component image data
24r, 24g, 24b‧‧‧Component image data
26‧‧‧Intermediate reconstructed image
Wr, Wg, Wb‧‧‧Weight values

Claims (15)

1. A fundus image reconstruction method, executed by at least one processor, comprising: receiving color image data, the image data relating to a fundus image; obtaining, from the color image data, a plurality of component image data (n) as an input of an iterative calculation; calculating, based on a first ratio (Wn) of the component image data obtained from the input, a weight for each of the component image data; executing the iterative calculation, comprising: executing an enhancement calculation to compute a plurality of enhanced component image data (n+1), wherein each enhanced component image data is the sum of the other component image data each multiplied by its own weight, and calculating, based on a second ratio of the enhanced component image data, an enhancement weight (n+1) for each of the enhanced component image data; normalizing each enhanced component image data based on the corresponding enhancement weight and a mean of that enhanced component image data; determining whether to end the iterative calculation, and if not, using the normalized enhanced component image data as the input of the iterative calculation; and after the iterative calculation ends, reconstructing the color image data based on the enhanced component image data.

2. The method of claim 1, wherein the component image data are pixel values.

3. The method of claim 1, wherein the first ratio is a ratio of the respective means of the component image data, and the second ratio is a ratio of the respective means of the enhanced component image data.

4. The method of claim 3, wherein the weight of each component image data is the mean of that component image data divided by the sum of the means of all the component image data.

5. The method of claim 3, wherein the enhancement weight of each enhanced component image data is the mean of that enhanced component image data divided by the sum of the means of all the enhanced component image data.

6. The method of claim 1, wherein the plurality of component image data are obtained based on a color model selected from one of the RGB model, the CMYK model, and the HSI model.

7. The method of claim 1, wherein normalizing each enhanced component image data comprises subtracting from the original enhanced component image data the product of the mean of that enhanced component image data and one plus its enhancement weight, and taking the result as the enhanced component image data.

8. The method of claim 1, wherein determining whether to end the iterative calculation comprises determining whether a number of iterations has been reached.

9. The method of claim 8, wherein, when it is determined that the number of iterations has not been reached, the normalized enhanced component image data are used as the input of the iterative calculation and the iterative calculation is executed.

10. The method of claim 1, wherein the iterative calculation further comprises, each time a preset number of iterations is reached, reconstructing the color image data of that iteration count based on the enhanced component image data obtained at that iteration count.

11. The method of claim 10, wherein determining whether to end the iterative calculation comprises determining whether a difference between the results obtained at different iteration counts is smaller than a threshold condition.

12. The method of claim 1, wherein reconstructing the color image data comprises superimposing the enhanced component image data obtained after the iterative calculation ends on the corresponding component image data of the color image data to produce a reconstructed image.

13. A fundus image capturing apparatus, comprising: an image capture device that captures a fundus image; an image recording device that photographs the fundus image to output color image data; and a processor that obtains, from the color image data, a plurality of component image data as an input of an iterative calculation, the processor executing the iterative calculation to enhance a translucent region of the fundus image in the color image data, the iterative calculation comprising an enhancement calculation that computes a plurality of enhanced component image data, wherein each enhanced component image data is the sum of the other component image data each multiplied by its own weight, and calculating, based on a second ratio of the enhanced component image data, an enhancement weight for each of the enhanced component image data, the processor, after ending the iterative calculation, reconstructing the color image data based on the enhanced component image data.

14. The apparatus of claim 13, comprising a memory that stores the color image data, the component image data, and the enhanced component image data, and a display device that displays the translucent region of the fundus image in the reconstructed color image data.

15. The apparatus of claim 14, wherein the processor, each time a preset number of iterations is reached, reconstructs the color image data of that iteration count based on the enhanced component image data obtained at that iteration count, and the processor displays on the display device, according to an operation of an input interface, the color image data reconstructed at different iteration counts.
TW105115464A 2016-05-19 2016-05-19 Method for reconstructing fundus image TWI660708B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW105115464A TWI660708B (en) 2016-05-19 2016-05-19 Method for reconstructing fundus image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW105115464A TWI660708B (en) 2016-05-19 2016-05-19 Method for reconstructing fundus image

Publications (2)

Publication Number Publication Date
TW201740871A (en) 2017-12-01
TWI660708B TWI660708B (en) 2019-06-01

Family

ID=61230016

Family Applications (1)

Application Number Title Priority Date Filing Date
TW105115464A TWI660708B (en) 2016-05-19 2016-05-19 Method for reconstructing fundus image

Country Status (1)

Country Link
TW (1) TWI660708B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110543802A (en) * 2018-05-29 2019-12-06 北京大恒普信医疗技术有限公司 Method and device for identifying left eye and right eye in fundus image

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6792162B1 (en) * 1999-08-20 2004-09-14 Eastman Kodak Company Method and apparatus to automatically enhance the quality of digital images by measuring grain trace magnitudes
US7215365B2 (en) * 2001-06-25 2007-05-08 Sony Corporation System and method for effectively calculating destination pixels in an image data processing procedure
TWI408619B (en) * 2009-11-16 2013-09-11 Inst Information Industry Image contrast enhancement apparatus and method thereof


Also Published As

Publication number Publication date
TWI660708B (en) 2019-06-01

Similar Documents

Publication Publication Date Title
WO2019240257A1 (en) Medical image processing device, medical image processing method and program
WO2020183799A1 (en) Medical image processing device, medical image processing method, and program
Sinthanayothin Image analysis for automatic diagnosis of diabetic retinopathy
JP7114358B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM
JP7297628B2 (en) MEDICAL IMAGE PROCESSING APPARATUS, MEDICAL IMAGE PROCESSING METHOD AND PROGRAM
JP6361776B2 (en) Diagnosis support apparatus, image processing method and program in diagnosis support apparatus
KR101998595B1 (en) Method and Apparatus for jaundice diagnosis based on an image
Davis et al. Vision-based, real-time retinal image quality assessment
JP7019815B2 (en) Learning device
CA3060762A1 (en) Diagnosis assisting device, and image processing method in diagnosis assisting device
WO2020075345A1 (en) Medical image processing device, medical image processing method, and program
AU2021100684A4 (en) DEPCADDX - A MATLAB App for Caries Detection and Diagnosis from Dental X-rays
JP2008229157A (en) Fundus image processing apparatus and fundus photographing apparatus
TWI660708B (en) Method for reconstructing fundus image
Majumdar et al. An automated graphical user interface based system for the extraction of retinal blood vessels using kirsch‘s template
JP2018201629A (en) Fundus image processing device
JP2018023602A (en) Fundus image processing device
JP2019118670A (en) Diagnosis support apparatus, image processing method, and program
US20240112333A1 (en) Methods and systems for ehnanced ophthalmic visualization
JP6481432B2 (en) Fundus image processing device
JP2020182680A (en) Visual perception simulation method and visual perception simulation program
CN113744254B (en) Fundus image analysis method, fundus image analysis system, storage medium and computer equipment
US20220280026A1 (en) Method of image enhancement for distraction deduction
WO2024009631A1 (en) Image processing device, and method for operating image processing device
US11288800B1 (en) Attribution methodologies for neural networks designed for computer-aided diagnostic processes

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees