TWI660708B - Method for reconstructing fundus image - Google Patents
Method for reconstructing fundus image
- Publication number
- TWI660708B
- Authority
- TW
- Taiwan
- Prior art keywords
- image data
- component image
- enhanced
- color
- enhanced component
- Prior art date
Landscapes
- Image Processing (AREA)
- Eye Examination Apparatus (AREA)
Abstract
The invention provides a fundus image reconstruction method, comprising: receiving a fundus image; identifying a plurality of component image data as an input of an iterative operation; and computing a respective weight for each of the component image data based on a first ratio. The iterative operation comprises: performing an enhancement operation to compute a plurality of enhanced component image data; computing a respective enhancement weight for each of the enhanced component image data based on a second ratio; and determining whether to end the iterative operation, and if not, using the computed enhanced component image data as the input of the next iteration. After the iterative operation ends, the fundus image is reconstructed based on the enhanced component image data.
Description
The invention relates to an image reconstruction method, and in particular to a method for reconstructing a fundus image.
An epiretinal membrane is a lesion that occurs at the fundus of the eye and, in severe cases, affects the patient's vision. An epiretinal membrane can be treated effectively: as long as the exact location where the membrane has formed can be diagnosed and the membrane is removed completely, the lost vision can be improved.
An epiretinal membrane can be detected through a fundus examination. A fundus camera can be used to photograph the back of the eye and obtain a fundus image. A fundus image records the appearance of the patient's retina; it is used to check for abnormalities caused by eye disease and to follow the progression of the disease. By tracking a patient's fundus images, a clinician can learn the history of the patient's eye disease and provide an appropriate diagnosis and treatment. In particular, changes in the details shown in a fundus image may be associated with a specific disease. For example, in a color fundus image, an abnormal or particularly conspicuous translucent (whitish) region may be related to an epiretinal membrane. That is, the abnormal white region in the image is an optical effect caused by a thin membrane formed at the fundus or on the retina. Clinicians can identify the membrane by regularly reviewing the patient's fundus image records. A fundus image containing an epiretinal membrane should include a brighter image region and a darker image region, where the brighter region corresponds to the area of the membrane. In practice, without special image processing, a fundus image taken with an ordinary fundus camera makes it difficult for clinicians to distinguish the brighter and darker regions, so an accurate diagnosis and treatment cannot be given.
Therefore, in order to provide a more reliable and more efficient fundus diagnosis for patients with such eye diseases, a tool is needed to improve the discernibility of image detail and contrast.
The present invention uses image processing to improve the discernibility of image detail and contrast, and is suitable for fundus images, especially fundus images related to an epiretinal membrane. The invention provides an image reconstruction method that converts an unprocessed original image into a reconstructed image. A fundus image (initial image) taken with a fundus camera is processed by the method provided by the present invention and converted into a reconstructed image with enhanced contrast and/or pattern detail.
The fundus image reconstruction method is executed by at least one processor and comprises: receiving color image data related to a fundus image; performing an iterative operation to obtain a plurality of enhanced component image data; and reconstructing the color image data based on the enhanced component image data.
Before the iterative operation is performed, the method comprises: obtaining, from the color image data, a plurality of component image data as an input of the iterative operation. In general, a fundus image contains color. The method provided by the present invention identifies, from the received color image data and according to its color components, a plurality of component image data related to the image. Each component image data corresponds to one color, and the color corresponding to one component image differs from the color corresponding to another component image. For example, one component image may correspond to red and another to blue. These component images represent the contribution of each color to the color image. The present invention uses these component images as an input of the iterative operation, and they are processed separately.
The iterative operation comprises: computing, based on a first ratio of the component image data, a respective weight for each of the component image data. Each of the component image data contributes differently to the color image; that is, the component image data may have different pixel values at corresponding pixels. This results in a proportional relationship among the component image data of the color image data. Based on this proportional relationship, a weighting operation is performed on each of the component image data: a plurality of weight values are generated, each corresponding to one of the component images, and the weighting operation is carried out according to this correspondence.
The iterative operation comprises: performing an enhancement operation to compute a plurality of enhanced component image data, wherein each enhanced component image data is based on the sum of the other component image data multiplied by their respective weights. The other component image data are multiplied by their corresponding weight values, and the sum of the weighted other component image data is the enhanced component image data. The color corresponding to the enhanced component image data differs from the colors corresponding to the other component image data, and the other image data likewise correspond to different colors. In other words, the component image data corresponding to one color is obtained by summing the weighted component image data corresponding to the other colors.
The enhancement operation further comprises: computing, based on a second ratio of the enhanced component image data, a respective enhancement weight for each of the enhanced component image data. As mentioned above, the enhanced component image data also bear a proportional relationship to one another, so a plurality of weight values (here called enhancement weight values) can be further generated from this relationship for the enhanced component image data and used in subsequent processing.
The iterative operation comprises: normalizing each enhanced component image data based on the corresponding enhancement weight and a mean value of that enhanced component image data. The enhanced component image data obtained may make the corresponding displayed image intensity too high, so a normalization operation related to image intensity is performed to obtain an appropriate image intensity.
The iterative operation comprises: determining whether to end the iterative operation, and if not, using the normalized enhanced component image data as the input of the iterative operation. When the iterative operation, or the enhanced component images, satisfy a specific condition, the iterative operation ends. When the condition is not satisfied, the iterative operation is executed again, with the normalized result replacing the component image data as the input of the next iteration. The flow of each subsequent iteration is essentially the same as that of the previous iteration. After the iterative operation ends, the color image data is reconstructed based on the enhanced component image data; the image shown by the reconstructed color image data has color and detail different from the initial image.
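The following is a minimal sketch of the flow summarized above, assuming the RGB color model, a fixed iteration count as the stop condition, and NumPy for the matrix arithmetic; the function and variable names are illustrative and are not taken from the patent.

```python
import numpy as np

def reconstruct_fundus(image, iterations=10000):
    """image: an H x W x 3 array holding the received color image data."""
    x_initial = image.astype(np.float64)
    # Identify the component image data (the input of the first iteration).
    r, g, b = (x_initial[..., i] for i in range(3))

    def ratio_weights(c1, c2, c3):
        # Each weight is that component's mean divided by the sum of all means.
        means = np.array([c1.mean(), c2.mean(), c3.mean()])
        return means / means.sum()

    w_r, w_g, w_b = ratio_weights(r, g, b)        # first ratio -> initial weights
    for _ in range(iterations):
        # Enhancement: each enhanced component is the weighted sum of the others.
        r_e = w_g * g + w_b * b
        g_e = w_r * r + w_b * b
        b_e = w_r * r + w_g * g
        # Second ratio -> enhancement weights.
        w_r, w_g, w_b = ratio_weights(r_e, g_e, b_e)
        # Normalization against each component's mean and enhancement weight.
        r = r_e - r_e.mean() * (1 + w_r)
        g = g_e - g_e.mean() * (1 + w_g)
        b = b_e - b_e.mean() * (1 + w_b)

    # Reconstruction: superimpose the final enhanced components on the input.
    return x_initial + np.stack([r, g, b], axis=-1)
```

Each step of this sketch is expanded, with its corresponding equations, in the detailed description below.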
These and other features and advantages of the invention are presented in more detail in the following description of the invention and in the drawings that illustrate the principles of the invention.
2‧‧‧Image capture unit
4‧‧‧Imaging unit
6‧‧‧Illumination unit
61‧‧‧Light source
62‧‧‧Optical elements
8‧‧‧Camera unit
10‧‧‧Computer device
102‧‧‧Processor
104‧‧‧Storage unit
106‧‧‧Display unit
108‧‧‧Input interface
E‧‧‧Eye
20‧‧‧Input image
22‧‧‧Reconstructed image
24‧‧‧Component image data
24r, 24g, 24b‧‧‧Component image data
26‧‧‧Intermediate reconstructed image
Wr, Wg, Wb‧‧‧Weight values
300 to 320‧‧‧Steps
Figure 1 shows a system suitable for the fundus image reconstruction method of the present invention.
Figure 2 shows an input image and a reconstructed image produced according to the method provided by the present invention.
Figure 3 shows the flow of the method of the present invention.
The invention is described more fully below with reference to the drawings, in which specific example embodiments are shown by way of illustration. The claimed subject matter may, however, be embodied in many different forms, so the construction of the covered or claimed subject matter is not limited to any example embodiment disclosed in this specification; the example embodiments are merely illustrative. Likewise, the invention is intended to provide a reasonably broad scope for the claimed or covered subject matter. In addition, the claimed subject matter may, for example, be embodied as a method, an apparatus, or a system. Accordingly, embodiments may take the form of, for example, hardware, software, firmware, or any combination thereof (other than software per se).
The phrase "in one embodiment" as used in this specification does not necessarily refer to the same embodiment, and the phrase "in other embodiments" as used in this specification does not necessarily refer to different embodiments. The intention is, for example, that the claimed subject matter includes combinations of all or part of the example embodiments.
Figure 1 illustrates a system for obtaining and processing a fundus image of an eye E. The system includes an image capture device, an image recording (photographing) device, and a processing and application device. The image capture device may be included in the image recording device and is used to capture a clear fundus image. The processing and application device carries program instructions configured to execute the image reconstruction method of the present invention to process a received image. The processing and application device receives the initial image data obtained by the image recording device and, through the special weighting process provided by the present invention, outputs a reconstructed image. The image capture device and the image recording device are not necessarily combined into a single photographing device, and the image recording device may also be merged into the processing and application device.
The photographing device may be a fundus camera dedicated to photographing fundus images. In the embodiment shown in Figure 1, the image capture device of the system may include an image capture unit 2 and an imaging unit 4. The image capture unit 2 includes a plurality of optical elements, such as an objective, a semitransparent mirror, a focusing lens, and an aperture. The image capture unit 2 is configured to form a focal surface in object space so that the fundus surface of an eye E overlaps the focal surface. The image capture unit 2 may be paired with an illumination unit 6. The illumination unit 6 includes a light source 61, a condenser, and other optical elements 62. A part of the illumination unit 6 is included in the image capture unit 2, for example a shared objective. The illumination unit 6 is configured to project light into the eyeball to illuminate the fundus region, providing sufficient light for the image capture unit 2 to capture. Depending on the operation, the light source 61 can be selectively turned on. In the present invention, additional illumination units can be included to meet different viewing needs.
The imaging unit 4 includes a plurality of optical elements, such as mirrors, a dichroic mirror, and relay lenses. The imaging unit 4 is connected to the rear end of the image capture unit 2; it receives the light collected by the image capture unit 2 and projects it onto a camera unit 8, which is included in the image recording device. In other possible configurations, the imaging unit 4 and the camera unit 8 may both be included in the image recording device to photograph the fundus image and output color image data. A photographed fundus image is then transmitted to the processing and application device. The processing and application device may be a computer device 10, which includes at least one processor 102, a storage unit 104, a display unit 106, and an input interface 108. The photographed fundus image is stored in the storage unit 104 in the form of data. The storage unit 104 also stores a plurality of instructions to be read by the at least one processor 102 to perform various operations, including the fundus image reconstruction operations provided by the present invention. The storage unit 104 further stores the result of each iteration as the source for subsequent iterations or for image reconstruction. The storage unit 104 is a memory that stores the color image data and the intermediate data of the related image processing (such as the component image data and the enhanced component image data); the intermediate data are explained later. One or more processed or unprocessed images may be shown on the display unit 106 for viewing and comparison. The display unit 106 may be a display device, such as a high-resolution LCD screen, that displays the translucent region of the fundus image in the reconstructed color image data (that is, the whitish part of the image, the region where an epiretinal membrane may have formed). The input interface 108 is composed of hardware and software, such as a keyboard, a touch device, and the programs that interact with this hardware. The input interface 108 is provided so that a user or operator can interact with the computer device 10. Through the input interface 108, the computer device 10 can receive control parameters for executing the method of the present invention, such as several preset values or conditions related to the computation, which are described later. According to the operation of the input interface 108, the processor 102 can cause the color image data reconstructed at different iteration counts (that is, the intermediate reconstructed images) to be shown on the display device.
The image reconstruction method of the present invention is not necessarily executed in the system of Figure 1; the color image data may also be transmitted by other means, such as a network, to another computer or server for processing. One or more processors may execute the method of the present invention, and these processors may reside in a single computer device or be distributed across different computing devices.
Figure 2 shows an input image 20 (initial image) converted into a reconstructed image 22 by the method of the present invention. The input image 20 is displayed based on color image data. The color image data is obtained by an image recording device; it may be the output of that device and is generally unprocessed. The format of the color image data may be jpg, jpeg, bmp, or png. The color image data is represented as a matrix. For example, for a 512×512-pixel input image 20, the color image data is also a 512×512 matrix, in which each element represents a pixel value. The color image data, as well as the component image data and the enhanced component image data produced by subsequent computations, all contain a plurality of pixel values. The color image data is composed of a plurality of component image data. The component image data are defined based on a color model, which may be selected from one of the RGB color model, the CMYK color model, and the HSI color model.
As shown in Figure 2, the color image data related to the input image 20 (such as a fundus image) is first received. The processor obtains a plurality of component image data 24 from the color image data. The component image data 24 illustrated here are obtained based on the RGB model, namely the red part 24r, the blue part 24b, and the green part 24g of the color image data, but a person of ordinary skill in the art could also obtain other component image data based on the CMYK model or the HSI model through conversion between the RGB model and those other color models. The component image data 24 are likewise represented as matrices, such as 512×512 matrices, in which each element is a pixel value associated with one color. In other words, the color image data contains, or is composed of, the superposition of the component image data 24. The initial image obtained by most image recording devices is generally formed from its three primary color components, which is related to the arrangement of the image sensor; identifying the RGB component image data within the color image data is therefore within the ability of those skilled in the art.
The method of the present invention includes a weighting operation. The processor generates a plurality of weight values Wr, Wg, Wb (initial weights) according to a proportional relationship (the first ratio) among the component image data 24r, 24g, and 24b, each weight corresponding to one of the previously identified component image data 24r, 24g, 24b. This proportional relationship (the first ratio) is described later.
The method of the present invention includes an iterative operation to enhance the translucent region of the fundus image in the color image data. The method may also include further iterations to enhance the region further. Each iteration includes an enhancement operation and an enhancement-weight operation. The identified component image data 24r, 24g, 24b and the computed weight values Wr, Wg, Wb serve as an input of the iteration (the first iteration). The enhancement operation is performed to compute a plurality of enhanced component image data (not shown), where each enhanced component image data is based on the sum of the other component image data multiplied by their respective weights (weight values Wr, Wg, Wb). The enhancement operation is explained in detail below. The enhanced component image data have the same dimensions as the component image data before enhancement, that is, 512×512 matrices.
The enhancement-weight operation included in the iteration computes, based on a proportional relationship (the second ratio) among the enhanced component image data, the respective enhancement weight of each enhanced component image data, that is, it produces a plurality of enhancement weight values. The enhancement weights differ from the aforementioned initial weights: the enhancement weights are obtained from the enhancement operation within the iteration, while the initial weights are obtained from the initial component image data.
The enhanced component image data may be an output of the iteration. This output is used as an input of the next iteration (the second iteration), and the enhancement operation and enhancement-weight operation are repeated to obtain new enhanced component image data, which in turn serve as the input of the iteration after that. This process repeats until the processor ends the iteration according to a decision result. After the processor ends the iteration, the color image data is reconstructed based on the most recent enhanced component image data. The finally obtained enhanced component image data and the corresponding component image data of the color image data are superimposed on one another to produce the reconstructed image 22. Comparing the fundus images 20 and 22 before and after reconstruction in Figure 2, the reconstructed image 22 has a higher brightness contrast, which helps distinguish a bright region and a relatively dark region of the image 22. In particular, the pattern details of the reconstructed image 22 also become more distinct as a result of the enhancement.
Before the processor stops the continuing iterations, the output of one or more iterations, that is, the enhanced component image data, may be stored in the storage unit 104 of the system of Figure 1. At the same time, one or more intermediate reconstructed images 26 (reconstructed from the stored enhanced component image data) may also be stored. The intermediate reconstructed images 26 differ from the reconstructed image 22 of Figure 2: the intermediate reconstructed images 26 are produced before the iterations have stopped. The intermediate reconstructed images 26 can be retrieved and shown on a display device. Each intermediate reconstructed image 26 may be generated after every predetermined number of iterations, based on the enhanced component image data of the most recent iteration; for example, an intermediate reconstructed image may be generated every 1000 iterations. Therefore, as long as the iterations have not stopped, at least one intermediate reconstructed image 26 is available. The contrast, brightness, and other characteristics of these intermediate reconstructed images 26 show a continuous change, and this change can be used for other image processing, for example as a basis for deciding whether to end or stop the continuing iterations.
Figure 3 shows the flow of the method of the present invention, comprising steps 300 to 320, which are executed by at least one processor. For ease of explanation, n in the figure denotes the initial processing before any iteration, n+1 denotes the processing of the first iteration, and so on. The method begins at step 300 by receiving an initial image. The color image data, that is, the input image 20 of Figure 2, is received by a computing device or computer, and the color image data relates to a fundus image. The initial image is obtained from an image recording device, and its colors or pixel values are usually unprocessed. In other embodiments, however, the initial image may also have undergone preliminary processing, such as image compression or normalization of pixel values.
In step 302, at least one processor identifies the color components contained in the color image data, such as the aforementioned three primary colors. The three primary color components contained in each pixel value of the color image data are identified, and a plurality of component image data are obtained accordingly, as described by the following equation: X_initial = r_n + g_n + b_n ......(1), where X_initial is the color image data and r_n, g_n, b_n are the component image data, that is, the RGB-based component image data 24r, 24g, 24b shown in Figure 2. In other embodiments, other component image data may be identified from the color image data based on the CMYK model or the HSI model. The component image data have the same dimensions, such as 512×512, and they are also an input of the subsequent iteration.
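As a rough illustration of step 302, the sketch below reads an image file and separates it into the three RGB component matrices of equation (1); the use of Pillow and the function name identify_components are assumptions made for illustration only.

```python
import numpy as np
from PIL import Image

def identify_components(path):
    """Return the color image data and its RGB component image data."""
    x_initial = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    r_n, g_n, b_n = (x_initial[..., i] for i in range(3))  # components of Eq. (1)
    return x_initial, r_n, g_n, b_n
```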
In step 304, initial weights for the color components are generated. A proportional relationship (the first ratio) of the component image data is used to compute the respective weight of each component image data. The first ratio is the ratio of the respective mean values of the component image data. According to the first ratio, a plurality of initial weight values are generated. The weight of each component image data, that is, the initial weight value corresponding to that component image data, is the mean of that component image data divided by the sum of the means of all the component image data, as described by the following relation: W_k^n = avg(k_n) / (avg(r_n) + avg(g_n) + avg(b_n)), where k ∈ {r, g, b} and avg(·) denotes the mean pixel value of a component image.
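A possible implementation of the step-304 weighting is sketched below, assuming each weight is the component's mean pixel value divided by the sum of all component means; the helper name is hypothetical.

```python
import numpy as np

def initial_weights(r_n, g_n, b_n):
    """First ratio: weight of each component = its mean / sum of all component means."""
    means = np.array([r_n.mean(), g_n.mean(), b_n.mean()])
    w_r, w_g, w_b = means / means.sum()   # the three weights sum to 1
    return w_r, w_g, w_b
```

The same helper could be reused in step 310 on the enhanced component image data to obtain the enhancement weights.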
In step 306, an iterative operation is performed. The iteration begins with an enhancement operation that produces a plurality of enhanced component image data (n+1), wherein each enhanced component image data is based on the sum of the other component image data multiplied by their respective weights. Each of the enhanced component image data is computed from at least two of the weighted component image data, where the at least two component image data (n) used in the enhancement correspond to two different colors (for example green and blue), while the enhanced component image data (n+1) computed from them corresponds to another color (for example red) that differs from those two colors. For the RGB model, the computation of the enhanced component image data (n+1) can be described by the following relations: r_{n+1} = W_g^n × g_n + W_b^n × b_n, g_{n+1} = W_r^n × r_n + W_b^n × b_n, b_{n+1} = W_r^n × r_n + W_g^n × g_n. The enhanced component image data have the same dimensions as the component image data before enhancement, that is, 512×512 matrices.
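A sketch of the step-306/308 enhancement under the relations above; the function name and argument order are illustrative assumptions.

```python
def enhance_components(r_n, g_n, b_n, w_r, w_g, w_b):
    """Each enhanced component is the weighted sum of the other two components."""
    r_next = w_g * g_n + w_b * b_n   # red rebuilt from green and blue
    g_next = w_r * r_n + w_b * b_n   # green rebuilt from red and blue
    b_next = w_r * r_n + w_g * g_n   # blue rebuilt from red and green
    return r_next, g_next, b_next
```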
The iteration also includes an enhancement-weight operation, step 310, that is, a weighting operation on the enhanced image data obtained by the computation. The weighting operation of step 310 is conceptually the same as the weighting operation of step 304, but it operates on a different basis: step 304 operates on the initial component image data (n), while step 310 operates on the enhanced component image data (n+1) obtained in step 308.
Based on a proportional relationship (the second ratio) of the enhanced component image data (n+1), the respective enhancement weight of each enhanced component image data (n+1) is computed, producing a plurality of enhancement weight values W_k^{n+1}, as described by the following relation: W_k^{n+1} = avg(k_{n+1}) / (avg(r_{n+1}) + avg(g_{n+1}) + avg(b_{n+1})), where k ∈ {r, g, b}.
The iteration also includes a normalization operation, step 312. Each enhanced component image data is normalized based on the corresponding enhancement weight and a mean value of that enhanced component image data. Normalizing an enhanced component image data means subtracting from it the product of its mean and (1 plus its enhancement weight), and taking the result as the normalized enhanced component image data, as described by the following equations: r_{n+1}(normalized) = r_{n+1} − avg(r_{n+1}) × (1 + W_r^{n+1}) ......(7), g_{n+1}(normalized) = g_{n+1} − avg(g_{n+1}) × (1 + W_g^{n+1}) ......(8), b_{n+1}(normalized) = b_{n+1} − avg(b_{n+1}) × (1 + W_b^{n+1}) ......(9). The normalized enhanced component image data can serve as an output of the iteration. In some embodiments, step 312 may be omitted, in which case the enhanced component image data serve as the output of the iteration.
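A one-line sketch of the step-312 normalization of equations (7) to (9), applied to one enhanced component at a time; the function name is an assumption.

```python
def normalize_component(k_next, w_k):
    """Eqs. (7)-(9): subtract the component mean scaled by (1 + enhancement weight)."""
    return k_next - k_next.mean() * (1.0 + w_k)
```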
In step 314, it is determined whether to end or stop the continuing iteration, that is, whether to keep enhancing the enhanced component image data and computing their enhancement weights. If the iteration is not ended or stopped, the normalized enhanced component image data (step 312) are used as the input of the iteration. The iteration continues by returning to step 306, and the input of the continued iteration (n+2) comprises the enhanced component image data (n+1) obtained in step 308 and the enhancement weight values (n+1) obtained in step 310.
In one embodiment, determining whether to end the iteration is determining whether a number of iterations, which may be preset, has been reached. When the continuing iterations satisfy that number, for example 10000, the iteration ends or stops at iteration (n+9999) and the flow proceeds to step 316. If it is determined that the number of iterations has not been reached, the normalized enhanced component image data are used as the input for continuing the iteration, and the iteration is executed again. A manual approach is also possible; for example, an operator may decide, based on observation of the intermediate reconstructed images 26 shown in Figure 2, whether a few more iterations should be performed.
The iteration further comprises, each time a preset number of iterations or a multiple thereof is reached, reconstructing the color image data for that iteration count from the enhanced component image data obtained at that point, that is, producing the intermediate reconstructed image. For example, it may be preset that when the number of iterations reaches 1000, or a multiple of 1000, the color image data is reconstructed from the enhanced component image data obtained in the most recent iteration to obtain one or more intermediate reconstructed images, and the iteration then continues. As mentioned above, the intermediate reconstructed images can present the continuous change of the initial image during processing, which can be provided to the operator or clinician for observation.
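An illustrative way to keep such intermediate reconstructed images inside the iteration loop, assuming a period of 1000 iterations; the names and the list-based storage are assumptions.

```python
import numpy as np

def keep_intermediate(iteration, x_initial, r, g, b, intermediates, period=1000):
    """Every `period` iterations, rebuild and store an intermediate reconstructed image."""
    if (iteration + 1) % period == 0:
        intermediates.append(x_initial + np.stack([r, g, b], axis=-1))
```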
In step 314, in other embodiments, determining whether to end the iteration is determining whether the difference between the results obtained at different iteration counts is smaller than a critical condition. During the continuing iterations, when the difference between the (normalized) enhanced component image data of one iteration and the (normalized) enhanced component image data of a previous iteration is smaller than the critical condition, the continuing iteration ends or stops and the flow proceeds to step 316. For example, the overall pixel-value difference between an intermediate reconstructed image obtained from one iteration and another intermediate reconstructed image obtained from a previous iteration may be used to judge whether the difference is smaller than the critical condition.
The critical condition may be a convergence condition: when the differences between the outputs of different iterations (that is, multiple intermediate reconstructed images, or multiple sets of enhanced component image data) show a convergent trend, the convergence condition is satisfied and the iteration ends or stops.
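A possible form of such a convergence test, assuming the critical condition is a threshold on the mean absolute pixel difference between the outputs of two successive iterations; the threshold value is arbitrary.

```python
import numpy as np

def below_critical_condition(previous_output, current_output, threshold=1e-3):
    """Compare two successive iteration outputs by their overall pixel difference."""
    return float(np.abs(current_output - previous_output).mean()) < threshold
```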
In step 316, based on the determination of step 314, that is, whether the number of iterations has been satisfied or the iteration results have converged, execution of the iteration stops and the enhancement terminates. In step 318, the color image data is reconstructed based on the enhanced component image data obtained in the last iteration. Reconstructing the color image data comprises superimposing the enhanced component image data from the last iteration onto the corresponding component image data of the color image data to produce a reconstructed image, as described by the following equation: X_final = r_final + g_final + b_final + X_initial ......(10), where X_final is the reconstructed image data and r_final, g_final, b_final are the enhanced component image data obtained in the last iteration after the iteration stops, each superimposed on the corresponding component image r_n, g_n, b_n, that is, r_final + r_n, g_final + g_n, b_final + b_n. In step 320, the processor outputs the reconstructed image (such as the reconstructed image 22 of Figure 2) based on the reconstructed image data X_final, and it can be shown on the display device. This reconstructed image differs from the aforementioned intermediate reconstructed images: it is produced after the iteration ends or stops. After a fundus image has been reconstructed through the above steps, the region with an epiretinal membrane (the whitish region) is more apparent.
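A sketch of the step-318/320 reconstruction of equation (10); the final rescaling to an 8-bit displayable range is an added assumption and is not part of the equation.

```python
import numpy as np

def reconstruct_color_image(x_initial, r_final, g_final, b_final):
    """Eq. (10): superimpose the final enhanced components on the original data."""
    x_final = x_initial + np.stack([r_final, g_final, b_final], axis=-1)
    x_final -= x_final.min()                  # shift to a non-negative range
    if x_final.max() > 0:
        x_final *= 255.0 / x_final.max()      # rescale for display (assumption)
    return x_final.astype(np.uint8)
```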
The embodiments above use the RGB model as an example. In other embodiments, the RGB model may be replaced by the CMYK model or the HSI model. For the CMYK model, the above operations may further include a complementary operation on the colors, for example identifying and converting the component image data of an input image data, as described by the following equations: X_initial = c + m + y + k ......(11), c_n = 255 − c ......(12), m_n = 255 − m ......(13), y_n = 255 − y ......(14), k_n = 255 − k ......(15), where c, m, y, k are the cyan, magenta, yellow, and black matrices, respectively, and c_n, m_n, y_n, k_n, after conversion, can serve as an input of the iteration and the basis of the weighting computation. For the HSI model, the above operations include a conversion operation based on a linear relationship between the RGB model and the HSI model. Those skilled in the art can, based on common knowledge, suitably adjust or modify the above computation flow so that the present invention applies to other color models.
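For the CMYK variant, the complementary operation of equations (12) to (15) could look like the sketch below, assuming the four channel matrices have already been obtained from the input image.

```python
def complement_cmyk(c, m, y, k):
    """Eqs. (12)-(15): complement each CMYK channel against 255 before iterating."""
    return tuple(255.0 - channel for channel in (c, m, y, k))
```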
Although the foregoing invention has been described in some detail for clarity of understanding, it will be appreciated that certain changes and modifications may be practiced within the scope of the appended claims. Accordingly, the above embodiments are illustrative only and not restrictive, and the invention is not limited to the details described herein, but may be modified within the scope of the appended claims and their equivalents.
Claims (14)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW105115464A TWI660708B (en) | 2016-05-19 | 2016-05-19 | Method for reconstructing fundus image |
Publications (2)
Publication Number | Publication Date |
---|---|
TW201740871A TW201740871A (en) | 2017-12-01 |
TWI660708B true TWI660708B (en) | 2019-06-01 |
Family
ID=61230016
Country Status (1)
Country | Link |
---|---|
TW (1) | TWI660708B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110543802A (en) * | 2018-05-29 | 2019-12-06 | 北京大恒普信医疗技术有限公司 | Method and device for identifying left eye and right eye in fundus image |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020196350A1 (en) * | 2001-06-25 | 2002-12-26 | Sony Corporation | System and method for effectively performing and image data processing procedure |
US6792162B1 (en) * | 1999-08-20 | 2004-09-14 | Eastman Kodak Company | Method and apparatus to automatically enhance the quality of digital images by measuring grain trace magnitudes |
TW201118801A (en) * | 2009-11-16 | 2011-06-01 | Inst Information Industry | Image contrast enhancement apparatus and method thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| MM4A | Annulment or lapse of patent due to non-payment of fees | |