TWI775356B - Image pre-processing method and image processing apparatus for fundoscopic image - Google Patents

Image pre-processing method and image processing apparatus for fundoscopic image

Info

Publication number
TWI775356B
Authority
TW
Taiwan
Prior art keywords
image
interest
region
processor
eyeball
Prior art date
Application number
TW110109989A
Other languages
Chinese (zh)
Other versions
TW202238514A (en)
Inventor
黃宜瑾
蔡金翰
陳名科
Original Assignee
宏碁智醫股份有限公司
Priority date
Filing date
Publication date
Application filed by 宏碁智醫股份有限公司 filed Critical 宏碁智醫股份有限公司
Priority to TW110109989A priority Critical patent/TWI775356B/en
Priority to CN202110411128.7A priority patent/CN115115528A/en
Priority to US17/235,938 priority patent/US11954824B2/en
Priority to JP2021119515A priority patent/JP7337124B2/en
Priority to EP21196275.8A priority patent/EP4060601A1/en
Application granted granted Critical
Publication of TWI775356B publication Critical patent/TWI775356B/en
Publication of TW202238514A publication Critical patent/TW202238514A/en


Classifications

    • G06T 5/90
    • G06T 5/73
    • G06T 5/70
    • G06N 20/00 Machine learning
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/94
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/20012 Locally adaptive (adaptive image processing)
    • G06T 2207/20024 Filtering details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20182 Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
    • G06T 2207/20192 Edge enhancement; Edge preservation
    • G06T 2207/30041 Eye; Retina; Ophthalmic
    • G06T 2207/30096 Tumor; Lesion
    • G06V 40/193 Preprocessing; Feature extraction (eye characteristics, e.g. of the iris)

Abstract

An image pre-processing method and an image processing apparatus for fundoscopic images are provided. A region of interest (ROI) is obtained from a fundoscopic image to generate a first image, where the ROI targets the eyeball in the fundoscopic image. A smoothing process is performed on the first image to generate a second image. The value differences between neighboring pixels in the second image are then increased to generate a third image. Accordingly, the image features can be enhanced, which aids the subsequent image recognition operation.

Description

Image pre-processing method and image processing apparatus for fundoscopic images

The present invention relates to an image processing technique, and more particularly, to an image pre-processing method and an image processing apparatus for fundoscopic images.

Medical images are images captured of specific parts of an organism and can be used to assess the risk of contracting a disease or the severity of a disease. For example, fundoscopic photography allows early detection of conditions such as retinopathy, glaucoma, macular disease, and other lesions. In general, most physicians identify lesions on medical images manually. Although computer-aided evaluation of medical images is now possible, metrics such as efficiency, complexity, and accuracy still leave room for improvement.

In view of this, embodiments of the present invention provide an image pre-processing method and an image processing apparatus for fundoscopic images that enhance image features and thereby improve the accuracy of subsequent identification of lesions or other features.

The image pre-processing method of an embodiment of the present invention includes (but is not limited to) the following steps: a region of interest is obtained from a fundoscopic image to generate a first image, where the region of interest targets the eyeball in the fundoscopic image; a smoothing process is performed on the first image to generate a second image; and the value differences between multiple adjacent pixels in the second image are increased to generate a third image. The third image is used for image recognition.

The image processing apparatus of an embodiment of the present invention includes (but is not limited to) a storage and a processor. The storage stores program code. The processor is coupled to the storage. The processor loads and executes the program code so as to be configured to obtain a region of interest from a fundoscopic image to generate a first image, perform a smoothing process on the first image to generate a second image, and increase the value differences between multiple adjacent pixels in the second image to generate a third image. The region of interest targets the eyeball in the fundoscopic image. The third image is used for image recognition.

Based on the above, according to the image pre-processing method and image processing apparatus for fundoscopic images of the embodiments of the present invention, a region of interest is cropped from the initial fundoscopic image before image recognition and is then further smoothed and numerically enhanced. In this way, features can be strengthened, noise can be reduced, and the recognition accuracy of subsequent image recognition can be improved.

To make the above features and advantages of the present invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

FIG. 1 is a block diagram of the components of an image processing apparatus 100 according to an embodiment of the present invention. Referring to FIG. 1, the image processing apparatus 100 includes (but is not limited to) a storage 110 and a processor 130. The image processing apparatus 100 may be a desktop computer, a notebook computer, a smartphone, a tablet computer, a server, a medical examination instrument, or another computing device.

The storage 110 may be any type of fixed or removable random access memory (RAM), read-only memory (ROM), flash memory, conventional hard disk drive (HDD), solid-state drive (SSD), or a similar component. In one embodiment, the storage 110 records program code, software modules, configurations, data (for example, images, values, reference values, distances, and so on), or files; embodiments thereof are described in detail later.

The processor 130 is coupled to the storage 110. The processor 130 may be a central processing unit (CPU), a graphics processing unit (GPU), another programmable general-purpose or special-purpose microprocessor, a digital signal processor (DSP), a programmable controller, a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a neural network accelerator, another similar component, or a combination of the above components. In one embodiment, the processor 130 executes all or part of the operations of the image processing apparatus 100 and can load and execute the program code, software modules, files, and data recorded in the storage 110.

Hereinafter, the method of the embodiments of the present invention is described in conjunction with the devices, components, and modules of the image processing apparatus 100. Each step of the method may be adjusted according to the implementation and is not limited thereto.

FIG. 2 is a flowchart of an image pre-processing method according to an embodiment of the present invention. Referring to FIG. 2, the processor 130 obtains a region of interest from a fundoscopic image to generate a first image (step S210). Specifically, a fundoscopic image is an image obtained by fundus photography of a human or another organism. The processor 130 may obtain the fundoscopic image through a built-in or external image capture device, or the processor 130 may download the fundoscopic image from a server, a computer, or a storage medium. It is worth noting that fundoscopic images from different sources may differ in shape or size. To normalize these fundoscopic images, the processor 130 may first crop out the region of interest, which is regarded as the important or useful information.

The region of interest in the embodiments of the present invention targets the eyeball in the fundoscopic image. In one embodiment, the processor 130 may locate the center of the eyeball from the fundoscopic image. For example, the processor 130 may regard the point where the most straight lines along the gradient directions of the pixels in the fundoscopic image intersect as the position of the eyeball. As another example, the processor 130 may apply a Hough transform to the fundoscopic image to select the circle that best satisfies the constraints and accordingly determine the center of the boundary contour of the region of interest. The processor 130 may further determine the region of interest based on the center of the eyeball. For example, the processor 130 may fit, around the center, a circle that better or best matches the contour of the eyeball and use the contour of this circle as the boundary of the region of interest. As another example, the processor 130 may use the circle obtained from the Hough transform as the region of interest.
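To make the Hough-transform option above concrete, the following is a minimal sketch assuming OpenCV is available; the function name locate_eyeball_center, the parameter values, and the fallback behavior are illustrative assumptions rather than details taken from the patent.

```python
import cv2
import numpy as np

def locate_eyeball_center(fundus_bgr):
    """Estimate the eyeball center and radius with a circular Hough transform."""
    gray = cv2.cvtColor(fundus_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)  # suppress noise before circle detection
    h, w = gray.shape
    circles = cv2.HoughCircles(
        gray,
        cv2.HOUGH_GRADIENT,
        dp=2,                       # accumulator resolution ratio
        minDist=max(h, w),          # expect one dominant circle (the eyeball)
        param1=60, param2=30,       # Canny / accumulator thresholds (tunable)
        minRadius=min(h, w) // 4,
        maxRadius=min(h, w) // 2,
    )
    if circles is None:
        # fall back to the image center and half of the shorter side
        return (w // 2, h // 2), min(h, w) // 2
    cx, cy, r = np.round(circles[0, 0]).astype(int)
    return (cx, cy), int(r)
```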

In another embodiment, the processor 130 may search for the boundary of the eyeball from the outside of the fundoscopic image toward its center. For example, the processor 130 scans gradually from the four sides of the fundoscopic image toward the center and evaluates the brightness of the scanned area. It is worth noting that the brightness value (that is, the lightness of the color) on one side of the eyeball boundary is higher than on the other side. In general, the area outside the eyeball in a fundoscopic image has lower brightness values and may be black. When the brightness difference between adjacent pixels on any side is higher than a difference threshold, or the brightness value of one or more pixels is higher than a brightness threshold (that is, the brightness value on one side is higher than on the other side), the processor 130 may determine that the outermost extent of the region of interest on that side has been found. The processor 130 may take the outermost positions found on the four sides of the fundoscopic image as boundary lines, forming a quadrilateral. The processor 130 may use the length of the shortest side of the quadrilateral as the diameter of a circle (regarded as the eyeball) and use the circle formed by this diameter as the boundary of the eyeball; the center of the circle is the center of the quadrilateral. In another embodiment, to first filter out interference that often appears near the periphery, the processor 130 uses half of the length of the shortest side of the quadrilateral multiplied by a floating-point number greater than 0 and less than 1 as the radius and takes the circle with this radius; the center of the circle is still located at the center of the quadrilateral. Next, the processor 130 may determine the region of interest according to the boundary of the eyeball, that is, use the boundary of the eyeball as the boundary of the region of interest.
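The edge-inward scan can be sketched as follows. This is one possible reading of the paragraph above: per-row and per-column brightness maxima stand in for the sequential side-by-side scan, and the brightness threshold of 20 and shrink factor of 0.9 are illustrative assumptions, not values given in the patent.

```python
import cv2
import numpy as np

def eyeball_circle_by_scanning(fundus_bgr, brightness_threshold=20, shrink=0.9):
    """Scan from the four sides toward the center until the dark background ends."""
    gray = cv2.cvtColor(fundus_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape

    cols_bright = np.where(gray.max(axis=0) > brightness_threshold)[0]
    rows_bright = np.where(gray.max(axis=1) > brightness_threshold)[0]
    if cols_bright.size == 0 or rows_bright.size == 0:
        return (w // 2, h // 2), int(min(h, w) * shrink) // 2

    left, right = cols_bright[0], cols_bright[-1]
    top, bottom = rows_bright[0], rows_bright[-1]

    # The four outermost bright positions bound a quadrilateral (here a rectangle).
    center = ((left + right) // 2, (top + bottom) // 2)
    shortest_side = min(right - left, bottom - top)
    radius = int((shortest_side / 2) * shrink)  # shrink to drop peripheral artifacts
    return center, radius
```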

In one embodiment, the processor 130 may crop the region of interest out of the fundoscopic image; that is, the processor 130 removes the area of the fundoscopic image that is not the region of interest. The processor 130 may further add a background color outside the region of interest to form the first image. This background color is treated as useless information in the subsequent image recognition of the fundoscopic image (for example, identifying lesions, identifying severity, and so on); useless information may be excluded by feature extraction or carry a low value. For example, the background color may be formed by setting the red, green, and blue values all to 128, 64, or 0, but is not limited thereto. In addition, the size, shape, and/or aspect ratio of the first image may be fixed, thereby normalizing different fundoscopic images. In some embodiments, the aforementioned circle may be changed to an ellipse or another geometric shape.
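A hedged sketch of this cropping step, combining a previously found center and radius with a fixed output size and background color; the 512-pixel output size and the background value of 0 are assumed for illustration only.

```python
import cv2
import numpy as np

def crop_roi(fundus_bgr, center, radius, out_size=512, background=0):
    """Keep only the circular ROI, paint the rest with a background color, and resize."""
    mask = np.zeros(fundus_bgr.shape[:2], dtype=np.uint8)
    cv2.circle(mask, center, radius, 255, thickness=-1)   # filled circle = ROI

    first_image = np.full_like(fundus_bgr, background)
    first_image[mask > 0] = fundus_bgr[mask > 0]

    # Crop the bounding square of the circle, then normalize the output size.
    cx, cy = center
    x0, x1 = max(cx - radius, 0), min(cx + radius, fundus_bgr.shape[1])
    y0, y1 = max(cy - radius, 0), min(cy + radius, fundus_bgr.shape[0])
    first_image = first_image[y0:y1, x0:x1]
    return cv2.resize(first_image, (out_size, out_size))
```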

For example, FIG. 3 is a schematic diagram of a first image according to an embodiment of the present invention. Referring to FIG. 3, the circular region shown in the figure (that is, the region of interest) corresponds to the eyeball.

The processor 130 may perform a smoothing process on the first image to generate a second image (step S230). Specifically, smoothing is a spatial-domain filtering technique that directly blurs the pixels in an image and removes noise; for example, the value differences (or distances) between adjacent pixels are reduced.

In one embodiment, the smoothing process is a Gaussian blur, and the processor 130 may apply the Gaussian blur to the first image. For example, the processor 130 convolves a Gaussian kernel with each pixel of the first image and sums the convolution results to obtain the second image.
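A short sketch of the smoothing step, assuming OpenCV's GaussianBlur is used; the kernel size is an illustrative choice (the patent does not fix one), and the commented alternatives correspond to the other smoothing options mentioned later in this description.

```python
import cv2

def smooth(first_image, ksize=7, sigma=0):
    """Spatial-domain smoothing of the first image to produce the second image."""
    # Gaussian blur: convolve each channel with a Gaussian kernel.
    second_image = cv2.GaussianBlur(first_image, (ksize, ksize), sigma)
    # Alternatives described later in the text:
    #   cv2.medianBlur(first_image, ksize)        # median filtering
    #   cv2.blur(first_image, (ksize, ksize))     # mean / box filtering
    return second_image
```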

For example, FIG. 4 is a schematic diagram of a second image according to an embodiment of the present invention. Referring to FIG. 3 and FIG. 4, compared with FIG. 3, some of the noise details in the Gaussian-blurred FIG. 4 have been blurred away, while the edges of the blood vessels, macula, veins, and/or arteries are still preserved.

In other embodiments, the smoothing process may also be median filtering, mean filtering, box filtering, or another process.

The processor 130 may increase the value differences between multiple adjacent pixels in the second image to generate a third image (step S250). Specifically, the smoothing process reduces the value differences between adjacent pixels. To further strengthen the features, in one embodiment the processor 130 may proportionally increase each value difference (that is, update or change the value difference) according to the distance between that value difference and a reference value. For example, with a reference value of 128, the processor 130 may calculate the original value difference between each pixel and its adjacent pixels in the red, green, and blue channels and compare the distance between the original value difference and the reference value. The farther the distance, the more the processor 130 increases the value difference; the closer the distance, the less it increases the value difference. The ratio may be 1, 2, 5, or 10 times. The processor 130 then changes the values of the corresponding pixels according to the increased value difference (that is, the updated value difference) so that the value difference between the two pixels matches the updated value difference.

In some embodiments, the changed values are bounded by an upper limit or a lower limit. For example, the upper limit is 255 and the lower limit is 0. When a changed value exceeds the upper or lower limit, it is set to a specific value (for example, the upper limit, the lower limit, or another value).

It should be noted that the mathematical relationship between the original value difference and the updated value difference is not limited to a proportional relationship; in other embodiments, the processor 130 may also adopt a linear relationship, an exponential relationship, or another mathematical relationship according to actual requirements.
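Since the exact update rule is not spelled out beyond the description above, the following sketch shows one common way to realize this kind of local-contrast amplification around a reference value of 128: the difference between each pixel of the second image and a locally smoothed copy is scaled and re-centered on the reference, then clipped to the 0-255 range discussed above. The scale factor of 4 and the kernel size are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def enhance_differences(second_image, scale=4, reference=128, ksize=31):
    """Amplify local pixel-value differences in the second image (one interpretation)."""
    local_mean = cv2.GaussianBlur(second_image, (ksize, ksize), 0)
    diff = second_image.astype(np.float32) - local_mean.astype(np.float32)
    third_image = reference + scale * diff                 # proportional amplification
    return np.clip(third_image, 0, 255).astype(np.uint8)   # enforce the 0-255 bounds
```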

For example, FIG. 5 is a schematic diagram of a third image according to an embodiment of the present invention. Referring to FIG. 4 and FIG. 5, the blood vessels, macula, optic disc, veins, arteries, and other objects in FIG. 5 are more distinct.

It is also worth noting that the third image of the embodiments of the present invention can be used for image recognition. In one embodiment, the processor 130 inputs the third image into a detection model based on a machine learning algorithm (for example, a deep neural network (DNN), a multi-layer perceptron (MLP), a support vector machine (SVM), or another machine learning model). In one embodiment, the detection model is used for image recognition. It should be noted that the third image can serve as the pre-processing result for the training phase and/or the inference phase of the detection model. In general, a detection model performs feature extraction directly on the initial fundoscopic image without image pre-processing. With the image pre-processing of the embodiments of the present invention, the image recognition results for lesions such as hemorrhages, exudates, and edema can be more accurate. Alternatively, the image pre-processing of the embodiments of the present invention can make parts such as blood vessels, the macula, and veins easier to identify, but it is not limited thereto.
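As a usage illustration only, the pre-processed third image might be handed to a detection model roughly as follows. The sketch chains the earlier example functions and uses scikit-learn's SVC as a stand-in for the SVM option mentioned above; the flattened-pixel features, function names, and training interface are assumptions for illustration, not the patent's actual model.

```python
import numpy as np
from sklearn.svm import SVC

def preprocess(fundus_bgr):
    """Chain the steps S210, S230, and S250 from the sketches above."""
    center, radius = locate_eyeball_center(fundus_bgr)
    first_image = crop_roi(fundus_bgr, center, radius)
    second_image = smooth(first_image)
    return enhance_differences(second_image)

def train_detector(fundus_images, labels):
    """Training phase: fit an SVM on flattened pre-processed images."""
    features = np.stack([preprocess(img).reshape(-1) for img in fundus_images])
    return SVC(probability=True).fit(features, labels)

def detect(detector, fundus_image):
    """Inference phase: apply the same pre-processing before prediction."""
    feature = preprocess(fundus_image).reshape(1, -1)
    return detector.predict(feature)[0]
```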

In another embodiment, the image recognition may be based on the scale-invariant feature transform (SIFT), Haar features, AdaBoost, or another recognition technique.

In summary, in the image pre-processing method and image processing apparatus for fundoscopic images of the embodiments of the present invention, the region of interest in the fundoscopic image is determined, the image is smoothed, and the value differences are increased. In this way, features can be enhanced, which benefits subsequent image recognition, model training, or other image applications.

Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the present invention. Anyone with ordinary skill in the art may make slight changes and refinements without departing from the spirit and scope of the present invention; therefore, the scope of protection of the present invention shall be determined by the appended claims.

100: image processing apparatus 110: storage 130: processor S210~S250: steps

FIG. 1 is a block diagram of the components of an image processing apparatus according to an embodiment of the present invention. FIG. 2 is a flowchart of an image pre-processing method according to an embodiment of the present invention. FIG. 3 is a schematic diagram of a first image according to an embodiment of the present invention. FIG. 4 is a schematic diagram of a second image according to an embodiment of the present invention. FIG. 5 is a schematic diagram of a third image according to an embodiment of the present invention.

S210~S250: steps

Claims (14)

1. An image pre-processing method, comprising: obtaining a region of interest from a fundoscopic image to generate a first image, wherein the region of interest targets an eyeball in the fundoscopic image; performing a smoothing process on the first image to generate a second image; and increasing value differences between a plurality of adjacent pixels in the second image to generate a third image.

2. The image pre-processing method according to claim 1, wherein the step of obtaining the region of interest from the fundoscopic image comprises: locating a center of the eyeball from the fundoscopic image; and determining the region of interest according to the center.

3. The image pre-processing method according to claim 1, wherein the step of obtaining the region of interest from the fundoscopic image comprises: searching for a boundary of the eyeball starting from the outside of the fundoscopic image, wherein a brightness value on one side of the boundary of the eyeball is higher than on the other side; and determining the region of interest according to the boundary of the eyeball.

4. The image pre-processing method according to claim 1, wherein the step of generating the first image comprises: cropping the region of interest out of the fundoscopic image; and adding a background color outside the region of interest to form the first image.

5. The image pre-processing method according to claim 1, wherein the smoothing process is a Gaussian blur, and the step of performing the smoothing process on the first image comprises: performing the Gaussian blur on the first image.

6. The image pre-processing method according to claim 1, wherein the step of increasing the value differences between the adjacent pixels in the second image comprises: proportionally increasing each value difference according to a distance between the value difference and a reference value.

7. The image pre-processing method according to claim 1, further comprising, after the step of generating the third image: inputting the third image into a detection model based on a machine learning algorithm.

8. An image processing apparatus, comprising: a storage, storing program code; and a processor, coupled to the storage, loading and executing the program code so as to be configured to: obtain a region of interest from a fundoscopic image to generate a first image, wherein the region of interest targets an eyeball in the fundoscopic image; perform a smoothing process on the first image to generate a second image; and increase value differences between a plurality of adjacent pixels in the second image to generate a third image.

9. The image processing apparatus according to claim 8, wherein the processor is further configured to: locate a center of the eyeball from the fundoscopic image; and determine the region of interest according to the center.

10. The image processing apparatus according to claim 8, wherein the processor is further configured to: search for a boundary of the eyeball from the outside of the fundoscopic image toward its center, wherein a brightness value on one side of the boundary of the eyeball is higher than on the other side; and determine the region of interest according to the boundary of the eyeball.

11. The image processing apparatus according to claim 8, wherein the processor is further configured to: crop the region of interest out of the fundoscopic image; and add a background color outside the region of interest to form the first image.

12. The image processing apparatus according to claim 8, wherein the smoothing process is a Gaussian blur, and the processor is further configured to: perform the Gaussian blur on the first image.

13. The image processing apparatus according to claim 8, wherein the processor is further configured to: proportionally increase each value difference according to a distance between the value difference and a reference value.

14. The image processing apparatus according to claim 8, wherein the processor is further configured to: input the third image into a detection model based on a machine learning algorithm.
TW110109989A 2021-03-19 2021-03-19 Image pre-processing method and image processing apparatus for fundoscopic image TWI775356B (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
TW110109989A TWI775356B (en) 2021-03-19 2021-03-19 Image pre-processing method and image processing apparatus for fundoscopic image
CN202110411128.7A CN115115528A (en) 2021-03-19 2021-04-16 Image preprocessing method and image processing device for fundus image
US17/235,938 US11954824B2 (en) 2021-03-19 2021-04-21 Image pre-processing method and image processing apparatus for fundoscopic image
JP2021119515A JP7337124B2 (en) 2021-03-19 2021-07-20 Image preprocessing method and image processing apparatus for fundus examination images
EP21196275.8A EP4060601A1 (en) 2021-03-19 2021-09-13 Image pre-processing for a fundoscopic image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW110109989A TWI775356B (en) 2021-03-19 2021-03-19 Image pre-processing method and image processing apparatus for fundoscopic image

Publications (2)

Publication Number Publication Date
TWI775356B true TWI775356B (en) 2022-08-21
TW202238514A TW202238514A (en) 2022-10-01

Family

ID=77739002

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110109989A TWI775356B (en) 2021-03-19 2021-03-19 Image pre-processing method and image processing apparatus for fundoscopic image

Country Status (5)

Country Link
US (1) US11954824B2 (en)
EP (1) EP4060601A1 (en)
JP (1) JP7337124B2 (en)
CN (1) CN115115528A (en)
TW (1) TWI775356B (en)


Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3618877B2 (en) 1996-02-05 2005-02-09 キヤノン株式会社 Ophthalmic image processing device
US6915024B1 (en) * 2000-09-29 2005-07-05 Hewlett-Packard Development Company, L.P. Image sharpening by variable contrast mapping
JP4190221B2 (en) * 2002-07-09 2008-12-03 Hoya株式会社 Image contour enhancement device
KR20050025927A (en) * 2003-09-08 2005-03-14 유웅덕 The pupil detection method and shape descriptor extraction method for a iris recognition, iris feature extraction apparatus and method, and iris recognition system and method using its
JP4636841B2 (en) 2004-09-29 2011-02-23 キヤノン株式会社 Ophthalmic image photographing apparatus and photographing method
US8320641B2 (en) * 2004-10-28 2012-11-27 DigitalOptics Corporation Europe Limited Method and apparatus for red-eye detection using preview or other reference images
JP2006263127A (en) 2005-03-24 2006-10-05 Gifu Univ Ocular fundus diagnostic imaging support system and ocular fundus diagnostic imaging support program
JP2007117154A (en) 2005-10-25 2007-05-17 Pentax Corp Electronic endoscope system
US20070248277A1 (en) * 2006-04-24 2007-10-25 Scrofano Michael A Method And System For Processing Image Data
JP2015051054A (en) 2013-09-05 2015-03-19 キヤノン株式会社 Image processing device, image processing system and image processing method
BR112018004755A2 (en) * 2015-09-11 2018-09-25 EyeVerify Inc. image and feature quality, image enhancement and feature extraction for ocular-vascular and facial recognition and fusion of ocular-vascular and / or subfacial information for biometric systems
CN108229252B (en) * 2016-12-15 2020-12-15 腾讯科技(深圳)有限公司 Pupil positioning method and system
US20190014982A1 (en) * 2017-07-12 2019-01-17 iHealthScreen Inc. Automated blood vessel feature detection and quantification for retinal image grading and disease screening
CN111833334A (en) 2020-07-16 2020-10-27 上海志唐健康科技有限公司 Fundus image feature processing and analyzing method based on twin network architecture

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9002085B1 (en) * 2013-10-22 2015-04-07 Eyenuk, Inc. Systems and methods for automatically generating descriptions of retinal images

Also Published As

Publication number Publication date
US11954824B2 (en) 2024-04-09
TW202238514A (en) 2022-10-01
CN115115528A (en) 2022-09-27
JP2022145411A (en) 2022-10-04
EP4060601A1 (en) 2022-09-21
JP7337124B2 (en) 2023-09-01
US20220301111A1 (en) 2022-09-22

Similar Documents

Publication Publication Date Title
WO2021169128A1 (en) Method and apparatus for recognizing and quantifying fundus retina vessel, and device and storage medium
Sevastopolsky Optic disc and cup segmentation methods for glaucoma detection with modification of U-Net convolutional neural network
Rebouças Filho et al. Automatic histologically-closer classification of skin lesions
Liu et al. Automatic skin lesion classification based on mid-level feature learning
Moradi et al. Kernel sparse representation based model for skin lesions segmentation and classification
Panda et al. New binary Hausdorff symmetry measure based seeded region growing for retinal vessel segmentation
Hsu et al. Chronic wound assessment and infection detection method
Khowaja et al. A framework for retinal vessel segmentation from fundus images using hybrid feature set and hierarchical classification
CN111882566B (en) Blood vessel segmentation method, device, equipment and storage medium for retina image
Cavalcanti et al. Macroscopic pigmented skin lesion segmentation and its influence on lesion classification and diagnosis
WO2020001236A1 (en) Method and apparatus for extracting annotation in medical image
Liu et al. Joint optic disc and cup segmentation based on densely connected depthwise separable convolution deep network
Dhane et al. Spectral clustering for unsupervised segmentation of lower extremity wound beds using optical images
Ma et al. Multichannel retinal blood vessel segmentation based on the combination of matched filter and U-Net network
Wang et al. Retinal vessel segmentation approach based on corrected morphological transformation and fractal dimension
JP6578058B2 (en) Image processing apparatus, method for operating image processing apparatus, and operation program for image processing apparatus
Montaha et al. A shallow deep learning approach to classify skin cancer using down-scaling method to minimize time and space complexity
Choudhary et al. Skin lesion detection based on deep neural networks
Khan et al. An efficient technique for retinal vessel segmentation and denoising using modified ISODATA and CLAHE
Biswal et al. Robust retinal optic disc and optic cup segmentation via stationary wavelet transform and maximum vessel pixel sum
TWI775356B (en) Image pre-processing method and image processing apparatus for fundoscopic image
Kanca et al. Learning hand-crafted features for k-NN based skin disease classification
WO2020140380A1 (en) Method and device for quickly dividing optical coherence tomography image
Xiang et al. Segmentation of retinal blood vessels based on divergence and bot-hat transform
Wu et al. Retinal vessel radius estimation and a vessel center line segmentation method based on ridge descriptors

Legal Events

Date Code Title Description
GD4A Issue of patent certificate for granted invention patent