TW201427643A - Image capturing apparatus and capturing method - Google Patents

Image capturing apparatus and capturing method

Info

Publication number
TW201427643A
Authority
TW
Taiwan
Prior art keywords
image
images
peripheral
central
eyeball
Application number
TW102100592A
Other languages
Chinese (zh)
Other versions
TWI561210B (en)
Inventor
Te-Chao Tsao
Original Assignee
Altek Corp
Priority date
Application filed by Altek Corp filed Critical Altek Corp
Priority to TW102100592A priority Critical patent/TWI561210B/en
Publication of TW201427643A publication Critical patent/TW201427643A/en
Application granted granted Critical
Publication of TWI561210B publication Critical patent/TWI561210B/en

Landscapes

  • Eye Examination Apparatus (AREA)

Abstract

An image capturing apparatus including a plurality of image sensing modules and at least one light source is provided for capturing images of an eye. Each of the image sensing modules includes an image sensor and a lens. The light source emits an illumination light, and the illumination light irradiates the eye. The eye reflects the illumination light into an image light. The image light includes a plurality of image sub-beams, and the image sub-beams are transmitted to the image sensors of the image sensing modules through the lenses of the image sensing modules, respectively. A capturing method is also provided.

Description

Image capturing apparatus and capturing method

The present invention relates to an image capturing apparatus and a capturing method, and more particularly to an image capturing apparatus and a capturing method for capturing images of an eyeball.

The eyes are the windows of the soul; through them people perceive the light and color of the world. The cone cells and rod cells that sense color and light are located on the retina at the fundus of the eye, and they are the only tissue in the human body that can convert light into physiological electrical signals. The blood vessels that supply the eye with blood and nutrients are also located at the fundus. When vascular proliferation or rupture occurs at the fundus, as in conditions such as macular hemorrhage, the cone and rod cells on the retina can easily die, causing the patient to lose vision. Therefore, observing and tracking fundus images is extremely important for the diagnosis and prevention of eye diseases.

In general, because of the limited size of the human pupil, a conventional fundus photography method can capture a field of view of only about 30 to 40 degrees in a single shot at one angle, even when pupil-dilating drugs such as mydriatic agents are used. Therefore, to photograph regions near the periphery of the fundus, the patient is usually asked to fixate on an observation reference point and then move the gaze point up, down, left, and right at a slow, steady pace, so that multiple fundus images can be acquired in succession. A data processing device such as a computer then combines the multiple fundus images using dedicated image stitching software. However, the illumination used when repeatedly capturing multiple fundus images causes discomfort and fatigue in the patient's eyes, which may lead to involuntary blinking or eye tremor that degrades image quality. Moreover, since these fundus images come from multiple exposures, the exposure value and white balance differ from shot to shot, and the images must be corrected before they can be stitched on a computing device such as a computer; this increases the difficulty of correction and degrades the quality of the stitched image. If the stitched image quality suffers, medical staff may have difficulty identifying the microvascular details of the fundus, which can hinder diagnosis or even delay treatment. Therefore, quickly obtaining a more complete and clearer fundus image is a pressing need in medical care.

The invention provides an image capturing apparatus capable of capturing images of a plurality of different regions of an eyeball.

The invention also provides a capturing method that synchronously captures a plurality of images of an eyeball from a plurality of different angles and merges these images.

The invention provides an image capturing apparatus for capturing images of an eyeball. The image capturing apparatus includes a plurality of image sensing modules and at least one light source. Each image sensing module includes an image sensor and a lens. The light source emits an illumination light that irradiates the eyeball, and the eyeball reflects the illumination light into an image light. The image light includes a plurality of sub-image beams, which are transmitted through the lenses of the image sensing modules to the respective image sensors.

In an embodiment of the invention, the illumination light irradiates the fundus of the eyeball through the pupil, the fundus reflects the illumination light into the image light, and the sub-image beams of the image light are transmitted through the pupil to the respective image sensing modules.

In an embodiment of the invention, the imaging ranges of two adjacent image sensing modules on the fundus partially overlap.

In an embodiment of the invention, the optical axes of the lenses of the image sensing modules are not parallel to one another, and each of these optical axes passes through the pupil of the eyeball.

In an embodiment of the invention, each image sensing module further includes an actuator connected to at least one of the image sensor and the lens, so as to focus the image sensing module.

In an embodiment of the invention, each image sensing module further includes a micro-processing unit electrically connected to the corresponding image sensor, so as to read out the data of the image produced by the sub-image beam measured by that image sensor.

In an embodiment of the invention, the image capturing apparatus further includes a processing unit electrically connected to the image sensing modules, so as to merge the plurality of eyeball images respectively produced from the sub-image beams measured by the image sensors.

In an embodiment of the invention, the eyeball images measured by two adjacent image sensors partially overlap.

In an embodiment of the invention, the processing unit compares the overlapping portions of the images to serve as a correction reference when merging the images.

In an embodiment of the invention, the processing unit includes a first comparison module, a second comparison module, a third comparison module, a fourth comparison module, and a judgment module. The first comparison module compares a first peripheral image among the peripheral images with the overlapping portion of the central image. The second comparison module compares a second peripheral image among the peripheral images with the overlapping portion of the central image. The third comparison module computes an average image of the overlapping portion of the first and second peripheral images and compares it with the overlapping portion of the central image. The fourth comparison module computes a gradation image of the overlapping portion of the first and second peripheral images and compares it with the overlapping portion of the central image. The judgment module determines which of the first to fourth comparison modules yields the smallest difference: when the first comparison module yields the smallest difference, the judgment module adopts the data of the first peripheral image for the overlapping portion of the first and second peripheral images; when the second comparison module yields the smallest difference, it adopts the data of the second peripheral image; when the third comparison module yields the smallest difference, it adopts the data of the average image; and when the fourth comparison module yields the smallest difference, it adopts the data of the gradation image.
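As a rough illustration, the judgment among the four comparison results can be sketched in Python. This is a hypothetical sketch, not the patented implementation: the array layout and the sum-of-absolute-differences metric are assumptions, and a horizontal gradation blend stands in for the gradation image.

```python
import numpy as np

def merge_overlap(p1_ov, p2_ov, central_ov):
    """Choose the data for the overlap of two peripheral images P1 and P2.

    p1_ov, p2_ov, central_ov: equally shaped arrays holding the pixel data
    of the overlap region as seen in the first peripheral image, the second
    peripheral image, and the central image, respectively.
    """
    # Candidates corresponding to the four comparison modules:
    average = (p1_ov + p2_ov) / 2.0            # third module: averaged image
    t = np.linspace(0.0, 1.0, p1_ov.shape[1])  # assumed horizontal gradation
    gradation = p1_ov * (1.0 - t) + p2_ov * t  # fourth module: fades P1 -> P2

    candidates = [p1_ov, p2_ov, average, gradation]
    # Each module measures the difference against the central image;
    # sum of absolute differences is an assumed metric.
    diffs = [np.abs(c - central_ov).sum() for c in candidates]
    # Judgment module: the candidate with the smallest difference wins.
    return candidates[int(np.argmin(diffs))]
```

For instance, if the first peripheral image already matches the central image in the overlap, the first candidate is selected.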

In an embodiment of the invention, the processing unit first applies a correction that reduces pincushion distortion to the eyeball images, and then merges the corrected images.
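A correction that reduces pincushion distortion can be sketched with a single-coefficient radial model. The patent does not specify a distortion model; the negative coefficient `k1` and the nearest-neighbour resampling below are assumptions for illustration only.

```python
import numpy as np

def undistort_pincushion(img, k1=-0.1):
    """Reduce pincushion distortion with a one-coefficient radial model.

    A negative k1 corresponds to pincushion distortion; each output pixel
    is mapped back through the distortion model and sampled from the
    distorted image (nearest-neighbour for brevity).
    """
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w), dtype=float)
    # Normalised coordinates centred on the optical axis.
    x = (xs - w / 2) / (w / 2)
    y = (ys - h / 2) / (h / 2)
    r2 = x * x + y * y
    # Inverse mapping: sample the distorted image at the distorted radius.
    xd = x * (1 + k1 * r2)
    yd = y * (1 + k1 * r2)
    src_x = np.clip((xd * (w / 2) + w / 2).round().astype(int), 0, w - 1)
    src_y = np.clip((yd * (h / 2) + h / 2).round().astype(int), 0, h - 1)
    return img[src_y, src_x]
```

Pixels near the centre (r² ≈ 0) are left essentially unchanged, which matches the observation in the description that distortion is strongest at the image rim.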

The invention provides a capturing method for capturing images of an eyeball. The capturing method includes synchronously capturing a plurality of images of the eyeball from a plurality of different directions, and merging these images.

In an embodiment of the invention, the images of the eyeball are a plurality of images of the fundus of the eyeball, and the step of synchronously capturing these images from the different directions includes capturing the images of the fundus through the pupil of the eyeball.

In an embodiment of the invention, two adjacent images of the fundus of the eyeball partially overlap.

In an embodiment of the invention, the step of merging the images includes comparing the overlapping portions of the images to serve as a correction reference when merging the images.

In an embodiment of the invention, the correction reference includes at least one of a color correction reference, a coordinate transformation correction reference, and a noise reduction correction reference.
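As one example of how a color correction reference could be derived, a per-channel gain can be estimated from the shared overlap region and then applied to the whole image. The patent only names the kinds of correction references, so the mean-ratio model below is purely an assumed illustration.

```python
import numpy as np

def color_gain_from_overlap(ref_overlap, img_overlap):
    """Estimate per-channel gains mapping one capture's overlap onto the
    reference capture's overlap; applying them colour-corrects the image.

    Both inputs are (H, W, C) arrays over the same overlap region.
    """
    eps = 1e-9  # avoid division by zero in dark channels
    # Simple assumed model: ratio of mean channel intensities.
    return ref_overlap.mean(axis=(0, 1)) / (img_overlap.mean(axis=(0, 1)) + eps)

def apply_gain(img, gain):
    return img * gain  # broadcasts the per-channel gain over the image
```

The same overlap comparison could similarly anchor a coordinate transformation (by matching features in the shared region) or guide noise reduction, which is why the overlap serves as the common correction reference.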

In an embodiment of the invention, the images of the eyeball include a central image and a plurality of peripheral images located beside the central image.

In an embodiment of the invention, the step of merging the images includes: (a) comparing a first peripheral image among the peripheral images with the overlapping portion of the central image; (b) comparing a second peripheral image among the peripheral images with the overlapping portion of the central image; (c) computing an average image of the overlapping portion of the first and second peripheral images and comparing it with the overlapping portion of the central image; (d) computing a gradation image of the overlapping portion of the first and second peripheral images and comparing it with the overlapping portion of the central image; and (e) selecting, among steps (a) to (d), the step whose comparison yields the smallest difference: when step (a) yields the smallest difference, the data of the first peripheral image is adopted for the overlapping portion of the first and second peripheral images; when step (b) yields the smallest difference, the data of the second peripheral image is adopted; when step (c) yields the smallest difference, the data of the average image is adopted; and when step (d) yields the smallest difference, the data of the gradation image is adopted.

In an embodiment of the invention, the central zone of the central image adopts the data of the central image, and the portions of the peripheral zone of the central image that overlap the central zones of the adjacent peripheral images adopt the data of those peripheral images.

In an embodiment of the invention, the capturing method further includes, before merging the images, applying a correction that reduces pincushion distortion to the images, wherein the step of merging the images merges the images corrected for pincushion distortion.

Based on the above, the image capturing apparatus of the embodiments of the invention uses a plurality of image sensing modules to capture a plurality of images of the eyeball respectively. This reduces the time spent on repeatedly photographing the eye and yields an eyeball image with a wider viewing angle. In the capturing method of the embodiments of the invention, a plurality of images of the eyeball can be captured synchronously from different directions and then merged. By merging these simultaneously captured images, the uneven brightness and contrast that arise between separately and repeatedly captured images are avoided, which improves the efficiency and accuracy of the subsequent image merging.

To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.

FIG. 1A is a schematic side view of an image capturing apparatus according to an embodiment of the invention, and FIG. 1B is a schematic front view of the image capturing apparatus of the embodiment of FIG. 1A. Referring to FIG. 1A and FIG. 1B, in this embodiment the image capturing apparatus 10 is used to capture images of an eyeball 20 and includes a plurality of image sensing modules 100 and at least one light source 200. Each image sensing module 100 includes an image sensor 110 and a lens 120. The image sensor 110 may be a complementary metal-oxide-semiconductor (CMOS) sensor, a charge-coupled device (CCD), or another photo-sensor suitable for receiving images. In this embodiment, the lens 120 may be a lens driven by a voice coil motor. The image capturing apparatus 10 of this embodiment includes five image sensing modules 101, 102, 103, 104, and 105; in other embodiments, however, the image capturing apparatus 10 may include more or fewer image sensing modules 100 according to actual requirements, and the invention is not limited in this respect. Likewise, the image capturing apparatus 10 of this embodiment includes four light sources 200, but in other embodiments the number and positions of the light sources 200 may be designed as required to achieve the desired illumination, and the invention is not limited in this respect either. The light source 200 may be a light-emitting diode (LED) or another element suitable for emitting light, and the light it emits may be visible light or invisible light such as infrared; the invention is likewise not limited in this respect. The light source 200 emits an illumination light L, which irradiates the eyeball 20. The eyeball 20 reflects the illumination light L into an image light B, which includes a plurality of sub-image beams BS (such as the sub-image beams B1, B2, and B3 in FIG. 1A). These sub-image beams BS are transmitted through the lenses 120 of the image sensing modules 100 to the image sensors 110, respectively.

In detail, in this embodiment the illumination light L irradiates the fundus F of the eyeball 20 through the pupil P. The fundus F reflects the illumination light L into the image light B, and the sub-image beams BS of the image light B are transmitted through the pupil P to the image sensing modules 100, respectively. The imaging ranges of two adjacent image sensing modules 100 on the fundus F may partially overlap. Moreover, the optical axes X of the lenses 120 of the image sensing modules 100 may be non-parallel to one another, and each optical axis X passes through the pupil P of the eyeball 20. For example, in FIG. 1A the image sensing module 101 has an optical axis X1, the image sensing module 102 has an optical axis X2, and the image sensing module 103 has an optical axis X3. The mutually non-parallel optical axes X1, X2, and X3 all pass through the pupil P, so that the image sensing modules 101, 102, and 103 can each capture an image of a different region of the fundus F from a different angle. That is, the image sensing module 101 captures an image of the fundus region F1, the image sensing module 102 captures an image of the fundus region F2, and the image sensing module 103 captures an image of the fundus region F3, where the fundus region F1 partially overlaps the fundus region F2 and the fundus region F2 partially overlaps the fundus region F3. In this way, the image capturing apparatus 10 can capture images of different regions of the fundus F simultaneously, helping medical staff observe more complete image information of the patient's eye and thereby improving the accuracy and efficiency of clinical diagnosis.

In more detail, referring again to FIG. 1A, in this embodiment each image sensing module 100 further includes an actuator 130, which may be connected to at least one of the image sensor 110 and the lens 120 so as to focus the image sensing module 100. The actuator 130 may be a voice coil motor (VCM) or another type of motor. For example, in this embodiment the actuator 130 can focus the image sensing modules 101, 102, and 103 individually on the fundus F of the eyeball 20. Because the refractive power of the human eye varies from person to person, and even for the same eye the refractive power observed through the pupil differs with viewing angle, controlling the focus of each image sensing module 100 individually with its actuator 130 accommodates local differences in refractive power as well as differences between eyes, thereby shortening the fundus photography time and improving image quality.

Furthermore, each image sensing module 100 may further include a micro-processing unit 140 electrically connected to the corresponding image sensor 110 to read out the data of the image produced by the sub-image beam BS measured by that image sensor 110. The micro-processing unit 140 may be a microprocessor such as an image signal processor (ISP). For example, in this embodiment the image sensing module 101 includes a micro-processing unit 141, the image sensing module 102 includes a micro-processing unit 142, and the image sensing module 103 includes a micro-processing unit 143. That is, each image sensing module 100 may have its own micro-processing unit 140 serving as a subsystem for fundus image processing. In addition, the image capturing apparatus 10 may further include a processing unit 150 electrically connected to the image sensing modules 100 to merge the plurality of images of the eyeball 20 respectively produced from the sub-image beams BS measured by the image sensors 110. The processing unit 150 may be a processor such as a digital signal processor (DSP). For example, referring to FIG. 2, in this embodiment the image sensing modules 101, 102, 103, 104, and 105 have corresponding micro-processing units 141, 142, 143, 144, and 145, as well as corresponding random access memories RAM1, RAM2, RAM3, RAM4, and RAM5 that store the image information processed by the micro-processing units 141, 142, 143, 144, and 145. The processing unit 150 can merge the image information processed by the micro-processing units 140 of the image sensing modules 101, 102, 103, 104, and 105, and can store the merged result or intermediate data in a memory unit SR, which is, for example, a synchronous dynamic random access memory (SDRAM). This effectively increases the efficiency of image merging and avoids the high cost of a fast processor. Moreover, because pairing the micro-processing units 140 with the processing unit 150 shortens the fundus photography time, the fundus images of different regions acquired simultaneously from multiple angles can be processed continuously, and the merged fundus image can be shown on a display unit DU. The image capturing apparatus 10 thus gains a live-view function that assists in focusing on the fundus, increasing the quality and accuracy of fundus imaging.
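The division of labor described above, one micro-processing unit per sensing module feeding a central merge stage, can be mimicked in software with one worker per module. This is a schematic analogy with assumed function names, not the described hardware pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def per_module_isp(raw):
    """Stand-in for the per-module micro-processing (ISP) stage.

    Here it merely normalises 8-bit raw values to [0, 1]; a real ISP
    would demosaic, denoise, etc.
    """
    return [v / 255.0 for v in raw]

def capture_all(raw_frames):
    """Run one processing pipeline per sensing module concurrently,
    mirroring the per-module micro-processing units, then hand the
    results to the central merge stage (the DSP in the described design)."""
    with ThreadPoolExecutor(max_workers=len(raw_frames)) as pool:
        processed = list(pool.map(per_module_isp, raw_frames))
    return processed  # ready for the central processing unit to merge
```

Because each module's frame is processed independently, the latency of the whole pipeline is bounded by the slowest module rather than the sum of all modules, which is the same motivation given for the ISP-plus-DSP arrangement.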

詳細而言,請參照圖1A、圖1B及圖3。在本實施例中,相鄰二影像感測器110所測得的這些眼球20的影像部分重疊,這些眼球20的影像可包括一中央影像P0及多個位於中央影像P0旁的周邊影像P。處理單元150可比較這些影像的重疊部分,以作為合併這些影像時的校正參考。其中,校正參考包括顏色校正參考、座標轉換校正參考及消除雜訊校正參考之至少其中之一。更詳細而言,請參照圖1A、圖1C及圖3。其中,處理單元150可包括一第一比較模組M1、一第二比較模組M2、一第三比較模組M3、 一第四比較模組M4以及一判斷模組MJ。第一比較模組M1取這些周邊影像P中之一第一周邊影像P1與中央影像P0之重疊部分作比較,亦即將第一周邊影像P1在重疊區域P01(即畫斜線的區域)中的數據與中央影像P0在重疊區域P01中的數據作比較。第二比較模組M2取這些周邊影像P中之一第二周邊影像P2與中央影像P1之重疊部分作比較,亦即將第二週邊影像P2在重疊區域P02(即畫十字的區域)的數據與中央影像P1在重疊區域P02的數據。第三比較模組M3計算出第一周邊影像P1與第二周邊影像P2的重疊部分的平均影像,且取平均影像與中央影像P0的重疊部分作比較,亦即將第一周邊影像P1在重疊區域P12(即同時畫有斜線及畫有十字的區域)的數據與第二周邊影像P2在重疊區域P12的數據平均後,再將此平均數據與中央影像P0在重疊區域P12的數據作比較。第四比較模組M4計算出第一周邊影像P1與第二周邊影像P2的重疊部分的漸層影像,且取漸層影像與中央影像P0的重疊部分作比較,亦即將第一周邊影像P1在重疊區域P12的數據與第二周邊影像P2在重疊區域P12的數據作漸層影像計算後,再將其計算結果之數據與中央影像P0在重疊區域P12的數據作比較。判斷模組MJ判斷第一至第四比較模組M1、M2、M3及M4的比較結果何者差異最小。其中,當第一比較模組M1的比較差異最小時,判斷模組MJ對於第一周邊影像P1與第二周邊影像P2的重疊部分採用第一周邊影像P1的數據。而當第二比較模組M2的比 較差異最小時,判斷模組對於第一周邊影像P1與第二周邊影像P2的重疊部分採用第二周邊影像P2的數據。當第三比較模組M3的比較差異最小時,判斷模組MJ對於第一周邊影像P1與第二周邊影像P2的重疊部分採用上述平均影像的數據。而當第四比較模組M4的比較差異最小時,判斷模組MJ對於第一周邊影像P1與第二周邊影像P2的重疊部分採用上述漸層影像的數據。藉此,可將影像感測模組100所接收到的多個影像以彼此差異最小且內容最正確的方式拼接。一般而言,中央影像P0是接近眼底F中央區域的影像,其影像的形變(distortion)如枕型畸變(pincushion distortion)相較於遠離眼底F中樣區域的影像來得小,因此易於後續修正。以中央影像P0為參考影像,輔以參考中央影像P0與其他周邊影像P的誤差以合併眼底F的影像,可更進一步地增加影像拼接的準確性。在本實施例之圖3中所述之周邊影像P的個數僅用於舉例說明本發明,實際上參與演算的影像可依照實際所拍攝的影像數量而有所不同,本發明不以此為限。 For details, please refer to FIG. 1A, FIG. 1B and FIG. In this embodiment, the images of the eyeballs 20 measured by the adjacent two image sensors 110 partially overlap, and the images of the eyeballs 20 may include a central image P0 and a plurality of peripheral images P located beside the central image P0. Processing unit 150 can compare the overlapping portions of these images as a correction reference when merging the images. The calibration reference includes at least one of a color correction reference, a coordinate conversion correction reference, and a noise cancellation correction reference. More specifically, please refer to FIG. 1A, FIG. 1C and FIG. 
The processing unit 150 can include a first comparison module M1, a second comparison module M2, and a third comparison module M3. A fourth comparison module M4 and a determination module MJ. The first comparison module M1 compares the overlapping portion of the first peripheral image P1 and the central image P0 of the peripheral images P, that is, the data of the first peripheral image P1 in the overlapping region P01 (ie, the region marked with a diagonal line). The data in the overlap region P01 is compared with the central image P0. The second comparison module M2 compares the overlap between the second peripheral image P2 and the central image P1 of the peripheral image P, that is, the data of the second peripheral image P2 in the overlapping area P02 (ie, the area of the cross) The central image P1 is in the data of the overlap region P02. The third comparison module M3 calculates an average image of the overlapping portion of the first peripheral image P1 and the second peripheral image P2, and compares the average image with the overlapping portion of the central image P0, that is, the first peripheral image P1 is in the overlapping region. The data of P12 (i.e., the area in which the oblique line and the cross are drawn) and the data of the second surrounding image P2 in the overlapping area P12 are averaged, and the averaged data is compared with the data of the central image P0 in the overlapping area P12. 
The fourth comparison module M4 calculates a gradation image of the overlapping portion of the first peripheral image P1 and the second peripheral image P2, and compares the overlapping image with the overlapping portion of the central image P0, that is, the first peripheral image P1 is The data of the overlap region P12 and the data of the second peripheral image P2 in the overlap region P12 are subjected to the gradation image calculation, and the data of the calculation result is compared with the data of the central image P0 in the overlap region P12. The judging module MJ judges which of the comparison results of the first to fourth comparison modules M1, M2, M3, and M4 has the smallest difference. When the comparison difference of the first comparison module M1 is the smallest, the determination module MJ uses the data of the first peripheral image P1 for the overlapping portion of the first peripheral image P1 and the second peripheral image P2. And when the ratio of the second comparison module M2 When the difference is the smallest, the judging module uses the data of the second peripheral image P2 for the overlapping portion of the first peripheral image P1 and the second peripheral image P2. When the comparison difference of the third comparison module M3 is the smallest, the determination module MJ uses the data of the average image for the overlapping portion of the first peripheral image P1 and the second peripheral image P2. When the comparison difference of the fourth comparison module M4 is the smallest, the determination module MJ uses the data of the gradation image for the overlapping portion of the first peripheral image P1 and the second peripheral image P2. Thereby, the plurality of images received by the image sensing module 100 can be spliced in such a manner that the difference between the two is the smallest and the content is the most correct. 
In general, the central image P0 is an image of a region close to the center of the fundus F, so image distortion such as pincushion distortion is smaller there than in images taken from the peripheral regions of the fundus F, which makes it easier to correct later. Taking the central image P0 as the reference image, and merging the images of the fundus F by referring to the differences between the central image P0 and the other peripheral images P, further increases the accuracy of image stitching. The number of peripheral images P depicted in FIG. 3 of the present embodiment merely illustrates the invention; in practice, the images participating in the computation may vary with the number of images actually captured, and the invention is not limited in this regard.
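The selection performed by the comparison modules M1–M4 and the judgment module MJ can be sketched in Python/NumPy as follows. This is an illustrative reconstruction, not the patent's implementation: the function and variable names are ours, the comparison metric (mean absolute difference) and the linear left-to-right blend used for the gradation image are assumptions, and for simplicity all four candidates are compared against the central image over a single common overlap region.

```python
import numpy as np

def choose_overlap_data(p1_ov, p2_ov, p0_ov):
    """Pick the data used for the overlap of two peripheral images P1, P2.

    p1_ov, p2_ov, p0_ov: pixel arrays of P1, P2, and the central image P0
    over a common overlap region. The mean-absolute-difference metric and
    the linear blend are assumptions; the patent only says the candidate
    with the smallest difference from the central image is used.
    """
    average = (p1_ov + p2_ov) / 2.0                  # third comparison (M3)
    alpha = np.linspace(0.0, 1.0, p1_ov.shape[1])    # left-to-right blend weights
    gradation = p1_ov * (1 - alpha) + p2_ov * alpha  # fourth comparison (M4)

    candidates = [p1_ov, p2_ov, average, gradation]  # M1, M2, M3, M4
    diffs = [np.mean(np.abs(c - p0_ov)) for c in candidates]
    return candidates[int(np.argmin(diffs))]         # judgment module MJ
```

For instance, when the central image agrees exactly with P1 in the overlap, the difference of the first comparison is zero and P1's data are chosen, matching the behavior described above.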

In addition, in the present embodiment, the central image P0 and the peripheral images P (exemplified here by a first peripheral image P1, a second peripheral image P2, a third peripheral image P3, and a fourth peripheral image P4; in other embodiments the number of peripheral images may be increased or decreased according to actual needs, and the invention is not limited thereto) have partially overlapping regions. Because the human eye has refractive power, the image of the fundus F usually exhibits more pronounced distortion at its outer edge than at its center. Therefore, in the present embodiment, the processing unit 150 uses the data of the central image P0 for the central zone CZ of the central image P0, and uses the data of the peripheral images P for the portions of the peripheral zone SZ of the central image P0 that overlap the central zones CZ of the adjacent peripheral images P. That is, when stitching the images of the fundus F, the central portion of each single fundus image is used as much as possible, and the outer-edge regions with more obvious distortion are avoided. In this embodiment, the processing unit 150 may first correct the images of the eyeball 20 to reduce pincushion distortion and then merge them. Especially at the outer edge of an image, where distortion is severe, correcting pincushion distortion usually requires inserting additional image points to compensate for the loss of resolution caused by the correction. Avoiding the outer edges of the images when merging the images of the fundus F therefore reduces the image-quality degradation caused by these additionally interpolated image points.
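The center-favoring composition described above can be illustrated with the following sketch. It is our own construction, not taken from the patent: the patent selects zones outright, whereas this sketch uses a smooth Gaussian falloff (all names and the weighting scheme are assumptions) so that each pixel is dominated by the image whose center it lies closest to, and outer-edge pixels, where distortion and interpolation artifacts are worst, contribute least.

```python
import numpy as np

def radial_weight(h, w, sigma=0.5):
    """Weight map that favours the centre of a single fundus image and
    down-weights its outer edge. The Gaussian falloff is our choice; the
    patent only states that edge regions should be avoided when stitching."""
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((y - cy) / (h / 2.0)) ** 2 + ((x - cx) / (w / 2.0)) ** 2
    return np.exp(-r2 / (2 * sigma ** 2))

def blend_center_weighted(images, offsets, canvas_shape):
    """Accumulate distortion-corrected images onto a canvas, weighting each
    pixel by its distance from its own image's centre."""
    acc = np.zeros(canvas_shape)
    wsum = np.zeros(canvas_shape)
    for img, (oy, ox) in zip(images, offsets):
        h, w = img.shape
        wmap = radial_weight(h, w)
        acc[oy:oy + h, ox:ox + w] += img * wmap   # weighted contribution
        wsum[oy:oy + h, ox:ox + w] += wmap        # total weight per pixel
    return acc / np.maximum(wsum, 1e-12)          # normalise; 0 where uncovered
```

Where two images overlap, the one whose center is nearer dominates, approximating the zone-selection rule with a soft transition.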

The first comparison module M1, the second comparison module M2, the third comparison module M3, the fourth comparison module M4, and the judgment module MJ described above may be programs stored in a storage medium of the image capturing device 10, which can be loaded into the processing unit 150 to perform the functions described above. Alternatively, in other embodiments, these modules may be hardware devices composed of logic circuit elements that perform the same functions.

FIG. 4 is a flowchart of a capturing method according to an embodiment of the invention. Referring to FIG. 1A, FIG. 3, and FIG. 4, in the present embodiment the capturing method is used to capture an image of an eyeball 20. The capturing method includes synchronously capturing a plurality of images P of the eyeball 20 from a plurality of different directions (step S10), and merging these images P (step S20). The images P of the eyeball 20 may be a plurality of images P of the fundus F of the eyeball 20, and step S10 of synchronously capturing the images P of the eyeball 20 from the different directions may include capturing the images P of the fundus F through the pupil P of the eyeball 20. Because the images of the different parts of the fundus F are captured synchronously, their brightness and contrast are similar, which makes the subsequent image merging easier, saves computation time, improves the quality of the merged image, and facilitates the clinical diagnosis of eye diseases.

The images of the fundus F of the eyeball 20 include a central image P0 and a plurality of peripheral images P located beside the central image P0. Step S20 of merging the images may include comparing the overlapping portions of the images as a correction reference when merging them. In addition, in the present embodiment, the capturing method may further include correcting the images to reduce pincushion distortion before they are merged (step S10a), in which case step S20 merges the images after the pincushion-distortion correction. The effect of merging the images is as described in the embodiment of FIG. 1A and is not repeated here.
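A minimal sketch of the kind of correction step S10a performs is shown below. The radial model r' = r(1 + k·r²), the coefficient value, and the nearest-neighbour resampling are our illustrative assumptions; the patent only states that pincushion distortion is reduced before merging, and that correcting the edges may require inserting interpolated image points.

```python
import numpy as np

def correct_pincushion(img, k=-0.1):
    """Naive inverse radial-distortion resampling (nearest neighbour).

    A negative k models pincushion distortion. The model and coefficient
    are assumptions for illustration; a real device would calibrate them.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    y, x = np.mgrid[0:h, 0:w]
    yn, xn = (y - cy) / cy, (x - cx) / cx          # normalised coordinates
    r2 = yn ** 2 + xn ** 2
    # Sample each output pixel from its radially displaced source position.
    ys = np.clip((yn * (1 + k * r2)) * cy + cy, 0, h - 1).round().astype(int)
    xs = np.clip((xn * (1 + k * r2)) * cx + cx, 0, w - 1).round().astype(int)
    return img[ys, xs]
```

With k = 0 the mapping is the identity; the stronger the distortion, the more the edge pixels are resampled, which is why the description above prefers the central zones when stitching.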

In detail, referring to FIG. 5, step S20 of merging the images includes: (a) comparing the overlapping portion of a first peripheral image P1 (one of the peripheral images P) with the central image P0 (step S20a); (b) comparing the overlapping portion of a second peripheral image P2 (another of the peripheral images P) with the central image P0 (step S20b); (c) computing an average image of the overlapping portion of the first peripheral image P1 and the second peripheral image P2, and comparing the average image with the corresponding portion of the central image P0 (step S20c); (d) computing a gradation image of the overlapping portion of the first peripheral image P1 and the second peripheral image P2, and comparing the gradation image with the corresponding portion of the central image P0 (step S20d); and (e) selecting, from steps (a) to (d), the comparison with the smallest difference (step S20e). More specifically, when the difference of step (a) is the smallest, the overlapping portion of the first peripheral image P1 and the second peripheral image P2 uses the data of the first peripheral image P1; when the difference of step (b) is the smallest, that overlapping portion uses the data of the second peripheral image P2; when the difference of step (c) is the smallest, that overlapping portion uses the data of the average image; and when the difference of step (d) is the smallest, that overlapping portion uses the data of the gradation image. The central zone CZ of the central image P0 uses the data of the central image P0, and the portions of the peripheral zone SZ of the central image P0 that overlap the central zones CZ of the adjacent peripheral images P use the data of those peripheral images P. A more detailed account of the merging process and its effects is given in the embodiment of FIG. 1A. Steps (a) to (e) can be performed by the first comparison module M1, the second comparison module M2, the third comparison module M3, the fourth comparison module M4, and the judgment module MJ described above; for details, refer to the description of the functions performed by these modules, which is not repeated here. Moreover, the order of the above steps merely exemplifies an embodiment of the invention, and the invention is not limited thereto.

For example, referring to FIG. 6, in the present embodiment the procedure for capturing fundus images may further include automatic pupil detection (step S5) and determining whether the pupil P is detected (step S6); if the image of the pupil P is not detected, the pupil-detection step S5 is repeated. Once the image of the pupil P has been detected, a plurality of images of the eyeball 20 are captured synchronously from a plurality of different directions (step S10), and each lens 120 is driven to focus (step S11). Each lens 120 of the image sensing modules 100 (for example, N lenses 120, such as lens 1, lens 2, ..., and lens N, where N is a positive integer greater than 1) then focuses individually (step S12), and whether focusing succeeded is determined (step S13). If focusing did not succeed, the procedure returns to step S12 to focus again. When all of the lenses 120 (including the lenses 120 located at the central portion and the surrounding portions of the image capturing device 10) have finished focusing (step S14), the image sensing modules 100 are driven to take pictures (step S15), and the image sensing modules 100 and the lenses 120 capture the images of the fundus F (step S16). The processing unit 150 then merges these images (step S20) and outputs the fundus images (step S30). In this way, the image capturing device 10 can automatically detect the physiological information of the human eye and output a clear, wide-range fundus image to assist medical personnel in diagnosis.
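The control flow of FIG. 6 can be summarized as a short driver loop. The `device` object and its method names below are hypothetical stand-ins for the hardware interfaces (pupil detection, per-lens focusing, synchronous capture, merging); the patent does not define such an API.

```python
def capture_fundus(device):
    """Sketch of the FIG. 6 control flow with a hypothetical device API."""
    while not device.detect_pupil():   # S5/S6: repeat until the pupil is found
        pass
    for lens in device.lenses:         # S11-S14: focus every lens
        while not lens.focus():        # S12/S13: retry until focus succeeds
            pass
    images = device.capture_all()      # S15/S16: synchronous capture
    return device.merge(images)        # S20/S30: stitch and output
```

In practice the retry loops would be bounded by timeouts, and the per-lens focusing could run concurrently since the modules focus independently.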

In summary, the image capturing device of the embodiments of the invention can synchronously capture a plurality of fundus images of different parts of the eyeball; because these fundus images are captured almost simultaneously, their brightness and contrast are similar. The processing unit then compares the differences between the overlapping portions of the fundus images and merges them in the manner that minimizes the differences between images, which effectively saves the time required for merging and quickly yields a wide-range fundus image of good quality. In addition, each image sensing module may include an actuator so that the modules can simultaneously focus on different regions of the fundus from different angles, which saves the time needed to capture fundus images, reduces the burden on the patient's eyes, and increases the success rate of capturing clear, wide-range fundus images, thereby improving the quality of medical care and the diagnostic accuracy of medical personnel.

Although the invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone with ordinary knowledge in the relevant art may make some modifications and refinements without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

10‧‧‧Image capturing device
20‧‧‧Eyeball
100, 101, 102, 103, 104, 105‧‧‧Image sensing modules
200‧‧‧Light source
110‧‧‧Image sensor
120‧‧‧Lens
L‧‧‧Illumination light
B‧‧‧Image light
BS, B1, B2, B3‧‧‧Sub-image beams
F‧‧‧Fundus
X, X1, X2, X3‧‧‧Optical axes
P‧‧‧Pupil
F1, F2, F3‧‧‧Fundus regions
130‧‧‧Actuator
140, 141, 142, 143, 144, 145‧‧‧Micro-processing units
150‧‧‧Processing unit
DU‧‧‧Display unit
RAM1, RAM2, RAM3, RAM4, RAM5‧‧‧Random access memory
SR‧‧‧Memory unit
P0‧‧‧Central image
P‧‧‧Peripheral image
M1‧‧‧First comparison module
M2‧‧‧Second comparison module
M3‧‧‧Third comparison module
M4‧‧‧Fourth comparison module
MJ‧‧‧Judgment module
CZ‧‧‧Central zone
SZ‧‧‧Peripheral zone
S5, S6, S10, S10a, S11, S12, S13, S14, S15, S16, S20, S20a, S20b, S20c, S20d, S20e, S30‧‧‧Steps

FIG. 1A is a schematic side view of an image capturing device according to an embodiment of the invention.
FIG. 1B is a schematic front view of the image capturing device according to the embodiment of FIG. 1A.
FIG. 1C is a schematic view of the processing unit according to the embodiment of FIG. 1A.
FIG. 2 is a block diagram of the image capturing device according to the embodiment of FIG. 1A.
FIG. 3 is a schematic view of superimposing a plurality of fundus images according to the embodiment of FIG. 1A.
FIG. 4 is a flowchart of the capturing method according to the embodiment of FIG. 1A.
FIG. 5 is a flowchart of the detailed sub-steps of step S20 of FIG. 4.
FIG. 6 is a flowchart of capturing fundus images according to the embodiment of FIG. 4.


Claims (23)

1. An image capturing device for capturing an image of an eyeball, the image capturing device comprising: a plurality of image sensing modules, each of the image sensing modules comprising an image sensor and a lens; and at least one light source emitting an illumination light, wherein the illumination light irradiates the eyeball, the eyeball reflects the illumination light into an image light, the image light comprises a plurality of sub-image beams, and the sub-image beams are respectively transmitted to the image sensors through the lenses of the image sensing modules.

2. The image capturing device as claimed in claim 1, wherein the illumination light irradiates the fundus of the eyeball through the pupil of the eyeball, the fundus reflects the illumination light into the image light, and the sub-image beams of the image light are respectively transmitted to the image sensing modules through the pupil.

3. The image capturing device as claimed in claim 2, wherein the image-capture ranges of two adjacent image sensing modules on the fundus partially overlap.

4. The image capturing device as claimed in claim 1, wherein the optical axes of the lenses of the image sensing modules are not parallel to one another, and the optical axes of the lenses pass through the pupil of the eyeball.

5. The image capturing device as claimed in claim 1, wherein each of the image sensing modules further comprises an actuator connected to at least one of the image sensor and the lens, so as to focus the image sensing module.
6. The image capturing device as claimed in claim 5, wherein each of the image sensing modules further comprises a micro-processing unit electrically connected to the corresponding image sensor, so as to retrieve the data of the image generated from the sub-image beam measured by the image sensor.

7. The image capturing device as claimed in claim 1, further comprising a processing unit electrically connected to the image sensing modules, so as to merge the plurality of images of the eyeball respectively generated from the sub-image beams measured by the image sensors.

8. The image capturing device as claimed in claim 7, wherein the images of the eyeball measured by two adjacent image sensors partially overlap.

9. The image capturing device as claimed in claim 8, wherein the processing unit compares the overlapping portions of the images as a correction reference when merging the images.

10. The image capturing device as claimed in claim 9, wherein the correction reference comprises at least one of a color correction reference, a coordinate-transformation correction reference, and a noise-reduction correction reference.

11. The image capturing device as claimed in claim 8, wherein the images of the eyeball comprise a central image and a plurality of peripheral images located beside the central image.
12. The image capturing device as claimed in claim 11, wherein the processing unit comprises: a first comparison module, comparing the overlapping portion of a first peripheral image of the peripheral images with the central image; a second comparison module, comparing the overlapping portion of a second peripheral image of the peripheral images with the central image; a third comparison module, computing an average image of the overlapping portion of the first peripheral image and the second peripheral image, and comparing the average image with the corresponding overlapping portion of the central image; a fourth comparison module, computing a gradation image of the overlapping portion of the first peripheral image and the second peripheral image, and comparing the gradation image with the corresponding overlapping portion of the central image; and a judgment module, determining which of the comparison results of the first to fourth comparison modules has the smallest difference, wherein when the difference of the first comparison module is the smallest, the judgment module uses the data of the first peripheral image for the overlapping portion of the first peripheral image and the second peripheral image; when the difference of the second comparison module is the smallest, the judgment module uses the data of the second peripheral image for that overlapping portion; when the difference of the third comparison module is the smallest, the judgment module uses the data of the average image for that overlapping portion; and when the difference of the fourth comparison module is the smallest, the judgment module uses the data of the gradation image for that overlapping portion.

13. The image capturing device as claimed in claim 11, wherein the processing unit uses the data of the central image for the central zone of the central image, and uses the data of the peripheral images for the portions of the peripheral zone of the central image that overlap the central zones of the adjacent peripheral images.

14. The image capturing device as claimed in claim 7, wherein the processing unit first performs a correction that reduces pincushion distortion on the images of the eyeball, and then merges the corrected images.

15. A capturing method for capturing an image of an eyeball, the capturing method comprising: synchronously capturing a plurality of images of the eyeball from a plurality of different directions; and merging the images.

16. The capturing method as claimed in claim 15, wherein the images of the eyeball are a plurality of images of the fundus of the eyeball, and the step of synchronously capturing the images of the eyeball from the different directions comprises capturing the images of the fundus of the eyeball through the pupil of the eyeball.

17. The capturing method as claimed in claim 16, wherein two adjacent images of the fundus of the eyeball partially overlap.
18. The capturing method as claimed in claim 17, wherein the step of merging the images comprises comparing the overlapping portions of the images as a correction reference when merging the images.

19. The capturing method as claimed in claim 18, wherein the correction reference comprises at least one of a color correction reference, a coordinate-transformation correction reference, and a noise-reduction correction reference.

20. The capturing method as claimed in claim 17, wherein the images of the fundus of the eyeball comprise a central image and a plurality of peripheral images located beside the central image.
step (a) is the smallest, the overlapping portion of the first peripheral image and the second peripheral image adopts the data of the first peripheral image; when the comparison of the step (b) is different If the difference between the first peripheral image and the second peripheral image is the smallest, the data of the second peripheral image is used; when the comparison difference of the step (c) is the smallest, the overlapping of the first peripheral image and the second surrounding image The data of the average image is used in part; when the comparison difference of the step (d) is the smallest, the overlapping portion of the first peripheral image and the second peripheral image adopts the data of the gradation image. 如申請專利範圍第20項所述之擷取方法,其中該中央影像的中央區採用該中央影像的數據,且該中央影像之周邊區中與相鄰的該些周邊影像的中央區重疊的部分,採用該些周邊影像的數據。 The method of claim 20, wherein the central region of the central image uses data of the central image, and a portion of the peripheral region of the central image that overlaps with a central region of the adjacent peripheral images , using the data of the peripheral images. 如申請專利範圍第15項所述之擷取方法,更包括:在將該些影像合併之前,先對該些影像進行降低枕形畸變的校正,其中將該些影像合併的步驟為合併經由降低枕形畸變的校正後的該些影像。 The method of claim 15 further includes: correcting the pincushion distortion of the images before merging the images, wherein the step of merging the images is to reduce by combining The corrected images of the pincushion distortion.
TW102100592A 2013-01-08 2013-01-08 Image capturing apparatus and capturing method TWI561210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW102100592A TWI561210B (en) 2013-01-08 2013-01-08 Image capturing apparatus and capturing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW102100592A TWI561210B (en) 2013-01-08 2013-01-08 Image capturing apparatus and capturing method

Publications (2)

Publication Number Publication Date
TW201427643A true TW201427643A (en) 2014-07-16
TWI561210B TWI561210B (en) 2016-12-11

Family

ID=51725871

Family Applications (1)

Application Number Title Priority Date Filing Date
TW102100592A TWI561210B (en) 2013-01-08 2013-01-08 Image capturing apparatus and capturing method

Country Status (1)

Country Link
TW (1) TWI561210B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI647472B (en) * 2018-01-22 2019-01-11 國立臺灣大學 Dual mode line of sight tracking method and system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3784247B2 (en) * 2000-08-31 2006-06-07 株式会社ニデック Fundus camera
WO2012118907A2 (en) * 2011-03-02 2012-09-07 Quantum Catch, Llc Ocular fundus camera system and methodology


Also Published As

Publication number Publication date
TWI561210B (en) 2016-12-11

Similar Documents

Publication Publication Date Title
US8985771B2 (en) Image capturing apparatus and capturing method
US10307054B2 (en) Adaptive camera and illuminator eyetracker
US10863897B2 (en) Image processing apparatus and method
US8837862B2 (en) Image stitching method and camera system
JP6615748B2 (en) Eyelid irradiation system and method for imaging meibomian glands for meibomian gland analysis
JP5371638B2 (en) Ophthalmic imaging apparatus and method
US20070139613A1 (en) Method and apparatus for optical imaging of retinal function
JP6850728B2 (en) Devices and methods for fixation measurements with refraction error measurements using wavefront aberrations
US20150257639A1 (en) System and device for preliminary diagnosis of ocular diseases
JP6638354B2 (en) Eye gaze detection device and eye gaze detection method
US12023128B2 (en) System and method for eye tracking
JP7195619B2 (en) Ophthalmic imaging device and system
CN110215186A (en) One kind being automatically aligned to positioning fundus camera and its working method
JP2014161439A (en) Eyeground information acquisition device and method, and program
JP7046347B2 (en) Image processing device and image processing method
JP6900994B2 (en) Line-of-sight detection device and line-of-sight detection method
TW201427643A (en) Image capturing apparatus and capturing method
JP5025761B2 (en) Image processing apparatus, image processing method, and program
CN103908223B (en) Image acquiring device and acquisition methods
TWI524876B (en) Image stitching method and camera system
JPWO2020111103A1 (en) Ophthalmic equipment
EP3440990A1 (en) System for imaging a fundus of an eye
US11995792B2 (en) System and method for detecting and rectifying vision for individuals with imprecise focal points
TWI544897B (en) Image detecting apparatus and image detecting method
US8967803B2 (en) Image capturing apparatus and auto-focusing method thereof