TWI358231B - Auto-focus method for camera and digital camera - Google Patents

Auto-focus method for camera and digital camera

Info

Publication number
TWI358231B
TWI358231B
Authority
TW
Taiwan
Prior art keywords
image
parameter
camera
digital signal
boundary
Prior art date
Application number
TW96142645A
Other languages
Chinese (zh)
Other versions
TW200922302A (en)
Inventor
Yongbing Chen
Original Assignee
Via Tech Inc
Priority date
Filing date
Publication date
Application filed by Via Tech Inc filed Critical Via Tech Inc
Priority to TW96142645A priority Critical patent/TWI358231B/en
Publication of TW200922302A publication Critical patent/TW200922302A/en
Application granted granted Critical
Publication of TWI358231B publication Critical patent/TWI358231B/en


Landscapes

  • Studio Devices (AREA)

Description

IX. Description of the Invention

[Technical Field of the Invention]
The present invention relates to an auto-focus method for a camera, and more particularly to a method of automatically obtaining the focus position of a camera from a single image.

[Prior Art]
With the development of technology and the widespread use of digital devices, people's lives have become inseparable from digital products. Among digital products, the digital camera (DC) occupies an important place. What distinguishes a good digital camera from a poor one is not only its imaging quality but also its focusing technique, which must not be neglected. Focusing has to be both fast and accurate, because the focusing speed and the focusing result directly affect how quickly a shot can be taken and whether the image is sharp.

A conventional approach uses a characteristic function to determine whether the camera is in focus. Referring to FIG. 1, a schematic diagram of a conventional camera focusing process, the camera captures preview images at a plurality of focus points, computes a sharpness statistic for each, and from these statistics locates the best focus position. The captured preview images therefore have to pass through the best focus position, so the search goes from sharp to blurred and back to sharp again, which wastes a considerable amount of time. Not only does focusing in this way conflict with the user's habit of shooting immediately, the power consumed by capturing images at multiple focus points also increases accordingly. Today's consumers, however, are demanding not only about the quality of the photographs a camera produces but also about the endurance of its battery. If focusing could be performed in a more power-efficient way, the battery life of the camera could be greatly extended. Camera manufacturers are therefore eager to find an appropriate solution to the above problems.

[Summary of the Invention]
The present invention provides an auto-focus method for a camera that saves focusing time and power. The present invention also provides a digital camera that achieves the auto-focus function without a reflex mirror, thereby saving hardware cost, improving image quality and reducing the size of the digital camera.

The auto-focus method proposed by the present invention comprises the following steps. In one step, a first parameter p and a second parameter q related to the camera are set. In another step, a single image is captured. In a further step, an object distance is calculated from this image, the first parameter p and the second parameter q. In yet another step, the focus position is adjusted according to the object distance.

In an embodiment of the present invention, the step of setting the first parameter p and the second parameter q comprises the following steps: at a plurality of variable distances Di, the same point light source is photographed to obtain corresponding images Fi; using a Gaussian distribution, a corresponding spread parameter σi is calculated from each image Fi; a data set (Di, σi) is built from the variable distances Di and their corresponding spread parameters σi; and the first parameter p and the second parameter q are set from the data set (Di, σi), where i is the corresponding index.

In an embodiment of the present invention, the step of calculating the object distance from the image, the first parameter p and the second parameter q comprises the following steps: the boundary of the image is extracted with a boundary algorithm; a target spread parameter σ is estimated; and the object distance is calculated from the target spread parameter σ, the first parameter p and the second parameter q.

The present invention also proposes a digital camera comprising an optical focusing element, an image capture unit and a digital signal processor. The optical focusing element adjusts the focus position of the digital camera. The image capture unit is disposed on the optical path of the optical focusing element and captures images. The digital signal processor is coupled to the optical focusing element and the image capture unit. The digital signal processor sets a first parameter p and a second parameter q related to the digital camera; the image capture unit captures a single image; the digital signal processor calculates an object distance from the image, the first parameter p and the second parameter q, and sends a focusing signal according to the object distance; and the optical focusing element performs focusing according to the focusing signal.

In the present invention, the first parameter p and the second parameter q related to the camera are set in one step, a single image is captured in another step, the object distance is calculated from this image, the first parameter p and the second parameter q in a further step, and the focus position is adjusted according to the object distance in yet another step. The focal length can therefore be adjusted by analyzing only one image, which greatly reduces the focusing time.

To make the above features and advantages of the present invention more apparent, embodiments are described in detail below with reference to the accompanying drawings.

[Detailed Description of the Embodiments]
FIG. 2A is a schematic diagram of a digital camera according to an embodiment of the present invention, and FIG. 2B is a flowchart of an auto-focus method according to an embodiment of the present invention. Referring to FIGS. 2A and 2B together, the digital camera 10 includes an optical focusing element 20, an image capture unit 30 and a digital signal processor 40. The optical focusing element 20 adjusts the focus position of the digital camera 10. The image capture unit 30 is disposed on the optical path of the optical focusing element 20 and captures images; it is, for example, a charge-coupled device (CCD). The digital signal processor 40 is coupled to the optical focusing element 20 and the image capture unit 30.

In step S201, the digital signal processor 40 sets the first parameter p and the second parameter q related to the camera. In step S202, the image capture unit 30 captures a single image. In step S203, the digital signal processor 40 calculates the object distance from this image, the first parameter p and the second parameter q. In step S204, the digital signal processor 40 outputs a focusing signal to the optical focusing element 20 according to the object distance, and the optical focusing element 20 performs focusing according to the signal. Focusing is thus completed from a single image, which greatly reduces the focusing time and power consumption of the digital camera. Each step is explained in more detail below.

FIG. 3 is a schematic diagram of an imaging system according to an embodiment of the present invention. Referring to FIG. 3, the object distance u is the distance between a point light source O and the lens 201, s is the distance between the defocus plane and the lens 201, f is the focal length of the lens 201, D is the aperture of the lens 201, and d is the image of the point light source on the defocus plane. From FIG. 3, the object distance u and the aperture D are related by formula (1):

u = f·s / (s - f - F·d)   ... formula (1)

where F = f/D, the aperture coefficient (f-number) of the lens 201. Taking the diffraction effect and the imaging characteristics of the lens into account, the image of the point light source on the defocus plane can be described by a Gaussian distribution in the radial distance r from its center, whose spread parameter σ is proportional to the blur spot d; the proportionality constant k is determined by the characteristics of the camera. Substituting this relation into formula (1) gives the relation between the object distance u and the point-spread parameter σ:

u = p / (q - σ)   ... formula (2)

where the first parameter p and the second parameter q are fixed system parameters (combinations of f, s, F and k). A correct estimate of σ therefore yields a correct estimate of the object distance u. How the first parameter p and the second parameter q are set is described next in more detail.

FIG. 4 is a flowchart of setting the first parameter p and the second parameter q according to an embodiment of the present invention. Referring to FIG. 4 together with FIGS. 2A and 3, in step S401 the digital camera 10 photographs the same point light source at a plurality of variable distances Di and obtains the corresponding images Fi. In step S402, a Gaussian distribution is used to calculate the corresponding spread parameter σi from each image Fi. In step S403, a data set (Di, σi) is built from the variable distances Di and their corresponding spread parameters σi. In step S404, the first parameter p and the second parameter q are set from the data set (Di, σi), where i is the corresponding index. In other words, the more data the data set (Di, σi) collects, the more accurate the estimated first parameter p and second parameter q will be.

The first parameter p and the second parameter q are then estimated from the data set (Di, σi) and the cost function

C(p, q) = Σi ( Di - p / (q - σi) )²

using an iterative method, for example Newton's descent method. A person of ordinary skill in the art should understand that the Newton's descent method given here is only one particular embodiment; in another embodiment, the iterative method may, for example, interleave Newton's descent method with the steepest-descent method, and the invention is not limited to this particular embodiment. Finally, the first parameter p and the second parameter q are stored in the digital signal processor 40. The digital camera 10 can then obtain the object distance u from a single image using formula (2).
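For readers who want to experiment with the calibration just described, the following Python sketch, which is not taken from the patent, fits p and q to a set of (Di, σi) pairs. It assumes only numpy and uses made-up numbers; the function and variable names are illustrative. Where the embodiment above minimizes C(p, q) with Newton's descent method, the sketch rearranges formula (2) into a linear least-squares problem, which gives the same answer on noise-free data and a reasonable starting point otherwise.

```python
import numpy as np

def calibrate_p_q(distances, sigmas):
    """Estimate p and q in u = p / (q - sigma) from calibration pairs (D_i, sigma_i).

    Rearranging D_i = p / (q - sigma_i) gives the linear equation
    D_i * q - p = D_i * sigma_i, which is solved here in the least-squares sense."""
    D = np.asarray(distances, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    A = np.column_stack((D, -np.ones_like(D)))   # unknowns are (q, p)
    b = D * s
    (q, p), *_ = np.linalg.lstsq(A, b, rcond=None)
    return p, q

def object_distance(sigma, p, q):
    """Formula (2): object distance from a measured spread parameter."""
    return p / (q - sigma)

# Made-up calibration data for a camera whose true parameters are p=6000, q=10.
D_i = np.array([800.0, 1000.0, 1500.0, 2500.0, 5000.0])   # known distances (mm)
sigma_i = 10.0 - 6000.0 / D_i                              # measured spreads

p, q = calibrate_p_q(D_i, sigma_i)
print(round(p, 1), round(q, 2))            # -> 6000.0 10.0
print(round(object_distance(5.0, p, q)))   # -> 1200 (u for sigma = 5)
```

On the synthetic data the fit recovers p and q exactly, and formula (2) then turns any measured spread parameter into an object distance.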

It is worth mentioning that steps S401 to S404 can be carried out before the digital camera 10 leaves the factory, so that the consumer is spared the trouble of setting the first parameter p and the second parameter q when using the digital camera 10. A person of ordinary skill in the art should understand that steps S402 to S404 may be executed by the digital signal processor 40; in another embodiment, an external computing device, for example a personal computer, may perform the computation, and the invention is not limited in this respect. Once the first parameter p and the second parameter q have been set, the consumer only needs to capture a single image with the digital camera 10 in order to focus. How the object distance u is calculated from that single image is described in more detail below.
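Before going into those details, the overall flow of steps S201 to S204 can be summarized in the following minimal sketch. It is only an illustration: capture_image, estimate_sigma and drive_lens are hypothetical stand-ins for the image capture unit 30, the σ estimation described below and the focusing signal sent to the optical focusing element 20, and the numeric values of p and q are made up.

```python
def autofocus_once(capture_image, estimate_sigma, drive_lens, p, q):
    """One pass of steps S201-S204; p and q are assumed pre-calibrated (S201)."""
    image = capture_image()          # S202: the single image
    sigma = estimate_sigma(image)    # S203: target spread parameter (see below)
    u = p / (q - sigma)              # S203: formula (2)
    drive_lens(u)                    # S204: focusing signal to the focusing element
    return u

# Dummy stand-ins, only to show the call sequence.
u = autofocus_once(capture_image=lambda: None,
                   estimate_sigma=lambda img: 5.0,
                   drive_lens=lambda dist: print("focus at", dist, "mm"),
                   p=6000.0, q=10.0)
```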

FIG. 5 is a flowchart of calculating the object distance from the single image, the first parameter p and the second parameter q according to an embodiment of the present invention. Referring to FIGS. 5 and 2A, in this embodiment the following steps are performed by the digital signal processor 40. First, in step S501, the boundary of the single image is extracted with a boundary algorithm. In step S502, the target spread parameter σ is estimated. In step S503, the object distance u is calculated from the target spread parameter σ, the first parameter p and the second parameter q, for example by using formula (2). Step S501 is described in more detail below.

FIG. 6A is a flowchart of obtaining the boundary of an image with a boundary algorithm according to an embodiment of the present invention. FIG. 6B is a schematic diagram of a horizontal operator of the boundary algorithm, and FIG. 6C is a schematic diagram of a vertical operator of the boundary algorithm, according to an embodiment of the present invention. Referring to FIGS. 6A, 6B and 6C together, this embodiment is described using the Sobel boundary algorithm; in another embodiment, a gradient boundary algorithm may also be used. First, in step S601, a plurality of regions of the captured single image are each convolved with a horizontal operator 601 to obtain the horizontal edge responses of the pixels of the image; an operator is also called a mask or a kernel. In step S602, the regions of the captured single image are each convolved with a vertical operator 602 to obtain the vertical edge responses of the pixels of the image.

It is worth mentioning that the horizontal operator 601 has the property of enhancing horizontal boundaries, while the vertical operator 602 has the property of enhancing vertical boundaries. To combine the characteristics of the horizontal operator 601 and the vertical operator 602, in step S603 the larger of the horizontal edge response and the vertical edge response of each pixel is taken as the output value of that pixel. The connected pixels whose output values are extreme values are then regarded as the boundary of the single image. The extreme value here is, for example, the upper limit of the gray-level values a pixel can represent; a person of ordinary skill in the art may also choose a threshold value of their own as the basis for judging whether the output value of a pixel is an extreme value.

In addition, the operators illustrated in FIGS. 6B and 6C are only one particular embodiment; in another embodiment, gradient operators or operators with other weights and sizes may still be used, and the invention should not be limited to this particular embodiment.
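A compact sketch of steps S601 to S603 is given below; it is an illustration rather than the patented implementation. The exact weights of the operators in FIGS. 6B and 6C are not reproduced in this text, so the standard 3x3 Sobel kernels are assumed, the convolution uses zero padding (which also produces responses along the image border), and the threshold plays the role of the settable extreme value mentioned above.

```python
import numpy as np

# Standard 3x3 Sobel kernels; the exact weights of FIGS. 6B and 6C are assumed.
SOBEL_H = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)   # emphasizes horizontal edges
SOBEL_V = SOBEL_H.T                                # emphasizes vertical edges

def convolve2d(img, kernel):
    """'Same'-size 2-D convolution with zero padding (no external dependencies)."""
    k = kernel[::-1, ::-1]                         # flip for true convolution
    pad = k.shape[0] // 2
    padded = np.pad(img, pad, mode="constant")
    out = np.zeros_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            out[r, c] = np.sum(padded[r:r + k.shape[0], c:c + k.shape[1]] * k)
    return out

def edge_map(img, threshold):
    """Steps S601-S603: per-pixel max of the two edge responses, then threshold."""
    resp_h = np.abs(convolve2d(img, SOBEL_H))      # S601: horizontal edge response
    resp_v = np.abs(convolve2d(img, SOBEL_V))      # S602: vertical edge response
    response = np.maximum(resp_h, resp_v)          # S603: larger response per pixel
    return response >= threshold                   # pixels treated as the boundary

# Tiny example: a vertical step edge in a 6x6 image.
img = np.zeros((6, 6)); img[:, 3:] = 255.0
print(edge_map(img, threshold=1000).astype(int))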

Step S502 is described in more detail below. FIG. 7 is a flowchart of estimating the target spread parameter σ with an iterative method according to an embodiment of the present invention. FIG. 8A is a schematic diagram of a hypothetical boundary dividing the image into two regions, and FIG. 8B is a flowchart of estimating the target spread parameter σ with an iterative method, according to an embodiment of the present invention. Referring to FIGS. 7, 8A and 8B together, in this embodiment the iterative method is described using gradient descent as an example; in another embodiment, the iterative method may also use Newton's method or the steepest-descent method.

First, in step S701, a set of initial values of v is estimated, where v = (g1, g2, σ) and a hypothetical boundary divides the single image into a first hypothetical region g1 and a second hypothetical region g2. Next, in step S702, a cost function is defined as

C(v) = Σ over (a,b) in M of f(a,b)²,

where

f(a,b) = g1·φ(d(a,b)/σ) + g2·φ(-d(a,b)/σ) - g1'·φ(d'(a,b)/σ) - g2'·φ(-d'(a,b)/σ).

Here the boundary of the image obtained in step S501 divides the image into a first actual region g1' and a second actual region g2', M is a region of the image, (a,b) is any pixel coordinate in that region, d(a,b) is the distance from the pixel coordinate (a,b) to the hypothetical boundary of the image, d'(a,b) is the distance from the pixel coordinate (a,b) to the boundary of the image, and φ is a Gaussian distribution function. In other words, g1·φ(d(a,b)/σ) is the estimate obtained by convolving the first hypothetical region g1 with the Gaussian distribution function, g2·φ(-d(a,b)/σ) is the estimate obtained by convolving the second hypothetical region g2 with the Gaussian distribution function, and the remaining terms, formed from the actual regions g1', g2' and the actual boundary, play the role of the known reference value. The cost function C(v) thus allows a better target spread parameter σ to be found through the minimum mean-square-error principle.

Following the above, in step S703 the partial derivatives ∂C/∂g1, ∂C/∂g2 and ∂C/∂σ are calculated. In step S704, Δv = -∇C(v) is calculated; that is, a set of adjustment values Δv for v is obtained from the calculated ∂C/∂g1, ∂C/∂g2, ∂C/∂σ and from the value obtained by substituting the current v into the cost function C(v). In step S705, v is corrected according to v = v + Δv. In step S706, when |Δv| is smaller than an error value or the number of iterations reaches a set value, the iterative method is completed. It is worth mentioning that a person of ordinary skill in the art may set this error value and this set value as required. If |Δv| is not smaller than the error value and the number of iterations has not reached the set value, the flow returns to step S703 and the iteration continues. In this way, a better estimate of the target spread parameter σ is obtained.
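The following sketch illustrates the idea of steps S701 to S706 on a simplified, one-dimensional stand-in for the cost C(v): a single intensity profile across an edge is fitted with the two region levels g1, g2 and the spread σ. The two-dimensional bookkeeping of the patent (the region M, the distances d and d', and the actual regions g1' and g2') is collapsed into signed distances along the profile, the Gaussian distribution function φ is taken to be the Gaussian cumulative distribution, and numerical gradients with a backtracking step size stand in for the closed-form derivatives of step S703; all of these are assumptions made for the sketch.

```python
import numpy as np
from math import erf, sqrt

def gauss_cdf(x):
    """Gaussian cumulative distribution, used here for the 'Gaussian
    distribution function' phi (an assumption made for this sketch)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

_cdf = np.vectorize(gauss_cdf)

def blurred_step(d, g1, g2, sigma):
    """Profile of an ideal two-level edge (g1 on one side, g2 on the other)
    blurred by a Gaussian of spread sigma; d is signed distance to the edge."""
    return g1 * _cdf(d / sigma) + g2 * _cdf(-d / sigma)

def estimate_sigma(d, observed, v0, iters=2000, tol=1e-10):
    """Gradient descent on C(v) = sum f**2 with v = (g1, g2, sigma),
    collapsed to one dimension (steps S703-S706 in spirit)."""
    def cost(v):
        return float(np.sum((blurred_step(d, *v) - observed) ** 2))
    v = np.array(v0, dtype=float)
    eps = 1e-6
    for _ in range(iters):
        grad = np.array([(cost(v + eps * e) - cost(v - eps * e)) / (2 * eps)
                         for e in np.eye(3)])          # numerical S703
        step = 1.0
        while not (cost(v - step * grad) < cost(v)) and step > 1e-12:
            step *= 0.5                                # shrink until cost decreases
        dv = -step * grad                              # S704: adjustment values
        v = v + dv                                     # S705: v = v + dv
        if np.linalg.norm(dv) < tol:                   # S706: stop on small update
            break
    return v                                           # (g1, g2, sigma)

# Synthetic profile across an edge: levels 200 and 50, true sigma = 3 pixels.
d = np.arange(-15.0, 16.0)
observed = blurred_step(d, 200.0, 50.0, 3.0)
g1, g2, sigma = estimate_sigma(d, observed, v0=(180.0, 60.0, 2.0))
print(round(g1, 1), round(g2, 1), round(sigma, 2))     # about 200.0 50.0 3.0
```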
Since the first parameter p, the second parameter q and the target spread parameter σ are all known at this point, the object distance u can then be obtained from formula (2).

In addition, it is worth mentioning that although the embodiments above describe one possible way of calculating the object distance u, a person of ordinary skill in the art should understand that different manufacturers calculate the object distance in different ways. A manufacturer may of course replace the algorithms described above with simpler formulas as required, thereby saving computation time and reducing hardware cost. In other words, as long as the method of calculating the object distance u follows the ideas described above, it is in keeping with the spirit of the present invention.

Referring again to FIG. 2A, once the object distance u has been obtained, the digital signal processor 40 outputs a focusing signal to the optical focusing element 20 according to the object distance u. The focusing signal includes the distance by which the lens of the digital camera 10 has to be moved and the direction of the adjustment. The optical focusing element 20 then performs focusing according to the focusing signal, so that the image capture unit 30 captures a sharp image, the quality of the captured image is improved, and unnecessary power consumption is reduced.

The embodiments of the present invention have at least the following advantages:
1. The object distance u can be obtained from a single image, so the focus position can be adjusted directly; this greatly accelerates focusing and saves power.
2. The digital camera achieves the focusing function without a reflex mirror, which saves hardware cost and power, improves image quality, and reduces the size of the digital camera.

Although the present invention has been disclosed above by way of preferred embodiments, they are not intended to limit the invention. Any person of ordinary skill in the art may make some changes and modifications without departing from the spirit and scope of the invention; the scope of protection of the invention is therefore defined by the appended claims.

[Brief Description of the Drawings]
FIG. 1 is a schematic diagram of a conventional camera focusing process.
FIG. 2A is a schematic diagram of a digital camera according to an embodiment of the present invention.
FIG. 2B is a flowchart of an auto-focus method according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of an imaging system according to an embodiment of the present invention.
FIG. 4 is a flowchart of setting the first parameter p and the second parameter q according to an embodiment of the present invention.
FIG. 5 is a flowchart of calculating the object distance from the single image, the first parameter p and the second parameter q according to an embodiment of the present invention.
FIG. 6A is a flowchart of obtaining the boundary of an image with a boundary algorithm according to an embodiment of the present invention.
FIG. 6B is a schematic diagram of a horizontal operator of the boundary algorithm according to an embodiment of the present invention.
FIG. 6C is a schematic diagram of a vertical operator of the boundary algorithm according to an embodiment of the present invention.
FIG. 7 is a flowchart of estimating the target spread parameter σ with an iterative method according to an embodiment of the present invention.
FIG. 8A is a schematic diagram of a hypothetical boundary dividing the image into two regions according to an embodiment of the present invention.
FIG. 8B is a flowchart of estimating the target spread parameter σ with an iterative method according to an embodiment of the present invention.
[Description of Main Reference Numerals]
10: digital camera
20: optical focusing element
30: image capture unit
40: digital signal processor
201: lens
601: horizontal operator
602: vertical operator
u: object distance
s: distance between the defocus plane and the lens
f: focal length
D: aperture
d: image of the point light source on the defocus plane
g1: first hypothetical region
g2: second hypothetical region
g1': first actual region
g2': second actual region
S201~S204: steps of the auto-focus method of FIG. 2B
S401~S404: steps of setting the first parameter p and the second parameter q of FIG. 4
S501~S503: steps of calculating the object distance from the single image, the first parameter p and the second parameter q of FIG. 5
S601~S603: steps of obtaining the boundary of an image with a boundary algorithm of FIG. 6A
S701~S706: steps of estimating the target spread parameter σ with an iterative method of FIG. 7

Claims (1)

X. Claims:

1. An auto-focus method for a camera, comprising the following steps:
photographing a same point light source at a plurality of variable distances Di respectively, to obtain a plurality of corresponding images Fi;
using a Gaussian distribution to calculate corresponding spread parameters σi from the images Fi respectively;
building a data set (Di, σi) from the variable distances Di and the corresponding spread parameters σi;
setting a first parameter p and a second parameter q from the data set (Di, σi), wherein i is the corresponding index;
capturing an image;
calculating an object distance from the image, the first parameter p and the second parameter q; and
adjusting a focus position according to the object distance.

2. The auto-focus method for a camera as claimed in claim 1, wherein the step of setting the first parameter p and the second parameter q from the data set (Di, σi) comprises the following step:
setting the first parameter p and the second parameter q with an iterative method, from the data set (Di, σi) and a cost function, wherein the cost function is C(p, q) = Σi ( Di - p/(q - σi) )².

3. The auto-focus method for a camera as claimed in claim 2, wherein the iterative method is Newton's descent method.

4. The auto-focus method for a camera as claimed in claim 1, wherein the step of calculating the object distance from the image, the first parameter p and the second parameter q comprises the following steps:
extracting a boundary of the image with a boundary algorithm;
estimating a target spread parameter σ; and
calculating the object distance from the target spread parameter σ, the first parameter p and the second parameter q.

5. The auto-focus method for a camera as claimed in claim 4, wherein the step of extracting the boundary of the image with the boundary algorithm comprises the following steps:
convolving a plurality of regions of the image with a horizontal operator respectively, to obtain horizontal edge responses of a plurality of pixels of the image;
convolving the regions of the image with a vertical operator respectively, to obtain vertical edge responses of the pixels of the image; and
taking the larger of the horizontal edge response and the vertical edge response of each pixel as the output value of that pixel, wherein a line connecting the pixels whose output values are extreme values is regarded as the boundary of the image.

6. The auto-focus method for a camera as claimed in claim 5, wherein the horizontal operator and the vertical operator are Sobel operators.

7. The auto-focus method for a camera as claimed in claim 5, wherein the horizontal operator and the vertical operator are gradient operators.

8. The auto-focus method for a camera as claimed in claim 4, wherein the step of estimating the target spread parameter σ comprises the following step:
estimating the target spread parameter σ with an iterative method.

9. The auto-focus method for a camera as claimed in claim 8, wherein the iterative method is a gradient descent method.

10. The auto-focus method for a camera as claimed in claim 8, wherein the step of estimating the target spread parameter σ with the iterative method comprises the following steps:
estimating a set of initial values of v, wherein v = (g1, g2, σ), and a hypothetical boundary divides the image into a first hypothetical region g1 and a second hypothetical region g2;
defining a cost function C(v) = Σ over (a,b) in M of f(a,b)², wherein f(a,b) = g1·φ(d(a,b)/σ) + g2·φ(-d(a,b)/σ) - g1'·φ(d'(a,b)/σ) - g2'·φ(-d'(a,b)/σ), the boundary of the image divides the image into a first actual region g1' and a second actual region g2', M is a region of the image, (a,b) is any pixel coordinate in the region, d(a,b) is the distance between the pixel coordinate (a,b) and the hypothetical boundary of the image, d'(a,b) is the distance between the pixel coordinate (a,b) and the boundary of the image, and φ is a Gaussian distribution function;
calculating ∂C/∂g1, ∂C/∂g2 and ∂C/∂σ;
calculating Δv = -∇C(v), that is, obtaining a set of adjustment values Δv for v from the calculated ∂C/∂g1, ∂C/∂g2, ∂C/∂σ and the value obtained by substituting the current value of v into the cost function C(v);
correcting v according to v = v + Δv; and
completing the iterative method when |Δv| is smaller than an error value or the number of iterations reaches a set value.

11. The auto-focus method for a camera as claimed in claim 4, wherein the step of calculating the object distance from the target spread parameter σ, the first parameter p and the second parameter q comprises the following step:
calculating the object distance according to u = p/(q - σ), wherein u is the object distance.

12. A digital camera, comprising:
an optical focusing element for adjusting a focus position of the digital camera;
an image capture unit disposed on an optical path of the optical focusing element for capturing images; and
a digital signal processor coupled to the optical focusing element and the image capture unit;
wherein the digital camera photographs a same point light source at a plurality of variable distances Di respectively, so that the image capture unit obtains a plurality of corresponding images Fi; the digital signal processor uses a Gaussian distribution to calculate corresponding spread parameters σi from the images Fi respectively; the digital signal processor builds a data set (Di, σi) from the variable distances Di and the corresponding spread parameters σi, and sets a first parameter p and a second parameter q from the data set (Di, σi), wherein i is the corresponding index; the digital signal processor further calculates an object distance from an image captured by the image capture unit, the first parameter p and the second parameter q, and sends a focusing signal according to the object distance; and the optical focusing element performs focusing according to the focusing signal.

13. The digital camera as claimed in claim 12, wherein the digital signal processor sets the first parameter p and the second parameter q with an iterative method, from the data set (Di, σi) and a cost function, wherein the cost function is C(p, q) = Σi ( Di - p/(q - σi) )².

14. The digital camera as claimed in claim 13, wherein the iterative method is Newton's descent method.

15. The digital camera as claimed in claim 12, wherein the digital signal processor extracts a boundary of the image with a boundary algorithm, estimates a target spread parameter σ, and calculates the object distance from the target spread parameter σ, the first parameter p and the second parameter q.

16. The digital camera as claimed in claim 15, wherein the digital signal processor convolves a plurality of regions of the image with a horizontal operator respectively to obtain horizontal edge responses of a plurality of pixels of the image, convolves the regions of the image with a vertical operator respectively to obtain vertical edge responses of the pixels of the image, and takes the larger of the horizontal edge response and the vertical edge response of each pixel as the output value of that pixel, wherein a line connecting the pixels whose output values are extreme values is regarded as the boundary of the image.

17. The digital camera as claimed in claim 16, wherein the horizontal operator and the vertical operator used by the digital signal processor are Sobel operators.

18. The digital camera as claimed in claim 16, wherein the horizontal operator and the vertical operator used by the digital signal processor are gradient operators.

19. The digital camera as claimed in claim 15, wherein the digital signal processor estimates the target spread parameter σ with an iterative method.

20. The digital camera as claimed in claim 19, wherein the iterative method used by the digital signal processor is a gradient descent method.

21. The digital camera as claimed in claim 19, wherein the digital signal processor estimates a set of initial values of v, wherein v = (g1, g2, σ), and a hypothetical boundary divides the image into a first hypothetical region g1 and a second hypothetical region g2; the digital signal processor defines a cost function C(v) = Σ over (a,b) in M of f(a,b)², wherein f(a,b) = g1·φ(d(a,b)/σ) + g2·φ(-d(a,b)/σ) - g1'·φ(d'(a,b)/σ) - g2'·φ(-d'(a,b)/σ), the actual boundary of the image divides the image into a first actual region g1' and a second actual region g2', M is a region of the image, (a,b) is any pixel coordinate in the region, d(a,b) is the distance between the pixel coordinate (a,b) and the hypothetical boundary of the image, d'(a,b) is the distance between the pixel coordinate (a,b) and the actual boundary of the image, and φ is a Gaussian distribution function; the digital signal processor calculates ∂C/∂g1, ∂C/∂g2 and ∂C/∂σ, and calculates Δv = -∇C(v), that is, obtains a set of adjustment values Δv for v from the calculated ∂C/∂g1, ∂C/∂g2, ∂C/∂σ and the value obtained by substituting the current value of v into the cost function C(v); the digital signal processor corrects v according to v = v + Δv, and determines whether the iterative method is completed, the iterative method being completed when |Δv| is smaller than an error value or the number of iterations reaches a set value.

22. The digital camera as claimed in claim 15, wherein the digital signal processor calculates the object distance according to u = p/(q - σ), wherein u is the object distance.
TW96142645A 2007-11-12 2007-11-12 Auto-focus method for camera and digital camera TWI358231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW96142645A TWI358231B (en) 2007-11-12 2007-11-12 Auto-focus method for camera and digital camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW96142645A TWI358231B (en) 2007-11-12 2007-11-12 Auto-focus method for camera and digital camera

Publications (2)

Publication Number Publication Date
TW200922302A TW200922302A (en) 2009-05-16
TWI358231B 2012-02-11

Family

ID=44728135

Family Applications (1)

Application Number Title Priority Date Filing Date
TW96142645A TWI358231B (en) 2007-11-12 2007-11-12 Auto-focus method for camera and digital camera

Country Status (1)

Country Link
TW (1) TWI358231B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI566001B (en) * 2012-11-23 2017-01-11 Hon Hai Precision Industry Co., Ltd. Method of auto-focus

Also Published As

Publication number Publication date
TW200922302A (en) 2009-05-16
