TWI458339B - 3D image sensor alignment detection method


Info

Publication number
TWI458339B
Authority
TW
Taiwan
Application number
TW100105813A
Other languages
Chinese (zh)
Other versions
TW201236439A (en)
Inventor
Kuo Hsiang Hung
Chui Chu Cheng
Original Assignee
Sanjet Technology Corp
Application filed by Sanjet Technology Corp
Priority to TW100105813A
Publication of TW201236439A
Application granted
Publication of TWI458339B

Description

3D image sensor calibration method

The present invention relates to a 3D image sensor calibration method, and more particularly to a method that accurately determines the horizontal and vertical displacement between two images captured by a 3D image sensor and then uses that displacement to calibrate the sensor.

Stereoscopic 3D content has developed rapidly in recent years, and the quality of a 3D image largely determines how comfortably a user can view it. In general, a 3D image is captured by a 3D camera having left and right lens and image sensing modules, so as to simulate the 3D visual effect of a scene as seen by the left and right human eyes. If the position or angle of a lens and image sensing module deviates during manufacturing, the captured 3D images are of poor quality and cannot be viewed comfortably. How to ensure that the position and angle of each lens and image sensing module are set quickly and accurately during manufacturing is therefore a key subject of study for 3D camera makers.

A primary object of the present invention is to provide a 3D image sensor calibration method that detects the positional difference between the two images captured by the two lens and image sensing modules of a 3D camera module and adjusts the modules accordingly, so that the 3D camera module can capture 3D images of good quality.

To achieve the above object, the present invention discloses a 3D image sensor calibration method comprising the following steps. Step (A): position a 3D camera module that includes at least two lens and image sensing modules. Step (B): capture at least two images of the outside world with the at least two lens and image sensing modules of the 3D camera module. Step (C): compute, with a control device, a position difference value of the captured images. Step (D): determine whether the position difference value lies within a preset range; if so, go to step (F), otherwise go to step (E). Step (E): adjust the position of at least one of the lens and image sensing modules according to the position difference value, then return to step (B). Step (F): stop.
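Steps (A) through (F) amount to a measure-and-adjust loop. The following Python sketch is only a minimal illustration of that control flow; capture_images, measure_displacement, adjust_module, the preset range bounds, and the iteration limit are assumed placeholders for the hardware and analysis described in the text, not part of the disclosed apparatus.

```python
def calibrate(capture_images, measure_displacement, adjust_module,
              preset_range=(-2, 2), max_iterations=20):
    """Steps (A)-(F): capture two images, measure their displacement, adjust one
    lens/sensor module, and repeat until the displacement is within the preset range."""
    for _ in range(max_iterations):
        left_image, right_image = capture_images()               # step (B)
        dx, dy = measure_displacement(left_image, right_image)   # step (C)
        low, high = preset_range
        if low <= dx <= high and low <= dy <= high:               # step (D)
            return True                                           # step (F): stop, aligned
        adjust_module(dx, dy)                                     # step (E), then back to (B)
    return False                                                  # not converged within the limit
```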

In a preferred embodiment, the at least two images comprise a first image and a second image, both of which are P*Q image data, P pixels wide and Q pixels high, and the computation of the position difference value in step (C) comprises the following steps. Step (C1): determine a search block in the second image, the search block lying within the P*Q image data of the second image. Step (C2): establish a search condition from the first image, the search condition being built from the data of a condition block within the P*Q image data of the first image, wherein the width and height of the condition block are respectively smaller than the width and height of the search block. Step (C3): within the search block of the second image, locate the position of a corresponding block that satisfies the search condition. Step (C4): compute the position difference value between the condition block and the corresponding block from the difference between their positions within the P*Q image data.

In a preferred embodiment, the width and height of the condition block are both W pixels, and the width and height of the search block are both W+2t pixels, where (W+2t)<P and (W+2t)<Q.

In a preferred embodiment, t is an integer multiple of W, and the locating of a corresponding block that satisfies the search condition in step (C3) first computes at least one condition value from the data of the condition block by at least one mathematical function, and divides the search block into a plurality of sub-search blocks, each likewise W pixels wide and W pixels high, with the edges of adjacent sub-search blocks abutting one another. The value of each sub-search block is then computed in turn with the same mathematical function and compared with the condition value; a sub-search block whose value is identical is the corresponding block that satisfies the search condition.
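A sketch of this comparison is given below, under stated assumptions: the per-block pixel mean is used as an example of the "mathematical function", and when no identical value is found the sub-block with the closest value is kept, following the fallback described later in the detailed description. The patent itself only requires that the same function be applied to the condition block and to every sub-search block.

```python
import numpy as np

def find_matching_subblock(condition_block, search_block, w):
    """Divide the (W+2t) x (W+2t) search block into abutting W x W sub-search
    blocks, compute the same condition value for each, and return the grid
    coordinates (row, col) of the sub-block that best matches the condition block."""
    condition_value = condition_block.mean()          # example "mathematical function"
    rows = search_block.shape[0] // w
    cols = search_block.shape[1] // w
    best_pos, best_diff = (0, 0), np.inf
    for row in range(rows):
        for col in range(cols):
            sub = search_block[row * w:(row + 1) * w, col * w:(col + 1) * w]
            diff = abs(sub.mean() - condition_value)
            if diff < best_diff:                       # diff == 0 is an exact match;
                best_pos, best_diff = (row, col), diff  # otherwise keep the closest value
    return best_pos
```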

In a preferred embodiment, the locating of a corresponding block that satisfies the search condition in step (C3) is carried out as a multi-stage search.

In a preferred embodiment, the multi-stage search proceeds as follows: a relatively large value W1 is first used as the width and height of a first condition block for a first-stage search, and a first corresponding block of width and height W1 that satisfies the search condition is sought within a first search block of width and height W1+2t pixels. Once the first corresponding block of width and height W1 is found, it is used as the second search block for a second-stage search, in which a relatively small value W2 is used as the width and height of a second condition block, and a second corresponding block of width and height W2 that satisfies the search condition is sought within the second search block of width and height W1.

In a preferred embodiment, in step (A) the 3D camera module is positioned by a positioning device, and the positioning device includes a mechanism for adjusting the position of at least one of the at least two lens and image sensing modules.

To describe the 3D image sensor calibration method of the present invention more clearly, it is explained in detail below with reference to the drawings.

The 3D image sensor calibration method of the present invention is mainly intended to improve the imaging quality of 3D cameras. An alignment correction detection system for 3D image sensors has therefore been developed; by combining precision hardware tooling with supporting software, the horizontal and vertical positions of the lens and image sensing modules can be corrected effectively, so that the 3D camera can capture 3D images of the best quality. In the present invention, the correspondence between the left and right adjacent images captured by the 3D camera is found by comparison methods, and the displacement between the two images is computed from it. The invention mainly uses motion estimation to compute the horizontal and vertical displacement between the two images. First, to obtain an accurate displacement, a specific invariant block is selected as the detection sample; second, since motion estimation consumes most of the processing time, a fast and efficient search method is used to find the best matching position, from which the displacement of the whole image is obtained, making effective use of the alignment correction detection system for the 3D image sensor.

Referring to FIG. 1, a schematic diagram of an embodiment of the alignment correction detection system for the 3D image sensor of the present invention, the system includes a 3D camera module 10, a positioning device 20, a control device 22, and a test pattern 23.

The 3D camera module 10 includes at least two lens and image sensing modules 11, 12. In this embodiment these are a first lens and image sensing module 11 on the left and a second lens and image sensing module 12 on the right, whose left and right captured images simulate what the left and right eyes see when viewing the same scene. Each lens and image sensing module 11, 12 includes a lens set and an image sensor; the lens set projects the light image of the outside scene onto the sensor, which converts it into an electrical signal corresponding to the image. The lens set may be a fixed-focus or zoom lens set, and the sensor may be a CCD or CMOS sensor. In this embodiment the 3D camera module 10 is preferably a module consisting only of a circuit board, the two lens and image sensing modules 11, 12 on the circuit board, and a positioning frame holding the modules, without other components of a 3D camera such as an LCD display or operation buttons; however, the 3D camera module 10 may also include all or part of the other components of a 3D camera.

The positioning device 20 positions the 3D camera module 10 so that all the lens and image sensing modules 11, 12 on the 3D camera module 10 can capture a particular area or range of the pattern 24 on the test pattern 23. The positioning device 20 is provided with an adjustment mechanism (not shown) capable of precisely adjusting at least the position in the width and height directions, and the tilt angle, of at least one of the lens and image sensing modules 11, 12.

The control device 22 is connected to the circuit board of the 3D camera module 10 through a transmission line 21, and receives, detects, and analyzes the electrical signal from the 3D camera module 10. In this embodiment the control device 22 may be a computer equipped with a display, a keyboard, and the like, running dedicated analysis software that processes the electrical signal and carries out the detection, computation, and analysis of the 3D image sensor calibration method of the present invention.

Referring to FIG. 2, a flowchart of an embodiment of the 3D image sensor calibration method of the present invention, the method includes the following steps. Step 31: position a 3D camera module 10 on the positioning device 20, the 3D camera module including at least two lens and image sensing modules 11, 12 (for example, but not limited to, the first and second lens and image sensing modules 11, 12).

Step 32: capture the test pattern 23 with the at least two lens and image sensing modules 11, 12 of the 3D camera module 10 to obtain at least two images of a specific area pattern 24 of the test pattern 23. As shown in FIG. 4, the at least two images include a first image 41 (left image) captured by the first lens and image sensing module and a second image 42 (right image) captured by the second lens and image sensing module; both the first and second images 41, 42 are P*Q image data, P pixels wide and Q pixels high. In this embodiment the test pattern 23 may contain a specific pattern 24 intended for image test analysis, but in another embodiment the test pattern 23 may also be an ordinary scene.

Step 33: the control device 22 receives the electrical signals of the two images 41, 42 and computes the position difference value of the captured images 41, 42.

Step 34: the control device 22 determines whether the position difference value lies within a preset range. If it does, the first and second lens and image sensing modules 11, 12 are at positions and angles within the allowable range, the 3D images captured by the 3D camera module 10 are of acceptable quality, and step 35 is executed to stop the 3D image sensor calibration method of the present invention. Conversely, if the position difference value is not within the preset range, the 3D images captured by the 3D camera module 10 are of poor quality and cannot be viewed comfortably, and step 36 is executed.

Step 36: operate the positioning device 20 according to the position difference value to adjust the position of at least one of the two lens and image sensing modules 11, 12, and then return to step 32.

Refer to FIG. 3 and FIG. 4, in which FIG. 3 is a flowchart of the method of computing the position difference value in step 33 of FIG. 2, and FIG. 4 schematically illustrates the block ranges on the first and second images 41, 42. In the present invention, the computation of step 33 further includes the following steps:

Step 331: determine a search block 422 in the second image captured by the second lens and image sensing module 12, the search block 422 lying within the P*Q image data of the second image 42. In this embodiment the search block 422 is the range searched later, and its width and height are both W+2t pixels, where t is the largest image offset that can be expected from a positional deviation of a lens and image sensing module, the value of W is explained later, and (W+2t)<P and (W+2t)<Q.

Step 332: establish a search condition from the first image 41 captured by the first lens and image sensing module 11, the search condition being built from the data of a condition block 411 within the P*Q image data of the first image 41. The width and height of the condition block 411 are both W pixels; in other words, the width and height of the condition block 411 are respectively smaller than the width and height of the search block 422. In this embodiment the analysis and computation are block-based, so t is preferably an integer multiple of W, although in another embodiment t need not be an integer multiple of W.
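The two blocks defined in steps 331 and 332 can be cut out of the P*Q images as in the sketch below. Centering both blocks on the image centers is an assumption for illustration only; the patent requires merely that the condition block be W by W pixels and the search block (W+2t) by (W+2t) pixels within their respective images.

```python
import numpy as np

def extract_blocks(first_image, second_image, w, t):
    """Cut a W x W condition block from the first image and a (W+2t) x (W+2t)
    search block from the second image, both centered on the image center."""
    q, p = first_image.shape                       # images are P pixels wide, Q pixels high
    s = w + 2 * t
    assert s < p and s < q, "the search block must fit inside the P x Q image"
    cy, cx = q // 2, p // 2
    condition_block = first_image[cy - w // 2: cy - w // 2 + w,
                                  cx - w // 2: cx - w // 2 + w]
    search_block = second_image[cy - s // 2: cy - s // 2 + s,
                                cx - s // 2: cx - s // 2 + s]
    return condition_block, search_block
```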

Step 333: within the search block 422 of the second image 42, locate the position of a corresponding block 421 that satisfies the search condition.

Step 334: from the difference between the positions of the condition block 411 and the corresponding block 421 within their respective P*Q image data, compute the position difference value between the condition block 411 and the corresponding block 421, which is the position difference value between the first and second images 41, 42.

Refer to FIG. 5, a schematic diagram of locating a corresponding block that satisfies the search condition in the 3D image sensor calibration method of the present invention. In the present invention, the locating of a corresponding block that satisfies the search condition in step 333 first computes at least one condition value from the data of the condition block on the first image by at least one mathematical function, and divides the search block on the second image into a plurality of sub-search blocks, each likewise W pixels wide and W pixels high, with the edges of adjacent sub-search blocks abutting one another. In the example of FIG. 5, the search block on the second image is divided into 15 equal parts in both width and height, giving 225 sub-search blocks in total. The value of each sub-search block is then computed block by block with the same mathematical function and compared with the condition value; a sub-search block whose value is identical is the corresponding block that satisfies the search condition, and if no identical value is found, the sub-search block with the closest value is taken as the corresponding block. As shown in FIG. 5, if the condition block is located at the exact center of the first image, that is at coordinates (0,0), and the corresponding block with the same image content (that is, satisfying the search condition) is found on the second image at the coordinates (-1,-5) marked with an X, then the position difference value between the first and second images is an offset of -1W pixels in the width direction and -5W pixels in the height direction.
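In block units the displacement of the FIG. 5 example is simply the difference of grid coordinates scaled by W. The short sketch below works through that arithmetic; the coordinate values come from the figure, while W = 16 pixels is an assumed value chosen only for illustration.

```python
# Worked example of step 334 for the FIG. 5 case: the condition block sits at grid
# position (0, 0) in the first image and the matching block is found at (-1, -5)
# in the second image.  W = 16 pixels is an assumed block size.
W = 16
condition_pos = (0, 0)        # (column, row) of the condition block, in block units
matched_pos = (-1, -5)        # (column, row) of the corresponding block, in block units

dx_pixels = (matched_pos[0] - condition_pos[0]) * W   # -1 * W -> -16 pixels horizontally
dy_pixels = (matched_pos[1] - condition_pos[1]) * W   # -5 * W -> -80 pixels vertically
print(dx_pixels, dy_pixels)   # the displacement fed back to the positioning device
```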

In a preferred embodiment of the present invention, the locating of a corresponding block that satisfies the search condition in step 333 may further be carried out as a multi-stage search. Taking a three-stage search as an example, a relatively large value W1 is first used as the width and height of a first condition block for a first-stage search, and a first corresponding block of width and height W1 that satisfies the search condition is sought within a first search block of width and height W1+2t pixels. Once the first corresponding block of width and height W1 is found, it is used as the second search block for a second-stage search, in which a relatively small value W2 is used as the width and height of a second condition block, and a second corresponding block of width and height W2 that satisfies the search condition is sought within the second search block of width and height W1. The second corresponding block is then used as the third search block for a third-stage search, in which a smallest value W3 is used as the width and height of a third condition block, and a third corresponding block of width and height W3 that satisfies the search condition is sought within the third search block of width and height W2; in this way the search is accelerated. In the example of FIG. 6, during the first-stage (large-block) search the width and height of the first condition block are both W1, with W1 equal to 7 times W3; that is, within a first search block spanning at least the 15 divisions numbered -7 to 7 in both width and height (each division being W3 pixels), a first corresponding block a1 of width and height 7 times W3 pixels is found (width numbers -7 to -1, height numbers 1 to 7). Once the first corresponding block a1 is found, the second-stage (medium-block) search is carried out; the width and height of the second condition block are both W2, with W2 equal to 3 times W3, and within the second search block spanning width numbers -7 to -1 and height numbers 1 to 7, a second corresponding block a2 of width and height 3 times W3 pixels is found (width numbers -3 to -1, height numbers 5 to 7). Once the second corresponding block a2 is found, the third-stage (small-block) search is carried out; the width and height of the third condition block are both W3, and within the third search block spanning width numbers -3 to -1 and height numbers 5 to 7, a third corresponding block a3 of width and height W3 pixels is found. Once the third corresponding block a3 is found at width number -1 and height number 5, the position difference value between the first and second images is known to be an offset of -1 W3 pixels in the width direction and -5 W3 pixels in the height direction.
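The coarse-to-fine logic of the FIG. 6 example can be sketched as follows. The block-mean condition value, the convention that the condition block is centered at a given point of the first image, and the stepping of candidate positions on the W3 grid are assumptions made for illustration; only the stage widths W1 = 7*W3 and W2 = 3*W3 and the restriction of each stage to the block found by the previous stage follow the example.

```python
import numpy as np

def three_stage_search(first_image, search_area, center, w3):
    """Coarse-to-fine search in the spirit of FIG. 6: condition-block widths
    W1 = 7*W3, W2 = 3*W3, then W3.  `center` is the assumed (row, col) of the
    condition block's center in the first image; candidate positions step on
    the W3-division grid and the block mean serves as the condition value."""
    cr, cc = center
    area = search_area                     # stage 1 scans the full first search block
    off_r = off_c = 0                      # offset of `area` inside search_area, in pixels
    for w in (7 * w3, 3 * w3, w3):
        cond = first_image[cr - w // 2: cr - w // 2 + w,
                           cc - w // 2: cc - w // 2 + w]
        target = cond.mean()
        best, best_diff = (0, 0), np.inf
        for r in range(0, area.shape[0] - w + 1, w3):      # candidate top-left corners
            for c in range(0, area.shape[1] - w + 1, w3):  # on the W3-division grid
                d = abs(area[r:r + w, c:c + w].mean() - target)
                if d < best_diff:
                    best, best_diff = (r, c), d
        off_r, off_c = off_r + best[0], off_c + best[1]
        area = area[best[0]:best[0] + w, best[1]:best[1] + w]  # next stage searches here
    return off_r, off_c                    # top-left of the final W3 x W3 match, in pixels
```

Because each stage only scans inside the block found by the previous one, the number of candidate positions examined drops sharply compared with scanning the whole search block at the finest block size, which is the acceleration the multi-stage search aims at.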

The embodiments described above are not intended to limit the applicable scope of the present invention; the scope of protection of the present invention is defined by the technical spirit of the claims and their equivalent variations. All equivalent changes and modifications made within the scope of the claims of the present invention do not depart from the essence, spirit, or scope of the invention and should be regarded as further embodiments thereof.

10 ... 3D camera module
11, 12 ... lens and image sensing modules
20 ... positioning device
21 ... transmission line
22 ... control device
23 ... test pattern
24 ... pattern
31~35, 331~334 ... steps

FIG. 1 is a schematic diagram of an embodiment of the alignment correction detection system for the 3D image sensor of the present invention.

FIG. 2 is a flowchart of an embodiment of the 3D image sensor calibration method of the present invention.

FIG. 3 is a flowchart of the method of computing the position difference value in step 33 of FIG. 2.

FIG. 4 is a schematic diagram illustrating the block ranges on the first and second images 41, 42.

FIG. 5 is a schematic diagram of an embodiment of locating a corresponding block that satisfies the search condition in the 3D image sensor calibration method of the present invention.

FIG. 6 is a schematic diagram of another embodiment of locating a corresponding block that satisfies the search condition in the 3D image sensor calibration method of the present invention.

31~35 ... steps

Claims (4)

1. A 3D image sensor calibration method, comprising the following steps: step (A): positioning a 3D camera module, the 3D camera module including at least two lens and image sensing modules; step (B): capturing at least two images of the outside world with the at least two lens and image sensing modules of the 3D camera module; step (C): computing, with a control device, a position difference value of the captured at least two images; step (D): determining whether the position difference value lies within a preset range, executing step (F) if it does and step (E) if it does not; step (E): adjusting the position of at least one of the at least two lens and image sensing modules according to the position difference value and then returning to step (B); and step (F): stopping; wherein the at least two images comprise a first image and a second image, both being P*Q image data that are P pixels wide and Q pixels high, and the computation of the position difference value in step (C) comprises the following steps: step (C1): determining a search block in the second image, the search block lying within the P*Q image data of the second image; step (C2): establishing a search condition from the first image, the search condition being built from the data of a condition block within the P*Q image data of the first image, wherein the width and height of the condition block are respectively smaller than the width and height of the search block; step (C3): locating, within the search block of the second image, the position of a corresponding block that satisfies the search condition; and step (C4): computing the position difference value between the condition block and the corresponding block from the difference between their positions within the P*Q image data; wherein the width and height of the condition block are both W pixels and the width and height of the search block are both W+2t pixels, where (W+2t)<P and (W+2t)<Q; and wherein t is an integer multiple of W, and the locating of a corresponding block that satisfies the search condition in step (C3) first computes at least one condition value from the data of the condition block by at least one mathematical function and divides the search block into a plurality of sub-search blocks that are likewise W pixels wide and W pixels high, with the edges of adjacent sub-search blocks abutting one another, and then computes the value of each sub-search block in turn with the same at least one mathematical function and compares it with the at least one condition value, a sub-search block whose value is identical being the corresponding block that satisfies the search condition.

2. The 3D image sensor calibration method of claim 1, wherein the locating of a corresponding block that satisfies the search condition in step (C3) is carried out as a multi-stage search.

3. The 3D image sensor calibration method of claim 2, wherein the multi-stage search comprises: first using a relatively large value W1 as the width and height of a first condition block for a first-stage search, and locating, within a first search block whose width and height are both W1+2t pixels, a first corresponding block of width and height W1 that satisfies the search condition; and, once the first corresponding block of width and height W1 is found, using the first corresponding block as a second search block for a second-stage search, using a relatively small value W2 as the width and height of a second condition block for the second-stage search, and locating, within the second search block of width and height W1, a second corresponding block of width and height W2 that satisfies the search condition.

4. The 3D image sensor calibration method of claim 1, wherein in step (A) the 3D camera module is positioned by a positioning device, and the positioning device includes a mechanism for adjusting the position of at least one of the at least two lens and image sensing modules.
TW100105813A 2011-02-22 2011-02-22 3d image sensor alignment detection method TWI458339B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100105813A TWI458339B (en) 2011-02-22 2011-02-22 3d image sensor alignment detection method


Publications (2)

Publication Number Publication Date
TW201236439A TW201236439A (en) 2012-09-01
TWI458339B (en) 2014-10-21

Family

ID=47222829

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100105813A TWI458339B (en) 2011-02-22 2011-02-22 3d image sensor alignment detection method

Country Status (1)

Country Link
TW (1) TWI458339B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI503618B (en) 2012-12-27 2015-10-11 Ind Tech Res Inst Device for acquiring depth image, calibrating method and measuring method therefore

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040193413A1 (en) * 2003-03-25 2004-09-30 Wilson Andrew D. Architecture for controlling a computer using hand gestures
CN1554193A (en) * 2001-07-25 2004-12-08 A camera control apparatus and method
US7397929B2 (en) * 2002-09-05 2008-07-08 Cognex Technology And Investment Corporation Method and apparatus for monitoring a passageway using 3D images
US20100020178A1 (en) * 2006-12-18 2010-01-28 Koninklijke Philips Electronics N.V. Calibrating a camera system


Also Published As

Publication number Publication date
TW201236439A (en) 2012-09-01


Legal Events

Date Code Title Description
GD4A Issue of patent certificate for granted invention patent
MM4A Annulment or lapse of patent due to non-payment of fees