TWI814680B - Correction method for 3d image processing - Google Patents

Correction method for 3d image processing

Info

Publication number
TWI814680B
Authority
TW
Taiwan
Prior art keywords
image
parameter matrix
host
imaging device
correction
Prior art date
Application number
TW112105240A
Other languages
Chinese (zh)
Inventor
吳亦莊
劉永興
Original Assignee
國立中正大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立中正大學 filed Critical 國立中正大學
Priority to TW112105240A priority Critical patent/TWI814680B/en
Application granted granted Critical
Publication of TWI814680B publication Critical patent/TWI814680B/en

Landscapes

  • Ultra Sonic Diagnosis Equipment (AREA)

Abstract

A correction method for three-dimensional image processing, in which a host obtains a first image and a second image of an image area through a first imaging device and a second imaging device, the first image and the second image containing a first image object and a second image object of a correction block, respectively. The host obtains a first parameter matrix and a second parameter matrix from the first image and the second image, respectively, and uses the first image, the second image, the first image object, the second image object, the first parameter matrix, and the second parameter matrix to reconstruct and correct a three-dimensional space, wherein the first parameter matrix and the second parameter matrix correspond to the correction functions of the first imaging device and the second imaging device, respectively, so as to provide depth information for the correction of the three-dimensional space.

Description

Correction method for 3D image processing

The present invention relates to a correction method for three-dimensional image processing, and particularly to a correction method for reconstructing a three-dimensional image such that the correction includes depth information.

Modern technology is developing rapidly. With increasingly sophisticated manufacturing processes and the widespread availability of high-speed computers, the demand for high-precision measurement has grown considerably. Automated factory production lines typically rely on a wide variety of sensors that feed signals back to computers, and with the upgrading of image sensors and computer hardware, computer vision has gradually become one of the key technologies for factory automation.

Computer vision refers to techniques that use image processing to extract information from images for measurement, tracking, recognition, and so on. It is a form of automated optical inspection (AOI) and has been widely applied across engineering fields. Compared with traditional measurement methods, it offers high precision, non-contact measurement, and full-field measurement.

Digital image correlation (DIC) is a full-field, non-contact optical measurement method that obtains the displacement of an object under test by computing the correlation between images; combining the displacement information with the material coefficients of the object yields stress and strain information. Digital image data are stored in matrix form, with the pixel as the smallest unit. In a grayscale image the brightness of each pixel is represented by 8 bits, ranging from 0 to 255, and the pixel positions map to the entries of a two-dimensional array. Color images are stored as three-dimensional arrays, with the third dimension holding the brightness values of the red (R), green (G), and blue (B) channels.
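For illustration only (not part of the original disclosure), the following Python/NumPy sketch shows how a DIC-style subset match can be scored with zero-normalized cross-correlation; the subset size, search range, and function names are assumptions made for this example, and the integer-pixel displacement it returns would normally be refined to sub-pixel accuracy in a real DIC pipeline.

    import numpy as np

    def zncc(subset_ref, subset_cand):
        # Zero-normalized cross-correlation between a reference subset and a
        # candidate subset of the deformed image; 1.0 means a perfect match.
        a = subset_ref - subset_ref.mean()
        b = subset_cand - subset_cand.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def match_subset(ref_img, def_img, center, half=10, search=15):
        # Integer-pixel search: slide the reference subset over a search window
        # in the deformed image and keep the offset with the highest ZNCC score.
        # Assumes the subset and the search window stay inside both images.
        y, x = center
        ref = ref_img[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        best_score, best_shift = -2.0, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = def_img[y + dy - half:y + dy + half + 1,
                               x + dx - half:x + dx + half + 1].astype(float)
                score = zncc(ref, cand)
                if score > best_score:
                    best_score, best_shift = score, (dy, dx)
        return best_shift, best_score  # displacement in pixels, correlation value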

Stereo digital image correlation (Stereo DIC) captures the surface of an object with two parallel cameras and, by combining 2D DIC with triangulation, has successfully measured out-of-plane deformation of a cantilever beam. Using the ideal pin-hole imaging model of the two cameras, known points in three-dimensional space are combined with a nonlinear least-squares method to compute the intrinsic and extrinsic parameter matrices of the cameras, which are then applied to measuring the displacement of the object surface.

The most widely used camera calibration approach takes several images and converts between spatial and image coordinates using the known geometric lengths on a calibration board together with the image lengths at the positions detected in image coordinates. Common calibration boards are divided into dot boards and checkerboard corner boards: a checkerboard board uses corner detection to find the image coordinates, while a dot board obtains them by fitting circle centers. Because the calibration board is planar, only x- and y-axis information is available when converting between spatial and image coordinates, and the z-axis coordinate is assumed to be zero. In practice, however, rotation of the board during capture makes the z-axis coordinate non-zero, so this calibration method is prone to error.

In traditional space reconstruction, the coordinate system is conventionally defined on the left camera; the information available from the calibration process is the positions of the left and right cameras and the position of the calibration tool. Because the calibration board carries no depth information, the left camera is usually taken as the main coordinate system for three-dimensional reconstruction, and since the camera's coordinate plane is parallel to its imaging plane, the left camera must be kept parallel to the object under test during measurement; otherwise the differing definitions of the coordinate systems make the results unintuitive, and the angle between the camera and the object must first be found and a coordinate transformation applied before the actual data can be obtained. As more cameras are used, these coordinate transformations become increasingly complicated and cumbersome.

Therefore, there is a need for a correction method for three-dimensional image processing that provides depth information for the correction and does not require the coordinate system to be defined on a camera; this is the problem that those skilled in the art seek to solve.

One object of the present invention is to provide a correction method for three-dimensional image processing that performs a correction containing depth information by means of a correction block, reducing the errors produced during correction. Another object of the present invention is to provide a correction method for three-dimensional image processing that, when reconstructing the three-dimensional space, sets the image of the correction block as the origin of the world coordinate system, so that when multiple imaging devices capture images simultaneously there is no need to define the coordinate system on a camera, no need for extensive coordinate-system transformations, and no need to keep the imaging devices parallel to the object under test.

To achieve the above objects, the present invention provides a correction method for three-dimensional image processing in which a first imaging device and a second imaging device are provided. The first imaging device locks onto an image area at a first imaging angle and is connected to a host; the second imaging device locks onto the image area at a second imaging angle and is connected to the host; and a correction block is arranged in the image area. The steps include: the host obtains, through the first imaging device and the second imaging device, a first image and a second image of the image area, the first image and the second image respectively containing a first image object and a second image object of the correction block; the host obtains a first parameter matrix and a second parameter matrix from the first image and the second image, respectively; the host reconstructs a three-dimensional space from the first image and the second image, the three-dimensional space containing a spatial object corresponding to the correction block; and the host corrects the three-dimensional space according to the first image object, the second image object, the first parameter matrix, and the second parameter matrix. The correction block is a rectangular cuboid, and every face of the correction block carries a checkerboard pattern corresponding to depth information; furthermore, the first parameter matrix and the second parameter matrix correspond to the device parameters of the first imaging device and the second imaging device, respectively, making the correction during three-dimensional reconstruction simpler and faster while also reducing correction error.

In one embodiment of the present invention, the step of obtaining a first parameter matrix and a second parameter matrix from the first image and the second image respectively further includes: the host uses a corner detection method to obtain the corner points of the first image object in the first image and the corner points of the second image object in the second image; the host obtains corresponding spatial coordinate information from these corner points; and the host obtains the first parameter matrix from the spatial coordinate information and the first image, and the second parameter matrix from the spatial coordinate information and the second image.

In one embodiment of the present invention, the corner detection method has the host examine the grayscale values of the first image or the second image: a point where the grayscale value changes sharply in two principal directions is a corner point, a point where it changes sharply in only one principal direction is an image edge, and a point with no change in either direction belongs to a uniform region.

In one embodiment of the present invention, in the step in which the host obtains the first parameter matrix from the spatial coordinate information and the first image and the second parameter matrix from the spatial coordinate information and the second image, the host further applies singular value decomposition to the spatial coordinate information together with the first image and the second image to solve for the first parameter matrix and the second parameter matrix.

In one embodiment of the present invention, the step in which the host reconstructs a three-dimensional space from the first image and the second image further includes: reconstructing the three-dimensional space from the first image and the second image using a digital image correlation method.

In one embodiment of the present invention, the first imaging device and the second imaging device are digital cameras or smartphones.

In one embodiment of the present invention, the step in which the host corrects the three-dimensional space through the first parameter matrix and the second parameter matrix further includes: when the host corrects the three-dimensional space, the host sets an origin of the three-dimensional space on the spatial object.

In one embodiment of the present invention, the first parameter matrix and the second parameter matrix are homography matrices.

S1~S7: steps
S31~S35: steps
10: host
20: first imaging device
22: first image
222: first image object
2222: fourth surface
2224: fifth surface
2226: sixth surface
30: second imaging device
32: second image
322: second image object
3222: seventh surface
3224: eighth surface
3226: ninth surface
40: image area
50: correction block
502: first surface
504: second surface
506: third surface
60: three-dimensional space
602: spatial object
70: corner point
72: spatial coordinate information
722: first parameter matrix
724: second parameter matrix

Figure 1: a flow chart of an embodiment of the present invention;
Figure 2: a flow chart of an embodiment of the present invention;
Figure 3: a schematic diagram of the system architecture of an embodiment of the present invention;
Figure 4A: a schematic diagram of the correction block model of an embodiment of the present invention;
Figure 4B: a schematic diagram of a conventional calibration board model;
Figure 5: a schematic diagram of the first image and the second image of an embodiment of the present invention;
Figure 6: a schematic diagram of the three-dimensional space of an embodiment of the present invention; and
Figure 7: a schematic diagram of the first parameter matrix and the second parameter matrix of an embodiment of the present invention.

To give the examiners a further understanding of the features and effects of the present invention, preferred embodiments together with detailed descriptions are provided as follows. In view of the insufficient convenience of correction in conventional three-dimensional image processing, which requires extensive coordinate-system transformations and requires the imaging devices to be placed parallel to the object under test, the present invention proposes a correction method for three-dimensional image processing to solve this problem.

The present invention improves on the previous practice of using a calibration board to correct the errors of the imaging devices required for three-dimensional reconstruction. With a custom-made correction block, the correction process gains depth information compared with previous approaches, which reduces the errors introduced by the imaging devices and makes the reconstructed three-dimensional image closer to reality.

In the following, the present invention is described in detail through various embodiments illustrated by the drawings. The inventive concept may, however, be embodied in many different forms and should not be construed as limited to the illustrative embodiments set forth herein.

First, please refer to Figure 1, which is a flow chart of an embodiment of the present invention. As shown in the figure, the correction method for three-dimensional image processing of the present invention includes the following steps:

Step S1: A host obtains, through a first imaging device and a second imaging device, a first image and a second image of an image area, the first image and the second image respectively containing a first image object and a second image object of the correction block.

Step S3: The host obtains a first parameter matrix and a second parameter matrix from the first image and the second image, respectively.

Step S5: The host reconstructs a three-dimensional space from the first image and the second image, the three-dimensional space containing a spatial object that corresponds to the correction block.

Step S7: The host corrects the three-dimensional space according to the first image object, the second image object, the first parameter matrix, and the second parameter matrix.

The system architecture of an embodiment of the present invention, as shown in Figure 3, includes: a host 10; a first imaging device 20 that locks onto an image area 40 at a first imaging angle and is connected to the host 10; a second imaging device 30 that locks onto the image area 40 at a second imaging angle and is connected to the host 10; and a correction block 50 arranged in the image area 40, wherein a first image 22 is transmitted from the first imaging device 20 to the host 10 and a second image 32 is transmitted from the second imaging device 30 to the host 10.

Next, each step is described in detail as follows. In step S1, as shown in Figure 1, a host 10 obtains, through a first imaging device 20 and a second imaging device 30, a first image 22 and a second image 32 of an image area 40, the first image 22 and the second image 32 respectively containing a first image object 222 and a second image object 322 of the correction block 50. As shown in Figure 4A, every face of the correction block 50 carries a checkerboard pattern, and the checkerboard pattern corresponds to depth information, namely the depth of field of the image.
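As an illustrative aside (an assumption for this example, not text from the patent), the sketch below generates world coordinates for the checkerboard corners on three mutually perpendicular faces of a cube-shaped correction block, with the shared corner taken as the world origin; unlike a flat calibration board, two of the faces contribute points whose depth coordinate is non-zero.

    import numpy as np

    def cube_corner_world_points(squares=7, size=10.0):
        # Inner checkerboard corners on three visible faces of a cube whose
        # shared corner is the world origin; units follow `size` (e.g. mm).
        # Each face contributes (squares - 1) ** 2 points.
        idx = np.arange(1, squares) * size
        a, b = np.meshgrid(idx, idx)
        a, b = a.ravel(), b.ravel()
        zeros = np.zeros_like(a)
        top   = np.column_stack([a, b, zeros])   # z = 0 face
        front = np.column_stack([a, zeros, b])   # y = 0 face, z varies
        side  = np.column_stack([zeros, a, b])   # x = 0 face, z varies
        return np.vstack([top, front, side])     # shape (3 * (squares - 1) ** 2, 3)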

In step S3, as shown in Figure 1, the host 10 obtains a first parameter matrix 722 and a second parameter matrix 724 from the first image and the second image, respectively, where the first parameter matrix 722 and the second parameter matrix 724 correspond to the device parameters of the first imaging device 20 and the second imaging device 30, respectively.

In step S5, the host 10 reconstructs a three-dimensional space 60 from the first image 22 and the second image 32, the three-dimensional space 60 containing a spatial object 602 that corresponds to the correction block 50, where the host 10 reconstructs the three-dimensional space 60 through a digital image correlation method.

In step S7, the host 10 corrects the three-dimensional space 60 according to the first image object 222, the second image object 322, the first parameter matrix 722, and the second parameter matrix 724, and when the host 10 corrects the three-dimensional space 60, the host 10 sets an origin of the three-dimensional space 60 on the spatial object 602.

Step S3 further includes steps S31, S33, and S35, described as follows. Refer next to Figure 2, which is a flow chart of another embodiment of the present invention. As shown in the figure, this embodiment is based on the above embodiment, and the steps of the correction method for three-dimensional image processing of the present invention are:

Step S31: The host uses a corner detection method to obtain the corner points 70 of the first image object 222 in the first image 22 and the corner points 70 of the second image object 322 in the second image 32.

Step S33: The host obtains corresponding spatial coordinate information 72 from the corner points 70.

Step S35: The host obtains the first parameter matrix 722 from the spatial coordinate information 72 and the first image 22, and obtains the second parameter matrix 724 from the spatial coordinate information 72 and the second image 32.

In step S31, the host 10 uses a corner detection method to obtain the corner points 70 of the first image object 222 in the first image 22 and the corner points 70 of the second image object 322 in the second image 32. The corner detection method has the host 10 examine the grayscale values of the first image 22 or the second image 32: a point where the grayscale value changes sharply in two principal directions is a corner point 70, a point where it changes sharply in only one principal direction is an image edge, and a point with no change in either direction belongs to a uniform region.

In step S33, the host 10 obtains corresponding spatial coordinate information 72 from the corner points 70.

In step S35, the host 10 obtains the first parameter matrix 722 from the spatial coordinate information 72 and the first image 22, and obtains the second parameter matrix 724 from the spatial coordinate information 72 and the second image 32, where the first parameter matrix 722 and the second parameter matrix 724 correspond to the device parameters of the first imaging device 20 and the second imaging device 30, respectively.

Next, the following practical example illustrates an actual use of the correction for three-dimensional image processing of this embodiment, though the invention is not limited thereto.

In the following example, a digital camera is used as an illustration of the imaging device; practical applications are not limited to this.

In the following example, the first imaging angle is 0 degrees relative to the correction block, as an illustration of the imaging device; practical applications are not limited to this.

First, a left camera (first imaging device 20) and a right camera (second imaging device 30) each photograph a correction block 50 in an area (image area 40) and upload their images (first image 22, second image 32) to a host 10, which uses a corner detection method to detect the correction block 50 in the images.

As shown in Figures 4A and 5, the first image object 222 and the second image object 322 are images of the correction block 50 captured at the first imaging angle (0 degrees relative to the correction block) and the second imaging angle (45 degrees relative to the correction block), respectively. The first surface 502 of the correction block 50 corresponds to the fourth surface 2222 of the first image object 222 and the seventh surface 3222 of the second image object 322; the second surface 504 of the correction block 50 corresponds to the fifth surface 2224 of the first image object 222 and the eighth surface 3224 of the second image object 322; and the third surface 506 of the correction block 50 corresponds to the sixth surface 2226 of the first image object 222 and the ninth surface 3226 of the second image object 322.

Since the corner detection target in this embodiment is the checkerboard correction block 50, the corner detection method of this embodiment relies on the observation that a feature point whose grayscale value changes sharply in two directions is very likely a corner point 70. If the grayscale values change in both directions within a window moved in every direction, the point is considered a corner point 70; if neither direction changes, it is a uniform region; if only one direction changes, it is judged to be an image edge. Let I(x, y) be the grayscale value of the image to be examined, (x, y) a point within the neighborhood window, and E(u, v) the sum of squared pixel differences before and after translating the window, expressed by equation (1).

E(u, v) = \sum_{x}\sum_{y} w(x, y)\,[I(x+u,\, y+v) - I(x,\, y)]^{2}    (1)

where w(x, y) is the weighting function of the window; the window size can be changed according to the size of the image, and a larger weighting-function coefficient gives the grayscale-value difference more weight in the result. A mean function or a Gaussian function is usually used to filter the locations of the corner points. A first-order Taylor expansion then gives equation (2):

I(x+u,\, y+v) = I(x,\, y) + I_x(x,\, y)\,u + I_y(x,\, y)\,v    (2)

Substituting equation (2) into equation (1) and rearranging gives equation (3):

E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M(x, y) \begin{bmatrix} u \\ v \end{bmatrix}    (3)

where M(x, y) is given by equation (4):

M(x, y) = \sum_{x}\sum_{y} w(x, y) \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}    (4)

Considering that corner boundaries are usually aligned with the coordinate axes, as shown in Figure 3-10, only the left and upper sides appear within the window: on the left side I_x is large and I_y is small, while on the upper side I_y is large and I_x is small, so M can be simplified to equation (5):

M \approx \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}    (5)

The matrix M can be compared to a covariance matrix, which represents the correlations among multi-dimensional random variables; its diagonal elements are the variances of the individual dimensions, and diagonalizing it reduces the correlation between dimensions and identifies where the eigenvalues are large. M can thus be regarded as the covariance matrix of a two-dimensional random distribution, and corner points are judged from the two eigenvalues obtained by diagonalizing it.

Finally, to judge whether a point is a corner, it suffices to compute the approximate corner response value R as in equation (6):

R = |M| - \alpha\,(\mathrm{trace}\,M)^2 = \lambda_1 \lambda_2 - \alpha\,(\lambda_1 + \lambda_2)^2    (6)

In equation (6), α ∈ [0.04, 0.06]; when the response value exceeds a set threshold, the point is judged to be a corner point.
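A minimal Python sketch of the corner-response computation described by equations (1) through (6); the gradient operator, Gaussian window, and relative threshold are implementation assumptions rather than requirements of the method.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def harris_response(gray, sigma=1.0, alpha=0.05):
        # Gradients I_x, I_y and the windowed matrix M of equation (4),
        # with a Gaussian window playing the role of the weight w(x, y).
        gray = gray.astype(float)
        Iy, Ix = np.gradient(gray)          # axis 0 = y (rows), axis 1 = x (cols)
        Ixx = gaussian_filter(Ix * Ix, sigma)
        Iyy = gaussian_filter(Iy * Iy, sigma)
        Ixy = gaussian_filter(Ix * Iy, sigma)
        det = Ixx * Iyy - Ixy * Ixy         # lambda_1 * lambda_2
        trace = Ixx + Iyy                   # lambda_1 + lambda_2
        return det - alpha * trace ** 2     # response R of equation (6)

    def detect_corners(gray, rel_threshold=0.01):
        # Keep pixels whose response exceeds a fraction of the maximum; a
        # practical detector would also apply non-maximum suppression.
        R = harris_response(gray)
        ys, xs = np.nonzero(R > rel_threshold * R.max())
        return np.column_stack([xs, ys])    # (u, v) image coordinates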

After the corner points have been detected by the corner detection method and the spatial coordinate information corresponding to the corner points has been obtained, the spatial coordinate information is converted into the first parameter matrix of the left camera and the second parameter matrix of the right camera.

This embodiment uses a correction block instead of a calibration board. Through the pinhole imaging principle and the relationship between camera coordinates and world coordinates, the first parameter matrix and the second parameter matrix of the left and right cameras can be obtained using equations (7) through (11).

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} R \mid T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}    (7)

where K is the camera device parameter matrix, R is the rotation matrix, T is the translation matrix, (u, v, 1)^T are the image coordinates, and (X_w, Y_w, Z_w, 1)^T are the world coordinates; equation (7) can then be rewritten as equation (8).

s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}, \qquad H = K \begin{bmatrix} R \mid T \end{bmatrix}    (8)

H is called the homography matrix and has size 3×4. Using the image coordinate positions (u_i, v_i, 1)^T obtained from actual measurement together with the known world coordinate positions (X_{wi}, Y_{wi}, Z_{wi}, 1)^T, and given N such coordinate points, equation (8) can be rewritten as equation (9).

s_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = H \begin{bmatrix} X_{wi} \\ Y_{wi} \\ Z_{wi} \\ 1 \end{bmatrix}, \qquad i = 1, \ldots, N    (9)

Equation (9) can then be expanded and rewritten as equation (10).

u_i = \frac{h_{11} X_{wi} + h_{12} Y_{wi} + h_{13} Z_{wi} + h_{14}}{h_{31} X_{wi} + h_{32} Y_{wi} + h_{33} Z_{wi} + h_{34}}, \qquad
v_i = \frac{h_{21} X_{wi} + h_{22} Y_{wi} + h_{23} Z_{wi} + h_{24}}{h_{31} X_{wi} + h_{32} Y_{wi} + h_{33} Z_{wi} + h_{34}}    (10)

Writing equation (10) in matrix form and rearranging gives equation (11).

L\,\bar{h} = 0    (11)

where L and \bar{h} are given by equations (12) and (13), respectively:

L = \begin{bmatrix}
X_{w1} & Y_{w1} & Z_{w1} & 1 & 0 & 0 & 0 & 0 & -u_1 X_{w1} & -u_1 Y_{w1} & -u_1 Z_{w1} & -u_1 \\
0 & 0 & 0 & 0 & X_{w1} & Y_{w1} & Z_{w1} & 1 & -v_1 X_{w1} & -v_1 Y_{w1} & -v_1 Z_{w1} & -v_1 \\
\vdots & & & & & & & & & & & \vdots \\
X_{wN} & Y_{wN} & Z_{wN} & 1 & 0 & 0 & 0 & 0 & -u_N X_{wN} & -u_N Y_{wN} & -u_N Z_{wN} & -u_N \\
0 & 0 & 0 & 0 & X_{wN} & Y_{wN} & Z_{wN} & 1 & -v_N X_{wN} & -v_N Y_{wN} & -v_N Z_{wN} & -v_N
\end{bmatrix}    (12)

\bar{h} = \begin{bmatrix} h_{11} & h_{12} & h_{13} & h_{14} & h_{21} & h_{22} & h_{23} & h_{24} & h_{31} & h_{32} & h_{33} & h_{34} \end{bmatrix}^{T}    (13)

Then, by solving equation (11) with singular value decomposition (SVD), the first parameter matrix and the second parameter matrix of the left camera and the right camera can be obtained.
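A hedged sketch of this step in Python: stack the rows of L as in equation (12) and take the right singular vector associated with the smallest singular value as the entries of H. The function name and array layout are assumptions for illustration; at least six well-distributed, non-coplanar correspondences are needed for the twelve unknowns, and the corners on the three visible faces of the correction block provide many more.

    import numpy as np

    def estimate_parameter_matrix(world_pts, image_pts):
        # world_pts: (N, 3) world coordinates measured on the correction block.
        # image_pts: (N, 2) detected corner positions (u, v) in one camera image.
        # Returns the 3x4 parameter matrix H of equation (8).
        rows = []
        for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
        L = np.asarray(rows, dtype=float)    # (2N, 12) matrix of equation (12)
        _, _, Vt = np.linalg.svd(L)
        h = Vt[-1]                           # null-space direction, equation (13)
        return h.reshape(3, 4)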

Finally, as shown in Figure 6, the host reconstructs a three-dimensional space 60 from the images captured by the two cameras according to the stereo digital image correlation method and corrects the three-dimensional space 60 through the obtained first parameter matrix 722, and the host 10 sets an origin of the three-dimensional space 60 on the spatial object 602.
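A minimal sketch (an assumed implementation, not code from the patent) of how a matched pixel pair from the two cameras can be triangulated with the estimated parameter matrices; because both matrices were solved against corners on the correction block, the reconstructed point is already expressed in the world frame whose origin lies on the block.

    import numpy as np

    def triangulate(H1, H2, uv1, uv2):
        # Linear triangulation: each camera contributes the two rows
        # u * H[2, :] - H[0, :] and v * H[2, :] - H[1, :]; the homogeneous
        # world point is the null-space direction of the stacked system.
        (u1, v1), (u2, v2) = uv1, uv2
        A = np.array([
            u1 * H1[2] - H1[0],
            v1 * H1[2] - H1[1],
            u2 * H2[2] - H2[0],
            v2 * H2[2] - H2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        Xh = Vt[-1]
        return Xh[:3] / Xh[3]   # (X, Y, Z) in the correction-block coordinate system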

In the embodiments described above, the method of the present invention differs from previous practice. Conventional calibration uses a calibration board, which carries only planar spatial information, as shown in Figure 4B: when computing a camera's parameter matrix, the depth coordinate of the photographed calibration board is usually assumed to be zero, and the camera is then calibrated from several images of the board in different poses. The present invention instead uses a correction block. Because the correction block carries three-dimensional spatial information, the depth coordinate need not be assumed to be zero, and because the three visible faces of the correction block each carry a checkerboard, only one photograph is needed to calibrate the camera's parameter matrix. Since the depth coordinate is not zero, the error obtained by reprojecting the corrected, reconstructed three-dimensional space onto the two-dimensional image coordinates can be far smaller than with a calibration board. Moreover, the present invention establishes the coordinate system on the correction block, so the parameter matrix of a camera at any angle can be calibrated as long as that camera captures the checkerboard on the correction block. Conventional camera calibration must convert, one by one, the rotation and translation relationships between cameras into a main coordinate system, which makes the calibration steps very cumbersome and prone to errors during coordinate-system conversion; establishing the coordinate system directly on the object under test avoids the need to use one camera's coordinate system as the main coordinate system when calibrating multiple cameras.
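To illustrate the reprojection check mentioned above, a short sketch (an illustrative assumption, not part of the original disclosure) that projects the known world points back through an estimated parameter matrix and reports the RMS pixel error against the detected corners:

    import numpy as np

    def reprojection_error(H, world_pts, image_pts):
        # Project world points with the 3x4 parameter matrix and measure the
        # RMS pixel distance to the detected corner positions.
        Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])  # homogeneous
        proj = (H @ Xh.T).T
        uv = proj[:, :2] / proj[:, 2:3]
        return float(np.sqrt(np.mean(np.sum((uv - image_pts) ** 2, axis=1))))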

Accordingly, the present invention is novel, inventive, and industrially applicable, and should undoubtedly satisfy the patent application requirements of the Patent Act; an invention patent application is therefore filed in accordance with the law, and it is respectfully requested that the patent be granted at an early date.

The above are merely preferred embodiments of the present invention and are not intended to limit the scope of its implementation; all equivalent changes and modifications made in accordance with the shape, structure, features, and spirit described in the claims of the present invention shall be included within the scope of the patent application of the present invention.

S1~S7: steps

Claims (8)

A correction method for three-dimensional image processing, in which a first imaging device and a second imaging device are provided, the first imaging device locking onto an image area at a first imaging angle and being connected to a host, the second imaging device locking onto the image area at a second imaging angle and being connected to the host, and a correction block being arranged in the image area, the method comprising: the host obtaining, through the first imaging device and the second imaging device, a first image and a second image of the image area, the first image and the second image respectively containing a first image object and a second image object of the correction block; the host obtaining a first parameter matrix and a second parameter matrix from the first image and the second image, respectively; the host reconstructing a three-dimensional space from the first parameter matrix and the second parameter matrix, the three-dimensional space containing a spatial object corresponding to the correction block; and the host correcting the three-dimensional space according to the first image object, the second image object, the first parameter matrix, and the second parameter matrix; wherein the correction block is a rectangular cuboid, every face of the correction block carries a checkerboard pattern, the checkerboard pattern corresponds to depth information, and the first parameter matrix and the second parameter matrix correspond to the device parameters of the first imaging device and the second imaging device, respectively.

The correction method for three-dimensional image processing as described in claim 1, wherein the step of obtaining a first parameter matrix and a second parameter matrix from the first image and the second image respectively further comprises: the host using a corner detection method to obtain the corner points of the first image object in the first image and the corner points of the second image object in the second image; the host obtaining corresponding spatial coordinate information from the corner points; and the host obtaining the first parameter matrix from the spatial coordinate information and the first image, and obtaining the second parameter matrix from the spatial coordinate information and the second image.

The correction method for three-dimensional image processing as described in claim 3, wherein the corner detection method has the host determine the grayscale values of the first image or the second image: a point where the grayscale value changes sharply in two principal directions is the corner point, a point where it changes sharply in only one principal direction is an image edge, and a point with no change in either direction is a uniform region.

The correction method for three-dimensional image processing as described in claim 3, wherein, in the step in which the host obtains the first parameter matrix from the spatial coordinate information and the first image and obtains the second parameter matrix from the spatial coordinate information and the second image, the host further uses singular value decomposition on the spatial coordinate information together with the first image and the second image to obtain the first parameter matrix and the second parameter matrix.

The correction method for three-dimensional image processing as described in claim 1, wherein the step in which the host reconstructs a three-dimensional space from the first image and the second image further comprises: reconstructing the three-dimensional space from the first image and the second image using a digital image correlation method.

The correction method for three-dimensional image processing as described in claim 1, wherein the first imaging device and the second imaging device are digital cameras or smartphones.

The correction method for three-dimensional image processing as described in claim 1, wherein the step in which the host corrects the three-dimensional space through the first parameter matrix and the second parameter matrix further comprises: when the host corrects the three-dimensional space, the host sets an origin of the three-dimensional space on the spatial object.

The correction method for three-dimensional image processing as described in claim 1, wherein the first parameter matrix and the second parameter matrix are homography matrices.
TW112105240A 2023-02-14 2023-02-14 Correction method for 3d image processing TWI814680B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW112105240A TWI814680B (en) 2023-02-14 2023-02-14 Correction method for 3d image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW112105240A TWI814680B (en) 2023-02-14 2023-02-14 Correction method for 3d image processing

Publications (1)

Publication Number Publication Date
TWI814680B true TWI814680B (en) 2023-09-01

Family

ID=88965939

Family Applications (1)

Application Number Title Priority Date Filing Date
TW112105240A TWI814680B (en) 2023-02-14 2023-02-14 Correction method for 3d image processing

Country Status (1)

Country Link
TW (1) TWI814680B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103315739A (en) * 2013-05-22 2013-09-25 华东师范大学 Magnetic resonance imaging method and system for eliminating motion artifact based on dynamic tracking technology
TWI592020B (en) * 2016-08-23 2017-07-11 國立臺灣科技大學 Image correction method of projector and image correction system
TWI712310B (en) * 2019-11-15 2020-12-01 大陸商南京深視光點科技有限公司 Detection method and detection system for calibration quality of stereo camera
EP3776485B1 (en) * 2018-09-26 2022-01-26 Coherent Logix, Inc. Any world view generation
