TWI659390B - Data fusion method for camera and laser rangefinder applied to object detection - Google Patents

Data fusion method for camera and laser rangefinder applied to object detection

Info

Publication number
TWI659390B
TWI659390B (application TW106128504A)
Authority
TW
Taiwan
Prior art keywords
image
camera
laser
data
laser rangefinder
Prior art date
Application number
TW106128504A
Other languages
Chinese (zh)
Other versions
TW201913574A (en)
Inventor
蕭瑛星
梁珮蓉
Original Assignee
國立彰化師範大學
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國立彰化師範大學
Priority to TW106128504A
Publication of TW201913574A
Application granted
Publication of TWI659390B


Abstract

A data fusion method for a camera and a laser rangefinder, applied to object detection, combines a laser rangefinder with image processing. The image processing fuses geometric features with the laser ranging data to reconstruct the 3D dimensions of an object.

Description

Data fusion method for camera and laser rangefinder applied to object detection

For an autonomous robot to manipulate objects or perform tasks in an unknown environment, the three-dimensional scene of the robot's surroundings must be reconstructed: besides the ability to move, the robot must be equipped with a 3D reconstruction and perception system that recognizes the objects and scenes around it. This is a fundamental problem in machine vision research, and stereo camera systems have been designed to recover the 3D information of a scene. The present invention is a 3D image processing method that reconstructs a scene by fusing laser ranging with image features.

Conventional 3D image capture offers a large measurement range, high resolution, short acquisition time, and color information, and the acquisition hardware keeps shrinking in size. The quality of scene depth reconstructed by image processing, however, depends on several factors, such as lighting conditions, the texture of objects in the scene, and the complexity of those objects. Corresponding-point search between stereo images usually works well along edges and in textured regions, but fails in regions without image features, so stereo imaging cannot reliably recover the surface depth of untextured objects. A laser rangefinder, by contrast, measures point distances directly; combined with line or area scanning it yields two- or three-dimensional range data, unaffected by ambient illumination.

Cameras and laser rangefinders offer different advantages for different tasks. A camera can identify the geometry or color of objects, while a laser rangefinder readily provides depth information. A three-dimensional laser rangefinder can measure 3D information of a space directly, but it is expensive; the present invention therefore fuses data from a cheaper 1D laser rangefinder with camera images to reconstruct the 3D information of a scene.

Because the position and shape of objects placed on a production line are random, the focus of the invention is how to fuse laser ranging scans with images to reconstruct the 3D size and position of each object. In the machine tool industry, combining the invention with a robotic arm can improve production-line flow and the scope for automation, achieving high throughput and high precision while reducing labor costs. The distances obtained by laser scanning can be converted geometrically into the distance between each measured point and a reference plane, while image processing yields the edge features of the scene. If the correspondence between these edge feature points and the laser scan points can be established, the two can be combined into 3D feature points for reconstructing the 3D scene. The invention therefore first examines methods for extracting image features and then proposes a method for fusing laser scan data with those feature points.

The edge detection mentioned above identifies points in an image where brightness changes sharply; such significant changes in image attributes usually reflect important events and properties of the scene. These include (1) depth discontinuities, (2) surface discontinuities, (3) changes in material properties, and (4) changes in scene illumination. In grayscale images, edges are typically identified from the image gradient, and detection methods fall broadly into two families: local extrema of the first derivative and zero crossings of the second derivative.
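For illustration, both detector families can be sketched in a few lines of Python with OpenCV (the input file name and thresholds below are assumptions, not from the patent):

```python
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file

# First-derivative family: gradient magnitude from Sobel responses,
# with a simple threshold standing in for a local-extrema test.
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
grad_mag = np.hypot(gx, gy)
edges_first = grad_mag > 0.5 * grad_mag.max()

# Second-derivative family: Laplacian of Gaussian, then zero crossings
# detected as sign changes between diagonal neighbors.
log = cv2.Laplacian(cv2.GaussianBlur(img, (5, 5), 1.4), cv2.CV_64F)
edges_second = (np.sign(log[:-1, :-1]) * np.sign(log[1:, 1:])) < 0
```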

Point, line, and circle detection algorithms are widely studied; the most common is the Hough transform. Most circle-detection algorithms use feature points on the circumference to detect concentric circles, segmenting the different circular regions with region-segmentation methods to obtain the center and radius. A typical approach finds two feature points on a circle, connects them to obtain a chord, and exploits the geometric property that the perpendicular bisector of any chord passes through the circle's center.
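A minimal Hough-transform circle detection sketch with OpenCV (the input file and detector parameters are assumptions):

```python
import cv2
import numpy as np

img = cv2.imread("parts.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
blur = cv2.medianBlur(img, 5)                        # suppress speckle before voting
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30, minRadius=10, maxRadius=120)
if circles is not None:
    for cx, cy, r in np.round(circles[0]).astype(int):
        print(f"center=({cx},{cy}) radius={r}")
```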

Appropriate constraints must therefore be set for each feature-point algorithm, which limits its generality. Feature-point-based algorithms also have poor noise immunity and demand heavy preprocessing: noise, distortion, or broken edges increase both the complexity and the running time of the computation.

Some methods calibrate with correction patterns placed at different positions together with the associated physical constraints, integrating the features into a two-dimensional plane so as to minimize the distance between features from different sensors. Other methods detect a specific pattern in each sensor during an obstacle-based correction procedure and match them subject to physical constraints; using a calibration object with a CAD model allows matching to be performed from a single frame. Building on triangular-pattern correction, a similar system based on circular-pattern correction with a monocular camera has also been proposed.

Most of the above approaches calibrate with a specific pattern, which involves definitions supplied by the operator or user. Calibration therefore cannot be performed at an arbitrary time or place; it is restricted to particular moments and locations, often under special conditions or with manual intervention.

Three-dimensional measurement systems that solve the above problems are very expensive and architecturally complex, and hence not widely deployed.

The present invention therefore provides a data fusion method for a camera and a laser rangefinder applied to object detection, using a pitch-actuated laser rangefinder (PALRF) combined with a camera to reconstruct three-dimensional depth images. In the PALRF technique, a one-dimensional laser rangefinder is mounted on the axis of a pitch actuator; at each set angular increment the rangefinder captures one line of scan depth data, and these vectors are projected into a local coordinate system to produce a 3D image.

The data fusion method of the invention uses one laser rangefinder and one camera. It covers image capture, image processing, and edge extraction, as well as line-plane correction of the laser rangefinder and an algorithm for projection into three-dimensional geometric space. The image processing relies mainly on grayscale adjustment, binarization, morphological operations, and image segmentation to identify and analyze the objects in an image. The laser ranging measurements are converted and corrected so that they can be fused with the geometric features obtained from image processing. By finding the correspondence between the laser rangefinder scan lines and the camera image, the image features matching the laser scan points are identified, and these matched images are compared with the engineering drawings of the object to compute parameters such as the surface contour, center, and depth. The resulting fusion algorithm can reconstruct the 3D size of an object from a 2D object image and 1D laser ranging data.

1‧‧‧Image reading
2‧‧‧RGB-to-HSV color conversion
3‧‧‧Threshold selection
4‧‧‧Binarization
5‧‧‧Morphological operations
6‧‧‧Image enhancement
8‧‧‧Contour tracking
9‧‧‧Analysis

Figure 1: Image processing flowchart
Figure 2: Relationship between image projection and laser ranging
Figure 3: Schematic of three-dimensional scanning
Figure 4: Contour-line schematic
Figure 5: Relationship of projected points in three-dimensional space
Figure 6: Method for reconstructing a 3D scene by combining a camera and a laser rangefinder
Figure 7: Measurement scene
Figure 8: Height curves of the measurement scene expressed in α and γ
Figure 9: Contour map of the measurement scene expressed in α and γ
Figure 10: Height curves of the measurement scene expressed in world coordinates
Figure 11: Contour map of the measurement scene expressed in world coordinates
Figure 12: Point cloud of the laser ranging measurement data
Figure 13: Point cloud of the fused image pixels and laser measurement data
Figure 14(a): Cross-shaped object
Figure 14(b): Angular object
Figure 14(c): Multiple-object case
Figure 14(d): Object-out-of-range case
Figure 15(a): Image processing result for the cross-shaped object
Figure 15(b): Image processing result for the angular object
Figure 15(c): Image processing result for the multiple-object case
Figure 15(d): Image processing result for the object-out-of-range case
Figure 16(a): X-direction gradient map of the cross-shaped object's grayscale image
Figure 16(b): Y-direction gradient map of the cross-shaped object's grayscale image
Figure 16(c): X-direction gradient map of the angular object's grayscale image
Figure 16(d): Y-direction gradient map of the angular object's grayscale image
Figure 16(e): X-direction gradient map of the multiple-object grayscale image
Figure 16(f): Y-direction gradient map of the multiple-object grayscale image
Figure 16(g): X-direction gradient map of the object-out-of-range grayscale image
Figure 16(h): Y-direction gradient map of the object-out-of-range grayscale image
Figure 17(a): Shape-from-shading reconstruction of the cross-shaped object
Figure 17(b): Figure 17(a) after masking with Figure 15(a)
Figure 17(c): Shape-from-shading reconstruction of the angular object
Figure 17(d): Figure 17(c) after masking with Figure 15(b)
Figure 17(e): Shape-from-shading reconstruction of the multiple-object case
Figure 17(f): Figure 17(e) after masking with Figure 15(c)
Figure 17(g): Shape-from-shading reconstruction of the object-out-of-range case
Figure 17(h): Figure 17(g) after masking with Figure 15(d)
Figure 18(a): Height curves expressed in α and γ
Figure 18(b): Height curves expressed in world coordinates
Figure 18(c): Contour map expressed in α and γ
Figure 18(d): Contour map expressed in world coordinates
Figure 19(a): Height curves expressed in α and γ
Figure 19(b): Height curves expressed in world coordinates
Figure 19(c): Contour map expressed in α and γ
Figure 19(d): Contour map expressed in world coordinates
Figure 20(a): Height curves expressed in α and γ
Figure 20(b): Height curves expressed in world coordinates
Figure 20(c): Contour map expressed in α and γ
Figure 20(d): Contour map expressed in world coordinates
Figure 21(a): Height curves expressed in α and γ
Figure 21(b): Height curves expressed in world coordinates
Figure 21(c): Contour map expressed in α and γ
Figure 21(d): Contour map expressed in world coordinates
Figure 22(a): Laser scan point cloud of the cross-shaped object
Figure 22(b): Data-fusion point cloud of the cross-shaped object
Figure 22(c): Laser scan point cloud of the angular object
Figure 22(d): Data-fusion point cloud of the angular object
Figure 22(e): Laser scan point cloud of the multiple-object case
Figure 22(f): Data-fusion point cloud of the multiple-object case
Figure 22(g): Laser scan point cloud of the object-out-of-range case
Figure 22(h): Data-fusion point cloud of the object-out-of-range case

The data fusion method for a camera and a laser rangefinder applied to object detection proceeds as follows. The image processing flow for detecting object contours is shown in Figure 1. The original scene image is read (1); illumination usually produces background images of varying brightness, so to obtain the contour features of the target the image format is converted first and color segmentation is applied through RGB-to-HSV conversion (2). A threshold is selected (3) and the image is binarized (4). To reduce the effect of uneven illumination, the binarized color-segmented image is denoised with morphological opening and closing (5), and a sharpening filter strengthens the line contours (image enhancement, 6). Finally, the Canny operator extracts the edge features, and all edge pixels are labeled by connectivity for contour tracking (8) and contour analysis (9).

As shown in Figure 2, c is the center of the camera lens. Placing the laser source at point c relates the ranging geometry of a laser beam scanned from that center point to the projection of the scene through the camera; in the figure, l denotes the projection of scene L onto the image plane. The distance z(α) obtained by laser-scanning scene L is given by the first formula:

z(α) = z(0) / cos α

where α is the laser scan angle and z(0) is the distance along the optical axis. Multiplying the value measured by the laser rangefinder by cos α therefore yields the distance of object L from the reference plane, so the relationship between the measured object L and the reference plane is given by the second formula:

P(z_i, α_i) = P_ref − LData(α_i) × cos α_i

In the second formula, i indexes the sampling points, P_ref is the distance from the laser rangefinder to the reference plane, P(z_i, α_i) is the distance from the measured point to the reference plane, and LData(α_i) is the rangefinder reading at sampling angle α_i. If the second formula is combined with the camera projection geometry to find the points of the image l corresponding to the laser scan z(α_i), the line segment l on the image plane can be fused with the length of its real-world counterpart L to reconstruct the 3D information of the scene.
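Returning to the Figure 1 flow, a short OpenCV sketch of steps (1)-(9) is given below; the input file name and HSV threshold band are assumptions:

```python
import cv2
import numpy as np

frame = cv2.imread("scene.png")                         # (1) image reading; hypothetical file
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)            # (2) RGB-to-HSV color conversion
lo = np.array([20, 60, 60])                             # (3) assumed threshold band
hi = np.array([40, 255, 255])
mask = cv2.inRange(hsv, lo, hi)                         # (4) binarization by color segmentation
k = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, k)        # (5) opening removes speckle noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, k)       #     closing fills small holes
sharpen = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], np.float32)
enhanced = cv2.filter2D(mask, -1, sharpen)              # (6) sharpening enhances line contours
edges = cv2.Canny(enhanced, 50, 150)                    # Canny edge extraction
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)  # (8) contour tracking by connectivity
for c in contours:                                      # (9) analysis: bbox, centroid, area
    m = cv2.moments(c)
    if m["m00"] > 0:
        print(cv2.boundingRect(c),
              (m["m10"] / m["m00"], m["m01"] / m["m00"]),
              cv2.contourArea(c))
```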

As shown in Figure 3, the 2D relationship between image projection and laser ranging of Figure 2 is extended to 3D. If the laser rangefinder can additionally be moved perpendicular to its scan direction, three-dimensional measurement of the scene becomes possible. As shown in Figure 3, a stepper motor drives the rangefinder through a pitch scan, so that each laser scan plane can be projected into three-dimensional space. The pitch angle γ_i driven by the stepper motor is known, and the distance P(z_i, α_i, γ_i) between the measured object S and the reference plane is given by the third formula:

P(z_i, α_i, γ_i) = P_ref − LData(α_i, γ_i) × cos α_i × cos γ_i

In this formula, γ_i is the stepper motor angle and LData(α_i, γ_i) is the rangefinder reading at sampling angles (α_i, γ_i).
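For illustration, the third formula maps directly onto a few lines of numpy; the reference distance and range readings below are placeholders, and the angle ranges are those of the measurement scene described later:

```python
import numpy as np

P_ref = 1200.0                                  # laser-to-reference-plane distance, mm (assumed)
alpha = np.deg2rad(np.linspace(-8, 32, 81))     # scan angles α_i
gamma = np.deg2rad(np.linspace(-18, 18, 73))    # pitch angles γ_i
A, G = np.meshgrid(alpha, gamma)                # one (γ_i, α_i) cell per range sample
LData = np.full_like(A, 1000.0)                 # placeholder rangefinder readings, mm
P = P_ref - LData * np.cos(A) * np.cos(G)       # third formula: height above the reference plane
```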

After scanning an object with the laser rangefinder in the manner of Figure 3, the measurement data are converted with the third formula and mapped onto the grid defined by α_i and γ_i, yielding the object's contour lines as shown in Figure 4. Dense contour lines indicate large curvature changes on the object's surface. The results of gradient computation and edge detection on the object's image can then be fused with these contour lines to find the points of the image corresponding to the object's laser scan points.

In general, the laser source cannot coincide with the center of the camera lens; there is an offset b between them, as shown in Figure 5. Point O is the camera projection center and the origin of the camera coordinate system, the baseline b between the camera and the laser rangefinder is known, and the Z axis coincides with the camera's optical axis, so the image plane lies at the focal length f. The target point P has coordinates (X_O, Y_O, Z_O) in the camera frame and projects onto the image plane at the two-dimensional point p(x, y).

From the pinhole imaging principle, the fourth formula is obtained:

x / f = X_O / Z_O,  y / f = Y_O / Z_O

Using trigonometry, with the laser offset from the camera center by the baseline b along the X axis and scanning at angle α from the optical axis, the fifth formula is obtained:

tan α = (X_O − b) / Z_O

Combining the fourth and fifth formulas gives the sixth formula:

tan α = (x·Z_O / f − b) / Z_O

which rearranges into the seventh formula:

Z_O = f·b / (x − f·tan α)

The three-dimensional coordinates P(X_O, Y_O, Z_O) of point P then follow as the eighth formula:

P(X_O, Y_O, Z_O) = (x·Z_O / f, y·Z_O / f, Z_O),  with Z_O = f·b / (x − f·tan α)

In these formulas, α is the scan angle of the laser rangefinder in the X direction, and the x, y values of point p are known from the camera parameters. The focal length f is computed from the ninth formula:

f = x·l·cos α / (b + l·sin α)

where l is the value measured by the laser rangefinder. Rearranging gives the tenth formula:

P(X_O, Y_O, Z_O) = (b + l·sin α, y·l·cos α / f, l·cos α)
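As a check on this derivation, a small Python sketch under the same sign conventions (laser offset +b along X, formulas as written above) recovers the 3D point from an image point, scan angle, and laser reading; it is an illustration of the geometry, not a verbatim implementation from the patent:

```python
import numpy as np

def point_from_pixel_and_range(x, y, alpha, l, b, f=None):
    """Recover P = (X_O, Y_O, Z_O) from image point p(x, y), scan angle alpha
    (radians), laser reading l, and baseline b, per the formulas above.
    If f is None it is estimated from the laser reading (ninth formula)."""
    if f is None:
        f = x * l * np.cos(alpha) / (b + l * np.sin(alpha))  # ninth formula
    Z = f * b / (x - f * np.tan(alpha))                      # seventh formula
    return x * Z / f, y * Z / f, Z                           # eighth formula

# Consistency check against the tenth formula: both routes give the same point.
x, y, alpha, l, b = 0.8, 0.5, np.deg2rad(12.0), 950.0, 60.0  # illustrative values
f = x * l * np.cos(alpha) / (b + l * np.sin(alpha))
print(point_from_pixel_and_range(x, y, alpha, l, b, f))
print((b + l * np.sin(alpha), y * l * np.cos(alpha) / f, l * np.cos(alpha)))
```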

As summarized in Figure 6, the method of reconstructing a 3D scene by combining the camera and laser rangefinder is as follows: the distance L(z, α, γ) measured by laser ranging, together with the known α and γ, yields the contour lines of the scene via the third formula; the scene image captured by the camera is processed to obtain the edge lines of each object; and once the points on the contour lines corresponding to the edge features are found, the tenth formula reconstructs the 3D scene.

Given the parameters of the camera and the laser rangefinder, such as the camera focal length f and the resolutions of the rangefinder and the micro-stepping motor, the method above fuses the camera image with the rangefinder data: first acquire the current position of the micro-stepping motor, the distance measured by the rangefinder, and the camera image, then search for corresponding points to obtain the fused data.
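A hypothetical acquisition loop sketching these three inputs; StepperMotor and Rangefinder below are stand-in stubs, not a real device API, and the step counts are assumptions:

```python
import cv2

class StepperMotor:                       # stub standing in for the real motor driver
    def __init__(self, start_deg=-18.0, step_deg=0.1):
        self.angle, self.step_deg = start_deg, step_deg
    def position(self):
        return self.angle
    def step(self):
        self.angle += self.step_deg

class Rangefinder:                        # stub standing in for the real 1D rangefinder API
    def scan_line(self):
        return [1000.0] * 400             # placeholder readings, mm

motor, lrf = StepperMotor(), Rangefinder()
cam = cv2.VideoCapture(0)                 # camera handle; device index is an assumption

samples = []
for _ in range(360):                      # 360 pitch steps covering ±18° at 0.1° (assumed)
    gamma = motor.position()              # current micro-stepping motor position (pitch γ)
    line = lrf.scan_line()                # one line of laser range readings over α
    ok, frame = cam.read()                # camera image for this pitch step
    samples.append((gamma, line, frame if ok else None))
    motor.step()
cam.release()
```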

Because the actual grid cell (ΔX, ΔY) onto which the laser scan samples project in world coordinates can be expressed through the scan angle α and pitch angle γ as ΔX = Ldata × (tan(α_{i+1}) − tan(α_i)) and ΔY = Ldata × (tan(γ_{i+1}) − tan(γ_i)), the real grid between laser scan points is equidistant in neither length ΔX nor width ΔY. To place the scanned values Ldata at (X, Y) coordinates, the scan coordinates (α, γ) must first be converted into spatial coordinates (X, Y). The relationship between image coordinates and real-world coordinates (X, Y) is obtained from the ratio between the size of a reference object and its size in image pixels; once the image coordinates (x, y) are converted into spatial coordinates (X, Y), the image pixels can be matched with the laser scan range values.
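A sketch of this regridding under one consistent reading of the ΔX, ΔY relations (X = Ldata·tan α, Y = Ldata·tan γ), resampling the scan onto a uniform world grid with SciPy; angle ranges and readings are placeholders:

```python
import numpy as np
from scipy.interpolate import griddata

alpha = np.deg2rad(np.linspace(-8, 32, 81))    # scan angles α (range from the text)
gamma = np.deg2rad(np.linspace(-18, 18, 73))   # pitch angles γ (range from the text)
A, G = np.meshgrid(alpha, gamma)
Ldata = np.full_like(A, 1000.0)                # placeholder range readings, mm
P_ref = 1200.0                                 # assumed laser-to-reference-plane distance, mm

X = Ldata * np.tan(A)                          # world X per sample (consistent with ΔX above)
Y = Ldata * np.tan(G)                          # world Y per sample (consistent with ΔY above)
Z = P_ref - Ldata * np.cos(A) * np.cos(G)      # height above the reference plane (third formula)

xi = np.linspace(X.min(), X.max(), 564)        # uniform world grid, 564×456 as in the text
yi = np.linspace(Y.min(), Y.max(), 456)
XI, YI = np.meshgrid(xi, yi)
Z_world = griddata((X.ravel(), Y.ravel()), Z.ravel(), (XI, YI), method="linear")
```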

In the measurement scene of Figure 7, a rectangular object is placed in front of the reference plate; the scan angle α of the laser rangefinder ranges from −8° to +32°, and the pitch angle γ driven by the stepper motor ranges over ±18°. The measured laser distances are first expanded to 564×456 samples by one-dimensional interpolation and then converted with the third formula to obtain height curves referenced to the reference plane, shown in Figure 8 as the height curves of the measurement scene expressed in α and γ. Figure 8 contains 564 laser distance curves, one per scan line, with the α angle along the x axis and the γ angle along the y axis; with the reference-plane distance set to 0, the trajectory of each scan line yields the distance P(z_i, α_i, γ_i) between each point defined by α and γ in the scene and the reference plane. Processing the measured distances L(z, α, γ) with the third formula and plotting them over the plane coordinates (α, γ) gives the contour map of the measurement scene in Figure 9; the dense contour regions mark large height changes, i.e., the edge lines of the object.
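A minimal sketch of the line-by-line one-dimensional interpolation up to 564×456 samples (only the target size comes from the text; the raw grid size below is an assumption):

```python
import numpy as np

raw = np.random.rand(57, 41) * 1000.0            # hypothetical raw (γ, α) range grid, mm
wide = np.array([np.interp(np.linspace(0, 1, 456),
                           np.linspace(0, 1, raw.shape[1]), row)
                 for row in raw])                # widen each scan line to 456 samples
full = np.array([np.interp(np.linspace(0, 1, 564),
                           np.linspace(0, 1, wide.shape[0]), col)
                 for col in wide.T]).T           # then interpolate across lines to 564
assert full.shape == (564, 456)
```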

After converting the α-γ grid of Figure 8 into world coordinates, the 564 laser distance curves of the rangefinder's line scans are obtained as shown in Figure 10, the height curves of the measurement scene in world coordinates, again with the reference-plane distance set to 0. Plotted over the plane coordinates (x, y), these give the contour map of the scene in world coordinates shown in Figure 11; there it is clear that the dense contour regions mark large height changes, tracing the object's edge lines, from which the object's edge dimensions and surface height are obtained.

Next, the laser ranging measurements are rendered as a point cloud. In Figure 12, the point cloud of the laser ranging data shows the surface contour and dimensions of the rectangular test object, but since the scan carries no texture information for the object's surface, its material and characteristics cannot be determined. The fusion method of Figure 6 is therefore applied: the contour map obtained by expressing the laser ranging measurements in world coordinates is matched against the edge features of the binarized, color-segmented image, giving the fused point cloud of image pixels and laser measurement data shown in Figure 13. In Figure 13, parameters of the test object such as its bounding box and centroid are compared with the distance measurements from the laser scan, feature matching is performed at equal data resolution, and the unmatched portions are removed; as a result, only 520×434 samples in Figure 13 are fused successfully. The fused point cloud fully captures the surface texture features and surface dimensions of the test object. The fusion result of Figure 13 is summarized by the number of detected points, the number of matching-error points, and the success rate in the following table.

[Table: laser scan points fused with image pixels for the rectangular object of Figure 13 — table not preserved]

The success rate is computed as:

success rate (%) = (number of detected points − number of matching-error points) / (number of detected points) × 100

For the multi-object 3D surface reconstruction tests of Figure 14, this section examines a cross-shaped object, an angular object, a multiple-object scene, and a scene with an object out of range, shown in Figures 14(a), (b), (c), and (d) respectively. As described above, object features are first extracted with the image processing algorithm and resampling. The resulting binarized images are shown in Figure 15: Figure 15(a) is the processed result for the cross-shaped object of Figure 14(a); Figure 15(b) for the angular object of Figure 14(b); Figure 15(c) for the multiple objects of Figure 14(c); and Figure 15(d) for the out-of-range object of Figure 14(d).

Image processing yields the image features of the test objects listed in Table 2, including hue, saturation, brightness, bounding box, centroid, and area. The X- and Y-direction gradient variations of the grayscale images of Figure 14 are then computed, as shown in Figure 16. The gradient values of Figures 16(a) and 16(b) are Fourier-transformed so that the non-integrable gradient field is mapped onto a combination of integrable basis functions in the frequency domain, and the image depth is computed with a global integration algorithm, as shown in Figure 17(a).

[Table 2: image features of the test objects — table not preserved]

Figure 17(b) is the result of re-masking Figure 17(a) with Figure 15(a) as the mask. Likewise, depth reconstruction by the shape-from-shading algorithm from the gradients of Figures 16(c), 16(d), 16(e), 16(f), 16(g), and 16(h) yields Figures 17(c), 17(e), and 17(g), while Figures 17(d), 17(f), and 17(h) are the corresponding results after masking with Figures 15(b), 15(c), and 15(d). Figures 17(a), 17(c), 17(e), and 17(g) show that although the shape-from-shading algorithm can reconstruct the surface height of the test object, it is easily disturbed by surface texture variations, which leads to reconstruction errors.

For example, the black and white squares of the reference plate are reconstructed at the highest and lowest points respectively. The binarized image from image processing is therefore used as a mask, and a masking operation applied to the shape-from-shading reconstruction yields the results of Figures 17(b), 17(d), 17(f), and 17(h).
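The frequency-domain global integration described above matches the classic Frankot-Chellappa formulation; a compact sketch, including the final masking step, is given below (a standard formulation assumed here, not necessarily the patent's exact implementation):

```python
import numpy as np

def frankot_chellappa(gx, gy):
    """Global integration of a (possibly non-integrable) gradient field by
    projection onto integrable Fourier basis functions, as described above."""
    H, W = gx.shape
    u = np.fft.fftfreq(W) * 2.0 * np.pi              # spatial frequencies along x
    v = np.fft.fftfreq(H) * 2.0 * np.pi              # spatial frequencies along y
    U, V = np.meshgrid(u, v)
    Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = U ** 2 + V ** 2
    denom[0, 0] = 1.0                                # avoid division by zero at DC
    Zf = (-1j * U * Gx - 1j * V * Gy) / denom
    Zf[0, 0] = 0.0                                   # depth is recovered up to a constant
    return np.real(np.fft.ifft2(Zf))

# Masking as in Figures 17(b)/(d)/(f)/(h): keep depth only where the
# binarized segmentation is set.
gx = gy = np.zeros((64, 64))                         # placeholder gradient fields
mask = np.ones((64, 64))                             # placeholder binarized segmentation
depth = frankot_chellappa(gx, gy) * (mask > 0)
```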

In Figure 17(c), for the angular test object, the shape-from-shading result reconstructs a height at the edges of the object, but it rebuilds the sloped face as a plane with the opposite slope. In Figure 17(e), three square objects are placed so that their sides are visible; the shadowed portions are reconstructed at negative height while the front faces are reconstructed at positive height. In Figure 17(g), the triangular object in the upper left and the square object in the lower right, both darker brown, are reconstructed at negative height, whereas the two lighter-colored squares are reconstructed at positive height. This shows that the algorithm is strongly affected by color texture and tends to fail on angular objects. In Figure 17(a), by contrast, the image of the object captures only the front face, whose color texture varies little, so shape-from-shading successfully reconstructs this object's surface height.

Having analyzed the problems of the shape-from-shading algorithm, the 3D surfaces of the objects are next reconstructed with the method of the invention. As in the previous section, the laser sensor performs a 2D area scan of the measurement scenes of Figure 14; the captured ranging data are interpolated to 564×456 samples and converted with the third formula into height curves expressed in α and γ, shown in Figures 18(a), 19(a), 20(a), and 21(a). With the reference-plane distance set to 0 and plotted over the plane coordinates (α, γ), these yield the contour maps of Figures 18(c), 19(c), 20(c), and 21(c). Because the actual grid cells expressed in α and γ are not equidistant, the data must be converted into height curves expressed in world coordinates x and y, shown in Figures 18(b), 19(b), 20(b), and 21(b); with the reference-plane distance set to 0 and plotted over the plane coordinates (x, y), these give the world-coordinate contour maps of Figures 18(d), 19(d), 20(d), and 21(d), in which the dense contour regions clearly mark large height changes, i.e., the objects' edge lines.

Rendering the laser ranging data of Figures 18(a), 19(a), 20(a), and 21(a) as point clouds gives Figures 22(a), 22(c), 22(e), and 22(g). The surface contours and dimensions of the test objects are visible, but without any texture information their materials and characteristics cannot be determined; applying the fusion method of the invention produces the fused results of Figures 22(b), 22(d), 22(f), and 22(h).

Parameters of the test objects such as bounding box and centroid are compared with the distance measurements from the laser scan, feature matching is performed at equal data resolution, and the unmatched portions are removed. In the fused results, 509×410 samples in Figure 22(b), 486×374 in Figure 22(d), 529×411 in Figure 22(f), and 519×411 in Figure 22(h) are matched successfully, and the point clouds of these fused points give the surface texture features, surface dimensions, and surface heights of the test objects. The fusion results of Figures 22(b), 22(d), 22(f), and 22(h) are compared by the number of detected points, the number of matching-error points, and the success rate in the following table.

[Table: laser scan points fused with image pixels for the different objects — table not preserved]

In summary, the image (2D) obtained when a camera photographs an object (3D) loses the object's depth information, while the distance obtained by laser ranging is measured from the laser source to the object. The algorithm is:

a. Develop an algorithm that converts the object distances obtained by laser ranging into contour lines relative to a reference plane, according to the laser scan angle.

b. Develop an algorithm that extracts the geometric features of the measured object.

c. Develop an algorithm that fuses a and b to measure the 3D information of the object.

The fusion procedure is:

1. In the image processing stage, the contour of the test object is first segmented out by color, and morphological operations filter out noise to produce a labeled binarized image. The magnitude and direction of the gradient are then used with global integration to compute the image depth, and the binarized segmentation of the object is used as a mask for the masking operation. The resulting shape-from-shading 3D reconstruction is prone to errors caused by texture variations or unusual shapes.

2. The invention compares the binarized segmentation of the object and its feature parameters with the edge features of the contour map, such as the bounding box and centroid, performs feature matching at equal data resolution, and removes the unmatched portions; the fusion result is presented as a point cloud (a sketch of this matching step follows the list below). The data fusion method of the invention converts the laser ranging data into contour lines referenced to a reference plane and matches them point-by-point with the edge lines obtained from the image, successfully fusing the laser ranging data with the corresponding camera image pixels.

3. The feature-extraction and shape-from-shading experiments show that 3D surface reconstruction with the global integration algorithm is disturbed by the measurement scene unless a mask is used to filter out everything that is not the test object. The fusion experiments between laser scan data and image pixels show that the actual scan grid between laser points is equidistant in neither length nor width, so the laser ranging measurements must first be converted into a contour map expressed in world plane coordinates; feature matching is then performed at equal data resolution, unmatched portions are removed, and the surface texture features and surface dimensions of the test object obtained by fusing the laser ranging data with the image features are presented as a point cloud.
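A small sketch of the matching step named in point 2 above; the feature-dictionary layout and tolerance are assumptions:

```python
import numpy as np

def fuse(image_features, laser_features, tol=5.0):
    """Match image-side and laser-side features by centroid and bounding box
    at equal resolution, dropping mismatches, and return fused entries."""
    fused = []
    for im in image_features:
        for la in laser_features:
            d_centroid = np.hypot(*np.subtract(im["centroid"], la["centroid"]))
            d_bbox = np.abs(np.subtract(im["bbox"], la["bbox"])).max()
            if d_centroid < tol and d_bbox < tol:
                fused.append((im["pixels"], la["heights"]))  # fused point-cloud entry
    return fused

image_features = [{"centroid": (120.0, 88.0), "bbox": (100, 70, 40, 36), "pixels": "roi"}]
laser_features = [{"centroid": (122.0, 90.0), "bbox": (101, 72, 40, 35), "heights": "z"}]
print(fuse(image_features, laser_features))
```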

The invention was tested on scenes containing a rectangular object, a cross-shaped object, an angular object, multiple objects, an object out of range, a non-solid object, and a non-solid object with a tilt angle. Its advantages are: (1) the rectangular-object experiment achieved a 99.8% match rate, while the cross-shaped object, angular object, multiple objects, and out-of-range object achieved 98.80%, 99.95%, 97.10%, and 96.36% respectively; (2) the first layer of the non-solid object achieved a 93.30% match rate, and the first and second layers of the tilted non-solid object achieved 94.88% and 98.54%; (3) the multi-object 3D surface reconstruction experiments show that for planar objects, angular objects, and out-of-range objects, a measurement system based on the method of the invention achieves good reconstruction results above 95%, whereas for non-solid objects, with or without a tilt angle, the texture-mapped reconstruction is poorer. These experiments verify that the designed data fusion method can reconstruct the 3D size of an object from 2D object images and 1D laser ranging data.

Claims (3)

1. A data fusion method for a camera and a laser rangefinder applied to object detection, combining a laser rangefinder with image processing from a camera and fusing the camera's processed image with the data scanned by the laser rangefinder, wherein: the laser rangefinder uses an algorithm to convert the 3D object distances obtained by laser ranging into contour lines relative to a reference plane according to the laser scan angle; an algorithm extracts the geometric features of the measured object; and a fusion algorithm combines the contour lines with the geometric features to obtain the measured object's 3D contour information. In the camera's image processing, the 3D contour of the test object is segmented from the geometric features by color segmentation, morphological operations filter out noise to produce a labeled binarized image, the magnitude and direction of the gradient are used with global integration to compute the image depth, and the binarized segmentation serves as a mask for the masking operation, the result forming a 3D reconstruction. The binarized segmentation of the object and its feature parameters are then compared with the edge feature parameters of the contour map, feature matching is performed at equal data resolution, the unmatched portions are removed, and the fusion result is presented as a point cloud, whereby the fusion algorithm merges the contour-line and geometric-feature data with the corresponding pixels of the camera's processed image to obtain the 3D size of the object.

2. A data fusion method for a camera and a laser rangefinder applied to object detection, the method using a laser rangefinder and a camera, wherein the camera provides image capture, image processing, and edge extraction data, and the laser rangefinder provides plane-correction data and data for projection into three-dimensional geometric space; the image data are processed with grayscale adjustment, binarization, morphological operations, and image segmentation to identify and analyze the objects in the image; the laser ranging measurements are converted and corrected for fusion with the geometric features obtained by image processing; the correspondence between the laser rangefinder scan lines and the camera image is found to identify the image features corresponding to the laser scan points; and these matched images are compared with the engineering drawings of the object to compute parameters such as the object's surface contour, center, and depth, the resulting fusion algorithm obtaining the 3D size of the object from the 2D object image and the 1D laser ranging data.
3. A data fusion method for a camera and a laser rangefinder applied to object detection, the method using a laser rangefinder and a camera and combining the laser rangefinder with the camera's image processing, characterized in that: a pitch-actuated laser rangefinder combined with a camera reconstructs three-dimensional depth images, the one-dimensional laser rangefinder being mounted on the axis of a pitch actuator; at each set angular increment the rangefinder captures 1D scan depth data, and these vectors are projected into a local coordinate system to produce a 3D image; the image processing uses grayscale adjustment, binarization, morphological operations, and image segmentation to identify and analyze the objects in the image; the laser ranging measurements are converted and corrected and then fused with the geometric features obtained from the camera's image processing; from the correspondence between the laser rangefinder scan lines and those geometric features, the image features matching the laser scan points are found; and these matched images are compared with the engineering drawings of the object to compute parameters such as the object's surface contour, center, and depth, whereby the object image obtained by the camera is fused, through its image processing data and the 1D laser ranging data, to obtain the 3D size of the object.
TW106128504A — filed 2017-08-23 (priority 2017-08-23) — Data fusion method for camera and laser rangefinder applied to object detection — granted as TWI659390B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
TW106128504A | 2017-08-23 | 2017-08-23 | Data fusion method for camera and laser rangefinder applied to object detection (TWI659390B, en)

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
TW106128504A | 2017-08-23 | 2017-08-23 | Data fusion method for camera and laser rangefinder applied to object detection (TWI659390B, en)

Publications (2)

Publication Number Publication Date
TW201913574A TW201913574A (en) 2019-04-01
TWI659390B true TWI659390B (en) 2019-05-11

Family

ID=66992127

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
TW106128504A | Data fusion method for camera and laser rangefinder applied to object detection (TWI659390B, en) | 2017-08-23 | 2017-08-23

Country Status (1)

Country Link
TW (1) TWI659390B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI722729B (en) * 2019-12-23 2021-03-21 財團法人石材暨資源產業研究發展中心 Stone image analysis method based on stone processing

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11263755B2 (en) * 2020-07-17 2022-03-01 Nanya Technology Corporation Alert device and alert method thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7184088B1 (en) * 1998-10-28 2007-02-27 Measurement Devices Limited Apparatus and method for obtaining 3D images
US20080309662A1 (en) * 2005-12-14 2008-12-18 Tal Hassner Example Based 3D Reconstruction
US20090138138A1 (en) * 2006-09-29 2009-05-28 Bran Ferren Imaging and display system to aid helicopter landings in brownout conditions


Also Published As

Publication number Publication date
TW201913574A (en) 2019-04-01

Similar Documents

Publication Publication Date Title
CN109872397B (en) Three-dimensional reconstruction method of airplane parts based on multi-view stereo vision
CN107607040B (en) Three-dimensional scanning measurement device and method suitable for strong reflection surface
CN107463918B (en) Lane line extraction method based on fusion of laser point cloud and image data
CN107203973B (en) Sub-pixel positioning method for center line laser of three-dimensional laser scanning system
Xu et al. Line structured light calibration method and centerline extraction: A review
CN109215063B (en) Registration method of event trigger camera and three-dimensional laser radar
WO2011070927A1 (en) Point group data processing device, point group data processing method, and point group data processing program
CN111754583A (en) Automatic method for vehicle-mounted three-dimensional laser radar and camera external parameter combined calibration
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
Atsushi et al. System for reconstruction of three-dimensional micro objects from multiple photographic images
WO2013061976A1 (en) Shape inspection method and device
CN109580630A (en) A kind of visible detection method of component of machine defect
CN112116576A (en) Defect detection method based on polarization structure light imaging and improved Mask R-CNN
JP2021168143A (en) System and method for efficiently scoring probe in image by vision system
CN109507198B (en) Mask detection system and method based on fast Fourier transform and linear Gaussian
CN111602177A (en) Method and apparatus for generating a 3D reconstruction of an object
US9204130B2 (en) Method and system for creating a three dimensional representation of an object
TWI659390B (en) Data fusion method for camera and laser rangefinder applied to object detection
CN114170284B (en) Multi-view point cloud registration method based on active landmark point projection assistance
CN109506629B (en) Method for calibrating rotation center of underwater nuclear fuel assembly detection device
JP2003216931A (en) Specific pattern recognizing method, specific pattern recognizing program, specific pattern recognizing program storage medium and specific pattern recognizing device
CN112819935A (en) Method for realizing three-dimensional reconstruction of workpiece based on binocular stereo vision
CN116596987A (en) Workpiece three-dimensional size high-precision measurement method based on binocular vision
Hongsheng et al. Three-dimensional reconstruction of complex spatial surface based on line structured light
CN115184362A (en) Rapid defect detection method based on structured light projection