TW201534512A - Control method about obstacle avoidance and navigation by binocular images

Control method about obstacle avoidance and navigation by binocular images

Info

Publication number
TW201534512A
Authority
TW
Taiwan
Prior art keywords
image
obstacle avoidance
path
disparity map
map
Prior art date
Application number
TW103107700A
Other languages
Chinese (zh)
Other versions
TWI532619B (en)
Inventor
Ying-Xing Xiao
Chao-Xing Wang
Qing-Wei Wu
Zheng-Rong Wu
Original Assignee
Univ Nat Changhua Education
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Univ Nat Changhua Education filed Critical Univ Nat Changhua Education
Priority to TW103107700A priority Critical patent/TWI532619B/en
Publication of TW201534512A publication Critical patent/TW201534512A/en
Application granted granted Critical
Publication of TWI532619B publication Critical patent/TWI532619B/en

Landscapes

  • Traffic Control Systems (AREA)

Abstract

This invention focuses on a control method for obstacle avoidance and navigation using binocular images. A mobile platform is navigated on the basis of binocular images without manual operation. A matching method applied to the binocular stereo images generates a disparity map, and an obstacle avoidance algorithm processes this disparity map to find the barrier-free path. The automatic navigation and obstacle avoidance method designs the avoidance path according to the motion of the mobile platform. The navigation control algorithm comprises binocular-image corresponding-point matching, barrier-free path detection, and path planning.

Description

Dual-image obstacle avoidance path planning and navigation control method

The dual-image obstacle avoidance path planning and navigation control method of the present invention relates to a method for the autonomous navigation of a mobile platform, and in particular to an autonomous navigation method in which images acquired by two cameras are used to identify a traversable path and to plan obstacle avoidance.

In the field of automotive navigation and robotics, research on navigation and obstacle avoidance has grown steadily. A common approach uses ultrasonic sensors or laser scanners, with the sensor directly reporting the situation ahead. Although very direct, this approach still faces challenges, such as sharply shaped obstacles or potholes in the ground. Another current approach achieves obstacle avoidance from the image data of a monocular camera; however, a monocular camera captures only 2D images, so computing 3D information requires considerable prior knowledge for image calibration before the scale relationship between image size and world coordinates can be obtained, and a specific computation scheme must be established for each obstacle before avoidance is possible.

Therefore, the present invention uses two cameras to acquire sensing data and performs stereoscopic space computation on that data. In this way, regardless of an obstacle's shape, features, or color, the mobile platform can avoid obstacles autonomously by means of a more accurate stereoscopic computation.

The dual-image obstacle avoidance path planning and navigation control method of the present invention provides a method for the autonomous navigation of a mobile platform, using two cameras as the sensors that perceive the external environment. After the image data are acquired, they are processed with a moving-average scheme: the images are first converted to gradient images and, assuming the image sequence is continuous, the previously generated disparity map is combined with the two gradient images to compute a new disparity map. Once the disparity map is obtained, the traversable space is detected from the linear characteristic of the ground-image disparity: the geometric relationship between the ground and the cameras is used to find the signature of the ground in the disparity map and thereby detect walkable ground.

The method generates the disparity map with a binocular stereo matching method and then applies a purpose-designed obstacle avoidance algorithm to the disparity map to obtain the barrier-free space. The matching method replaces the brightness difference with the brightness-gradient difference and adds a smoothness term for the disparity map; the left and right images are processed to obtain the disparity map.

1‧‧‧Image obtained by the cameras
11‧‧‧Process of generating the disparity map
12‧‧‧Path planning algorithm
13‧‧‧Determination of the direction and speed of the vehicle body
21‧‧‧Camera
22‧‧‧Optical center
23‧‧‧Distance between the optical centers of the two lenses
31‧‧‧Epipolar geometry constraint
32‧‧‧Arbitrary point in space
33‧‧‧Focal length
34‧‧‧P_R
35‧‧‧P_T
41‧‧‧Epipolar line
42‧‧‧Target image
43‧‧‧Reference image
51‧‧‧Original image
52‧‧‧Horizontal gradient image
53‧‧‧Vertical gradient image
54‧‧‧Edge detection image
61‧‧‧Reference window
62‧‧‧Search area
63‧‧‧Disparity value
72‧‧‧Disparity map DM
81‧‧‧Program lines that start detection from the bottom of the image
82‧‧‧Program lines that search for the barrier-free path
83‧‧‧Program lines that reset the vehicle width and the next row's search starting point
91‧‧‧Detection of the traversable road space
92‧‧‧Search for path points
93‧‧‧Delete or stack path points and loop to the last row
94‧‧‧Estimation of the vehicle width in the image
111‧‧‧Width of the self-propelled vehicle
Z‧‧‧Depth information

Figure 1 is a flow chart of the method of the present invention.
Figure 2 is a schematic diagram of the mounting of the standard stereo vision cameras of the present invention.
Figure 3 is the standard stereo vision image model of the present invention.
Figure 4 shows the standard stereo vision image data of the present invention.
Figure 5 shows the Sobel gradient and edge image processing of the present invention.
Figure 6 is a schematic diagram of disparity estimation with matching windows of the present invention.
Figure 7 is a flow chart of the disparity map generation of the present invention.
Figure 8 is a schematic diagram of matching-point selection of the present invention.
Figure 9 is a flow chart of the path planning of the present invention.
Figure 10 is a schematic diagram of the geometric relationship between the ground and the camera of the present invention.
Figure 11 shows the extent of the walkable space of the present invention.
Figure 12 is a perspective projection of parallel lines of the present invention.
Figure 13 shows the binarization result of the walkable space of the present invention.
Figure 14 shows the vehicle width in the image estimated from the disparity map of the present invention.
Figure 15 shows the path planning result of the present invention.
Figure 16 illustrates the path-point search algorithm of the present invention.
Figure 17 shows an example of computing the turning angle of the present invention.
Figure 18 is a schematic diagram of the steering motion of the self-propelled vehicle of the present invention.
Figure 19 is a schematic diagram of the intersection region of the minimum turning radii of the present invention.
Figure 20 is the hardware system architecture of the present invention.
Figure 21 shows the disparity map results for the outdoor scenes and the experimental site of the present invention.
Figure 22 shows the results of framing the traversable space and the path planning of the present invention.
Figure 23 is the resulting path of Experiment 1 of the present invention.
Figure 24 is the resulting path of Experiment 2 of the present invention.
Figure 25 is the resulting path of Experiment 3 of the present invention.
Figure 26 is the resulting path of Experiment 4 of the present invention.

In order that the present invention may be fully understood, preferred embodiments are described in detail below with reference to the accompanying drawings, so that persons familiar with the related art can practice the invention according to the statements of this specification; the embodiments are not intended to limit the technical scope of the invention.

The dual-image obstacle avoidance path planning and navigation control method of the present invention is illustrated by the control flow chart of Figure 1, which comprises: the images obtained by the cameras (1), the process of generating the disparity map (11), the path planning algorithm (12), and the determination of the direction and speed of the vehicle body (13). In operation, two parallel, side-by-side cameras (21) capture the scene; the two images are processed by the disparity map algorithm of this patent to obtain 3D environmental image information; the obstacle avoidance algorithm of this patent processes this information to detect the walkable space and plan the path; and a further algorithm uses the vehicle width to determine the direction of motion and the speed of the traveling vehicle. The first part of the text below describes the disparity map algorithm, the second part the obstacle avoidance path planning algorithm, and the third part the control algorithm that determines speed and steering; finally, experimental results illustrate the functions of this patent.

1. Disparity map: The following explains how image information from the overlapping fields of view of two cameras (21) is used to compute scene depth, taking a target point within the overlapping fields of view to illustrate the relationship between that point and its two image points. As shown in Figure 2, each camera (21) has an optical center (22). It is assumed that, apart from the baseline (the translation along the X axis), the image coordinate systems of the two cameras (21) coincide; their optical axes are parallel, and the distance between the optical centers of the two lenses (23) is B.

The image model under these conditions is shown in Figure 3, where X, Y, and Z are world coordinates. Let P(X, Y, Z) be any unoccluded point in space (32) within the overlapping fields of view of the two cameras (21). Through the respective optical centers (22) O of the two cameras (21), P projects onto the reference image (43) at P_R (34) and onto the target image (42) at P_T (35), both at focal length (33) f; here P_R (34) is the image of the space point P in the right image, and P_T (35) its image in the left image. When the constraints of standard stereo geometry are satisfied, the two image planes lie in a single plane, the epipolar lines (41) of the two images fall on the same line, and P_R (34) and P_T (35) therefore have identical Y-axis positions in the left and right image planes.

Next, let the image coordinates of P_R (34) be (x_R, y_R), let C_R be the image center of the reference image (43), and let the image coordinates of P_T (35) be (x_T, y_T). The depth information (Z) of a scene point can then be obtained from the geometry of similar triangles:

Z = B·f/(x_T − x_R) = B·f/d (1)

In Equation (1), B and f are both constants, so to obtain the depth information (Z) of a scene point the essential quantity is (x_T − x_R); this quantity is generally called the disparity and is denoted d. How to find, from P_R (34) in the reference image (43), the corresponding point P_T (35) in the target image (42) is therefore the core technology in designing stereo vision.
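The patent provides no source code; as an illustrative sketch only, Equation (1) can be applied to a whole disparity map as below, assuming a rectified pair with baseline B in meters and focal length f in pixels (the numeric values in the example are arbitrary).

```python
import numpy as np

def disparity_to_depth(disparity_px, baseline_m, focal_px):
    """Equation (1): Z = B * f / d, applied to a whole disparity map.

    Pixels with zero (or negative) disparity carry no depth information
    and are mapped to infinity.
    """
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(d.shape, np.inf)
    valid = d > 0
    depth[valid] = baseline_m * focal_px / d[valid]
    return depth

# B = 0.12 m, f = 700 px: a disparity of 16 px corresponds to 5.25 m.
print(disparity_to_depth([[16.0]], 0.12, 700.0))
```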

Because the epipolar lines (41) lie on the same horizontal line under the binocular architecture, the image data can be searched along a single row, as shown in Figure 4; and because x_T ≥ x_R, the search need only proceed in one direction, which greatly reduces the search time. How to find the best match among the candidates is the central question for the matching algorithm of the disparity map. To find corresponding matching points, it is necessary to perform some image processing in advance to obtain more distinctive comparison features; the following explains how image features obtained by filtering are used to find the points of the target image (42) corresponding to the reference image (43).

On spatial filtering: the brightness of an image pixel is usually represented by a grayscale value, and the image gradient refers to the change of brightness across pixels, i.e. the partial derivatives of brightness with respect to the pixel coordinates in image space. The gradient vector of an image I(u,v) at (u,v) can be expressed as:

∇I(u,v) = [∂I/∂u, ∂I/∂v]ᵀ (2)

with magnitude |∇I(u,v)| = √((∂I/∂u)² + (∂I/∂v)²) (3)

Equation (3) can be realized with a correlation operation, as follows. Let the new image G(u,v) be obtained from the original image I(u,v) by convolution; it can be written as:

G(u,v) = Σ_{s=−a..a} Σ_{t=−a..a} h(s,t)·I(u+s, v+t) (4)

Here the mask h(u,v) is an odd square array of size W×W, and a = (W−1)/2. Because Equation (4) operates directly on the image, h(u,v) may be called a spatial filter. Using Equation (4), different masks h(u,v) can be designed to detect image edges.
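A direct, if slow, realization of Equation (4) can be written as follows; this is an illustrative sketch rather than code from the patent, and it zero-pads the borders, a choice the patent does not specify.

```python
import numpy as np

def spatial_filter(image, mask):
    """Equation (4): G(u,v) = sum over s,t in [-a, a] of h(s,t) * I(u+s, v+t).

    `mask` is a W x W array with W odd, a = (W - 1) / 2; the image is
    zero-padded so the output keeps the input size.
    """
    I = np.asarray(image, dtype=np.float64)
    h = np.asarray(mask, dtype=np.float64)
    a = (h.shape[0] - 1) // 2
    padded = np.pad(I, a)
    G = np.zeros_like(I)
    for s in range(-a, a + 1):
        for t in range(-a, a + 1):
            G += h[s + a, t + a] * padded[a + s: a + s + I.shape[0],
                                          a + t: a + t + I.shape[1]]
    return G
```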

Edge detection with a spatial filter can be carried out with Sobel masks. The 3×3 Sobel masks are given in Equation (5); g_u is the horizontal-direction mask, which responds only to changes along the horizontal direction of the image, and g_v is the vertical-direction mask:

g_u = [−1 0 1; −2 0 2; −1 0 1], g_v = [−1 −2 −1; 0 0 0; 1 2 1] (5)

The Sobel masks are applied through the convolution of Equation (4).

Taking the original image (51) in Figure 5 as the source, convolving it with the masks g_u and g_v yields the horizontal gradient image G_u, shown as the horizontal gradient image (52) in Figure 5, and the vertical gradient image G_v, shown as the vertical gradient image (53). Gradient images are directional: a dark-to-bright gradient is positive and a bright-to-dark gradient is negative. Squaring the horizontal gradient image (52) and the vertical gradient image (53), summing them, and taking the square root gives the gradient magnitude of the original image (51). Gradients are large at image edges, and this property is used to set a threshold T for identifying edges, as in Equation (6): a pixel whose gradient exceeds the threshold T is an edge,

E(u,v) = 1 if |∇I(u,v)| > T, and 0 otherwise (6)

The lower the threshold T, the more edges are detected but the more easily noise affects the result; conversely, too high a threshold T loses detail. The image E binarized in this way is the edge detection result, shown as the edge detection image (54) in Figure 5.
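Combining Equations (5) and (6) with the spatial_filter() sketch above gives a compact edge detector; the threshold value used here is an arbitrary assumption, not a figure from the patent.

```python
import numpy as np

# Sobel masks of Equation (5): g_u responds to horizontal brightness
# changes, g_v to vertical ones.
g_u = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
g_v = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=np.float64)

def sobel_edges(image, T=100.0):
    """Equation (6): mark pixels whose gradient magnitude exceeds T."""
    Gu = spatial_filter(image, g_u)   # horizontal gradient image (52)
    Gv = spatial_filter(image, g_v)   # vertical gradient image (53)
    magnitude = np.sqrt(Gu ** 2 + Gv ** 2)
    return (magnitude > T).astype(np.uint8)  # binary edge image E (54)
```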

On local matching: the size of the matching window is decided first; a comparison patch is extracted from the reference image (43) through the matching window and compared within the target image (42), and the best-matching position yields the point of the target image (42) corresponding to the reference image (43). As shown in Figure 6, a matching window W centered at (u,v) of the left reference image (43) is taken as the reference window (61), forming a search area (62); the right target image (42) is searched along the same epipolar line (41) for the matching window with the smallest dissimilarity within the search area (62). The dissimilarity can be computed as the sum of absolute differences (SAD) or the sum of squared differences (SSD):

SAD(u,v,d) = Σ_{(s,t)∈W} |I_R(u+s, v+t) − I_T(u+s+d, v+t)| (7)

SSD(u,v,d) = Σ_{(s,t)∈W} (I_R(u+s, v+t) − I_T(u+s+d, v+t))² (8)

Evaluating Equation (7) or (8) along the epipolar line (41) yields (d_max − d_min) cost score maps, from which the most suitable disparity value (63) d is then selected.
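A minimal per-pixel sketch of Equation (7) follows; it assumes the window stays inside both images for every tested disparity, and is for illustration only.

```python
import numpy as np

def match_point(ref, tgt, u, v, win, d_min, d_max):
    """Equation (7): slide a window along the same row of the target
    image and return the disparity d with the smallest SAD.

    Assumes the window stays inside both images for every tested d.
    """
    a = win // 2
    ref_win = np.asarray(ref, dtype=np.float64)[v - a: v + a + 1,
                                                u - a: u + a + 1]
    T = np.asarray(tgt, dtype=np.float64)
    best_d, best_cost = d_min, np.inf
    for d in range(d_min, d_max + 1):   # single-direction search, x_T >= x_R
        cand = T[v - a: v + a + 1, u + d - a: u + d + a + 1]
        cost = np.abs(ref_win - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```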

The matching method proposed in this patent replaces the brightness difference with the brightness-gradient difference and adds a smoothness term for the disparity map; the flow chart for processing the left and right images into a disparity map is shown in Figure 7. The brightness-gradient difference removes matching errors caused by the different amounts of light entering the two cameras (21). Because the cameras (21) capture images continuously, adjacent images in the sequence are correlated. Let the disparity map DM (72) be the disparity map output by the previous computation; a matching window W is taken from the disparity map DM (72), d is subtracted from it, and the absolute values are summed. The closer d is to the mean disparity within W, the smaller the result, which acts like averaging the disparity map. Since camera (21) noise is usually isolated within an image and discontinuous in time, this computation also suppresses camera (21) noise.

The cost in Figure 7 is computed as follows:

C(x,y,d) = C_GRAD(x,y,d) + ω·C_Smooth(x,y,d) (9)

Here ω is the weight between the two terms. The larger the weight of C_Smooth, the smoother the resulting disparity map; but when the weight is too large, because C_Smooth is computed from the previously output disparity map and C_GRAD has little influence in textureless regions, disparity ghosting easily remains in textureless regions. After d has been stepped from d_min to d_max, every pixel in the image has (d_max − d_min) score results, as shown in Figure 8; the score data add one dimension to the image, namely the d axis. Each pixel then searches its score results for the d with the smallest cost, since a smaller cost indicates a higher similarity, and that d becomes the pixel's disparity value (63) in the disparity map.
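The per-disparity evaluation of Equation (9) can be sketched as below, reusing spatial_filter() from above. The exact forms of C_GRAD and C_Smooth are our reading of the description (window-summed gradient differences plus window-summed |DM − d|), and ω and the window size are assumptions.

```python
import numpy as np

def combined_cost(ref_gu, ref_gv, tgt_gu, tgt_gv, prev_dm, d, win, omega):
    """Equation (9) for one candidate disparity d over the whole image.

    C_GRAD: window-summed absolute differences of the horizontal and
    vertical gradient images after shifting the target by d columns;
    C_Smooth: window-summed |DM - d| against the previous disparity map.
    """
    ones = np.ones((win, win))
    sgu = np.roll(tgt_gu, -d, axis=1)  # align target column x+d with ref column x
    sgv = np.roll(tgt_gv, -d, axis=1)
    if d > 0:                          # columns wrapped around by roll are invalid
        sgu[:, -d:] = 0.0
        sgv[:, -d:] = 0.0
    c_grad = spatial_filter(np.abs(ref_gu - sgu) + np.abs(ref_gv - sgv), ones)
    c_smooth = spatial_filter(np.abs(np.asarray(prev_dm, dtype=np.float64) - d), ones)
    return c_grad + omega * c_smooth
```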

Searching for the minimum only after running from d_min to d_max would occupy a large amount of memory; this patent therefore performs whole-image operations to reduce memory usage. Before describing the whole-image computation of the disparity map, some symbols are defined as follows:

●Cost score map (C): temporary storage for computing the cost; every pixel is initialized to infinity, and the size of the cost score map (C) equals the image size.

●Disparity map (D): temporary storage for computing the disparity map; every pixel is initialized to zero, and the size of the disparity map (D) equals the image size.

●Output disparity map (DM): the output of the disparity map computation; every pixel is initialized to zero, and the size of the disparity map (DM) equals the image size.

●Disparity (d): the horizontal position difference (in pixels) of corresponding points; the search range is d_min to d_max.

●W mask: composed entirely of ones; convolving the image with it replaces the summation of neighboring pixel grayscale values.

The steps of the whole-image method for obtaining the disparity map are as follows:

Step 1. Take the previous output disparity map DM (72); copy the whole disparity map DM (72) into D, and initialize all of C to infinity.

Step 2. Compute the brightness-gradient images of the left and right images.

Step 3. Shift the gradient image of the target image (42) horizontally by d columns.

Step 4. Compute the whole-image absolute differences of the two sides for the vertical and horizontal gradients, and the whole-image absolute difference between the previous disparity map DM (72) and the constant d.

Step 5. Convolve with the W mask, whose size equals the matching window, to obtain the energy map of Equation (11).

Step 6. Compare the resulting energy map with C; wherever it is smaller than C, replace the value of C with the current energy value and, in the same region, replace the value of D with the current d, updating continually.

Step 7. Return to Step 3; after d has run from d_min to d_max, proceed to the next step.

Step 8. D becomes the new disparity map; copy D into the disparity map DM (72) as the output.
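Steps 1 to 8 map naturally onto whole-image array operations. The sketch below strings together the pieces defined earlier (spatial_filter, g_u/g_v, combined_cost) under the same stated assumptions; it is an illustration of the procedure, not the patent's code.

```python
import numpy as np

def update_disparity_map(left, right, prev_dm, d_min=0, d_max=32,
                         win=9, omega=0.1):
    """Steps 1-8: whole-image disparity update against the previous DM.

    `right` is the reference image and `left` the target, matching the
    convention d = x_T - x_R used above.
    """
    # Step 2: brightness-gradient images of both views.
    ref_gu, ref_gv = spatial_filter(right, g_u), spatial_filter(right, g_v)
    tgt_gu, tgt_gv = spatial_filter(left, g_u), spatial_filter(left, g_v)
    # Step 1: C initialized to infinity, D seeded from the previous DM.
    C = np.full(np.asarray(right).shape, np.inf)
    D = np.array(prev_dm, dtype=np.float64)
    for d in range(d_min, d_max + 1):          # steps 3-5 inside, looped by step 7
        cost = combined_cost(ref_gu, ref_gv, tgt_gu, tgt_gv,
                             prev_dm, d, win, omega)
        better = cost < C                      # step 6: keep the lower cost
        C[better] = cost[better]
        D[better] = d
    return D                                   # step 8: the new DM
```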

2. Image navigation and obstacle avoidance control flow: the binocular visual obstacle avoidance navigation control of this patent allows a driverless vehicle to operate without any prior environment model. Relying entirely on the method of this patent, it detects the walkable space from the disparity maps of the moving environment, so that the driverless vehicle can plan its own motion path, move through an obstacle-filled environment, avoid collisions with obstacles, and complete navigation control.

The image navigation control flow of the driverless vehicle studied in this patent is shown in Figure 9. The two cameras (21) are the only sensors in the system that sense the external environment, and they have been calibrated under the epipolar geometry constraint (31). After an image is obtained, it is first converted to gradients; assuming the image sequence is continuous, the previously generated disparity map and the two gradient images are used to compute a new disparity map. Once the disparity map is obtained, the traversable space is detected from the linear characteristic of the ground disparity. Starting from the first row of the image, with the initial vehicle width and initial starting point, path points are searched in the disparity map; after judging whether a path point is reasonable, the vehicle width and starting point are updated, where updating the vehicle width corresponds to the estimation of the vehicle width in the image (94) in Figure 9 and updating the starting point corresponds to the detection of the traversable road space (91) in Figure 9. Scanning proceeds row by row to the last row, and the obstacle avoidance path is planned, namely the path-point search (92) in Figure 9. The direction of travel is determined from the path, with path points deleted or stacked and the loop executed to the last row (93) in Figure 9, and the speed of the vehicle body is set according to whether obstacles are detected within the minimum turning radius, i.e., the determination of the direction and speed of the vehicle body (13). The algorithm for the walkable path is described in detail below.

The driverless-vehicle navigation method of this patent uses the geometric relationship between the ground and the cameras (21) to identify the signature of the ground in the disparity map and thereby detect walkable ground. The geometry of the ground-image disparity and the camera (21) is shown in Figure 10: let the optical center (22) of the camera (21) be O at height h above the ground, with the Y and Z axes forming the camera (21) coordinate system and the focal length (33) of the image plane f. Let P be any point (32) on the ground whose image lies at distance y from the image center C along the Y axis, let θ be the angle between h and the Z axis, and let α = tan⁻¹(y/f). From Equation (1) and Figure 10, the ground disparity d in the disparity map is:

d = (B·sinθ/h)·y + (B·f·cosθ/h) (12)

where a = B·sinθ/h and b = B·f·cosθ/h are constant coefficients. Equation (12) shows that the ground in the disparity map satisfies d = a·y + b, a straight line; this feature is therefore used to find ground that can be walked on without obstacles, with the result shown in Figure 11.
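The ground test implied by Equation (12) can be sketched as follows; θ is taken as the angle between the vertical (h) and the optical Z axis as in Figure 10, y is measured from the image center, and the tolerance value is an assumption for illustration.

```python
import numpy as np

def walkable_mask(disparity, baseline_m, focal_px, height_m, theta, tol=1.0):
    """Binarize the disparity map with Equation (12): pixels whose
    disparity stays within `tol` of the ground line d = a*y + b are
    treated as walkable ground.

    theta: angle (rad) between the vertical (h) and the Z axis (Fig. 10);
    y is the row offset from the image center.
    """
    a = baseline_m * np.sin(theta) / height_m
    b = baseline_m * focal_px * np.cos(theta) / height_m
    H = disparity.shape[0]
    y = np.arange(H, dtype=np.float64).reshape(-1, 1) - H / 2.0
    expected = a * y + b                       # ground disparity per row
    return (np.abs(disparity - expected) <= tol).astype(np.uint8)
```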

On estimating the vehicle width: after the walkable space has been determined from the disparity map, the width of the driverless vehicle in the disparity map must also be known to meet the needs of walkable path planning. Imagine a pair of parallel lines on both sides of the vehicle body, extending indefinitely straight ahead of it; as shown in Figure 12, these parallel lines intersect in the image at the vanishing point, denoted V, and the path starting point S lies in the same row as V. Using Equation (12) and Figure 3, together with Equation (1) (which gives Z = B·f/d and hence x_R = f·X/Z), the following can be derived:

x_R = X·d/B (13)

Let the distance between the actual right end point X_1 and left end point X_2 of the vehicle body be the vehicle width W = (X_1 − X_2); substituting X_1 and X_2 into Equation (13) gives x_R1 and x_R2 in the image respectively, and the width w the vehicle occupies in the image is then:

w = x_R1 − x_R2 = (X_1 − X_2)·d/B = W·d/B (14)

Equation (14) shows that the vehicle width w in the image is proportional to the disparity d of the walkable space.
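Equation (14), together with the minimum-pixel rule described below, reduces to a one-liner; the floor of 3 pixels here is an illustrative assumption, as the patent names the rule but not the value.

```python
def image_vehicle_width(vehicle_w_m, baseline_m, disparity_px, min_px=3):
    """Equation (14): w = W * d / B pixels at a row whose ground disparity
    is d; clamped to a floor because far rows (d near zero) yield too few
    pixels to be reliable."""
    return max(int(round(vehicle_w_m * disparity_px / baseline_m)), min_px)
```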

The number of pixels occupied by the vehicle width along a row of the disparity map is estimated to give the path-point search a basis for deciding whether the driverless vehicle can pass. After the position of a path point is found, the median of the w disparity values d at that position in the disparity map is taken, and the vehicle width w in the disparity map is recomputed.

The path search of this patent proceeds from far to near; at far range the disparity approaches zero, and when w is too short the amount of data is too small and misjudgment occurs easily, so a minimum number of pixels for w is imposed. In the following, the cameras (21) are assumed to be mounted at the center of the driverless vehicle to explain the search method for the walkable path.

On the path search, as shown in Figure 13: let E be the target point, in the same row as the image position of the destination along the driving direction, and let S be the path starting point, in the same row as the image position of the center of the vehicle width. Since the driverless vehicle advances in the straightest possible direction, the destination is set in line with the path starting point. The search starts from the first row at the position aligned with E and proceeds leftward and rightward, reading data over the vehicle-width length w to find positions free of obstacles, in other words positions where the binarized walkable-space image is entirely zero. When the vehicle-width length w < 1 or is too short, errors occur or the data are too few to be reliable, so a minimum vehicle width must be set. The maximum extent of the rightward or leftward search is the image width plus w/2 to the right and minus w/2 to the left. The possible outcomes of a search are: (1) no walkable space to the right or to the left; (2) walkable space found in one of the two directions; (3) walkable space found both to the right and to the left. In case (1) the search moves directly to the next row; cases (2) and (3) indicate that walkable space exists.

So that the path points can be joined into a continuous curve that finally connects to S, the next search point is set according to the proportional relationship between ħ, the difference between the search position and the bottom of the image, and l, the difference between the current (horizontal) position of the driverless vehicle and the image boundary; as shown in Figure 13, ħ is (Bottom − i) and l is (found path point − S). After a walkable position is found, its center is taken as the path marker point. To speed up the search, the reading tracks are advanced according to their respective totals of the data read; if both the leftward and rightward reading tracks fall entirely outside the image range, no path point can be found in that row and the search moves to the next row. When a path point has been found, the median of the disparity values (63) d at the same position in the disparity map is used with Equation (14) to compute the vehicle width w for the next row of the disparity map, as shown in Figure 14, and the search starting point of the next row approaches S linearly. To handle the fact that, when searching near the bottom of the image, the width of the vehicle body in the image often exceeds the image size, the condition for moving to the next row is designed as the reading track falling entirely outside the image range, so it is reasonable for a path point (the center of the reading track) to appear outside the image. When no path point can be found, the starting position of the next row's search is realigned with E and the previous path points are deleted; the deletion mark in Figure 13 indicates that all path points before that row are invalid. Through the above computation the walkable space can be determined; the result is shown in Figure 15, where a and c mark the extent of the walkable space and b is the planned path for the vehicle body.

The path search algorithm summarized from the above description is shown in Figure 16. The ground in the image extends from the bottom upward to infinity, so the path search begins from the farthest point, searching from the first row to the last when looking for path points, as in line 3 of Figure 16, the program lines that start detection from the bottom of the image (81). Lines 9 to 16 of Figure 16 are the program lines that search for the barrier-free path (82); if line 14 does not hold, a path point has been found and it must further be judged whether it lies entirely outside the image range. Lines 17 to 28 of Figure 16 are the program lines that, if the range is not exceeded, reset the vehicle width and the next row's search starting point (83) and record them. An example result of this algorithm is shown in Figure 15, where the lines marked a, b, and c are paths formed by joining walkable path points; the central segment b is the travel region planned from the vehicle width, representing the space the vehicle body occupies in the image along the path. A simplified code sketch of this row-by-row search is given below.
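This sketch keeps the outward left/right scan over a vehicle-width window of free pixels and the carry-over of the start column, but omits the ħ/l blending toward S and the deletion rule; it assumes `free` marks walkable pixels as 1 and reuses image_vehicle_width() from above.

```python
import numpy as np

def search_path_points(free, start_col, vehicle_w_m, baseline_m, ground_d):
    """Simplified sketch of the Fig. 16 search: scan each row from far
    (top) to near (bottom), looking outward from the current start
    column for a vehicle-width window of free pixels, and record its
    center as a path point.

    free      -- binarized map, 1 = walkable, 0 = obstacle
    ground_d  -- per-row ground disparity used to rescale the width
    """
    free = np.asarray(free)
    H, W_img = free.shape
    path, col = [], start_col
    for i in range(H):
        w = image_vehicle_width(vehicle_w_m, baseline_m, ground_d[i])
        found = None
        for offset in range(W_img):
            for c in (col + offset, col - offset):  # right first, then left
                lo = c - w // 2
                if 0 <= lo and lo + w <= W_img and free[i, lo:lo + w].all():
                    found = c
                    break
            if found is not None:
                break
        if found is not None:
            path.append((i, found))
            col = found   # the next row's search starts from this point
    return path
```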

3. Direction of motion and speed of the vehicle: after the path has been planned, the path point near the S end is used to compute the angle θ between it and the vertical line through S, and θ serves as the basis for the steering control of the driverless vehicle, as shown in Figure 17.
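Reading θ off the image as in Figure 17 can be sketched as follows; image coordinates with the origin at the top-left and rows increasing downward, and a positive angle meaning a turn to the right, are assumptions of this sketch.

```python
import math

def steering_angle(path_pt, s_pt):
    """Angle between the segment from S to the nearby path point and the
    vertical line through S (Fig. 17); positive means steer right.

    Points are (row, col) in image coordinates, origin at the top-left.
    """
    (p_row, p_col), (s_row, s_col) = path_pt, s_pt
    return math.atan2(p_col - s_col, s_row - p_row)
```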

A typical vehicle's forward direction is controlled by its front wheels; here the vehicle steers mainly through the speed difference between the left and right wheels, with the steering motion shown in Figure 18. Let v_L and v_R be the forward speeds of the left and right wheels (m/sec), v_C the forward speed of the vehicle (m/sec), ω_C the angular velocity of the vehicle's turn (rad/sec), and w the spacing between the left and right wheels (m). The equations of motion are:

v_C = (v_R + v_L)/2, ω_C = (v_R − v_L)/w (15)

If the minimum turning radius of the vehicle is k_min, then:

v_C/ω_C ≥ k_min (16)

Under the specified minimum turning radius, the maximum upper limit ω_C_max of the turning angular velocity ω_C can be computed from Equation (16) as ω_C_max = v_C/k_min.
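Inverting Equation (15) and clipping with Equation (16) yields the wheel-speed command; the sketch below assumes k_min > 0 and speeds in m/sec.

```python
def wheel_speeds(v_c, omega_c, wheel_spacing, k_min):
    """Equation (15) inverted, with omega_C clipped by Equation (16) so
    the turning radius v_C / omega_C never drops below k_min."""
    omega_max = abs(v_c) / k_min            # Equation (16): upper bound on omega_C
    omega_c = max(-omega_max, min(omega_max, omega_c))
    v_r = v_c + omega_c * wheel_spacing / 2.0
    v_l = v_c - omega_c * wheel_spacing / 2.0
    return v_l, v_r
```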

Regarding speed: because the minimum turning radius is limited, an obstacle appearing inside the intersection region of the left and right minimum-turning-radius arcs cannot be avoided, and the driverless vehicle must then brake. The intersection of the two minimum-turning-radius trajectories in the image resembles a triangle, as shown in Figure 19; a triangular region is easy to search, and when an obstacle appears within this region of the binarized traversable-space map, the vehicle speed is commanded to zero. To guard against braking too late, the intersection region is enlarged, and deceleration is applied according to the presence of obstacles within this enlarged region.

4. Driverless-vehicle navigation control experiments: the experimental system for the dual-image obstacle avoidance path planning navigation control method of this patent is shown in Figure 20. The experimental vehicle is an electric vehicle whose left and right front wheels are driven by motors, and the vehicle body carries two cameras (21) whose images are obtained through a capture device. First, the parallel left and right cameras (21) acquire images of the environment; a computer processes the images to produce the disparity map, identifies the traversable space, and, after the path planning algorithm (12) has decided the driving direction and speed, sends commands to the motor control circuit and the motor feedback signal processing interface circuit, which compute in real time the rotational speeds the left and right wheel motors must reach and output control voltage signals to the left and right wheel motor drivers, achieving the forward and steering control of the driverless vehicle. During navigation, the cameras (21) move with the driverless vehicle through the environment and update its heading, so that even when a new obstacle appears the driving path can be corrected at any time. The path planning tests and navigation tests are described below.

Path planning tests: experiments in different scenes illustrate the functions of this patent. As shown in Figure 21, two scenes are outdoors and one is indoors; the amounts of light entering the left and right cameras (21) in Figure 21 differ, but this has little effect on the disparity maps produced by the method of this patent, and few noise points appear in the matching results. In Figure 22, the disparity map is used to detect the traversable space and then plan the path: the framed regions in the left images are the traversable space detected with the method of this patent, and the line segments in the right images are the paths planned with the method of this patent, the marked extent being the region the vehicle width occupies in the image. The experimental results show that in both indoor and outdoor scenes the method of this patent can detect the traversable space and complete the path planning.

Navigation tests: the purpose of these experiments is to test whether the experimental vehicle can pass multiple obstacles in different environments. The following describes the experimental results of the vehicle completing obstacle avoidance path planning with the algorithms of this patent.

Experimental site 1 is an indoor environment with many obstacles, as shown in Figure 23; the experimental vehicle must avoid the obstacles and pass through a door 0.75 m wide. The body of the experimental vehicle is 0.6 m wide; the vehicle width was set by adjusting parameters in the program, and the ratio of the camera (21) position to the vehicle width was adjusted according to where the cameras (21) were mounted on the vehicle. After these corrections, the experimental vehicle passed smoothly through the narrow door. During the experiment, the values of the left and right wheel motor encoders were recorded to plot the vehicle's trajectory, shown as the dashed line in Figure 23. In addition, a sheet of cardboard was placed flat against the ground; since the method of this patent identifies the traversable space chiefly by stereo vision, it should be undisturbed by cardboard on the ground and should drive straight over it, and the experimental results confirm that the method of this patent has this capability.

Experimental site 2 is a corridor inside a building, as shown in Figure 24, with a walkway 3 m wide and 13.5 m long on which several obstacles were placed. The experimental vehicle detected the traversable space from the disparity maps matched from the binocular cameras (21) and planned its route through the corridor with the path planning algorithm (12) of this patent, continually correcting its heading as walls and obstacles blocked it. The experiments show that the vehicle found a feasible path through the 13.5 m obstacle zone and passed through it smoothly.

Experimental site 3 is an outdoor asphalt road 5 m long and 3 m wide; in the open site there are no walls, and several obstacles were placed on the road. The experimental vehicle drove on the asphalt surface and avoided every obstacle smoothly; the results are shown in Figure 25.

Experimental site 4 is the same as site 3 with the same obstacle placement, but the initial position of the vehicle was changed and a pedestrian was introduced as a moving obstacle; the results are shown in Figure 26. Because the open site has no surrounding walls, the experimental vehicle could easily leave the obstacle zone, and when it met the pedestrian moving at the roadside it steered correctly, avoiding the pedestrian and heading toward the unobstructed area.

Tests in these varied experimental environments demonstrate that the dual-image obstacle avoidance path planning navigation control method proposed in this patent can steer a vehicle successfully around obstacles of different types and safely through obstacle zones, so the method of this patent can be applied in driverless navigation systems.

5. Conclusion: the dual-image obstacle avoidance path planning navigation control method of this patent captures images of the moving environment through two cameras (21), obtains disparity maps through image processing, detects the traversable space from the disparity maps, plans the navigation path, and determines the vehicle speed and steering from the vehicle width, controlling the vehicle so as to achieve driverless operation.

The foregoing is merely one preferred embodiment of the present invention and is not intended to limit the scope of its practice; all equivalent changes and modifications made within the scope of the claims of the present invention are covered by the scope of this patent.


Claims (2)

1. A dual-image obstacle avoidance path planning navigation control method, which mainly uses a binocular visual matching method and a purpose-designed obstacle avoidance algorithm to process the disparity map and obtain the barrier-free space, wherein the matching method replaces the brightness difference with the brightness-gradient difference and adds a smoothness term for the disparity map, processing the left and right images to obtain the disparity map.

2. A dual-image obstacle avoidance path planning navigation control method, in which images are captured through binocular cameras and processed into a disparity map, then handled with a moving-average scheme: the images are first converted to gradients and, assuming the image sequence is continuous, the previously generated disparity map and the two gradient images are used to compute a new disparity map; after the disparity map is obtained, the traversable space is detected from the linear characteristic of the ground-image disparity, using the geometric relationship between the ground and the cameras to find the signature of the ground in the disparity map and thereby detect walkable ground.
TW103107700A 2014-03-06 2014-03-06 Dual Image Obstacle Avoidance Path Planning Navigation Control Method TWI532619B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW103107700A TWI532619B (en) 2014-03-06 2014-03-06 Dual Image Obstacle Avoidance Path Planning Navigation Control Method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW103107700A TWI532619B (en) 2014-03-06 2014-03-06 Dual Image Obstacle Avoidance Path Planning Navigation Control Method

Publications (2)

Publication Number Publication Date
TW201534512A (en) 2015-09-16
TWI532619B TWI532619B (en) 2016-05-11

Family

ID=54695077

Family Applications (1)

Application Number Title Priority Date Filing Date
TW103107700A TWI532619B (en) 2014-03-06 2014-03-06 Dual Image Obstacle Avoidance Path Planning Navigation Control Method

Country Status (1)

Country Link
TW (1) TWI532619B (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI831176B (en) * 2022-04-08 2024-02-01 鴻海精密工業股份有限公司 Method and device for processing images, computer device and storage medium

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105869166A (en) * 2016-03-29 2016-08-17 北方工业大学 Human body action identification method and system based on binocular vision
CN105869166B (en) * 2016-03-29 2018-07-10 北方工业大学 A kind of human motion recognition method and system based on binocular vision
WO2019015158A1 (en) * 2017-07-21 2019-01-24 歌尔科技有限公司 Obstacle avoidance method for unmanned aerial vehicle, and unmanned aerial vehicle
CN108052111A (en) * 2017-10-24 2018-05-18 南京奇蛙智能科技有限公司 A kind of unmanned plane Real-time Obstacle Avoidance Method based on binocular vision technology
TWI657011B (en) * 2017-11-30 2019-04-21 財團法人工業技術研究院 Unmanned aerial vehicle, control system for unmanned aerial vehicle and control method thereof
CN109857144A (en) * 2017-11-30 2019-06-07 财团法人工业技术研究院 Unmanned plane, unmanned aerial vehicle control system and control method
US10703479B2 (en) 2017-11-30 2020-07-07 Industrial Technology Research Institute Unmanned aerial vehicle, control systems for unmanned aerial vehicle and control method thereof
CN109857144B (en) * 2017-11-30 2022-01-14 财团法人工业技术研究院 Unmanned aerial vehicle, unmanned aerial vehicle control system and control method

Also Published As

Publication number Publication date
TWI532619B (en) 2016-05-11


Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees