TWI774543B - Obstacle detection method - Google Patents

Obstacle detection method

Info

Publication number
TWI774543B
Authority
TW
Taiwan
Prior art keywords
point cloud
cloud data
image
obstacle
data
Prior art date
Application number
TW110132319A
Other languages
Chinese (zh)
Other versions
TW202311781A (en)
Inventor
張耘菱
蘇一峯
陳盈仁
Original Assignee
財團法人車輛研究測試中心
Priority date
Filing date
Publication date
Application filed by 財團法人車輛研究測試中心
Priority to TW110132319A priority Critical patent/TWI774543B/en
Application granted granted Critical
Publication of TWI774543B publication Critical patent/TWI774543B/en
Publication of TW202311781A publication Critical patent/TW202311781A/en

Landscapes

  • Geophysics And Detection Of Objects (AREA)
  • Image Processing (AREA)

Abstract

An obstacle detection method includes: (A) after receiving a plurality of sets of point cloud data and a plurality of images, for each set of point cloud data, obtaining a to-be-fused image from the images according to the point cloud data; (B) for each to-be-fused image, obtaining a to-be-synthesized image, namely the Nth-previous image captured before the to-be-fused image; (C) for each set of point cloud data, obtaining an integrated image according to the corresponding to-be-fused image and the Nth-previous to-be-synthesized image; (D) for each set of point cloud data, aligning and projecting the point cloud data onto the corresponding integrated image; and (E) for each set of point cloud data, using an obstacle recognition model, according to the aligned and projected point cloud data and the integrated image, to recognize the obstacle type of each obstacle located ahead.

Description

Obstacle detection method

The present invention relates to an obstacle detection method, and more particularly to an obstacle detection method that fuses different sensors.

Mobile vehicles are being equipped with more and more sensors and intelligent algorithms to keep strengthening their environmental perception and maneuverability. In many fields, including factories where autonomous mobile robots (AMRs) handle warehouse transport and self-driving cars that must detect obstacles on the road in order to stop or evade, multiple sensors such as radar, lidar, and cameras are required for obstacle detection and avoidance. When a mobile vehicle travels along an arbitrary path, the foremost requirements are avoiding obstacles and identifying their types.

Each type of sensing element has its own strengths, so heterogeneous sensor fusion is now widely applied on vehicle platforms. However, the sensing data and sampling periods of the sensors all differ: for example, the radar obtains one frame of radar data every 50 ms, the lidar obtains one set of point cloud data every 100 ms, and the camera captures one image every 33 ms. When the sensors differ greatly, their data cannot be synchronized in time and space and the nature of an obstacle cannot be determined, so a solution is clearly needed.

Therefore, an object of the present invention is to provide an obstacle detection method that synchronizes multiple sensors in space and time while fusing them and that recognizes obstacle types.

Accordingly, the obstacle detection method of the present invention is implemented by a computing device electrically connected to an image capture device and a lidar module mounted on a mobile vehicle. The image capture device is used to capture a series of images of a plurality of obstacles located in front of the mobile vehicle, and the lidar module is used to obtain a series of sets of point cloud data covering the region in front of the mobile vehicle and containing the obstacles. The obstacle detection method includes the following steps:

(A) after receiving the sets of point cloud data from the lidar module and the images from the image capture device, for each set of point cloud data, obtaining, from the images captured by the image capture device and according to the point cloud data, a to-be-fused image corresponding to the point cloud data;

(B) for each to-be-fused image, obtaining a to-be-synthesized image, namely the Nth-previous image captured before the to-be-fused image, where N ≧ 3;

(C) for each set of point cloud data, obtaining an integrated image corresponding to the point cloud data according to the corresponding to-be-fused image and the Nth-previous to-be-synthesized image;

(D) for each set of point cloud data, aligning and projecting the point cloud data onto the corresponding integrated image; and

(E) for each set of point cloud data, using an obstacle recognition model, according to the point cloud data aligned and projected in step (D) and the integrated image, to recognize the obstacle type of each obstacle located in front of the mobile vehicle.

The effect of the present invention is as follows: the computing device obtains, from the images captured by the image capture device, the to-be-fused image corresponding to each set of point cloud data, obtains the integrated image from the corresponding to-be-fused image and the Nth-previous to-be-synthesized image, and aligns and projects the point cloud data onto the corresponding integrated image, thereby achieving spatial and temporal synchronization when fusing the sensing data of the lidar module and the image capture device. In addition, by applying the obstacle recognition model to the aligned and projected point cloud data and integrated image, the obstacle type of each obstacle is recognized, achieving the purpose of identifying obstacles.

Referring to FIG. 1, an embodiment of the obstacle detection method of the present invention is implemented by an obstacle detection system mounted on a mobile vehicle. The obstacle detection system includes an image capture device 11, a lidar module 12, and a radar module 13 mounted on the mobile vehicle, and a computing device 14 electrically connected to the image capture device 11, the lidar module 12, and the radar module 13.

The image capture device 11 continuously captures a series of images of the obstacles located in front of the mobile vehicle. In this embodiment, the image capture device 11 is, for example, a camera that captures one image every 33 ms.

The lidar module 12 continuously obtains a series of sets of point cloud data covering the region in front of the mobile vehicle and containing the obstacles. In this embodiment, the lidar module 12 is, for example, a lidar sensor that obtains one set of point cloud data every 100 ms. Each set of point cloud data obtained in a 100 ms cycle is the data scanned from right to left by the lidar module 12 during a preset period from a first scan time point to an Nth scan time point.

The radar module 13 continuously obtains a series of radar data covering the region in front of the mobile vehicle and containing the obstacles. In this embodiment, the radar module 13 is, for example, a radar sensor that obtains one frame of radar data every 50 ms.

The computing device 14 is, for example, a processor, a microprocessor, or another chip capable of performing computation.

The operation of each element of the obstacle detection system is explained below through an embodiment of the obstacle detection method of the present invention, which includes a radar fusion procedure and an image fusion procedure.

Referring to FIG. 1 and FIG. 2, the radar fusion procedure of this embodiment of the obstacle detection method of the present invention includes the following steps.

In step 21, after receiving the sets of point cloud data from the lidar module 12 and the radar data from the radar module 13, the computing device 14 obtains, for each set of point cloud data and according to the point cloud data, a set of to-be-fused radar data corresponding to the point cloud data from the radar data obtained by the radar module 13.

Notably, step 21 includes the following sub-steps (see FIG. 3).

In sub-step 211, after receiving the sets of point cloud data from the lidar module 12 and the radar data from the radar module 13, the computing device 14 extracts, for each set of point cloud data and according to the point cloud data, a first candidate radar data and a second candidate radar data from the radar data obtained by the radar module 13. The acquisition time of the first candidate radar data is earlier than the acquisition time of the point cloud data and has the shortest time interval from it, and the second candidate radar data is the radar data immediately preceding the first candidate radar data.

In sub-step 212, for each set of point cloud data, the computing device 14 obtains, according to the first candidate radar data corresponding to the point cloud data, a first specific-obstacle parameter corresponding to a specific obstacle among the obstacles located in front of the mobile vehicle. In this embodiment, the first specific-obstacle parameter is a first specific-obstacle distance between the mobile vehicle and the specific obstacle; in other embodiments, it may instead be a first specific-obstacle position or a first specific-obstacle velocity of the specific obstacle, and is not limited thereto.

In sub-step 213, for each set of point cloud data, the computing device 14 obtains, according to the second candidate radar data corresponding to the point cloud data, a second specific-obstacle parameter corresponding to the specific obstacle. In this embodiment, the second specific-obstacle parameter is a second specific-obstacle distance between the mobile vehicle and the specific obstacle; in other embodiments, it may instead be a second specific-obstacle position or a second specific-obstacle velocity of the specific obstacle, and is not limited thereto.

In sub-step 214, for each set of point cloud data, the computing device 14 calculates an estimated obstacle parameter f_R(t) of the specific obstacle at the acquisition time t of the point cloud data, according to the acquisition time t of the point cloud data, the acquisition time t₋₁ of the corresponding first candidate radar data, the acquisition time t₋₂ of the corresponding second candidate radar data, the first specific-obstacle parameter f_R(t₋₁), and the second specific-obstacle parameter f_R(t₋₂), using the following formula (1). In this embodiment, the estimated obstacle parameter is an estimated obstacle distance between the mobile vehicle and the specific obstacle; in other embodiments, it may instead be an estimated obstacle position or an estimated obstacle velocity of the specific obstacle, and is not limited thereto.

f_R(t) = f_R(t₋₁) + [(f_R(t₋₁) - f_R(t₋₂)) / (t₋₁ - t₋₂)] × (t - t₋₁) + ε …(1)

where ε is an error value.
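To make sub-step 214 concrete, the following is a minimal Python sketch of the extrapolation in formula (1); the function name, units, and sample values are illustrative assumptions, not part of the patented method.

```python
def estimate_obstacle_parameter(t, t1, t2, f1, f2, eps=0.0):
    """Extrapolate an obstacle parameter (e.g., distance) observed in two
    earlier radar frames at times t2 < t1 to the lidar acquisition time t,
    per formula (1)."""
    slope = (f1 - f2) / (t1 - t2)       # rate of change between the two radar samples
    return f1 + slope * (t - t1) + eps  # extrapolate forward to time t

# Example (assumed values): distances of 12.4 m at 0.35 s and 12.0 m at 0.40 s,
# extrapolated to a lidar frame acquired at 0.43 s.
print(round(estimate_obstacle_parameter(0.43, 0.40, 0.35, 12.0, 12.4), 2))  # 11.76
```

The same routine serves sub-step 514 below, with the capture times of the two candidate images in place of the radar acquisition times.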

In sub-step 215, for each set of point cloud data, the computing device 14 converts the estimated obstacle parameter into an estimated acquisition interval according to a first conversion rule from an obstacle parameter (e.g., distance) to an acquisition interval. Specifically, the first conversion rule first calculates, from the estimated obstacle parameter, a corresponding estimated acquisition time using a first conversion function from obstacle parameter to time, and then, given the estimated acquisition time and a plurality of radar sampling intervals, takes the radar sampling interval into which the estimated acquisition time falls as the estimated acquisition interval. The first conversion function is obtained from the acquisition time of each frame of radar data and the obstacle parameter of the specific obstacle that it indicates; the time span from the acquisition time point of one radar frame up to, but not including, the acquisition time point of the next radar frame constitutes one radar sampling interval. FIG. 4 illustrates the first conversion function 41 and the radar sampling intervals 42.

In sub-step 216, for each set of point cloud data, the computing device 14 obtains, from the radar data obtained by the radar module 13 and according to the estimated acquisition interval, the radar data corresponding to that interval as the to-be-fused radar data. The radar data corresponding to the estimated acquisition interval is the radar data whose acquisition time is the starting time point of that interval.
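A sketch of how sub-steps 215 and 216 could be realized together, assuming the first conversion function is represented by the per-frame (acquisition time, obstacle parameter) samples; the helper name and the piecewise-linear inversion are hypothetical choices, not mandated by the text.

```python
import bisect

def pick_frame_for_estimate(frame_times, frame_params, estimate):
    """Convert an estimated obstacle parameter to an estimated acquisition
    time via the (time -> parameter) samples, then return the index of the
    radar frame that opens the sampling interval containing that time."""
    for i in range(len(frame_times) - 1):
        lo, hi = sorted((frame_params[i], frame_params[i + 1]))
        if lo <= estimate <= hi:  # this segment brackets the estimate
            ratio = (estimate - frame_params[i]) / (frame_params[i + 1] - frame_params[i])
            t_est = frame_times[i] + ratio * (frame_times[i + 1] - frame_times[i])
            # Each span [t_i, t_{i+1}) is one sampling interval; the frame at
            # its starting time point is taken as the to-be-fused radar data.
            return bisect.bisect_right(frame_times, t_est) - 1
    return len(frame_times) - 1  # estimate outside all segments: use latest frame

times = [0.30, 0.35, 0.40]   # radar acquisition times (s), assumed
dists = [12.8, 12.4, 12.0]   # distance to the specific obstacle (m), assumed
print(pick_frame_for_estimate(times, dists, 12.2))  # 1 -> frame acquired at 0.35 s
```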

Notably, the radar module 13 obtains one frame of radar data every 50 ms while the lidar module 12 obtains one set of point cloud data every 100 ms, and the two modules may also start acquiring data at slightly different times, so their acquisition times may not be synchronized. Through sub-steps 211 to 216, the present invention takes the acquisition time of each set of point cloud data from the lidar module 12 as the reference for obtaining the corresponding to-be-fused radar data, so that the obtained to-be-fused radar data is synchronized in time with the corresponding point cloud data as closely as possible.

In step 22, for each set of point cloud data, the computing device 14 locates the obstacles in the point cloud data using an oriented bounding box (OBB) algorithm. In this embodiment, the oriented bounding box algorithm locates the obstacles based on principal component analysis.
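A minimal sketch of a PCA-based oriented bounding box for a single obstacle cluster, assuming the cluster has already been segmented from the point cloud and projected to the ground plane; this illustrates the idea rather than the exact implementation.

```python
import numpy as np

def oriented_bounding_box(points):
    """Fit an oriented bounding box to a 2D cluster via principal
    component analysis; returns (center, axes, half_extents)."""
    mean = points.mean(axis=0)
    cov = np.cov((points - mean).T)            # 2x2 covariance of the cluster
    _, eigvecs = np.linalg.eigh(cov)           # principal directions
    axes = eigvecs.T                           # each row is one box axis
    local = (points - mean) @ axes.T           # cluster in the box frame
    lo, hi = local.min(axis=0), local.max(axis=0)
    center = mean + ((lo + hi) / 2) @ axes     # true box center in world frame
    return center, axes, (hi - lo) / 2

cluster = np.array([[1.0, 1.0], [2.0, 2.1], [3.0, 3.0], [2.5, 2.4]])
center, axes, half_extents = oriented_bounding_box(cluster)
print(center, half_extents)
```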

In step 23, the computing device 14 uses a Kalman filter to track the obstacles located in each set of point cloud data, thereby obtaining the obstacle distance between the mobile vehicle and each obstacle located in front of it.
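The tracking in steps 23 and 25 can be pictured as one small constant-velocity Kalman filter per obstacle; the state layout and noise values below are illustrative assumptions.

```python
import numpy as np

class DistanceTracker:
    """1D constant-velocity Kalman filter over an obstacle's distance."""

    def __init__(self, d0, dt=0.1):                 # dt matches the 100 ms lidar cycle
        self.x = np.array([d0, 0.0])                # state: [distance, rate]
        self.P = np.eye(2)                          # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.H = np.array([[1.0, 0.0]])             # only distance is measured
        self.Q = np.eye(2) * 1e-3                   # process noise (assumed)
        self.R = np.array([[0.25]])                 # measurement noise (assumed)

    def update(self, z):
        self.x = self.F @ self.x                    # predict
        self.P = self.F @ self.P @ self.F.T + self.Q
        y = z - self.H @ self.x                     # innovation
        K = self.P @ self.H.T @ np.linalg.inv(self.H @ self.P @ self.H.T + self.R)
        self.x = self.x + K @ y                     # correct
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x                               # filtered [distance, rate]

tracker = DistanceTracker(d0=12.0)
for z in (11.8, 11.5, 11.3):                        # per-frame OBB distances
    print(tracker.update(z))
```

Running the same tracker over the boxes located in the to-be-fused radar data (steps 24 and 25 below) yields the rate component as the obstacle velocity.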

In step 24, for each set of to-be-fused radar data, the computing device 14 locates the obstacles in the to-be-fused radar data using the oriented bounding box algorithm.

In step 25, the computing device 14 uses the Kalman filter to track the obstacles located in each set of to-be-fused radar data, thereby obtaining the obstacle velocity of each obstacle located in front of the mobile vehicle.

Referring to FIG. 1 and FIG. 5, the image fusion procedure of this embodiment of the obstacle detection method of the present invention includes the following steps.

In step 51, after receiving the sets of point cloud data from the lidar module 12 and the images from the image capture device 11, the computing device 14 obtains, for each set of point cloud data and according to the point cloud data, a to-be-fused image corresponding to the point cloud data from the images captured by the image capture device 11.

Notably, step 51 includes the following sub-steps (see FIG. 6).

In sub-step 511, after receiving the sets of point cloud data from the lidar module 12 and the images from the image capture device 11, the computing device 14 extracts, for each set of point cloud data and according to the point cloud data, a first candidate image and a second candidate image from the images captured by the image capture device 11. The capture time of the first candidate image is earlier than the acquisition time of the point cloud data and has the shortest time interval from it, and the second candidate image is the image immediately preceding the first candidate image.

In sub-step 512, for each set of point cloud data, the computing device 14 obtains, according to the first candidate image corresponding to the point cloud data, a third specific-obstacle parameter corresponding to the specific obstacle among the obstacles located in front of the mobile vehicle. In this embodiment, the third specific-obstacle parameter is a third specific-obstacle distance between the mobile vehicle and the specific obstacle; in other embodiments, it may instead be a third specific-obstacle position or a third specific-obstacle velocity of the specific obstacle, and is not limited thereto. When the third specific-obstacle parameter is the third specific-obstacle distance or the third specific-obstacle velocity, the computing device 14 also needs the focal length of the image capture device 11 to obtain it.

In sub-step 513, for each set of point cloud data, the computing device 14 obtains, according to the second candidate image corresponding to the point cloud data, a fourth specific-obstacle parameter corresponding to the specific obstacle. In this embodiment, the fourth specific-obstacle parameter is a fourth specific-obstacle distance between the mobile vehicle and the specific obstacle; in other embodiments, it may instead be a fourth specific-obstacle position or a fourth specific-obstacle velocity of the specific obstacle, and is not limited thereto. When the fourth specific-obstacle parameter is the fourth specific-obstacle distance or the fourth specific-obstacle velocity, the computing device 14 also needs the focal length of the image capture device 11 to obtain it.

In sub-step 514, for each set of point cloud data, the computing device 14 calculates another estimated obstacle parameter f_I(t) of the specific obstacle at the acquisition time t of the point cloud data, according to the acquisition time t of the point cloud data, the capture time t₋₁ of the corresponding first candidate image, the capture time t₋₂ of the corresponding second candidate image, the third specific-obstacle parameter f_I(t₋₁), and the fourth specific-obstacle parameter f_I(t₋₂), using the following formula (2). In this embodiment, this estimated obstacle parameter is another estimated obstacle distance between the mobile vehicle and the specific obstacle; in other embodiments, it may instead be another estimated obstacle position or another estimated obstacle velocity of the specific obstacle, and is not limited thereto.

f_I(t) = f_I(t₋₁) + [(f_I(t₋₁) - f_I(t₋₂)) / (t₋₁ - t₋₂)] × (t - t₋₁) + ε …(2)

where ε is an error value.

In sub-step 515, for each set of point cloud data, the computing device 14 converts the other estimated obstacle parameter into an estimated capture interval according to a second conversion rule from an obstacle parameter (e.g., distance) to a capture interval. Specifically, the second conversion rule first calculates, from the other estimated obstacle parameter, a corresponding estimated capture time using a second conversion function from obstacle parameter to time, and then, given the estimated capture time and a plurality of image sampling intervals, takes the image sampling interval into which the estimated capture time falls as the estimated capture interval. The second conversion function is obtained from the capture time of each image and the obstacle parameter of the specific obstacle that it indicates; the time span from the capture time point of one image up to, but not including, the capture time point of the next image constitutes one image sampling interval. FIG. 7 illustrates the second conversion function 71 and the image sampling intervals 72.

In sub-step 516, for each set of point cloud data, the computing device 14 obtains, from the images captured by the image capture device 11 and according to the estimated capture interval, the image corresponding to that interval as the to-be-fused image. The image corresponding to the estimated capture interval is the image whose capture time is the starting time point of that interval.

Notably, the image capture device 11 captures one image every 33 ms while the lidar module 12 obtains one set of point cloud data every 100 ms, and the two devices may also start acquiring data at slightly different times, so their acquisition times may not be synchronized. Through sub-steps 511 to 516, the present invention takes the acquisition time of each set of point cloud data from the lidar module 12 as the reference for obtaining the corresponding to-be-fused image, so that the obtained to-be-fused image is synchronized in time with the corresponding point cloud data as closely as possible.

In step 52, for each to-be-fused image, the computing device 14 obtains the to-be-synthesized image, namely the Nth-previous image captured before the to-be-fused image, where N ≧ 3. In this embodiment, N is 3.

In step 53, for each set of point cloud data, the computing device 14 obtains an integrated image corresponding to the point cloud data according to the corresponding to-be-fused image and the Nth-previous to-be-synthesized image. In this embodiment, for each set of point cloud data, the computing device 14 composites a portion of the corresponding to-be-fused image (namely, its left half) with a portion of the Nth-previous to-be-synthesized image (namely, its right half) to obtain the integrated image corresponding to the point cloud data. Because the lidar module 12 scans from right to left each time it obtains a set of point cloud data, there is a scan delay between the right half of the scene in front of the mobile vehicle and the left half (that is, the part of the obtained point cloud data corresponding to the left half of the scene is newer, and the part corresponding to the right half is older). By integrating the left half of the corresponding to-be-fused image with the right half of the Nth-previous to-be-synthesized image, the composited integrated image more closely matches the scanning behavior of the lidar module 12, thereby achieving spatial synchronization.
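A sketch of the half-and-half composition in step 53, assuming both frames share the same size and that image columns map left-to-right across the scene; the function name is illustrative.

```python
import numpy as np

def compose_integrated_image(to_fuse, nth_previous):
    """Stitch the left half of the newer to-be-fused image to the right
    half of the image captured N frames earlier, mirroring the lidar's
    right-to-left scan delay."""
    w = to_fuse.shape[1]
    integrated = to_fuse.copy()                        # newer left half kept
    integrated[:, w // 2:] = nth_previous[:, w // 2:]  # older right half pasted in
    return integrated
```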

In step 54, for each set of point cloud data, the computing device 14 aligns and projects the point cloud data onto the corresponding integrated image.

Notably, step 54 includes the following sub-steps (see FIG. 8).

In sub-step 541, for each set of point cloud data, the computing device 14 converts the point cloud data into two-dimensional point cloud data according to a coordinate transformation parameter set that relates the point cloud coordinate system of the point cloud data to the pixel coordinate system of the integrated image. The coordinate transformation parameter set includes an extrinsic parameter matrix and an intrinsic parameter matrix. The extrinsic parameter matrix is obtained from the mounting positions of the lidar module 12 and the image capture device 11; the intrinsic parameter matrix is obtained from the focal length of the image capture device 11 in pixels and the image center coordinates. Since the focus of the present invention is not how the point cloud data is converted into two-dimensional point cloud data, details of the computation can be found at http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html and are not repeated here.
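A sketch of the projection in sub-step 541 using the standard pinhole model, with an assumed 3x4 extrinsic matrix [R|t] (from the mounting positions) and 3x3 intrinsic matrix (from the pixel focal length and image center); matrix layouts follow the calibration reference cited above.

```python
import numpy as np

def project_points(points_lidar, extrinsic, intrinsic):
    """Project (N, 3) lidar points into the pixel coordinate system of
    the integrated image; returns (N, 2) pixel coordinates."""
    n = points_lidar.shape[0]
    homog = np.hstack([points_lidar, np.ones((n, 1))])  # homogeneous coordinates
    cam = homog @ extrinsic.T                           # lidar frame -> camera frame
    pix = cam @ intrinsic.T                             # apply intrinsic parameters
    return pix[:, :2] / pix[:, 2:3]                     # perspective divide
```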

In sub-step 542, for each set of two-dimensional point cloud data, the computing device 14 obtains the edge features of each object in the two-dimensional point cloud data.

In sub-step 543, for the integrated image corresponding to each set of point cloud data, the computing device 14 obtains the edge features of each object in the integrated image.

In sub-step 544, for each set of point cloud data, the computing device 14 aligns the corresponding two-dimensional point cloud data with the corresponding integrated image according to the edge features of each object in the corresponding two-dimensional point cloud data and the edge features of each object in the corresponding integrated image.
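One hypothetical way to realize sub-steps 542 to 544: extract edge maps from both representations and search a small window of pixel offsets for the shift with the greatest edge overlap. Canny edges and an exhaustive shift search are assumptions; the text only requires that object edge features guide the alignment.

```python
import numpy as np
import cv2

def align_by_edges(points_2d, gray_image, search=10):
    """Return the (dx, dy) shift that best aligns rasterized 2D point
    cloud data with the edges of the integrated image."""
    edges = cv2.Canny(gray_image, 100, 200) > 0          # assumes 8-bit grayscale input
    h, w = gray_image.shape
    mask = np.zeros((h, w), dtype=bool)                  # rasterized point cloud
    px = np.clip(points_2d.round().astype(int), 0, [w - 1, h - 1])
    mask[px[:, 1], px[:, 0]] = True
    best_score, best_shift = -1, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
            score = np.count_nonzero(shifted & edges)    # overlap with image edges
            if score > best_score:
                best_score, best_shift = score, (dx, dy)
    return best_shift
```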

In step 55, for each set of point cloud data, the computing device 14 uses an obstacle recognition model, according to the point cloud data aligned and projected in step 54 and the integrated image, to recognize the obstacle type of each obstacle located in front of the mobile vehicle. Notably, the obstacle recognition model is trained with a machine learning algorithm on multiple training samples annotated with obstacle positions and obstacle types; each training sample includes a set of aligned and projected training point cloud data and a training image, both annotated with obstacle positions and obstacle types.

In summary, in the obstacle detection method of the present invention, the computing device 14 obtains the to-be-fused image corresponding to each set of point cloud data from the images captured by the image capture device 11, and obtains the to-be-fused radar data corresponding to each set of point cloud data from the radar data obtained by the radar module 13, so as to synchronize the data obtained by different sensors. In addition, the computing device 14 obtains the integrated image from the corresponding to-be-fused image and the Nth-previous to-be-synthesized image, and aligns and projects the point cloud data onto the corresponding integrated image, so that the point cloud data and the integrated image are fused in space. Finally, by applying the obstacle recognition model to the aligned and projected integrated image and point cloud data, the obstacle type of each obstacle is recognized, achieving the effect of identifying obstacles. The objects of the present invention are thus indeed achieved.

However, the foregoing is merely an embodiment of the present invention and shall not limit the scope of its implementation. All simple equivalent changes and modifications made according to the claims and the content of the specification of the present invention remain within the scope covered by the patent of the present invention.

11: image capture device
12: lidar module
13: radar module
14: computing device
41: first conversion function
42: radar sampling intervals
71: second conversion function
72: image sampling intervals
21~25: steps
51~55: steps
211~216: sub-steps
511~516: sub-steps
541~544: sub-steps

Other features and effects of the present invention will be clearly presented in the embodiments described with reference to the drawings, in which:
FIG. 1 is a block diagram illustrating an obstacle detection system that implements an embodiment of the obstacle detection method of the present invention;
FIG. 2 is a flow chart illustrating a radar fusion procedure of the embodiment;
FIG. 3 is a flow chart illustrating how a computing device obtains the to-be-fused radar data corresponding to a set of point cloud data;
FIG. 4 is a schematic diagram illustrating a first conversion function and a plurality of radar sampling intervals;
FIG. 5 is a flow chart illustrating an image fusion procedure of the embodiment;
FIG. 6 is a flow chart illustrating how the computing device obtains the to-be-fused image corresponding to the point cloud data;
FIG. 7 is a schematic diagram illustrating a second conversion function and a plurality of image sampling intervals; and
FIG. 8 is a flow chart illustrating how the computing device aligns and projects the point cloud data onto the corresponding integrated image.


Claims (8)

1. An obstacle detection method, implemented by a computing device electrically connected to an image capture device and a lidar module mounted on a mobile vehicle, the image capture device being used to continuously capture a series of images of a plurality of obstacles located in front of the mobile vehicle, and the lidar module being used to continuously obtain a series of sets of point cloud data covering the region in front of the mobile vehicle and containing the obstacles, the obstacle detection method comprising the following steps:
(A) after receiving the sets of point cloud data from the lidar module and the images from the image capture device, for each set of point cloud data, obtaining, from the images captured by the image capture device and according to the point cloud data, a to-be-fused image corresponding to the point cloud data;
(B) for each to-be-fused image, obtaining a to-be-synthesized image, namely the Nth-previous image captured before the to-be-fused image, where N ≧ 3;
(C) for each set of point cloud data, obtaining an integrated image corresponding to the point cloud data according to the corresponding to-be-fused image and the Nth-previous to-be-synthesized image;
(D) for each set of point cloud data, aligning and projecting the point cloud data onto the corresponding integrated image, wherein step (D) comprises the following sub-steps:
(D-1) for each set of point cloud data, converting the point cloud data into two-dimensional point cloud data according to a coordinate transformation parameter set that relates the point cloud coordinate system of the point cloud data to the pixel coordinate system of the integrated image;
(D-2) for each set of two-dimensional point cloud data, obtaining edge features of each object in the two-dimensional point cloud data;
(D-3) for the integrated image corresponding to each set of point cloud data, obtaining edge features of each object in the integrated image; and
(D-4) for each set of point cloud data, aligning the corresponding two-dimensional point cloud data with the corresponding integrated image according to the edge features of each object in the corresponding two-dimensional point cloud data and the edge features of each object in the corresponding integrated image; and
(E) for each set of point cloud data, using an obstacle recognition model, according to the point cloud data aligned and projected in step (D) and the integrated image, to recognize an obstacle type corresponding to each obstacle located in front of the mobile vehicle.
2. The obstacle detection method according to claim 1, wherein step (A) comprises the following sub-steps:
(A-1) after receiving the sets of point cloud data from the lidar module and the images from the image capture device, for each set of point cloud data, extracting, from the images captured by the image capture device and according to the point cloud data, a first candidate image and a second candidate image, wherein the capture time of the first candidate image is earlier than the acquisition time of the point cloud data and has the shortest time interval from the acquisition time of the point cloud data, and the second candidate image is the image immediately preceding the first candidate image;
(A-2) for each set of point cloud data, obtaining, according to the first candidate image corresponding to the point cloud data, a first specific-obstacle parameter corresponding to a specific obstacle among the obstacles located in front of the mobile vehicle;
(A-3) for each set of point cloud data, obtaining, according to the second candidate image corresponding to the point cloud data, a second specific-obstacle parameter corresponding to the specific obstacle;
(A-4) for each set of point cloud data, calculating an estimated obstacle parameter of the specific obstacle at the acquisition time of the point cloud data according to the acquisition time of the point cloud data, the capture times of the first candidate image and the second candidate image corresponding to the point cloud data, the first specific-obstacle parameter, and the second specific-obstacle parameter;
(A-5) for each set of point cloud data, converting the estimated obstacle parameter into an estimated capture interval according to a conversion rule from an obstacle parameter to a capture interval; and
(A-6) for each set of point cloud data, obtaining, from the images captured by the image capture device and according to the estimated capture interval, the image corresponding to the estimated capture interval as the to-be-fused image.

3. The obstacle detection method according to claim 2, wherein, in sub-step (A-4), the estimated obstacle parameter f_I(t) is calculated according to the following formula:

f_I(t) = f_I(t₋₁) + [(f_I(t₋₁) - f_I(t₋₂)) / (t₋₁ - t₋₂)] × (t - t₋₁) + ε

where t is the acquisition time of the point cloud data, t₋₁ is the capture time of the first candidate image corresponding to the point cloud data, t₋₂ is the capture time of the second candidate image corresponding to the point cloud data, f_I(t₋₁) is the first specific-obstacle parameter, f_I(t₋₂) is the second specific-obstacle parameter, and ε is an error value.
4. The obstacle detection method according to claim 1, wherein, in step (C), for each set of point cloud data, a portion of the corresponding to-be-fused image is composited with a portion of the Nth-previous to-be-synthesized image to obtain the integrated image corresponding to the point cloud data.

5. The obstacle detection method according to claim 1, further comprising the following steps:
(F) after receiving the sets of point cloud data from the lidar module, for each set of point cloud data, locating the obstacles in the point cloud data using an oriented bounding box algorithm; and
(G) tracking the obstacles located in each set of point cloud data using a Kalman filter to obtain an obstacle distance between the mobile vehicle and each obstacle located in front of the mobile vehicle.

6. The obstacle detection method according to claim 1, wherein the computing device is further electrically connected to a radar module mounted on the mobile vehicle, the radar module being used to continuously obtain a series of radar data covering the region in front of the mobile vehicle and containing the obstacles, the obstacle detection method further comprising the following steps:
(H) after receiving the sets of point cloud data from the lidar module and the radar data from the radar module, for each set of point cloud data, obtaining, from the radar data obtained by the radar module and according to the point cloud data, a set of to-be-fused radar data corresponding to the point cloud data;
(I) for each set of to-be-fused radar data, locating the obstacles in the to-be-fused radar data using an oriented bounding box algorithm; and
(J) tracking the obstacles located in each set of to-be-fused radar data using a Kalman filter to obtain an obstacle velocity corresponding to each obstacle located in front of the mobile vehicle.
7. The obstacle detection method according to claim 6, wherein step (H) comprises the following sub-steps:
(H-1) after receiving the sets of point cloud data from the lidar module and the radar data from the radar module, for each set of point cloud data, extracting, from the radar data obtained by the radar module and according to the point cloud data, a first candidate radar data and a second candidate radar data, wherein the acquisition time of the first candidate radar data is earlier than the acquisition time of the point cloud data and has the shortest time interval from the acquisition time of the point cloud data, and the second candidate radar data is the radar data immediately preceding the first candidate radar data;
(H-2) for each set of point cloud data, obtaining, according to the first candidate radar data corresponding to the point cloud data, a first specific-obstacle parameter corresponding to a specific obstacle among the obstacles located in front of the mobile vehicle;
(H-3) for each set of point cloud data, obtaining, according to the second candidate radar data corresponding to the point cloud data, a second specific-obstacle parameter corresponding to the specific obstacle;
(H-4) for each set of point cloud data, calculating an estimated obstacle parameter of the specific obstacle at the acquisition time of the point cloud data according to the acquisition time of the point cloud data, the acquisition times of the first candidate radar data and the second candidate radar data corresponding to the point cloud data, the first specific-obstacle parameter, and the second specific-obstacle parameter;
(H-5) for each set of point cloud data, converting the estimated obstacle parameter into an estimated acquisition interval according to a conversion rule from an obstacle parameter to an acquisition interval; and
(H-6) for each set of point cloud data, obtaining, from the radar data obtained by the radar module and according to the estimated acquisition interval, the radar data corresponding to the estimated acquisition interval as the to-be-fused radar data.

8. The obstacle detection method according to claim 7, wherein, in sub-step (H-4), the estimated obstacle parameter f_R(t) is calculated according to the following formula:

f_R(t) = f_R(t₋₁) + [(f_R(t₋₁) - f_R(t₋₂)) / (t₋₁ - t₋₂)] × (t - t₋₁) + ε

where t is the acquisition time of the point cloud data, t₋₁ is the acquisition time of the first candidate radar data corresponding to the point cloud data, t₋₂ is the acquisition time of the second candidate radar data corresponding to the point cloud data, f_R(t₋₁) is the first specific-obstacle parameter, f_R(t₋₂) is the second specific-obstacle parameter, and ε is an error value.
TW110132319A 2021-08-31 2021-08-31 Obstacle detection method TWI774543B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110132319A TWI774543B (en) 2021-08-31 2021-08-31 Obstacle detection method

Publications (2)

Publication Number Publication Date
TWI774543B true TWI774543B (en) 2022-08-11
TW202311781A TW202311781A (en) 2023-03-16

Family

ID=83807341

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110132319A TWI774543B (en) 2021-08-31 2021-08-31 Obstacle detection method

Country Status (1)

Country Link
TW (1) TWI774543B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI824789B (en) * 2022-10-21 2023-12-01 國立臺北科技大學 Hybrid system for vehicle detection and hybrid method for vehicle detection

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW202020734A (en) * 2018-11-29 2020-06-01 財團法人工業技術研究院 Vehicle, vehicle positioning system, and vehicle positioning method
TW202124190A (en) * 2019-12-27 2021-07-01 財團法人工業技術研究院 3d image labeling method based on labeling information of 2d image and 3d image labeling device

Also Published As

Publication number Publication date
TW202311781A (en) 2023-03-16
