TW202311781A - Obstacle detection method utilizing an obstacle recognition model to recognize obstacle category corresponding to each obstacle


Info

Publication number
TW202311781A
TW202311781A (application TW110132319A)
Authority
TW
Taiwan
Prior art keywords
point cloud
cloud data
obstacle
image
data
Prior art date
Application number
TW110132319A
Other languages
Chinese (zh)
Other versions
TWI774543B (en)
Inventor
張耘菱
蘇一峯
陳盈仁
Original Assignee
財團法人車輛研究測試中心 (Automotive Research & Testing Center, ARTC)
Priority date
Filing date
Publication date
Application filed by 財團法人車輛研究測試中心
Priority to TW110132319A (granted as TWI774543B)
Application granted granted Critical
Publication of TWI774543B publication Critical patent/TWI774543B/en
Publication of TW202311781A publication Critical patent/TW202311781A/en

Landscapes

  • Geophysics And Detection Of Objects (AREA)
  • Image Processing (AREA)

Abstract

An obstacle detection method comprises the steps of: (A) after receiving a plurality of point cloud data and a plurality of images, for each point cloud data, obtaining an image to be fused from the images according to the point cloud data; (B) for each image to be fused, obtaining an Nth-previous image to be synthesized that was photographed earlier than the image to be fused; (C) for each point cloud data, obtaining an integrated image according to the corresponding image to be fused and the Nth-previous image to be synthesized; (D) for each point cloud data, aligning and projecting the point cloud data onto the corresponding integrated image; and (E) for each point cloud data, utilizing an obstacle recognition model to recognize the obstacle category corresponding to each obstacle located in front of the mobile vehicle according to the aligned and projected point cloud data and the integrated image.

Description

Obstacle Detection Method

The present invention relates to an obstacle detection method, and more particularly to an obstacle detection method that fuses data from different sensors.

Mobile vehicles are being equipped with ever more sensors and intelligent algorithms to enhance their environmental perception and maneuverability. Applications in many fields, including autonomous mobile robots (AMRs) used for warehouse handling in factories and self-driving cars that must detect obstacles on the road in order to stop or swerve, all rely on multiple sensors such as radar, LiDAR, and cameras for detection and obstacle avoidance. When a mobile vehicle travels along an arbitrary path, its primary requirements are to avoid obstacles and to identify their types.

Each sensing element has its own strengths, and heterogeneous sensor fusion is now widely deployed on vehicle platforms. However, the sensing data and sampling times of the sensors differ: for example, a radar produces a frame of radar data every 50 ms, a LiDAR produces a frame of point cloud data every 100 ms, and a camera captures an image every 33 ms. When the sensors differ greatly, their data cannot be synchronized in time and space and the nature of an obstacle cannot be determined, so a solution is needed.

Therefore, an object of the present invention is to provide an obstacle detection method that synchronizes multiple sensors in space and time while fusing their data and that identifies the category of each obstacle.

Accordingly, the obstacle detection method of the present invention is implemented by a computing device electrically connected to an image capture device and a LiDAR module mounted on a mobile vehicle. The image capture device captures a series of images of a plurality of obstacles located in front of the mobile vehicle, and the LiDAR module obtains a series of point cloud data covering the area in front of the mobile vehicle and containing the obstacles. The obstacle detection method comprises the following steps:

(A) after receiving the point cloud data from the LiDAR module and the images from the image capture device, for each point cloud data, obtaining, according to the point cloud data, an image to be fused corresponding to the point cloud data from the images captured by the image capture device;

(B) for each image to be fused, obtaining the Nth-previous image to be synthesized, photographed earlier than the image to be fused, where N ≧ 3;

(C) for each point cloud data, obtaining an integrated image corresponding to the point cloud data according to the corresponding image to be fused and the Nth-previous image to be synthesized;

(D) for each point cloud data, aligning and projecting the point cloud data onto the corresponding integrated image; and

(E) for each point cloud data, utilizing an obstacle recognition model to identify, according to the point cloud data aligned and projected in step (D) and the integrated image, the obstacle category corresponding to each obstacle located in front of the mobile vehicle.

The effect of the present invention is as follows: the computing device obtains, from the images captured by the image capture device, the image to be fused corresponding to each point cloud data, derives the integrated image from the corresponding image to be fused and the Nth-previous image to be synthesized, and aligns and projects the point cloud data onto the corresponding integrated image, thereby synchronizing the LiDAR module and the image capture device in both space and time when their sensing data are fused. Furthermore, by applying the obstacle recognition model to the aligned and projected point cloud data and integrated image, the obstacle category of each obstacle is identified, achieving the goal of obstacle recognition.

Referring to FIG. 1, an embodiment of the obstacle detection method of the present invention is implemented by an obstacle detection system mounted on a mobile vehicle. The obstacle detection system includes an image capture device 11, a LiDAR module 12, a radar module 13, and a computing device 14 electrically connected to the image capture device 11, the LiDAR module 12, and the radar module 13.

The image capture device 11 continuously captures a series of images of a plurality of obstacles located in front of the mobile vehicle. In this embodiment, the image capture device 11 is, for example, a camera, and captures one image every 33 ms.

The LiDAR module 12 continuously obtains a series of point cloud data covering the area in front of the mobile vehicle and containing the obstacles. In this embodiment, the LiDAR module 12 is, for example, a LiDAR sensor, and obtains one frame of point cloud data every 100 ms. Each frame is the point cloud scanned from right to left by the LiDAR module 12 during a preset period running from a first scanning time point to an Nth scanning time point.

The radar module 13 continuously obtains a series of radar data covering the area in front of the mobile vehicle and containing the obstacles. In this embodiment, the radar module 13 is, for example, a radar sensor, and obtains one frame of radar data every 50 ms.

The computing device 14 is, for example, a processor, a microprocessor, or another chip capable of performing computation.

The operation of each component of the obstacle detection system is described below through an embodiment of the obstacle detection method of the present invention, which comprises a radar fusion procedure and an image fusion procedure.

Referring to FIG. 1 and FIG. 2, the radar fusion procedure of the embodiment of the obstacle detection method of the present invention comprises the following steps.

In step 21, after receiving the point cloud data from the LiDAR module 12 and the radar data from the radar module 13, the computing device 14 obtains, for each point cloud data and according to that point cloud data, a radar data to be fused corresponding to the point cloud data from the radar data obtained by the radar module 13.

It is worth mentioning that step 21 further comprises the following sub-steps (see FIG. 3).

In sub-step 211, after receiving the point cloud data from the LiDAR module 12 and the radar data from the radar module 13, the computing device 14 selects, for each point cloud data and according to that point cloud data, a first candidate radar data and a second candidate radar data from the radar data obtained by the radar module 13. The first candidate radar data is the one whose acquisition time is earlier than, and closest to, the acquisition time of the point cloud data; the second candidate radar data is the radar data immediately preceding the first candidate radar data.
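
A minimal sketch of this candidate selection, assuming radar acquisition timestamps are kept in a sorted list (the function and variable names are illustrative, not from the patent):

```python
from bisect import bisect_left

def pick_candidate_times(radar_times, t_cloud):
    """Sub-step 211 sketch: return (t1, t2), where t1 is the acquisition
    time of the first candidate radar data (the latest time strictly
    earlier than the point cloud acquisition time t_cloud) and t2 is the
    acquisition time of the radar data immediately before it.

    radar_times: sorted list of radar acquisition timestamps (seconds).
    """
    i = bisect_left(radar_times, t_cloud)  # first index with time >= t_cloud
    if i < 2:
        raise ValueError("need two radar frames earlier than the point cloud")
    return radar_times[i - 1], radar_times[i - 2]
```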

In sub-step 212, for each point cloud data, the computing device 14 obtains, according to the first candidate radar data corresponding to the point cloud data, a first specific obstacle parameter corresponding to a specific obstacle among the obstacles located in front of the mobile vehicle. In this embodiment, the first specific obstacle parameter is a first specific obstacle distance between the mobile vehicle and the specific obstacle; however, in other embodiments, it may instead be a first specific obstacle position or a first specific obstacle speed of the specific obstacle, and is not limited thereto.

In sub-step 213, for each point cloud data, the computing device 14 obtains, according to the second candidate radar data corresponding to the point cloud data, a second specific obstacle parameter corresponding to the specific obstacle. In this embodiment, the second specific obstacle parameter is a second specific obstacle distance between the mobile vehicle and the specific obstacle; however, in other embodiments, it may instead be a second specific obstacle position or a second specific obstacle speed of the specific obstacle, and is not limited thereto.

In sub-step 214, for each point cloud data, the computing device 14 computes an estimated obstacle parameter $\hat{P}$ of the specific obstacle at the acquisition time $t$ of the point cloud data using formula (1) below, according to the acquisition time $t$ of the point cloud data, the acquisition time $t_1$ of the corresponding first candidate radar data, the acquisition time $t_2$ of the corresponding second candidate radar data, the first specific obstacle parameter $P_1$, and the second specific obstacle parameter $P_2$. In this embodiment, the estimated obstacle parameter is an estimated obstacle distance between the mobile vehicle and the specific obstacle; however, in other embodiments, it may instead be an estimated obstacle position or an estimated obstacle speed of the specific obstacle, and is not limited thereto.

$$\hat{P} = P_1 + \frac{P_1 - P_2}{t_1 - t_2}\,(t - t_1) + \varepsilon \qquad (1)$$

where $\varepsilon$ is an error value.
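
Formula (1) is a two-point linear extrapolation, so it translates directly into code; the sketch below is illustrative, with the error term left as a caller-supplied constant:

```python
def extrapolate_parameter(t, t1, p1, t2, p2, eps=0.0):
    """Formula (1): extrapolate an obstacle parameter observed at times
    t2 < t1 (with values p2 and p1) to the point cloud acquisition time t."""
    return p1 + (p1 - p2) / (t1 - t2) * (t - t1) + eps

# Example: radar distances of 10.0 m at t2 = 0.00 s and 9.8 m at t1 = 0.05 s,
# extrapolated to a point cloud taken at t = 0.10 s, give 9.6 m.
```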

In sub-step 215, for each point cloud data, the computing device 14 converts the estimated obstacle parameter into an estimated acquisition interval according to the estimated obstacle parameter and a first conversion rule that maps an obstacle parameter (e.g., a distance) to an acquisition interval. In particular, the first conversion rule first computes, from the estimated obstacle parameter, an estimated acquisition time using a first conversion function between obstacle parameter and time, and then, given the estimated acquisition time and a plurality of radar sampling intervals, takes the radar sampling interval into which the estimated acquisition time falls as the estimated acquisition interval. The first conversion function is obtained from the acquisition time of each radar data and the obstacle parameter of the specific obstacle it indicates; the time span from the acquisition time point of one radar data up to, but not including, the acquisition time point of the next radar data constitutes one radar sampling interval. FIG. 4 illustrates the first conversion function 41 and the radar sampling intervals 42.
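
One way to realize this conversion rule, assuming the first conversion function is built by interpolating the logged (acquisition time, parameter) pairs of the specific obstacle and that the parameter varies monotonically over the window (both are assumptions of this sketch, not statements from the patent):

```python
import numpy as np
from bisect import bisect_right

def estimated_acquisition_interval(est_param, times, params):
    """Sub-step 215 sketch: invert the parameter-vs-time relation to get an
    estimated acquisition time, then return the start time of the sampling
    interval [times[i], times[i+1]) that this estimated time falls into.

    times: sorted acquisition times of the radar frames.
    params: the obstacle parameter each of those frames reports.
    """
    order = np.argsort(params)  # np.interp needs ascending x values
    t_est = float(np.interp(est_param,
                            np.asarray(params, dtype=float)[order],
                            np.asarray(times, dtype=float)[order]))
    i = max(bisect_right(times, t_est) - 1, 0)
    return times[i]
```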

In sub-step 216, for each point cloud data, the computing device 14 obtains, from the radar data obtained by the radar module 13 and according to the estimated acquisition interval, the radar data corresponding to the estimated acquisition interval as the radar data to be fused. The radar data corresponding to the estimated acquisition interval is the radar data whose acquisition time is the starting time point of that interval.

It is worth noting that, because the radar module 13 obtains a frame of radar data every 50 ms while the LiDAR module 12 obtains a frame of point cloud data every 100 ms, and because the start times at which the radar module 13 and the LiDAR module 12 begin acquiring data may also differ, the acquisition times of the two modules may not be synchronized. By executing sub-steps 211 to 216, the present invention takes the acquisition time of each point cloud data from the LiDAR module 12 as the reference when obtaining the corresponding radar data to be fused, so that the obtained radar data to be fused is synchronized in time with the corresponding point cloud data as closely as possible.

In step 22, for each point cloud data, the computing device 14 locates the obstacles in the point cloud data using an oriented bounding box algorithm. In this embodiment, the oriented bounding box algorithm locates the obstacles based on principal component analysis.
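
A common PCA-based construction of such an oriented bounding box on the ground plane, sketched under the assumption that the obstacle's points have already been clustered (the patent does not give the exact computation):

```python
import numpy as np

def oriented_bbox_2d(points):
    """Step 22 sketch: PCA-based oriented bounding box of one obstacle
    cluster on the ground plane.

    points: (N, 2) array of LiDAR points belonging to a single cluster.
    Returns (center, extents, axes); axes holds the principal directions
    as columns.
    """
    mean = points.mean(axis=0)
    centered = points - mean
    cov = np.cov(centered.T)                  # 2x2 covariance of the cluster
    _, axes = np.linalg.eigh(cov)             # columns = principal axes
    local = centered @ axes                   # coordinates in the PCA frame
    mins, maxs = local.min(axis=0), local.max(axis=0)
    center = mean + axes @ ((mins + maxs) / 2.0)
    extents = maxs - mins
    return center, extents, axes
```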

In step 23, the computing device 14 uses a Kalman filter to track the obstacles located in each point cloud data, so as to obtain the obstacle distance between the mobile vehicle and each obstacle located in front of the mobile vehicle.

In step 24, for each radar data to be fused, the computing device 14 locates the obstacles in the radar data to be fused using the oriented bounding box algorithm.

In step 25, the computing device 14 uses the Kalman filter to track the obstacles located in each radar data to be fused, so as to obtain the obstacle speed corresponding to each obstacle located in front of the mobile vehicle.
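
Steps 23 and 25 do not specify the filter design; one plausible choice consistent with the text is a constant-velocity Kalman filter per tracked obstacle, which smooths the distance and carries a speed estimate in its state:

```python
import numpy as np

class ConstantVelocityKF:
    """A 1-D constant-velocity Kalman filter as one plausible per-obstacle
    tracker for steps 23 and 25: it smooths the per-frame distance
    measurement and exposes a speed estimate as part of its state."""

    def __init__(self, d0, dt, q=1e-2, r=0.5):
        self.x = np.array([d0, 0.0])                 # state: [distance, speed]
        self.P = np.eye(2)                           # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.Q = q * np.eye(2)                       # process noise
        self.H = np.array([[1.0, 0.0]])              # only distance is measured
        self.R = np.array([[r]])                     # measurement noise

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the new distance measurement z
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x  # [filtered distance, estimated speed]
```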

Referring to FIG. 1 and FIG. 5, the image fusion procedure of the embodiment of the obstacle detection method of the present invention comprises the following steps.

In step 51, after receiving the point cloud data from the LiDAR module 12 and the images from the image capture device 11, the computing device 14 obtains, for each point cloud data and according to that point cloud data, an image to be fused corresponding to the point cloud data from the images captured by the image capture device 11.

It is worth mentioning that step 51 comprises the following sub-steps (see FIG. 6).

In sub-step 511, after receiving the point cloud data from the LiDAR module 12 and the images from the image capture device 11, the computing device 14 selects, for each point cloud data and according to that point cloud data, a first candidate image and a second candidate image from the images captured by the image capture device 11. The first candidate image is the one whose shooting time is earlier than, and closest to, the acquisition time of the point cloud data; the second candidate image is the image immediately preceding the first candidate image.

In sub-step 512, for each point cloud data, the computing device 14 obtains, according to the first candidate image corresponding to the point cloud data, a third specific obstacle parameter corresponding to the specific obstacle among the obstacles located in front of the mobile vehicle. In this embodiment, the third specific obstacle parameter is a third specific obstacle distance between the mobile vehicle and the specific obstacle; however, in other embodiments, it may instead be a third specific obstacle position or a third specific obstacle speed of the specific obstacle, and is not limited thereto. When the third specific obstacle parameter is the third specific obstacle distance or the third specific obstacle speed, the computing device 14 further needs the focal length of the image capture device 11 to obtain it.
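
The patent states only that the focal length is involved; one standard monocular formulation that would fit is the pinhole relation D = f·H/h, where H is an assumed physical height for the obstacle class. The following is an illustrative guess, not the claimed computation:

```python
def pinhole_distance(focal_px, real_height_m, bbox_height_px):
    """Illustrative pinhole estimate D = f * H / h: focal length in pixels,
    an assumed physical height for the obstacle class, and the obstacle's
    bounding-box height in the image. Not the patent's exact computation."""
    return focal_px * real_height_m / bbox_height_px
```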

In sub-step 513, for each point cloud data, the computing device 14 obtains, according to the second candidate image corresponding to the point cloud data, a fourth specific obstacle parameter corresponding to the specific obstacle. In this embodiment, the fourth specific obstacle parameter is a fourth specific obstacle distance between the mobile vehicle and the specific obstacle; however, in other embodiments, it may instead be a fourth specific obstacle position or a fourth specific obstacle speed of the specific obstacle, and is not limited thereto. When the fourth specific obstacle parameter is the fourth specific obstacle distance or the fourth specific obstacle speed, the computing device 14 further needs the focal length of the image capture device 11 to obtain it.

In sub-step 514, for each point cloud data, the computing device 14 computes another estimated obstacle parameter $\hat{P}'$ of the specific obstacle at the acquisition time $t$ of the point cloud data using formula (2) below, according to the acquisition time $t$ of the point cloud data, the shooting time $t_1$ of the corresponding first candidate image, the shooting time $t_2$ of the corresponding second candidate image, the third specific obstacle parameter $P_3$, and the fourth specific obstacle parameter $P_4$. In this embodiment, the other estimated obstacle parameter is another estimated obstacle distance between the mobile vehicle and the specific obstacle; however, in other embodiments, it may instead be another estimated obstacle position or another estimated obstacle speed of the specific obstacle, and is not limited thereto.

$$\hat{P}' = P_3 + \frac{P_3 - P_4}{t_1 - t_2}\,(t - t_1) + \varepsilon \qquad (2)$$

where $\varepsilon$ is an error value.

In sub-step 515, for each point cloud data, the computing device 14 converts the other estimated obstacle parameter into an estimated shooting interval according to the other estimated obstacle parameter and a second conversion rule that maps an obstacle parameter (e.g., a distance) to a shooting interval. In particular, the second conversion rule first computes, from the other estimated obstacle parameter, an estimated shooting time using a second conversion function between obstacle parameter and time, and then, given the estimated shooting time and a plurality of image sampling intervals, takes the image sampling interval into which the estimated shooting time falls as the estimated shooting interval. The second conversion function is obtained from the shooting time of each image and the obstacle parameter of the specific obstacle it indicates; the time span from the shooting time point of one image up to, but not including, the shooting time point of the next image constitutes one image sampling interval. FIG. 7 illustrates the second conversion function 71 and the image sampling intervals 72.

In sub-step 516, for each point cloud data, the computing device 14 obtains, from the images captured by the image capture device 11 and according to the estimated shooting interval, the image corresponding to the estimated shooting interval as the image to be fused. The image corresponding to the estimated shooting interval is the image whose shooting time is the starting time point of that interval.

It is worth noting that, because the image capture device 11 captures an image every 33 ms while the LiDAR module 12 obtains a frame of point cloud data every 100 ms, and because the start times at which the image capture device 11 and the LiDAR module 12 begin acquiring data may also differ, the acquisition times of the two devices may not be synchronized. By executing sub-steps 511 to 516, the present invention takes the acquisition time of each point cloud data from the LiDAR module 12 as the reference when obtaining the corresponding image to be fused, so that the obtained image to be fused is synchronized in time with the corresponding point cloud data as closely as possible.

In step 52, for each image to be fused, the computing device 14 obtains the Nth-previous image to be synthesized, photographed earlier than the image to be fused, where N ≧ 3. In this embodiment, N is 3.

In step 53, for each point cloud data, the computing device 14 obtains an integrated image corresponding to the point cloud data according to the corresponding image to be fused and the Nth-previous image to be synthesized. In this embodiment, for each point cloud data, the computing device 14 composites a portion of the corresponding image to be fused (namely, its left half) with a portion of the Nth-previous image to be synthesized (namely, its right half) to obtain the integrated image corresponding to the point cloud data. Because the LiDAR module 12 scans from right to left to obtain each frame of point cloud data, there is a scanning delay between the right half and the left half of the scene in front of the mobile vehicle: in each obtained point cloud frame, the portion corresponding to the left half of the scene is newer, while the portion corresponding to the right half is older. By combining the left half of the corresponding image to be fused with the right half of the Nth-previous image to be synthesized, the composited integrated image more closely matches the scanning condition of the LiDAR module 12, thereby achieving spatial synchronization.
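
A minimal sketch of this half-and-half composition, assuming both frames share the same resolution:

```python
import numpy as np

def synthesize_integrated_image(img_to_fuse, img_nth_prev):
    """Step 53 sketch: stitch the left half of the image to be fused with
    the right half of the Nth-previous image, matching the LiDAR's
    right-to-left scan (the right half of the scene was scanned earlier).

    Both inputs are (H, W, C) arrays of identical shape.
    """
    mid = img_to_fuse.shape[1] // 2
    out = np.empty_like(img_to_fuse)
    out[:, :mid] = img_to_fuse[:, :mid]    # newer frame supplies the left half
    out[:, mid:] = img_nth_prev[:, mid:]   # older frame supplies the right half
    return out
```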

In step 54, for each point cloud data, the computing device 14 aligns and projects the point cloud data onto the corresponding integrated image.

It is worth mentioning that step 54 comprises the following sub-steps (see FIG. 8).

In sub-step 541, for each point cloud data, the computing device 14 converts the point cloud data into two-dimensional point cloud data according to a coordinate conversion parameter set for the coordinate-system transformation between the point cloud coordinate system of the point cloud data and the pixel coordinate system of the integrated image. The coordinate conversion parameter set includes an extrinsic parameter matrix and an intrinsic parameter matrix. The extrinsic parameter matrix is obtained from the mounting position of the LiDAR module 12 and the mounting position of the image capture device 11; the intrinsic parameter matrix is obtained from the focal length of the image capture device 11 expressed in pixels and the image center coordinates. Since the focus of the present invention is not how the point cloud data is converted into two-dimensional point cloud data, the computational details can be found at http://www.vision.caltech.edu/bouguetj/calib_doc/htmls/parameters.html and are not repeated here.
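
Under the referenced pinhole calibration model, the projection of sub-step 541 reduces to an extrinsic transform followed by the intrinsic matrix and a perspective divide; a sketch, where R, t, and K are assumed to come from calibration:

```python
import numpy as np

def project_points(points_lidar, K, R, t):
    """Sub-step 541 sketch: map LiDAR points into the pixel frame with an
    extrinsic transform [R|t] (LiDAR frame to camera frame) followed by the
    intrinsic matrix K and a perspective divide.

    points_lidar: (N, 3) array in the LiDAR frame.
    Returns (M, 2) pixel coordinates of the points in front of the camera.
    """
    cam = points_lidar @ R.T + t        # into the camera frame
    cam = cam[cam[:, 2] > 0]            # keep points in front of the image plane
    uv = cam @ K.T                      # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]       # perspective divide
```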

In sub-step 542, for each two-dimensional point cloud data, the computing device 14 obtains the edge features of each object in the two-dimensional point cloud data.

In sub-step 543, for the integrated image corresponding to each point cloud data, the computing device 14 obtains the edge features of each object in the integrated image.

In sub-step 544, for each point cloud data, the computing device 14 aligns the corresponding two-dimensional point cloud data with the corresponding integrated image according to the edge features of each object in the corresponding two-dimensional point cloud data and the edge features of each object in the corresponding integrated image.
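
The patent does not spell out the matcher used in sub-step 544; one simple realization is to extract edges from the integrated image (e.g., with a Canny detector) and exhaustively search a small window of pixel offsets for the shift that best overlaps the projected point cloud's edge mask:

```python
import numpy as np
import cv2

def refine_alignment(pc_edge_mask, img_gray, search=10):
    """Sub-steps 542-544 sketch: slide the projected point cloud's edge mask
    over the integrated image's edge map and keep the (dx, dy) offset with
    the largest overlap.

    pc_edge_mask: boolean (H, W) mask of edge pixels from the 2-D point cloud.
    img_gray: uint8 grayscale integrated image of the same size.
    """
    img_edges = cv2.Canny(img_gray, 100, 200) > 0
    best_score, best_offset = -1, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(pc_edge_mask, dy, axis=0), dx, axis=1)
            score = int(np.logical_and(shifted, img_edges).sum())
            if score > best_score:
                best_score, best_offset = score, (dx, dy)
    return best_offset
```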

In step 55, for each point cloud data, the computing device 14 uses an obstacle recognition model to identify, according to the point cloud data aligned and projected in step 54 and the integrated image, the obstacle category corresponding to each obstacle located in front of the mobile vehicle. It is worth mentioning that the obstacle recognition model is trained with a machine learning algorithm on multiple training records labeled with obstacle positions and obstacle categories; each training record includes a pair of aligned and projected training point cloud data and a training image, on which the obstacle positions and obstacle categories are labeled.

In summary, in the obstacle detection method of the present invention, the computing device 14 obtains, from the images captured by the image capture device 11, the image to be fused corresponding to each point cloud data, and obtains, from the radar data obtained by the radar module 13, the radar data to be fused corresponding to each point cloud data, thereby synchronizing the data obtained by the different sensors. Furthermore, the computing device 14 derives the integrated image from the corresponding image to be fused and the Nth-previous image to be synthesized, and aligns and projects the point cloud data onto the corresponding integrated image, so that the point cloud data and the integrated image are spatially fused. Finally, by applying the obstacle recognition model to the aligned and projected point cloud data and integrated image, the obstacle category corresponding to each obstacle is identified, achieving the effect of obstacle recognition. The objects of the present invention are therefore indeed achieved.

The foregoing is merely an embodiment of the present invention and should not be taken as limiting the scope of its implementation; all simple equivalent changes and modifications made according to the claims and the content of the patent specification of the present invention remain within the scope covered by the patent of the present invention.

11: image capture device
12: LiDAR module
13: radar module
14: computing device
41: first conversion function
42: radar sampling interval
71: second conversion function
72: image sampling interval
21~25: steps
51~55: steps
211~216: sub-steps
511~516: sub-steps
541~544: sub-steps

Other features and effects of the present invention will be clearly presented in the embodiments described with reference to the drawings, in which:
FIG. 1 is a block diagram illustrating an obstacle detection system implementing an embodiment of the obstacle detection method of the present invention;
FIG. 2 is a flowchart illustrating a radar fusion procedure of the embodiment of the obstacle detection method of the present invention;
FIG. 3 is a flowchart illustrating how a computing device obtains a radar data to be fused corresponding to a point cloud data;
FIG. 4 is a schematic diagram illustrating a first conversion function and a plurality of radar sampling intervals;
FIG. 5 is a flowchart illustrating an image fusion procedure of the embodiment of the obstacle detection method of the present invention;
FIG. 6 is a flowchart illustrating how the computing device obtains an image to be fused corresponding to the point cloud data;
FIG. 7 is a schematic diagram illustrating a second conversion function and a plurality of image sampling intervals; and
FIG. 8 is a flowchart illustrating how the computing device aligns and projects the point cloud data onto the corresponding integrated image.

51~55: steps

Claims (9)

1. An obstacle detection method, implemented by a computing device electrically connected to an image capture device and a LiDAR module mounted on a mobile vehicle, the image capture device continuously capturing a series of images of a plurality of obstacles located in front of the mobile vehicle, the LiDAR module continuously obtaining a series of point cloud data covering the area in front of the mobile vehicle and containing the obstacles, the obstacle detection method comprising the following steps:
(A) after receiving the point cloud data from the LiDAR module and the images from the image capture device, for each point cloud data, obtaining, according to the point cloud data, an image to be fused corresponding to the point cloud data from the images captured by the image capture device;
(B) for each image to be fused, obtaining the Nth-previous image to be synthesized, photographed earlier than the image to be fused, where N ≧ 3;
(C) for each point cloud data, obtaining an integrated image corresponding to the point cloud data according to the corresponding image to be fused and the Nth-previous image to be synthesized;
(D) for each point cloud data, aligning and projecting the point cloud data onto the corresponding integrated image; and
(E) for each point cloud data, utilizing an obstacle recognition model to identify, according to the point cloud data aligned and projected in step (D) and the integrated image, the obstacle category corresponding to each obstacle located in front of the mobile vehicle.
2. The obstacle detection method as claimed in claim 1, wherein step (A) comprises the following sub-steps:
(A-1) after receiving the point cloud data from the LiDAR module and the images from the image capture device, for each point cloud data, selecting, according to the point cloud data, a first candidate image and a second candidate image from the images captured by the image capture device, the first candidate image being the one whose shooting time is earlier than, and closest to, the acquisition time of the point cloud data, and the second candidate image being the image immediately preceding the first candidate image;
(A-2) for each point cloud data, obtaining, according to the first candidate image corresponding to the point cloud data, a first specific obstacle parameter corresponding to a specific obstacle among the obstacles located in front of the mobile vehicle;
(A-3) for each point cloud data, obtaining, according to the second candidate image corresponding to the point cloud data, a second specific obstacle parameter corresponding to the specific obstacle;
(A-4) for each point cloud data, calculating an estimated obstacle parameter of the specific obstacle at the acquisition time of the point cloud data according to the acquisition time of the point cloud data, the shooting times of the first candidate image and the second candidate image corresponding to the point cloud data, the first specific obstacle parameter, and the second specific obstacle parameter;
(A-5) for each point cloud data, converting the estimated obstacle parameter into an estimated shooting interval according to the estimated obstacle parameter and a conversion rule from an obstacle parameter to a shooting interval; and
(A-6) for each point cloud data, obtaining, from the images captured by the image capture device and according to the estimated shooting interval, the image corresponding to the estimated shooting interval as the image to be fused.

3. The obstacle detection method as claimed in claim 2, wherein, in sub-step (A-4), the estimated obstacle parameter $\hat{P}$ is calculated according to the following formula:

$$\hat{P} = P_1 + \frac{P_1 - P_2}{t_1 - t_2}\,(t - t_1) + \varepsilon$$

where $t$ is the acquisition time of the point cloud data, $t_1$ is the shooting time of the first candidate image corresponding to the point cloud data, $t_2$ is the shooting time of the second candidate image corresponding to the point cloud data, $P_1$ is the first specific obstacle parameter, $P_2$ is the second specific obstacle parameter, and $\varepsilon$ is an error.
4. The obstacle detection method as claimed in claim 1, wherein, in step (C), for each point cloud data, a portion of the corresponding image to be fused is composited with a portion of the Nth-previous image to be synthesized to obtain the integrated image corresponding to the point cloud data.

5. The obstacle detection method as claimed in claim 1, wherein step (D) comprises the following sub-steps:
(D-1) for each point cloud data, converting the point cloud data into two-dimensional point cloud data according to a coordinate conversion parameter set for the coordinate-system transformation between a point cloud coordinate system of the point cloud data and a pixel coordinate system of the integrated image;
(D-2) for each two-dimensional point cloud data, obtaining the edge features of each object in the two-dimensional point cloud data;
(D-3) for the integrated image corresponding to each point cloud data, obtaining the edge features of each object in the integrated image; and
(D-4) for each point cloud data, aligning the corresponding two-dimensional point cloud data with the corresponding integrated image according to the edge features of each object in the corresponding two-dimensional point cloud data and the edge features of each object in the corresponding integrated image.

6. The obstacle detection method as claimed in claim 1, further comprising the following steps:
(F) after receiving the point cloud data from the LiDAR module, for each point cloud data, locating the obstacles in the point cloud data using an oriented bounding box algorithm; and
(G) using a Kalman filter to track the obstacles located in each point cloud data to obtain the obstacle distance between the mobile vehicle and each obstacle located in front of the mobile vehicle.

7. The obstacle detection method as claimed in claim 1, wherein the computing device is further electrically connected to a radar module mounted on the mobile vehicle, the radar module continuously obtaining a series of radar data covering the area in front of the mobile vehicle and containing the obstacles, the obstacle detection method further comprising the following steps:
(H) after receiving the point cloud data from the LiDAR module and the radar data from the radar module, for each point cloud data, obtaining, according to the point cloud data, a radar data to be fused corresponding to the point cloud data from the radar data obtained by the radar module;
(I) for each radar data to be fused, locating the obstacles in the radar data to be fused using an oriented bounding box algorithm; and
(J) using a Kalman filter to track the obstacles located in each radar data to be fused to obtain the obstacle speed corresponding to each obstacle located in front of the mobile vehicle.
8. The obstacle detection method as claimed in claim 7, wherein step (H) comprises the following sub-steps:
(H-1) after receiving the point cloud data from the LiDAR module and the radar data from the radar module, for each point cloud data, selecting, according to the point cloud data, a first candidate radar data and a second candidate radar data from the radar data obtained by the radar module, the first candidate radar data being the one whose acquisition time is earlier than, and closest to, the acquisition time of the point cloud data, and the second candidate radar data being the radar data immediately preceding the first candidate radar data;
(H-2) for each point cloud data, obtaining, according to the first candidate radar data corresponding to the point cloud data, a first specific obstacle parameter corresponding to a specific obstacle among the obstacles located in front of the mobile vehicle;
(H-3) for each point cloud data, obtaining, according to the second candidate radar data corresponding to the point cloud data, a second specific obstacle parameter corresponding to the specific obstacle;
(H-4) for each point cloud data, calculating an estimated obstacle parameter of the specific obstacle at the acquisition time of the point cloud data according to the acquisition time of the point cloud data, the acquisition times of the first candidate radar data and the second candidate radar data corresponding to the point cloud data, the first specific obstacle parameter, and the second specific obstacle parameter;
(H-5) for each point cloud data, converting the estimated obstacle parameter into an estimated acquisition interval according to the estimated obstacle parameter and a conversion rule from an obstacle parameter to an acquisition interval; and
(H-6) for each point cloud data, obtaining, from the radar data obtained by the radar module and according to the estimated acquisition interval, the radar data corresponding to the estimated acquisition interval as the radar data to be fused.

9. The obstacle detection method as claimed in claim 8, wherein, in sub-step (H-4), the estimated obstacle parameter $\hat{P}$ is calculated according to the following formula:

$$\hat{P} = P_1 + \frac{P_1 - P_2}{t_1 - t_2}\,(t - t_1) + \varepsilon$$

where $t$ is the acquisition time of the point cloud data, $t_1$ is the acquisition time of the first candidate radar data corresponding to the point cloud data, $t_2$ is the acquisition time of the second candidate radar data corresponding to the point cloud data, $P_1$ is the first specific obstacle parameter, $P_2$ is the second specific obstacle parameter, and $\varepsilon$ is an error value.
TW110132319A 2021-08-31 2021-08-31 Obstacle detection method TWI774543B (en)

Priority Applications (1)

Application Number: TW110132319A (granted as TWI774543B)
Priority Date: 2021-08-31
Filing Date: 2021-08-31
Title: Obstacle detection method


Publications (2)

Publication Number Publication Date
TWI774543B (en) 2022-08-11
TW202311781A 2023-03-16

Family

ID=83807341

Family Applications (1)

Application Number: TW110132319A (granted as TWI774543B)
Title: Obstacle detection method
Priority Date: 2021-08-31
Filing Date: 2021-08-31

Country Status (1)

Country Link
TW (1) TWI774543B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI824789B (en) * 2022-10-21 2023-12-01 國立臺北科技大學 Hybrid system for vehicle detection and hybrid method for vehicle detection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI754808B (en) * 2018-11-29 2022-02-11 財團法人工業技術研究院 Vehicle, vehicle positioning system, and vehicle positioning method
US11100702B2 (en) * 2019-12-27 2021-08-24 Industrial Technology Research Institute 3D image labeling method based on labeling information of 2D image and 3D image labeling device

Also Published As

Publication number Publication date
TWI774543B (en) 2022-08-11
