TWI673190B - Vehicle detection method based on optical radar - Google Patents

Vehicle detection method based on optical radar

Info

Publication number
TWI673190B
TWI673190B (Application TW107139781A)
Authority
TW
Taiwan
Prior art keywords
dimensional
point cloud
optical radar
image
detection method
Prior art date
Application number
TW107139781A
Other languages
Chinese (zh)
Other versions
TW202017784A (en)
Inventor
花凱龍
簡士哲
張峰嘉
莊定為
蕭有崧
蕭簡浩
Original Assignee
國家中山科學研究院 (National Chung-Shan Institute of Science and Technology)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 國家中山科學研究院 (National Chung-Shan Institute of Science and Technology)
Priority to TW107139781A
Application granted
Publication of TWI673190B
Publication of TW202017784A

Landscapes

  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)
  • Image Analysis (AREA)

Abstract

A vehicle detection method based on optical radar (LiDAR) comprises the steps of: (A) acquiring three-dimensional point cloud data with an optical radar; (B) applying ground-segmentation processing to the point cloud data to remove ground information; (C) applying object-extraction processing to the point cloud data to obtain a three-dimensional point cloud image containing candidate vehicle point-cloud groups; (D) applying two-dimensional mapping to the three-dimensional point cloud image to obtain a two-dimensional image; and (E) extracting three-dimensional and two-dimensional features from the three-dimensional point cloud image and the two-dimensional image, and using those features to determine vehicle positions. By enhancing the image data acquired by the optical radar, the method distinguishes distant vehicles from environmental blocks and strengthens vehicle recognition at night and in bad weather.

Description

Vehicle detection method based on optical radar

The present invention relates to an image-processing method, and more particularly to a vehicle detection method for optical radar (LiDAR).

Optical radar (LiDAR) is a critical component of autonomous-driving systems. Many established suppliers and start-ups expect obstacle detection in future autonomous vehicles to rely on solid-state LiDAR, since it is currently the most cost-effective and reliable approach. LiDAR is commonly used to detect vehicles and pedestrians. Compared with millimeter-wave (MMW) radar, LiDAR offers higher spatial resolution and higher ranging accuracy, but it has a drawback: in harsh conditions such as rain or fog, its detection range falls below that of MMW radar. Recent improvements in LiDAR sensitivity, however, have yielded better performance in harsh environments, and LiDAR has become an important sensor widely used in autonomous driving.

Many strong papers have examined object detection with LiDAR. For example, B. Li proposed a vehicle detection system based on a fully convolutional network and LiDAR data (B. Li, T. Zhang, and T. Xia, "Vehicle detection from 3D lidar using fully convolutional network," in Robotics: Science and Systems, 2016); Navarro-Serment proposed identifying three-dimensional data from geometric and motion features (L. E. Navarro-Serment, C. Mertz, and M. Hebert, "Pedestrian detection and tracking using three-dimensional LADAR data," Int. J. Robot. Res., vol. 29, no. 12, pp. 1516-1528, May 2010); and Kidono proposed slice features that represent an object's contour at each height level, improving the recognition of distant objects (K. Kidono, T. Miyasaka, A. Watanabe, T. Naito, and J. Miura, "Pedestrian recognition using high-definition LIDAR," in Proc. IEEE IV Symp., 2011, pp. 405-410).

Light detection and ranging (LiDAR) systems are conventionally used to measure the distance to obstacles by emitting electromagnetic radiation. Although LiDAR can detect ranges of up to 50 meters, the point cloud grows sparser as the detection distance increases, making obstacles harder to identify.

The literature already contains considerable vehicle-detection research. Visual-image-based methods have been developed for many years and achieve good results on clear images, but at night or in bad weather the resulting low-definition images are often indistinguishable. Conventional LiDAR can range beyond 50 meters, yet at roughly 40 meters the point cloud already becomes quite sparse, so distant objects cannot be clearly identified. Because cars travel at speeds of up to 100 km/h, detecting distant vehicles is likewise essential for autonomous driving, and failure to recognize them can have serious consequences.

The industry therefore urgently needs a LiDAR-based vehicle detection method that strengthens texture features through image processing, so as to withstand harsh external environments and long measurement distances. Such a method can deliver both accuracy and low cost, strengthening vehicle recognition, and hence vehicle safety, without the expense of modifying other hardware.

In view of the above shortcomings of the prior art, the main object of the present invention is to provide a LiDAR-based vehicle detection method that integrates an optical radar, three-dimensional point cloud data, object-extraction processing, and three-dimensional and two-dimensional feature extraction to enhance vehicle images within LiDAR three-dimensional point cloud data.

To achieve the above object, the present invention provides a vehicle detection method based on optical radar, comprising: (A) acquiring three-dimensional point cloud data with an optical radar; (B) applying ground-segmentation processing to the point cloud data to remove ground information; (C) applying object-extraction processing to the point cloud data to obtain a three-dimensional point cloud image containing candidate vehicle point-cloud groups; (D) applying two-dimensional mapping to the three-dimensional point cloud image to obtain a two-dimensional image; and (E) extracting three-dimensional and two-dimensional features from the three-dimensional point cloud image and the two-dimensional image, and using those features to determine the vehicle position.

In the three-dimensional point cloud data acquired by the optical radar in step (A), the X, Y, and Z axes are normalized to remove repetitive and inconsistent dependencies, protecting the data and making the point cloud more flexible to use. The ground segmentation in step (B) may use Random Sample Consensus (RANSAC) to remove the ground. Unlike traditional methods, which gather as much data as possible to obtain an initial solution before eliminating invalid data points, RANSAC draws random feasible sample sets from the three-dimensional point cloud, checks their consistency to grow the (ground) consensus set, and then removes that set, thereby eliminating the ground information from the point cloud.
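As an illustrative sketch of the RANSAC ground removal described above (not part of the patent disclosure), the following Python code fits candidate planes to random three-point samples and discards the largest consensus set; the inlier threshold and iteration count are assumed values.

```python
# Minimal RANSAC ground-removal sketch (numpy only). The inlier threshold
# and iteration count are illustrative assumptions, not patent values.
import numpy as np

def remove_ground(points: np.ndarray, n_iter: int = 100,
                  dist_thresh: float = 0.2) -> np.ndarray:
    """points: (N, 3) array; returns the points off the best-fit plane."""
    rng = np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-8:                         # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():  # grow the consensus set
            best_inliers = inliers
    return points[~best_inliers]                # drop the ground plane
```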

The object extraction in step (C) may group points by their pairwise distances in the three-dimensional point cloud; for example, when two points are closer than 0.2 m (though not limited to this value), they are marked as belonging to the same group, and the length-width-height ratio of each group is then used to screen out candidate vehicle point-cloud groups.
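A minimal sketch of this distance-based grouping follows (illustrative only; the 0.2 m merge threshold comes from the text, while the vehicle-sized bounding-box limits are assumptions):

```python
# Distance-based clustering with a k-d tree and union-find; keeps clusters
# whose bounding box is roughly car-sized (the size limits are assumed).
import numpy as np
from scipy.spatial import cKDTree

def cluster_candidates(points: np.ndarray, merge_dist: float = 0.2) -> list:
    tree = cKDTree(points)
    pairs = tree.query_pairs(merge_dist)      # point pairs closer than 0.2 m
    parent = np.arange(len(points))
    def find(i):                              # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)
    labels = np.array([find(i) for i in range(len(points))])
    clusters = [points[labels == k] for k in np.unique(labels)]
    def car_sized(c):
        l, w, h = np.ptp(c, axis=0)           # extents along x, y, z
        return 1.0 < max(l, w) < 6.0 and 0.5 < h < 2.5 and len(c) >= 10
    return [c for c in clusters if car_sized(c)]
```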

The two-dimensional mapping in step (D) may comprise the following steps: (a) projecting the three-dimensional point cloud image onto the two-dimensional image; (b) dilating the binarized two-dimensional image to suppress noise; (c) computing the contour of each object in the two-dimensional image morphologically; and (d) filling the region of each object in the two-dimensional image. These steps effectively remove noise and reinforce the candidate vehicle point-cloud groups screened in step (C), making them more distinct in the two-dimensional image; a sketch of this chain appears below.
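An OpenCV sketch of steps (a) through (d) (illustrative; the grid resolution, projection plane, and kernel size are assumptions):

```python
# 2D-mapping chain: project, dilate, find contours, fill.
import numpy as np
import cv2

def map_cluster_to_2d(cluster: np.ndarray, res: int = 64) -> np.ndarray:
    # (a) project the cluster onto a 2D grid (here: the x-z plane).
    xy = cluster[:, [0, 2]]
    xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-8)
    img = np.zeros((res, res), np.uint8)
    idx = np.clip((xy * (res - 1)).astype(int), 0, res - 1)
    img[idx[:, 1], idx[:, 0]] = 255
    # (b) dilate the binary image to close gaps between sparse points.
    img = cv2.dilate(img, np.ones((3, 3), np.uint8), iterations=2)
    # (c) recover each object's contour morphologically.
    contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # (d) fill each contour to produce a solid silhouette.
    cv2.drawContours(img, contours, -1, 255, thickness=cv2.FILLED)
    return img
```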

Step (E) uses the extracted three-dimensional and two-dimensional features together, classifying them with machine learning; after weighing both feature types, the vehicle position is determined.
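A brief sketch of this joint classification (the patent does not fix a classifier; an SVM is an assumed choice for illustration):

```python
# Concatenate 3D and 2D feature vectors and train a machine-learning
# classifier; labels: 1 = vehicle, 0 = other.
import numpy as np
from sklearn.svm import SVC

def train_vehicle_classifier(feats_3d: np.ndarray, feats_2d: np.ndarray,
                             labels: np.ndarray) -> SVC:
    X = np.hstack([feats_3d, feats_2d])   # joint 3D + 2D feature vector
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```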

In step (E), the three-dimensional and two-dimensional features are processed by depthwise convolution followed by pointwise convolution to reduce their parameter count.

In step (E), the two-dimensional features are processed by depthwise convolution and then pointwise convolution to obtain two-dimensional deep features, while the three-dimensional features are processed by a three-dimensional convolutional network whose output is taken as three-dimensional deep features; the three-dimensional and two-dimensional deep features are then used to determine the vehicle position.
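A PyTorch sketch of the depthwise-then-pointwise decomposition described above (channel counts and kernel size are illustrative assumptions):

```python
# Depthwise separable convolution: per-channel spatial filtering followed
# by a 1x1 channel-mixing convolution, reducing parameters versus a
# standard convolution.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch), so the
        # channel count is unchanged.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)
        # Pointwise: 1x1 convolution sets the output channel count without
        # changing the spatial size.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

# Example: a 3-channel input, as in the RGB case mentioned in the text.
y = DepthwiseSeparableConv(3, 16)(torch.randn(1, 3, 64, 64))
```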

The summary above, together with the detailed description and drawings below, further explains the means, methods, and effects by which the present invention achieves its intended purposes. Other objects and advantages of the invention are set forth in the subsequent description and drawings.

S101‧‧‧Step (A)

S102‧‧‧Step (B)

S103‧‧‧Step (C)

S104‧‧‧Step (D)

S105‧‧‧Step (E)

FIG. 1 is a flowchart of the LiDAR-based vehicle detection method of the present invention; FIG. 2 shows an embodiment of the two-dimensional mapping process of the present invention; FIG. 3 shows, for one embodiment of the present invention, the result of a general vehicle detection method (a) and the result of the vehicle detection method of the present invention (b).

The implementation of the present invention is illustrated below through specific examples; those skilled in the art can readily understand its advantages and effects from the disclosure of this specification.

The present invention uses a light detection and ranging (LiDAR) system to measure object distances by emitting electromagnetic radiation. Although LiDAR offers a 50-meter detection range, the point cloud grows sparser with distance, making obstacles harder to identify, especially at night or in bad weather, when the resulting low-definition images are often indistinguishable. In practice, the LiDAR sensing range can exceed 50 meters, but at around 40 meters the three-dimensional point cloud is already quite sparse and distant objects cannot be clearly identified. One main object of the present invention is therefore to improve the detection of the correct position of distant vehicles using only LiDAR equipment. Detecting vehicle positions early gives the autonomous-driving system sufficient reaction distance, and because LiDAR is unaffected by lighting, it can sense the shape, distance, and other attributes of surrounding obstacles in real time, day or night.

Please refer to FIG. 1, a flowchart of the LiDAR-based vehicle detection method of the present invention. As shown in FIG. 1, the method comprises: step (A), acquiring three-dimensional point cloud data with an optical radar (S101), the point cloud data containing normalized X, Y, and Z axes; step (B), applying ground-segmentation processing to the point cloud data to remove ground information (S102), the segmentation using Random Sample Consensus (RANSAC) to remove the ground; step (C), applying object-extraction processing to the point cloud data to obtain a three-dimensional point cloud image containing candidate vehicle point-cloud groups (S103), the extraction grouping points by their pairwise distances in the point cloud; step (D), applying two-dimensional mapping to the three-dimensional point cloud image to obtain a two-dimensional image (S104); and step (E), extracting three-dimensional and two-dimensional features from the three-dimensional point cloud image and the two-dimensional image and using them to determine the vehicle position (S105).

Embodiment

Step 1: LiDAR data acquisition. This embodiment acquires three-dimensional point cloud data with LiDAR and pre-processes it so that the X, Y, and Z axes are normalized; normalization removes repetitive and inconsistent dependencies, protecting the data and making the point cloud more flexible to use.

Step 2: Ground segmentation. This embodiment removes the ground with Random Sample Consensus (RANSAC). Unlike traditional methods, which gather as much data as possible to obtain an initial solution before eliminating invalid data points, RANSAC draws random feasible sample sets from the three-dimensional point cloud, checks their consistency to grow the ground set, and then removes that set, eliminating the ground information from the point cloud.

Step 3: Object extraction. This embodiment groups points by their pairwise distances, searching with a k-d tree; when two points are closer than 0.2 m they are marked as the same group, and the length, width, and height of each group are used to pre-screen candidate vehicle point-cloud groups.

Step 4: Classification. This embodiment computes and evaluates feature vectors extracted from the three-dimensional data and the mapped two-dimensional data, and uses machine learning to classify whether each candidate is a vehicle. Well-known two-dimensional features can be used to raise accuracy: the measured three-dimensional point cloud is clustered, candidate point-cloud groups are extracted by vehicle size, and features are extracted from the three-dimensional points. To overcome long distances and sparse points, the three-dimensional point cloud is mapped onto a two-dimensional plane: the distance and angle to the point-cloud group are computed, the point cloud is rotated and projected onto the plane, and contour determination and filling produce a two-dimensional image, from which the corresponding two-dimensional contour and its related two-dimensional features are extracted, such as the histogram of oriented gradients (HOG), local binary patterns (LBP), and Haar-like features. Finally, the traditional three-dimensional features and the proposed two-dimensional features are classified with machine learning.

Please refer to FIG. 2, an embodiment of the two-dimensional mapping process of the present invention, in which the three-dimensional point cloud of the embodiment above is mapped to a two-dimensional plane to produce a two-dimensional image. The mapping comprises the following steps: (a) projecting the three-dimensional point cloud image into a two-dimensional image, computing the distance and angle to the point-cloud group and rotating the point cloud before projecting it onto the plane; (b) dilating the binarized two-dimensional image to suppress noise; (c) computing the contour of each object in the two-dimensional image morphologically; and (d) filling the regions (gaps) of each object in the two-dimensional image.
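A sketch of extracting one of the 2D descriptors named above from the filled silhouette (HOG shown; LBP and Haar-like features follow the same pattern; the cell and block sizes are assumptions):

```python
# HOG descriptor of the filled 2D silhouette produced by the mapping step.
import numpy as np
from skimage.feature import hog

def hog_descriptor(silhouette: np.ndarray) -> np.ndarray:
    """silhouette: 2D uint8 image from the contour-and-fill step."""
    return hog(silhouette, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)
```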

As shown in FIG. 2, the top of the figure is the original point cloud and the bottom is the two-dimensional image information after contour determination and filling; the corresponding two-dimensional contour and its related two-dimensional features (e.g., HOG, LBP, Haar-like) are then extracted. To further raise accuracy, the present invention converts the three-dimensional point cloud into voxels: the bounding box of every candidate vehicle in the image is found and divided into N * N * N blocks (N = 30 in this embodiment); the number of points in each block is counted and the counts across all blocks are normalized to between 0 and 1, turning all blocks into a set of voxels. After voxelization, a three-dimensional convolutional neural network is constructed and applied: its input layer is 30 * 30 * 30 voxels, each convolutional layer has 30 * 30 kernels, and the kernel size is 5 * 5 * 5. This embodiment extracts a 500-dimensional feature from the last layer of the three-dimensional convolutional network as the deep-learning feature. The mapped two-dimensional image is then passed through a two-dimensional convolutional neural network to extract two-dimensional deep features. This network decomposes the standard convolution into depthwise convolution and pointwise convolution, which greatly reduces the parameter count and the computation. Depthwise convolution first convolves each channel of the image separately (e.g., the three channels of an RGB image), leaving the channel count unchanged (three channels before convolution, three channels after); pointwise convolution then convolves at each point, changing the final channel count through the number of convolution kernels set, without changing the image size. Compared with standard convolution, the image is first reduced by depthwise convolution and only then convolved pointwise, so completing all convolution operations requires fewer parameters and less computation. Finally, the traditional three-dimensional and two-dimensional features are combined with the deep features and classified with machine learning.
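An illustrative sketch of the voxelization and the 3D-CNN feature path (the 30^3 input, 5 * 5 * 5 kernels, and 500-dimensional output follow the text; the layer widths and pooling are assumptions where the text is silent):

```python
# Voxelize a candidate cluster into a 30x30x30 occupancy-count grid
# normalized to [0, 1], then extract a 500-D deep feature with a small
# 3D CNN (assumed architecture).
import numpy as np
import torch
import torch.nn as nn

def voxelize(cluster: np.ndarray, n: int = 30) -> np.ndarray:
    lo, hi = cluster.min(axis=0), cluster.max(axis=0)
    idx = ((cluster - lo) / (hi - lo + 1e-8) * (n - 1)).astype(int)
    grid = np.zeros((n, n, n), np.float32)
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)  # points per cell
    return grid / (grid.max() + 1e-8)                        # scale to [0, 1]

feat3d = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=5), nn.ReLU(),   # 30^3 -> 26^3
    nn.MaxPool3d(2),                             # 26^3 -> 13^3
    nn.Conv3d(8, 16, kernel_size=5), nn.ReLU(),  # 13^3 -> 9^3
    nn.Flatten(),
    nn.Linear(16 * 9 * 9 * 9, 500),              # 500-D deep feature
)

vox = torch.from_numpy(voxelize(np.random.rand(200, 3).astype(np.float32)))
feature = feat3d(vox[None, None])                # shape: (1, 500)
```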

Please refer to FIG. 3, which shows, for one embodiment of the present invention, the result of a general vehicle detection method (a) and the result of the vehicle detection method of the present invention (b). As shown, when only three-dimensional features are used to judge vehicles, the point cloud of a distant car is very sparse, and three-dimensional features alone may fail to recognize it, so the distant car goes undetected, as at the top of FIG. 3(a). FIG. 3(b) shows the detection result produced by classifying the traditional three-dimensional and two-dimensional features together with the deep features by machine learning; as shown, the detection method of the present invention detects the distant car.

Using optical radar, the present invention develops a "vehicle detection method based on optical radar." This system greatly improves LiDAR detection of distant objects and avoids the inaccuracy that arises when vehicles are judged from three-dimensional features alone. A practical smart vehicle needs accurate vehicle distances to warn the driver, and it must work well day and night. LiDAR offers range detection of up to 50 meters, enough to provide the processing time needed to judge vehicle dynamics as speed rises, and its 360-degree layered scan (64 layers) traces the contours of objects around the vehicle, aiding object recognition and classification; it is well suited to outdoor use and vehicle detection. The method of the present invention raises the accuracy with which LiDAR identifies vehicles, and an RGB camera can be fused in to optimize the system and raise accuracy further. It is expected that LiDAR applications in driver-assistance systems and related industries will grow; for example, LiDAR can also be used indoors: in a large automated factory, the method of the present invention can monitor work machines so that when an automated machine approaches an area where multiple machines move, it stops immediately to avoid collision.

The above embodiments merely illustrate the features and effects of the present invention and are not intended to limit the scope of its substantive technical content. Those skilled in the art may modify and vary the above embodiments without departing from the spirit and scope of the invention; the scope of protection shall therefore be as set forth in the claims below.

Claims (7)

1. A vehicle detection method based on optical radar, comprising the steps of: (A) acquiring three-dimensional point cloud data with an optical radar; (B) applying ground-segmentation processing to the three-dimensional point cloud data to remove ground information; (C) applying object-extraction processing to the three-dimensional point cloud data to obtain a three-dimensional point cloud image containing candidate vehicle point-cloud groups; (D) applying two-dimensional mapping to the three-dimensional point cloud image to obtain a two-dimensional image; (E) extracting three-dimensional and two-dimensional features from the three-dimensional point cloud image and the two-dimensional image, and using the three-dimensional and two-dimensional features to determine a vehicle position.

2. The vehicle detection method based on optical radar of claim 1, wherein the two-dimensional mapping comprises the steps of: (a) projecting the three-dimensional point cloud image onto the two-dimensional image; (b) dilating the binarized two-dimensional image to suppress noise; (c) computing the contour of each object in the two-dimensional image morphologically; (d) filling the region of each object in the two-dimensional image.

3. The vehicle detection method based on optical radar of claim 2, wherein the two-dimensional features are selected from one of HOG, LBP, and Haar-like features, or a mixture thereof.

4. The vehicle detection method based on optical radar of claim 3, wherein step (E) classifies the three-dimensional and two-dimensional features together by machine learning to determine the vehicle position.

5. The vehicle detection method based on optical radar of claim 1, wherein step (E) further comprises processing the two-dimensional features by depthwise convolution and then pointwise convolution to obtain two-dimensional deep features.

6. The vehicle detection method based on optical radar of claim 5, wherein step (E) further comprises processing the three-dimensional features with a three-dimensional convolutional network and taking the features of that network as three-dimensional deep features.

7. The vehicle detection method based on optical radar of claim 6, wherein step (E) further uses the three-dimensional deep features and the two-dimensional deep features to determine the vehicle position.
TW107139781A 2018-11-07 2018-11-07 Vehicle detection method based on optical radar TWI673190B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW107139781A TWI673190B (en) 2018-11-07 2018-11-07 Vehicle detection method based on optical radar


Publications (2)

Publication Number Publication Date
TWI673190B true TWI673190B (en) 2019-10-01
TW202017784A TW202017784A (en) 2020-05-16

Family

ID=69023579

Family Applications (1)

Application Number Title Priority Date Filing Date
TW107139781A TWI673190B (en) 2018-11-07 2018-11-07 Vehicle detection method based on optical radar

Country Status (1)

Country Link
TW (1) TWI673190B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI735191B (en) * 2020-03-27 2021-08-01 國立陽明交通大學 System and method for lidar defogging
CN117576311A (en) * 2023-11-13 2024-02-20 中铁二十五局集团第三工程有限公司 Three-dimensional reconstruction method, device, equipment and storage medium for sound barrier engineering

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI745204B (en) * 2020-12-28 2021-11-01 國家中山科學研究院 High-efficiency LiDAR object detection method based on deep learning
TWI766560B (en) * 2021-01-27 2022-06-01 國立臺灣大學 Object recognition and ranging system using image semantic segmentation and lidar point cloud

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150035662A1 (en) * 2013-07-31 2015-02-05 Elwha, Llc Systems and methods for adaptive vehicle sensing systems
CN104670222A (en) * 2013-11-29 2015-06-03 财团法人车辆研究测试中心 Sliding-mode driving route voting strategy module and driving control system and method
US20170261602A1 (en) * 2014-07-03 2017-09-14 GM Global Technology Operations LLC Vehicle radar methods and systems
TWI620677B (en) * 2016-08-25 2018-04-11 崑山科技大學 Automatic control method for vehicle lane change
US20180151071A1 (en) * 2016-11-30 2018-05-31 Hyundai Motor Company Apparatus and method for recognizing position of vehicle
US20180196127A1 (en) * 2017-01-11 2018-07-12 Toyota Research Institute, Inc. Systems and methods for automatically calibrating a lidar using information from a secondary vehicle


Also Published As

Publication number Publication date
TW202017784A (en) 2020-05-16
