TW202416216A - Path planning system and method of autonomous vehicles to compute the predicted path based on the point cloud data of either the preceding vehicle or the lane edge for reducing the costs of constructing high-precision graphics, and reducing the storage space - Google Patents


Info

Publication number
TW202416216A
TW202416216A (application TW111139032A)
Authority
TW
Taiwan
Prior art keywords
vehicle
path
lane
driving
bird
Prior art date
Application number
TW111139032A
Other languages
Chinese (zh)
Other versions
TWI824773B
Inventor
楊濟帆
許琮明
鄭守益
Original Assignee
財團法人車輛研究測試中心
Priority date
Filing date
Publication date
Application filed by 財團法人車輛研究測試中心 (Automotive Research & Testing Center)
Priority to TW111139032A
Application granted
Publication of TWI824773B
Publication of TW202416216A

Landscapes

  • Navigation (AREA)
  • Control Of Motors That Do Not Use Commutators (AREA)

Abstract

Disclosed is a path planning system and method for an autonomous vehicle, which uses at least one sensor to detect the surrounding environment information of the vehicle and converts that information into a bird's-eye view containing the coordinate information of each coordinate point. The system identifies and marks the lane edges, lane lines, and other vehicles in the bird's-eye view based on the coordinate information, computes the center point of the lane, and finds the preceding vehicle. The speed of the preceding vehicle is then computed from its position, and a predicted path of the preceding vehicle is estimated. If the predicted path of the preceding vehicle is the same as the driving path of the ego vehicle, the preceding vehicle is used as a path reference point; otherwise, the lane edge is used as a path reference line to compute the final path of the ego vehicle. The invention plans paths from point cloud data, greatly reducing the cost of constructing high-precision maps and the storage space the data occupies.

Description

Path planning system and method for a self-driving vehicle

The present invention relates to a path planning system, and more particularly to a path planning system and method for a self-driving vehicle.

In recent years, self-driving technology has gradually matured, and related open-source self-driving software has entered the market, lowering the barrier to self-driving development. Mainstream self-driving technology today relies mostly on high-precision map data recorded from GPS positions, or on lane-line detection, to obtain the best path.

Obtaining the best path through lane-line detection has an inherent weakness: not every environment has lane lines. Intersections and parking lots, for example, have none, and detection fails wherever no lane lines are painted, so the lane-line approach is limited by the environment.

Computing the best path from high-precision map data first requires a vehicle equipped with a stereo camera to collect complete road information and identify road features useful for localization, such as buildings, traffic signs, and street lights, as well as road markings such as lane lines, direction arrows, and pedestrian crossings. Combining this map data with GPS positioning then yields an accurate route image. Its biggest problem, however, is that it is unusable when localization fails; in particular, when the vehicle is at an intersection with no lane lines to detect, the route cannot be planned at all. In addition, collecting map data demands considerable manpower and funding for surveying, and the data volume is large, driving up costs.

In view of this, and addressing the above deficiencies of the prior art and future needs, the present invention proposes a path planning system and method for self-driving vehicles to resolve those deficiencies. The specific architecture and its implementation are detailed below.

A main object of the present invention is to provide a path planning system and method for a self-driving vehicle that does not rely on high-precision map data but instead classifies surrounding objects by their echo intensity values, reducing the manpower and expense of recording high-precision maps while also reducing the storage space the data occupies.

Another object of the present invention is to provide a path planning system and method for a self-driving vehicle that does not depend on a navigation system; even when navigation fails, the path can still be planned through physical detection with lidar.

A further object of the present invention is to provide a path planning system and method for a self-driving vehicle that, when a road or intersection has no lane lines, identifies the lane edges from the surrounding environment, uses them to find the lane center point, and then plans the driving path, greatly improving safety.

To achieve the above objects, the present invention provides a path planning system for a self-driving vehicle, installed on an ego vehicle, comprising: at least one sensor for detecting the surrounding environment information of the ego vehicle; a bird's-eye-view generation module, connected to the sensor, which receives the surrounding environment information and converts it into a bird's-eye view containing the coordinate information of each coordinate point; a category detection module, connected to the bird's-eye-view generation module, which identifies and marks the lane edges, lane lines, and the preceding vehicle in the bird's-eye view according to the coordinate information; a lane center calculation module, connected to the category detection module, which computes a lane center point from the marked lane edges and lane lines in the bird's-eye view, finds a preceding vehicle among the other vehicles from the lane center point and the ego vehicle's position, and computes the preceding vehicle's speed from its position; a preceding-vehicle prediction module, connected to the category detection module and the lane center calculation module, which estimates a predicted path of the preceding vehicle through a vehicle kinematic model; and a path planning module, connected to the preceding-vehicle prediction module, which uses the preceding vehicle as a path reference point if its predicted path is the same as the ego vehicle's driving path, and uses the lane edge as a path reference line if the predicted path differs or there is no preceding vehicle, to compute a final path of the ego vehicle.

According to an embodiment of the present invention, the sensor is a lidar; it presents the surrounding environment information of the ego vehicle as a point cloud map, and the bird's-eye-view generation module then converts the point cloud map into a bird's-eye view using a rotation formula.

According to an embodiment of the present invention, the coordinate information includes the shape formed by the coordinate points, the point density, the height of the object formed by the coordinate points, or the echo intensity value of each coordinate point.

According to an embodiment of the present invention, the echo intensity values are preset into a plurality of intervals, and coordinate points whose echo intensity values fall in different intervals are displayed in different colors on the bird's-eye view.

According to an embodiment of the present invention, the category detection module filters the echo intensity values to remove noise, then identifies the lane edges, lane lines, and the preceding vehicle in the bird's-eye view from the coordinate information.

According to an embodiment of the present invention, the category detection module filters the coordinate information with a Kalman filter.

According to an embodiment of the present invention, the lane center calculation module finds a drivable range from the lane edges and lane lines in the bird's-eye view, then takes the midpoint of two adjacent lane lines as the lane center point, or the average of a lane line and a lane edge.

According to an embodiment of the present invention, after the lane center calculation module obtains the position of the preceding vehicle marked in the bird's-eye view, it computes the preceding vehicle's speed from its positions in at least two bird's-eye views at consecutive times.

According to an embodiment of the present invention, the preceding-vehicle prediction module further establishes a driving-behavior region of interest from the marked lane lines, and then predicts the behavior of the preceding vehicle, such as going straight or turning, from the predicted path and the region of interest.

According to an embodiment of the present invention, when the predicted path of the preceding vehicle is the same as the ego vehicle's driving path, the path planning module uses the preceding vehicle as the path reference point and combines it with the positions of the ego vehicle and the lane center point and the speed of the preceding vehicle to compute the ego vehicle's final path.

According to an embodiment of the present invention, when the predicted path of the preceding vehicle differs from the ego vehicle's driving path, the path planning module uses the lane edge as the path reference line and computes an edge curvature from it to compute the ego vehicle's final path.

The present invention further provides a path planning method for a self-driving vehicle, comprising the following steps: detecting the surrounding environment information of an ego vehicle with at least one sensor; converting the surrounding environment information into a bird's-eye view containing the coordinate information of each coordinate point; identifying and marking the lane edges, lane lines, and other vehicles in the bird's-eye view according to the coordinate information; computing a lane center point from the marked lane edges and lane lines, finding a preceding vehicle among the other vehicles from the lane center point and the ego vehicle's position, and computing the preceding vehicle's speed from its marked position; estimating a predicted path of the preceding vehicle through a vehicle kinematic model; and, if the predicted path of the preceding vehicle is the same as the ego vehicle's driving path, using the preceding vehicle as a path reference point, or, if the predicted path differs or there is no preceding vehicle, using the lane edge as a path reference line, to compute a final path of the ego vehicle.

According to an embodiment of the present invention, the step of computing the preceding vehicle's speed from its position further includes the following step: computing the speed from the preceding-vehicle positions in at least two bird's-eye views at consecutive times.

According to an embodiment of the present invention, the step of estimating a predicted path of the preceding vehicle through the vehicle kinematic model further includes the following steps: establishing a driving-behavior region of interest from the marked lane lines, then predicting the preceding vehicle's behavior, such as going straight or turning, from the predicted path and the region of interest.

The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of protection of the present invention.

It should be understood that, when used in this specification and the appended claims, the terms "include" and "comprise" indicate the presence of the described features, integers, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or combinations thereof.

It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.

It should further be understood that the term "and/or" used in this specification and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.

The present invention provides a path planning system and method for a self-driving vehicle. Refer to FIG. 1, the architecture diagram of the path planning system of the present invention. The path planning system 10 is installed on an ego vehicle (not shown). It includes at least one sensor 12, a bird's-eye-view generation module 13, a category detection module 14, a lane center calculation module 15, a preceding-vehicle prediction module 16, and a path planning module 17. The sensor 12 is connected to the bird's-eye-view generation module 13; the bird's-eye-view generation module 13 to the category detection module 14; the category detection module 14 to the lane center calculation module 15 and the preceding-vehicle prediction module 16; the lane center calculation module 15 to the preceding-vehicle prediction module 16; and the preceding-vehicle prediction module 16 to the path planning module 17. These modules reside in an on-board host 11, which contains at least one processor (not shown); the modules may be implemented by one or more processors.

The sensor 12 is installed on the ego vehicle to detect its surrounding environment information. In one embodiment, the sensor 12 is a lidar that captures point cloud data of the surroundings and generates a point cloud map. The bird's-eye-view generation module 13 converts the point cloud map into a bird's-eye view using a rotation formula, and the bird's-eye view contains the coordinate information of each coordinate point: detected physical quantities such as the shape formed by the coordinate points, the point density, the height of the object the points form, or the echo intensity value of each point. Because the sensor 12 can filter out specific information, it can be used to determine whether a preceding vehicle exists.

In one embodiment, if the sensor 12 is a lidar, the point-cloud echoes it receives have different intensities depending on an object's material, color, and so on, so lane edges, lane lines, and the preceding vehicle can be distinguished by echo intensity value. Specifically, the echo intensity values can be preset into a plurality of intervals, with the coordinate points of each interval displayed in a different color on the bird's-eye view. For example, echo intensities a~b correspond to special paint coatings: if the points also match features such as low height and an elongated shape, they are classified as lane lines or lane edges. Echo intensities c~d correspond to metal: if the points match features such as medium-to-high height and a cuboid shape, they are classified as vehicles. Echo intensities e~f correspond to vegetation or concrete: if the points match features such as medium-to-high height and an irregular shape, they are classified as bushes or sidewalks. This classification step is performed by the category detection module 14.
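As a rough illustration of this interval-based classification, the logic might look like the sketch below. The intensity thresholds, the height cutoffs, and the shape labels are hypothetical placeholders (the patent only names the intervals a~b, c~d, e~f without values):

```python
def classify_point_cluster(intensity, height, shape):
    """Classify a point-cloud cluster by echo intensity plus geometric cues.

    All thresholds here are illustrative only; real values depend on the
    lidar model and the reflectivity of local road materials.
    """
    # Hypothetical intervals: a~b = reflective paint, c~d = metal, e~f = vegetation/concrete
    if 0.80 <= intensity <= 1.00 and height < 0.1 and shape == "elongated":
        return "lane line or lane edge"   # retroreflective road paint, low and long
    if 0.40 <= intensity <= 0.60 and height >= 1.0 and shape == "cuboid":
        return "vehicle"                  # metal body, medium-to-high height
    if 0.05 <= intensity <= 0.25 and height >= 1.0 and shape == "irregular":
        return "bush or sidewalk"         # vegetation or concrete
    return "unknown"
```

A cluster that matches an intensity interval but not the accompanying geometric features falls through to "unknown", mirroring the two-stage (intensity, then shape/height) test described above.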

The category detection module 14 identifies the lane lines, lane edges, and all vehicles from the coordinate information and annotates them on the bird's-eye view, for example by tracing the lane lines and lane edges and drawing boxes around all vehicles, including the ego vehicle and the preceding/other vehicles. If the sensor 12 is a lidar, the category detection module 14 first filters the coordinate information with a Kalman filter to remove noise before identifying the lane lines, lane edges, and vehicles.

The lane center calculation module 15 computes a lane center point from the marked lane edges and lane lines in the bird's-eye view. It first finds a drivable range from the lane edges and lane lines, then takes the midpoint of two adjacent lane lines within that range as the lane center point, or the average of a lane line and a lane edge. Multiple lane center points can be connected into a lane center line. Since it is known which of the marked vehicles is the ego vehicle, once the lane center point is known it can further be determined which of the other vehicles is the preceding vehicle. In addition, after obtaining the preceding vehicle's position, the module computes its speed from the preceding-vehicle positions in at least two bird's-eye views at consecutive times. The lane center calculation module 15 therefore outputs the drivable range, the lane center point, the preceding vehicle's position, and the preceding vehicle's speed.
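The two computations in this paragraph, midpoints between adjacent lane boundaries and speed from positions in consecutive bird's-eye views, can be sketched as follows. The function names and the paired-point representation of the boundaries are assumptions for illustration:

```python
import math

def lane_center(points_left, points_right):
    """Lane center points: midpoints between two adjacent lane boundaries
    (two lane lines, or a lane line and a lane edge), sampled pairwise."""
    return [((xl + xr) / 2, (yl + yr) / 2)
            for (xl, yl), (xr, yr) in zip(points_left, points_right)]

def preceding_speed(pos_t0, pos_t1, dt):
    """Speed of the preceding vehicle from its positions in two
    bird's-eye views taken dt seconds apart."""
    return math.dist(pos_t0, pos_t1) / dt
```

Connecting the returned center points in order yields the lane center line mentioned above.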

The preceding-vehicle prediction module 16 feeds the preceding-vehicle position captured from the bird's-eye view into a vehicle kinematic model and, assuming the preceding vehicle's speed is constant, estimates a predicted path for it. The module further establishes a driving-behavior region of interest from the marked lane lines, and then predicts the preceding vehicle's behavior t seconds later, such as going straight or turning, from the predicted path and the region of interest. The output of the preceding-vehicle prediction module 16 is thus the predicted behavior of the preceding vehicle.

The path planning module 17 determines from the preceding vehicle's predicted path and predicted behavior whether the preceding vehicle and the ego vehicle share the same driving path. If the predicted path of the preceding vehicle is the same as the ego vehicle's driving path, for example both are about to turn right, the preceding vehicle's path is used as a reference; in other words, the preceding vehicle serves as a path reference point. Combining the positions of the ego vehicle, the preceding vehicle, and the lane center point yields the path equation of the ego vehicle's final path. If the predicted path of the preceding vehicle differs from the ego vehicle's driving path, the lane edge closest to the ego vehicle is used as a path reference line to compute the final path.

Refer also to FIG. 2, a flow chart of the path planning method of the present invention. In step S10, at least one sensor 12 detects the surrounding environment information of the ego vehicle. In step S12, the bird's-eye-view generation module 13 converts the surrounding environment information into a bird's-eye view containing the coordinate information of each coordinate point. In step S14, the category detection module 14 identifies and marks the lane edges, lane lines, and other vehicles in the bird's-eye view from the coordinate information. In step S16, the lane center calculation module 15 computes a lane center point from the marked lane edges and lane lines; it then finds a preceding vehicle among the other vehicles from the lane center point and the ego vehicle's position, and computes the preceding vehicle's speed from its marked position. In this step, the module first finds a drivable range from the lane edges and lane lines, then takes the midpoint of two adjacent lane lines within that range, or the average of a lane line and a lane edge, as the lane center point. In step S18, the preceding-vehicle prediction module 16 estimates a predicted path of the preceding vehicle through a vehicle kinematic model and further predicts its behavior, such as going straight or turning. In step S20, the sensor 12 determines whether there is a preceding vehicle. If there is, in step S22 the path planning module 17 further determines whether the preceding vehicle's predicted path is the same as the ego vehicle's driving path. If so, as in step S24, the preceding vehicle is used as a path reference point and, combined with the positions of the ego vehicle and the lane center point, a final path of the ego vehicle is computed. Otherwise, if the predicted path differs or step S20 finds no preceding vehicle, then, as in step S26, the lane edge is used as a path reference line to compute the final path.

In step S12 above, the bird's-eye-view generation module 13 converts the point cloud map into a bird's-eye view using the rotation formula of Eq. (1), the direction-cosine transform with a translated origin:

x = c11(x′ − h1) + c21(y′ − h2) + c31(z′ − h3)
y = c12(x′ − h1) + c22(y′ − h2) + c32(z′ − h3)   (1)
z = c13(x′ − h1) + c23(y′ − h2) + c33(z′ − h3)

where (x′, y′, z′) are the original point-cloud coordinates and (x, y, z) are the coordinates in the transformed bird's-eye view; (cos αi, cos βi, cos γi) are abbreviated as (c1i, c2i, c3i) for i = 1, 2, 3, with α, β, γ the rotation angles of the original coordinate system; and (h1, h2, h3) is the position of the new origin in the original coordinate system.
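A minimal numeric sketch of this rotate-then-project step, assuming the direction cosines come from yaw/pitch/roll angles α, β, γ (how they are obtained in practice is not specified in the text, and the function name is illustrative):

```python
import numpy as np

def point_cloud_to_bev(points, alpha, beta, gamma, origin):
    """Transform raw lidar points into the bird's-eye-view frame:
    subtract the new origin, apply the rotation whose entries are the
    direction cosines of Eq. (1), then drop height for a top-down view."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    # Elementary rotations about z, y, x; their product's entries are the c_ji
    Rz = np.array([[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rx = np.array([[1, 0, 0], [0, cg, -sg], [0, sg, cg]])
    R = Rz @ Ry @ Rx
    shifted = np.asarray(points, dtype=float) - np.asarray(origin, dtype=float)
    rotated = shifted @ R.T
    return rotated[:, :2]  # keep (x, y): the bird's-eye projection
```

With zero angles and a zero origin the transform is the identity, so a point (1, 2, 3) simply projects to (1, 2).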

In step S18 above, the preceding-vehicle prediction module 16 estimates the predicted path of the preceding vehicle through the vehicle kinematic model and further predicts its behavior. Refer to FIG. 3, a coordinate diagram for predicting the preceding vehicle's path. A is the front-wheel position of the vehicle model; B is the rear-wheel position; C is the center-of-mass position; O, the intersection of OA and OB, is the vehicle's instantaneous rolling center, with segments OA and OB each perpendicular to the direction of the corresponding tire. δr is the rear-wheel steering angle, δf the front-wheel steering angle, Lr the length from the rear wheel to the center of mass, and Lf the length from the front wheel to the center of mass. The predicted path of the preceding vehicle can then be expressed by the kinematic model of Eq. (2):

ẋ = v cos(ψ + β)
ẏ = v sin(ψ + β)
ψ̇ = r = (v cos β / (Lf + Lr)) (tan δf − tan δr)   (2)
β = arctan((Lf tan δr + Lr tan δf) / (Lf + Lr))

where ψ is the heading angle; β is the slip angle, the angle between the vehicle's direction of travel and the direction the wheels point; v is the vehicle speed; and r is the yaw rate (angular velocity).
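Under the constant-speed assumption stated earlier, the model above can be forward-integrated to produce the predicted path. The sketch below assumes the standard kinematic bicycle form of Eq. (2) with Euler integration; the function name, step size, and horizon are illustrative:

```python
import math

def predict_path(x, y, psi, v, delta_f, lf, lr, dt=0.1, horizon=3.0, delta_r=0.0):
    """Forward-integrate the kinematic bicycle model at constant speed v,
    returning the predicted (x, y) positions over the given horizon."""
    # Slip angle from the geometry of Fig. 3
    beta = math.atan((lf * math.tan(delta_r) + lr * math.tan(delta_f)) / (lf + lr))
    path = []
    for _ in range(int(horizon / dt)):
        x += v * math.cos(psi + beta) * dt
        y += v * math.sin(psi + beta) * dt
        # Yaw rate r = psi_dot from Eq. (2)
        psi += (v * math.cos(beta) / (lf + lr)) * (math.tan(delta_f) - math.tan(delta_r)) * dt
        path.append((x, y))
    return path
```

Checking the predicted positions against the driving-behavior region of interest built from the lane lines then yields the straight-vs-turn classification described above.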

In step S24 above, a cubic equation is used to find the final path of the ego vehicle, as in Eq. (3):

r(s) = [x(s), y(s), θ(s), k(s)]′   (3)

where x is the x-axis coordinate, y is the y-axis coordinate, θ is the heading angle of the ego vehicle, and k is the curvature of the curve at the intersection. In the scenario with no preceding vehicle, the lane curvature can be obtained from the lane edge and substituted, via Eqs. (4) through (8), into the cubic equation to obtain the final path formula (3).
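The coefficient formulas (4)-(8) are not reproduced in this text, so as a rough stand-in, one common way to obtain a cubic path through the ego position, the lane center point, and the path reference point (with the initial heading constrained) is a small linear solve. Everything here, the constrained fit and the names, is illustrative and not the patent's method:

```python
import numpy as np

def cubic_path(ego, lane_center, target, ego_heading=0.0):
    """Fit y = a*x^3 + b*x^2 + c*x + d through three path points while
    matching the heading at the ego position. Stand-in for Eqs. (4)-(8)."""
    x0 = ego[0]
    xs = np.array([ego[0], lane_center[0], target[0]], dtype=float)
    ys = np.array([ego[1], lane_center[1], target[1]], dtype=float)
    # Three interpolation conditions...
    A = np.vstack([xs**3, xs**2, xs, np.ones_like(xs)]).T
    # ...plus one heading condition: 3a*x0^2 + 2b*x0 + c = tan(theta0)
    A = np.vstack([A, [3 * x0**2, 2 * x0, 1.0, 0.0]])
    b = np.append(ys, np.tan(ego_heading))
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs  # [a, b, c, d], usable with np.polyval
```

Four conditions determine the four cubic coefficients exactly when the sample x-values are distinct.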

FIGS. 4 to 6 illustrate an embodiment of the invention applied at a T-junction, taking a lidar sensor as an example. The point cloud map is first converted by the rotation formula into the bird's-eye view shown in FIG. 4 (the color image is rendered in grayscale). Surrounding objects are then classified by echo intensity to find the lane lines, lane edges, and all vehicles, as shown in FIG. 5, where the long dashed lines are lane lines 20, the short dashed lines are lane edges 22, the rectangular boxes are other vehicles 24, and the black dot is the position of the ego vehicle 26. The lane center point 28, between two lane lines 20 or between a lane line 20 and a lane edge 22, is found and marked by a triangle in FIG. 5. Note that this lane center point 28 is the first center point after passing through the junction; as the ego vehicle 26 moves, the lane center point 28 at each time t moves with it, and multiple lane center points 28 can be connected into a lane center line. In FIG. 6, the ego vehicle 26's width range is used to determine whether a vehicle is ahead: if there is a preceding vehicle, its driving behavior is predicted through vehicle kinematics; otherwise the curvature of the nearest lane edge 22 is extracted and used to compute the final cornering path. The light gray arc arrow in FIG. 6 is the cornering path of the ego vehicle 26.

Figures 7 to 9 are schematic diagrams of an embodiment of the present invention applied at a crossroad intersection. Taking a lidar sensor as an example, the point cloud is first converted into the bird's-eye view shown in Figure 7 using the axis-rotation formula, with the color image rendered in grayscale. The surrounding objects are then classified according to their echo intensity values to find the lane lines, lane edges, and all vehicles, as shown in Figure 8, where the short dashed lines are lane edges 22, the rectangular frames are other vehicles 24, and the black dot is the position of the vehicle 26. Next, the lane center point 28 between two lane edges 22 is found, as shown by the triangle marks in Figure 8. Since the vehicle 26 can go straight or turn right, the lane center points 28 of both the straight path and the right-turn path are found at the same time, producing the two triangle marks in Figure 8. As in Figure 6, these two triangle marks are the first lane center points 28 of the respective paths after passing through the intersection. In Figure 9, whether there is a vehicle ahead is determined from the width range of the vehicle 26: if there is a preceding vehicle, its driving behavior is predicted through vehicle kinematics; otherwise, the curvature of the nearest lane edge 22 is extracted to calculate the final path through the turn. The light gray straight arrow and the light gray arc-shaped arrow in Figure 9 are both paths of the vehicle 26 through the intersection.
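Each triangle mark above is simply a lane center point taken between a pair of lane-edge points of the corresponding candidate path. A minimal sketch, with purely illustrative coordinates (the patent does not give numeric examples):

```python
def lane_center_point(edge_a, edge_b):
    """Midpoint between two lane-edge points: the first lane center point
    of a candidate path after the intersection (the triangle marks)."""
    return ((edge_a[0] + edge_b[0]) / 2.0,
            (edge_a[1] + edge_b[1]) / 2.0)

# Straight-ahead candidate: left/right lane edges beyond the junction.
straight_center = lane_center_point((8.0, -1.5), (8.0, 1.5))
# Right-turn candidate: edges of the crossing lane.
right_center = lane_center_point((3.0, -6.0), (6.0, -6.0))
```

Repeating this at each time step yields the sequence of center points that connect into the lane center line described for Figure 5.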

Figures 10 to 12 are schematic diagrams of an embodiment of the present invention applied in an underground parking lot. Taking a lidar sensor as an example, the point cloud is first converted into the bird's-eye view shown in Figure 10 using the axis-rotation formula, with the color image rendered in grayscale. The surrounding objects are then classified according to their echo intensity values to find the lane lines, lane edges, and all vehicles, as shown in Figure 11, where the short dashed lines are lane edges 22, the rectangular frames are other vehicles 24, and the black dot is the position of the vehicle 26. The lane center point 28 between two lane edges 22 is then found, as shown by the triangle mark in Figure 11. In Figure 12, whether there is a vehicle ahead is determined from the width range of the vehicle 26: if there is a preceding vehicle, its driving behavior is predicted through vehicle kinematics; otherwise, the curvature of the nearest lane edge 22 is extracted to calculate the final path of the vehicle 26. The light gray straight arrow in Figure 12 is the path of the vehicle 26.
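The "is there a vehicle ahead within the width range of the vehicle 26?" check used in all three embodiments can be sketched as a corridor test in the ego frame. The `ego_width` and `max_range` values here are assumed parameters for illustration, not values from the patent:

```python
def find_preceding_vehicle(vehicles, ego_width=2.0, max_range=30.0):
    """Among detected vehicles given as (x_ahead, y_lateral) in the ego
    frame, return the nearest one inside the corridor spanned by the ego
    vehicle's width, or None if the lane ahead is clear."""
    half = ego_width / 2.0
    ahead = [v for v in vehicles
             if 0.0 < v[0] <= max_range and abs(v[1]) <= half]
    return min(ahead, key=lambda v: v[0]) if ahead else None

# One vehicle ahead in the corridor, one in the adjacent lane.
front = find_preceding_vehicle([(12.0, 0.4), (8.0, 3.5)])
```

If this returns a vehicle, the kinematic prediction branch is taken; if it returns None, the nearest lane-edge curvature branch is taken instead.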

In summary, the present invention provides a path planning system and method for an autonomous vehicle. After converting the point cloud obtained by the lidar into a bird's-eye view through a conversion formula, the system identifies the object categories in the surrounding environment, finds the lane lines and lane edges, and calculates the drivable range; at the same time, it uses the lane lines of the target lane (i.e., the lane the vehicle is about to enter by going straight or turning) to find the lane center point as the end point. If there is a preceding vehicle at the lane center point and it is predicted to follow the same path as the vehicle, the predicted path of the preceding vehicle is used as the path reference point of the vehicle; otherwise, the lane edge of the environment is used as the reference to calculate the final path of the vehicle. In this way, the vehicle needs neither high-precision maps nor GPS: the final path can be calculated from the echo intensity values of the lidar point cloud data, which greatly reduces the cost of recording high-precision maps and the storage space occupied by the data, and the system of the present invention still works normally in basements without GPS.
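Two small pieces of the summary can be made concrete: the speed of the preceding vehicle is obtained from its positions in two consecutive bird's-eye views (as in claims 9 and 20), and its path is then predicted with a vehicle kinematic model. The excerpt does not specify that model, so a constant-velocity rollout stands in for it here as a hedged sketch; the positions and time step are illustrative.

```python
def preceding_speed(pos_t0, pos_t1, dt):
    """Speed and velocity of the preceding vehicle from its positions in
    two consecutive bird's-eye views separated by dt seconds."""
    vx = (pos_t1[0] - pos_t0[0]) / dt
    vy = (pos_t1[1] - pos_t0[1]) / dt
    return (vx ** 2 + vy ** 2) ** 0.5, (vx, vy)

def predict_path(pos, velocity, horizon, dt=0.1):
    """Constant-velocity stand-in for the vehicle kinematic model:
    roll the preceding vehicle forward over the prediction horizon."""
    vx, vy = velocity
    steps = int(horizon / dt)
    return [(pos[0] + vx * t * dt, pos[1] + vy * t * dt)
            for t in range(1, steps + 1)]

# Vehicle moved 0.5 m along x between two frames 0.1 s apart.
speed, vel = preceding_speed((10.0, 0.0), (10.5, 0.0), dt=0.1)
path = predict_path((10.5, 0.0), vel, horizon=1.0)
```

Comparing such a predicted path against the ego vehicle's intended path is what decides whether the preceding vehicle serves as the path reference point or the lane edge serves as the path reference line.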

The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the scope of its implementation. All equivalent changes or modifications made in accordance with the features and spirit described in the claims of the present invention shall fall within the scope of the patent application of the present invention.

10: Path planning system of the autonomous vehicle
11: Onboard host
12: Sensor
13: Bird's-eye view generation module
14: Category detection module
15: Lane center calculation module
16: Preceding vehicle prediction module
17: Path planning module
20: Lane line
22: Lane edge
24: Other vehicle
26: Ego vehicle
28: Lane center point

Figure 1 is a block diagram of the path planning system of the autonomous vehicle of the present invention.
Figure 2 is a flow chart of the path planning method of the autonomous vehicle of the present invention.
Figure 3 is a coordinate diagram of the predicted path of the preceding vehicle.
Figures 4 to 6 are schematic diagrams of an embodiment of the present invention applied at a T-junction.
Figures 7 to 9 are schematic diagrams of an embodiment of the present invention applied at a crossroad intersection.
Figures 10 to 12 are schematic diagrams of an embodiment of the present invention applied in an underground parking lot.


Claims (23)

1. A path planning system of an autonomous vehicle, disposed on an ego vehicle, the path planning system comprising: at least one sensor for detecting surrounding environment information of the ego vehicle; a bird's-eye view generation module, connected to the at least one sensor, which receives the surrounding environment information and converts it into a bird's-eye view including coordinate information of each coordinate point; a category detection module, connected to the bird's-eye view generation module, which identifies and marks the lane edges, lane lines, and other vehicles in the bird's-eye view according to the coordinate information; a lane center calculation module, connected to the category detection module, which finds a drivable range from the marked lane edges and lane lines in the bird's-eye view, calculates a lane center point, finds a preceding vehicle among the other vehicles according to the lane center point and the position of the ego vehicle, and calculates the speed of the preceding vehicle according to the position of the preceding vehicle; a preceding vehicle prediction module, connected to the category detection module and the lane center calculation module, which estimates a predicted path of the preceding vehicle through a vehicle kinematic model; and a path planning module, connected to the preceding vehicle prediction module, which uses the preceding vehicle as a path reference point if the predicted path of the preceding vehicle is the same as the driving path of the ego vehicle, and uses the lane edge as a path reference line to calculate a final path of the ego vehicle if the predicted path of the preceding vehicle differs from the driving path of the ego vehicle or there is no preceding vehicle.
2. The path planning system of claim 1, wherein the at least one sensor is a lidar.
3. The path planning system of claim 2, wherein the at least one sensor presents the surrounding environment information of the ego vehicle as a point cloud, and the bird's-eye view generation module converts the point cloud into the bird's-eye view using an axis-rotation formula.
4. The path planning system of claim 1, wherein the coordinate information includes the shape formed by the coordinate points, the density of the points, the height of the object formed by the coordinate points, or the echo intensity value of each coordinate point.
5. The path planning system of claim 4, wherein the echo intensity values are preset with a plurality of intervals, and the coordinate points whose echo intensity values fall in different intervals are displayed in different colors on the bird's-eye view.
6. The path planning system of claim 4, wherein the category detection module filters the coordinate information to remove noise, and then identifies the lane edges, lane lines, and other vehicles in the bird's-eye view according to the echo intensity values.
7. The path planning system of claim 6, wherein the category detection module filters the coordinate information using a Kalman filter.
8. The path planning system of claim 1, wherein the lane center calculation module finds a drivable range according to the lane edges and lane lines in the bird's-eye view, and then takes the center point of two adjacent lane lines within the drivable range as the lane center point, or takes the average of a lane line and a lane edge as the lane center point.
9. The path planning system of claim 1, wherein after the lane center calculation module obtains the position of the preceding vehicle, it calculates the speed of the preceding vehicle according to the positions of the preceding vehicle in at least two bird's-eye views at consecutive times.
10. The path planning system of claim 9, wherein the preceding vehicle prediction module further establishes a driving-behavior region of interest through the marked lane lines, and then predicts the behavior of the preceding vehicle, including going straight or turning, according to the predicted path and the driving-behavior region of interest.
11. The path planning system of claim 1, wherein when the predicted path of the preceding vehicle is the same as the driving path of the ego vehicle, the path planning module uses the preceding vehicle as the path reference point and combines it with the positions of the ego vehicle and the lane center point and the speed of the preceding vehicle to calculate the final path of the ego vehicle.
12. The path planning system of claim 1, wherein when the predicted path of the preceding vehicle differs from the driving path of the ego vehicle, the path planning module uses the lane edge as the path reference line and calculates an edge curvature from the path reference line to calculate the final path of the ego vehicle.
13. A path planning method of an autonomous vehicle, comprising the following steps: detecting surrounding environment information of an ego vehicle with at least one sensor; converting the surrounding environment information into a bird's-eye view including coordinate information of each coordinate point; identifying and marking the lane edges, lane lines, and other vehicles in the bird's-eye view according to the coordinate information; calculating a lane center point according to the marked lane edges and lane lines in the bird's-eye view, finding a preceding vehicle among the other vehicles according to the lane center point and the position of the ego vehicle, and then calculating the speed of the preceding vehicle according to the position of the preceding vehicle; estimating a predicted path of the preceding vehicle through a vehicle kinematic model; and, if the predicted path of the preceding vehicle is the same as the driving path of the ego vehicle, using the preceding vehicle as a path reference point, or, if the predicted path of the preceding vehicle differs from the driving path of the ego vehicle or there is no preceding vehicle, using the lane edge as a path reference line, to calculate a final path of the ego vehicle.
14. The path planning method of claim 13, wherein the surrounding environment information of the ego vehicle is presented as a point cloud, and the point cloud is converted into the bird's-eye view using an axis-rotation formula.
15. The path planning method of claim 13, wherein the coordinate information includes the shape formed by the coordinate points, the density of the points, the height of the object formed by the coordinate points, or the echo intensity value of each coordinate point.
16. The path planning method of claim 15, wherein the echo intensity values are preset with a plurality of intervals, and the coordinate points whose echo intensity values fall in different intervals are displayed in different colors on the bird's-eye view.
17. The path planning method of claim 16, wherein the echo intensity values are filtered to remove noise, and then the lane edges, lane lines, and other vehicles in the bird's-eye view are identified according to the coordinate information.
18. The path planning method of claim 17, wherein the coordinate information is filtered using a Kalman filter.
19. The path planning method of claim 13, wherein the calculation of the lane center point comprises the following steps: finding a drivable range according to the lane edges and lane lines in the bird's-eye view; and, within the drivable range, taking the center point of two adjacent lane lines as the lane center point, or taking the average of a lane line and a lane edge as the lane center point.
20. The path planning method of claim 13, wherein the step of calculating the speed of the preceding vehicle according to the position of the preceding vehicle further comprises: calculating the speed of the preceding vehicle according to the positions of the preceding vehicle in at least two bird's-eye views at consecutive times.
21. The path planning method of claim 13, wherein the step of estimating a predicted path of the preceding vehicle through the vehicle kinematic model further comprises: establishing a driving-behavior region of interest through the marked lane lines, and then predicting the behavior of the preceding vehicle, including going straight or turning, according to the predicted path and the driving-behavior region of interest.
22. The path planning method of claim 13, wherein when the predicted path of the preceding vehicle is the same as the driving path of the ego vehicle, the preceding vehicle is used as the path reference point and combined with the positions of the ego vehicle and the lane center point and the speed of the preceding vehicle to calculate the final path of the ego vehicle.
23. The path planning method of claim 13, wherein when the predicted path of the preceding vehicle differs from the driving path of the ego vehicle, the lane edge is used as the path reference line and an edge curvature is calculated from the path reference line to calculate the final path of the ego vehicle.
TW111139032A 2022-10-14 2022-10-14 Self-driving route planning system and method TWI824773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW111139032A TWI824773B (en) 2022-10-14 2022-10-14 Self-driving route planning system and method

Publications (2)

Publication Number Publication Date
TWI824773B TWI824773B (en) 2023-12-01
TW202416216A (en) 2024-04-16

Family

ID=90052997



