TW202416216A - Path planning system and method of autonomous vehicles to compute the predicted path based on the point cloud data of either the preceding vehicle or the lane edge for reducing the costs of constructing high-precision graphics, and reducing the storage space - Google Patents
- Publication number: TW202416216A (application TW111139032A)
- Authority: TW (Taiwan)
- Prior art keywords: vehicle, path, lane, driving, bird
- Prior art date: 2022-10-14
Description
The present invention relates to a path planning system, and more particularly to a path planning system and method for a self-driving vehicle.
In recent years, self-driving technology has gradually matured, and related open-source self-driving software has entered the market, lowering the barrier to self-driving vehicle development. Mainstream self-driving technology currently relies mostly on high-precision maps recorded from GPS positions, or on lane-line detection, to obtain the best path.
The lane-line detection approach has an inherent weakness: not every environment has lane lines. Intersections and parking lots, for example, have none, and detection fails wherever no lane lines are painted, so the lane-line detection method is constrained by the environment.
The approach of computing the best path from high-precision maps first requires a vehicle equipped with stereo cameras to collect complete road information and to identify road features useful for localization, such as buildings, traffic signs, and street lights, as well as road markings such as lane lines, direction arrows, and pedestrian crossings. The road-information map data are then combined with GPS positioning data to produce an accurate route image. Its biggest problem, however, is that it cannot be used when localization fails; in particular, when the vehicle is at an intersection and there are also no lane lines to detect, the vehicle's route cannot be planned. In addition, obtaining map data requires considerable manpower and funding for surveying, and the data volume is large, which drives up costs.
In view of the above deficiencies of the prior art and of future needs, the present invention proposes a path planning system and method for self-driving vehicles to resolve these deficiencies. The specific architecture and its embodiments are described in detail below.
The main object of the present invention is to provide a path planning system and method for a self-driving vehicle that does not rely on high-precision map data but instead classifies objects in the surrounding environment by their echo intensity values, thereby reducing the manpower and expense of recording high-precision maps while also reducing the storage space occupied by the data.
Another object of the present invention is to provide a path planning system and method for a self-driving vehicle that does not depend on a navigation system; even when navigation fails, a path can still be planned through physical detection by lidar.
A further object of the present invention is to provide a path planning system and method for a self-driving vehicle that, when a road or intersection has no lane lines, identifies the lane edges from the surrounding environment, uses them to find the lane center points, and then plans the driving path, greatly improving safety.
To achieve the above objects, the present invention provides a path planning system for a self-driving vehicle, installed on an own vehicle, comprising: at least one sensor for detecting information about the surrounding environment of the own vehicle; a bird's-eye view generation module, connected to the sensor, which receives the surrounding environment information and converts it into a bird's-eye view, the bird's-eye view including coordinate information for each coordinate point; a category detection module, connected to the bird's-eye view generation module, which identifies and marks the lane edges, lane lines, and preceding vehicle in the bird's-eye view according to the coordinate information; a lane center calculation module, connected to the category detection module, which calculates a lane center point from the marked lane edges and lane lines in the bird's-eye view, finds a preceding vehicle among the other vehicles according to the lane center point and the position of the own vehicle, and calculates the speed of the preceding vehicle from its position; a preceding-vehicle prediction module, connected to the category detection module and the lane center calculation module, which estimates a predicted path of the preceding vehicle through a vehicle kinematic model; and a path planning module, connected to the preceding-vehicle prediction module, which uses the preceding vehicle as a path reference point if the predicted path of the preceding vehicle is the same as the driving path of the own vehicle, and uses the lane edge as a path reference line if the predicted path of the preceding vehicle differs from the driving path of the own vehicle or there is no preceding vehicle, in order to calculate a final path of the own vehicle.
According to an embodiment of the present invention, the sensor is a lidar that presents the surrounding environment information of the own vehicle as a point cloud, and the bird's-eye view generation module then converts the point cloud into the bird's-eye view using an axis-rotation formula.
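As a rough illustration of this rotate-and-rasterize step, the following Python sketch turns lidar points into a top-down intensity grid. The rotation angles, origin shift, grid resolution, and the choice to keep the strongest echo per cell are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def pointcloud_to_bev(points, intensities, angles=(0.0, 0.0, 0.0),
                      origin=(0.0, 0.0, 0.0), resolution=0.1, size=(400, 400)):
    """Rotate/translate lidar points and rasterize them into a bird's-eye-view
    intensity image. `points` is an (N, 3) array of x, y, z coordinates."""
    alpha, beta, gamma = angles
    # Rotation matrix built from the three axis rotations (roll, pitch, yaw).
    rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha), np.cos(alpha)]])
    ry = np.array([[np.cos(beta), 0, np.sin(beta)],
                   [0, 1, 0],
                   [-np.sin(beta), 0, np.cos(beta)]])
    rz = np.array([[np.cos(gamma), -np.sin(gamma), 0],
                   [np.sin(gamma), np.cos(gamma), 0],
                   [0, 0, 1]])
    rotation = rz @ ry @ rx
    # Shift to the new origin, then rotate into the bird's-eye-view frame.
    transformed = (points - np.asarray(origin)) @ rotation.T

    bev = np.zeros(size, dtype=np.float32)
    # Map metric x/y coordinates to pixel indices (vehicle at the image center).
    cols = (transformed[:, 0] / resolution + size[1] / 2).astype(int)
    rows = (transformed[:, 1] / resolution + size[0] / 2).astype(int)
    valid = (rows >= 0) & (rows < size[0]) & (cols >= 0) & (cols < size[1])
    # Keep the strongest echo intensity that falls in each cell.
    np.maximum.at(bev, (rows[valid], cols[valid]), np.asarray(intensities)[valid])
    return bev
```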
According to an embodiment of the present invention, the coordinate information includes the shape formed by the coordinate points, the density of the points, the height of the object formed by the coordinate points, or the echo intensity value of each coordinate point.
According to an embodiment of the present invention, the echo intensity values are divided into a plurality of preset intervals, and coordinate points whose echo intensity values fall in different intervals are displayed in different colors on the bird's-eye view.
According to an embodiment of the present invention, the category detection module filters the echo intensity values to remove noise, and then identifies the lane edges, lane lines, and preceding vehicle in the bird's-eye view according to the coordinate information.
According to an embodiment of the present invention, the category detection module filters the coordinate information using a Kalman filter.
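A minimal sketch of this filtering idea is shown below, assuming a constant-velocity Kalman filter applied to a tracked point's 2-D position across successive bird's-eye views; the state layout, frame period, and noise magnitudes are illustrative assumptions rather than parameters from the patent.

```python
import numpy as np

class ConstantVelocityKalman:
    """Tracks [x, y, vx, vy] of a point across bird's-eye-view frames."""
    def __init__(self, dt=0.1, process_var=1.0, meas_var=0.25):
        self.x = np.zeros(4)                       # state estimate
        self.P = np.eye(4) * 10.0                  # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)   # constant-velocity motion
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)   # only position is measured
        self.Q = np.eye(4) * process_var
        self.R = np.eye(2) * meas_var

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the measured (x, y) position.
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                          # smoothed position
```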
According to an embodiment of the present invention, the lane center calculation module first finds a drivable range from the lane edges and lane lines in the bird's-eye view, and then takes the midpoint between two adjacent lane lines as the lane center point, or takes the average of a lane line and a lane edge as the lane center point.
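Both lane-center rules reduce to averaging two boundary polylines. The sketch below assumes the boundaries are already available as (x, y) points sampled at matching longitudinal stations, which is an assumption for illustration.

```python
import numpy as np

def lane_center_from_lines(left_line, right_line):
    """Midpoint between two adjacent lane lines, per longitudinal sample.
    Both inputs are (N, 2) arrays of (x, y) points at matching stations."""
    return (np.asarray(left_line) + np.asarray(right_line)) / 2.0

def lane_center_from_line_and_edge(lane_line, lane_edge):
    """Average of a lane line and the lane edge when only one line is painted."""
    return (np.asarray(lane_line) + np.asarray(lane_edge)) / 2.0

# Example: a straight 3.5 m wide lane sampled every metre.
xs = np.arange(0.0, 10.0)
left = np.stack([xs, np.full_like(xs, 1.75)], axis=1)
right = np.stack([xs, np.full_like(xs, -1.75)], axis=1)
centerline = lane_center_from_lines(left, right)   # y == 0 at every sample
```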
According to an embodiment of the present invention, after obtaining the position of the marked preceding vehicle in the bird's-eye view, the lane center calculation module calculates the speed of the preceding vehicle from its positions in at least two bird's-eye views taken at consecutive times.
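This speed estimate can be sketched as a finite difference over the preceding vehicle's positions in consecutive bird's-eye views; the frame period used here is an assumed parameter.

```python
import numpy as np

def preceding_vehicle_speed(positions, dt=0.1):
    """Estimate speed (m/s) from the preceding vehicle's positions in at least
    two consecutive bird's-eye views. `positions` is an (N, 2) array in metres,
    `dt` the time between frames."""
    positions = np.asarray(positions, dtype=float)
    if len(positions) < 2:
        raise ValueError("need at least two consecutive positions")
    displacements = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    return float(np.mean(displacements) / dt)

# Example: 0.8 m of forward motion per 0.1 s frame -> 8 m/s.
print(preceding_vehicle_speed([[0.0, 0.0], [0.8, 0.0], [1.6, 0.0]], dt=0.1))
```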
According to an embodiment of the present invention, the preceding-vehicle prediction module further establishes a driving-behavior region of interest from the marked lane lines, and then predicts the behavior of the preceding vehicle, such as going straight or turning, from the predicted path and the driving-behavior region of interest.
According to an embodiment of the present invention, when the predicted path of the preceding vehicle is the same as the driving path of the own vehicle, the path planning module uses the preceding vehicle as the path reference point and combines it with the positions of the own vehicle and the lane center point and the speed of the preceding vehicle to calculate the final path of the own vehicle.
According to an embodiment of the present invention, when the predicted path of the preceding vehicle differs from the driving path of the own vehicle, the path planning module uses the lane edge as the path reference line, calculates an edge curvature from the path reference line, and uses it to calculate the final path of the own vehicle.
The present invention further provides a path planning method for a self-driving vehicle, comprising the following steps: detecting information about the surrounding environment of an own vehicle with at least one sensor; converting the surrounding environment information into a bird's-eye view that includes coordinate information for each coordinate point; identifying and marking the lane edges, lane lines, and other vehicles in the bird's-eye view according to the coordinate information; calculating a lane center point from the marked lane edges and lane lines in the bird's-eye view, finding a preceding vehicle among the other vehicles according to the lane center point and the position of the own vehicle, and calculating the speed of the preceding vehicle from its marked position; estimating a predicted path of the preceding vehicle through a vehicle kinematic model; and, if the predicted path of the preceding vehicle is the same as the driving path of the own vehicle, using the preceding vehicle as a path reference point, or, if the predicted path of the preceding vehicle differs from the driving path of the own vehicle or there is no preceding vehicle, using the lane edge as a path reference line, to calculate a final path of the own vehicle.
According to an embodiment of the present invention, the step of calculating the speed of the preceding vehicle from its position further comprises: calculating the speed of the preceding vehicle from its positions in at least two bird's-eye views taken at consecutive times.
According to an embodiment of the present invention, the step of estimating a predicted path of the preceding vehicle through the vehicle kinematic model further comprises: establishing a driving-behavior region of interest from the marked lane lines, and then predicting the behavior of the preceding vehicle, such as going straight or turning, from the predicted path and the driving-behavior region of interest.
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained from the embodiments of the present invention by those skilled in the art without inventive effort fall within the scope of protection of the present invention.
It should be understood that, when used in this specification and the appended claims, the terms "include" and "comprise" indicate the presence of the described features, wholes, steps, operations, elements, and/or components, but do not exclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or combinations thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
The present invention provides a path planning system and method for a self-driving vehicle. Please refer to FIG. 1, which is an architecture diagram of the path planning system for a self-driving vehicle of the present invention. The path planning system 10 is installed on an own vehicle (not shown). The path planning system 10 includes at least one sensor 12, a bird's-eye view generation module 13, a category detection module 14, a lane center calculation module 15, a preceding-vehicle prediction module 16, and a path planning module 17. The sensor 12 is connected to the bird's-eye view generation module 13, the bird's-eye view generation module 13 is connected to the category detection module 14, the category detection module 14 is connected to the lane center calculation module 15 and the preceding-vehicle prediction module 16, the lane center calculation module 15 is connected to the preceding-vehicle prediction module 16, and the preceding-vehicle prediction module 16 is connected to the path planning module 17. The above modules are provided in an on-board host 11, which includes at least one processor (not shown); the modules may be implemented with one or more processors.
The sensor 12 is mounted on the own vehicle to detect information about its surrounding environment. In one embodiment, the sensor 12 is a lidar that captures point cloud data of the surrounding environment and produces a point cloud image. The bird's-eye view generation module 13 converts the point cloud into a bird's-eye view using an axis-rotation formula, and the bird's-eye view includes coordinate information for each coordinate point, covering detection categories of physical quantities such as the shape formed by the coordinate points, the density of the points, the height of the object formed by the coordinate points, or the echo intensity value of each coordinate point. Because the sensor 12 can filter out specific information, it can be used to determine whether there is a preceding vehicle.
In one embodiment, if the sensor 12 is a lidar, the point cloud echoes it receives have different echo intensities depending on the material, color, and other properties of the objects, so lane edges, lane lines, and preceding vehicles can be distinguished by their echo intensity values. Specifically, the echo intensity values can be divided into preset intervals, and coordinate points in different intervals are displayed in different colors on the bird's-eye view. For example, echo intensity values a to b correspond to special paint coatings; if the points also have features such as low height and an elongated shape, they are identified as lane lines or lane edges. Echo intensity values c to d correspond to metals; if the points have features such as medium-to-high height and a cuboid shape, they are identified as vehicles. Echo intensity values e to f correspond to vegetation or concrete; if the points have features such as medium-to-high height and an irregular shape, they are identified as bushes or sidewalks. This identification step is performed by the category detection module 14.
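A sketch of this rule-based classification is given below. The interval boundaries standing in for a–b, c–d, and e–f, the geometric thresholds, and the coarse shape labels are placeholder assumptions chosen for illustration, not calibration values from the patent.

```python
def classify_cluster(mean_intensity, height, shape,
                     paint=(0.70, 1.00), metal=(0.40, 0.69), veg=(0.05, 0.39)):
    """Classify a point-cloud cluster from its echo intensity and geometry.
    `shape` is a coarse label such as 'elongated', 'cuboid' or 'irregular';
    the intensity interval boundaries are illustrative placeholders."""
    lo, hi = paint
    if lo <= mean_intensity <= hi and height < 0.2 and shape == "elongated":
        return "lane line or lane edge"        # painted markings / road edge
    lo, hi = metal
    if lo <= mean_intensity <= hi and height >= 0.8 and shape == "cuboid":
        return "vehicle"                       # metallic, box-like return
    lo, hi = veg
    if lo <= mean_intensity <= hi and height >= 0.2 and shape == "irregular":
        return "bush or sidewalk"              # vegetation / concrete
    return "unknown"

print(classify_cluster(0.85, 0.05, "elongated"))   # -> lane line or lane edge
print(classify_cluster(0.55, 1.50, "cuboid"))      # -> vehicle
```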
The category detection module 14 identifies the lane lines, lane edges, and all vehicles according to the coordinate information and annotates them on the bird's-eye view, for example by tracing the lane lines and lane edges and drawing bounding boxes around all vehicles, where the vehicles include the own vehicle and the preceding/other vehicles. If the sensor 12 is a lidar, the category detection module 14 first filters the coordinate information with a Kalman filter to remove noise before identifying the lane lines, lane edges, and all vehicles.
The lane center calculation module 15 calculates a lane center point from the marked lane edges and lane lines in the bird's-eye view. It first finds a drivable range from the lane edges and lane lines in the bird's-eye view and then, within that range, takes the midpoint between two adjacent lane lines as the lane center point, or takes the average of a lane line and a lane edge as the lane center point. Multiple lane center points can be connected into a lane center line. Since it is known which of the marked vehicles is the own vehicle, once the lane center point is known it can further be determined which of the other vehicles is the preceding vehicle. In addition, after obtaining the position of the preceding vehicle, the lane center calculation module 15 can calculate its speed from its positions in at least two bird's-eye views taken at consecutive times. The lane center calculation module 15 therefore outputs the drivable range, the lane center point, the position of the preceding vehicle, and the speed of the preceding vehicle.
The preceding-vehicle prediction module 16 feeds the position of the preceding vehicle captured in the bird's-eye view into a vehicle kinematic model and, under the assumption that the speed of the preceding vehicle is constant, estimates a predicted path for it. The preceding-vehicle prediction module 16 further establishes a driving-behavior region of interest from the marked lane lines and, from the predicted path and the region of interest, predicts the behavior of the preceding vehicle t seconds later, such as going straight or turning. The output of the preceding-vehicle prediction module 16 is therefore the predicted behavior of the preceding vehicle.
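The following Python sketch illustrates a constant-speed prediction of this kind using a standard kinematic bicycle model in the symbols later defined for FIG. 3 (δf, δr, Lf, Lr, ψ); the discretization, the region-of-interest test, and all numeric values are assumptions for illustration only.

```python
import math

def predict_path(x, y, psi, v, delta_f, delta_r, lf, lr, horizon=3.0, dt=0.1):
    """Propagate a kinematic bicycle model at constant speed v and constant
    steering angles, returning the predicted (x, y) path of the preceding
    vehicle over `horizon` seconds."""
    path = []
    for _ in range(int(horizon / dt)):
        # Slip angle between the travel direction and the vehicle heading.
        beta = math.atan((lf * math.tan(delta_r) + lr * math.tan(delta_f)) / (lf + lr))
        x += v * math.cos(psi + beta) * dt
        y += v * math.sin(psi + beta) * dt
        psi += v * math.cos(beta) / (lf + lr) * (math.tan(delta_f) - math.tan(delta_r)) * dt
        path.append((x, y))
    return path

def behaviour_from_roi(path, roi_half_width=3.5):
    """Very rough behaviour test: if the predicted path leaves the straight-ahead
    region of interest laterally, call it a turn; otherwise going straight."""
    return "turn" if any(abs(py) > roi_half_width for _, py in path) else "straight"

path = predict_path(0.0, 0.0, 0.0, 8.0, delta_f=0.15, delta_r=0.0, lf=1.2, lr=1.6)
print(behaviour_from_roi(path))
```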
The path planning module 17 determines, from the predicted path and predicted behavior of the preceding vehicle, whether the preceding vehicle and the own vehicle follow the same driving path. If the predicted path of the preceding vehicle is the same as the driving path of the own vehicle, for example both are about to turn right, the path of the preceding vehicle is used as a reference; in other words, the preceding vehicle serves as a path reference point. Combining the positions of the own vehicle, the preceding vehicle, and the lane center point yields the path equation of the final path of the own vehicle. If the predicted path of the preceding vehicle differs from the driving path of the own vehicle, the lane edge closest to the own vehicle is used as a path reference line to calculate the final path of the own vehicle.
Please also refer to FIG. 2, which is a flow chart of the path planning method for a self-driving vehicle of the present invention. In step S10, at least one sensor 12 detects information about the surrounding environment of an own vehicle. In step S12, the bird's-eye view generation module 13 converts the surrounding environment information into a bird's-eye view that includes coordinate information for each coordinate point. In step S14, the category detection module 14 identifies and marks the lane edges, lane lines, and other vehicles in the bird's-eye view according to the coordinate information. In step S16, the lane center calculation module 15 calculates a lane center point from the marked lane edges and lane lines in the bird's-eye view; it then finds a preceding vehicle among the other vehicles according to the lane center point and the position of the own vehicle, and calculates the speed of the preceding vehicle from its marked position. In this step, the lane center calculation module 15 first finds a drivable range from the lane edges and lane lines in the bird's-eye view and then, within that range, takes the midpoint between two adjacent lane lines as the lane center point, or takes the average of a lane line and a lane edge as the lane center point. In step S18, the preceding-vehicle prediction module 16 estimates a predicted path of the preceding vehicle through a vehicle kinematic model and further predicts its behavior, such as going straight or turning. Next, in step S20, the sensor 12 first determines whether there is a preceding vehicle. If there is, then in step S22 the path planning module 17 further determines whether the predicted path of the preceding vehicle is the same as the driving path of the own vehicle. If they are the same, then as described in step S24 the preceding vehicle is used as a path reference point and, combined with the positions of the own vehicle and the lane center point, a final path of the own vehicle is calculated. Conversely, if the predicted path of the preceding vehicle differs from the driving path of the own vehicle, or if step S20 determines that there is no preceding vehicle, then as described in step S26 the lane edge is used as a path reference line to calculate the final path of the own vehicle. A compact sketch of this decision flow is given below.
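This sketch mirrors only the S20/S22/S24/S26 branch structure; the data structures and the waypoint-style output stand in for the modules described above and are assumptions, not part of the patent.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PrecedingVehicle:
    predicted_path: str            # e.g. "straight" or "right turn"
    position: Tuple[float, float]
    speed: float

def plan_final_path(preceding: Optional[PrecedingVehicle], own_path: str,
                    lane_center: Tuple[float, float], lane_edge_curvature: float,
                    own_position: Tuple[float, float]):
    """Follow the preceding vehicle when one exists and shares our path (S24),
    otherwise fall back to the lane edge as the path reference line (S26)."""
    if preceding is not None and preceding.predicted_path == own_path:
        # S24: preceding vehicle as the path reference point, combined with the
        # own-vehicle and lane-center positions (returned here as waypoints).
        return {"reference": "preceding vehicle",
                "waypoints": [own_position, preceding.position, lane_center]}
    # S26: no preceding vehicle, or it goes elsewhere -> lane edge as the
    # reference line; its curvature is what the cubic path would be fitted to.
    return {"reference": "lane edge", "curvature": lane_edge_curvature,
            "waypoints": [own_position, lane_center]}

plan = plan_final_path(PrecedingVehicle("right turn", (12.0, 2.0), 6.5),
                       "right turn", (20.0, 8.0), 0.05, (0.0, 0.0))
print(plan["reference"])   # -> preceding vehicle
```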
In step S12 above, the bird's-eye view generation module 13 converts the point cloud into a bird's-eye view using the axis-rotation formula of equation (1), where (x', y', z') are the original coordinates of the point cloud, (x, y, z) are the coordinates of the converted bird's-eye view, (cos α_i, cos β_i, cos γ_i) is rewritten as (c_1i, c_2i, c_3i) for i = 1, 2, 3, α, β, γ are the angles through which the original coordinate system is rotated, and (h_1, h_2, h_3) is the position of the new origin in the original coordinate system.
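The drawing of equation (1) itself is not reproduced in this text. A standard rotation-plus-origin-shift form consistent with the definitions above (direction cosines c_ij and new origin (h_1, h_2, h_3)) would read as follows; whether the patent's equation (1) is arranged exactly this way is an assumption.

```latex
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix}
c_{11} & c_{12} & c_{13} \\
c_{21} & c_{22} & c_{23} \\
c_{31} & c_{32} & c_{33}
\end{bmatrix}
\begin{bmatrix} x' - h_1 \\ y' - h_2 \\ z' - h_3 \end{bmatrix},
\qquad
c_{1i}=\cos\alpha_i,\;\; c_{2i}=\cos\beta_i,\;\; c_{3i}=\cos\gamma_i,\;\; i=1,2,3 \tag{1}
```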
In step S18 above, the preceding-vehicle prediction module 16 estimates the predicted path of the preceding vehicle through the vehicle kinematic model and further predicts its behavior. Please refer to FIG. 3, which is a coordinate diagram for predicting the path of the preceding vehicle. A is the front-wheel position of the vehicle model; B is the rear-wheel position; C is the position of the center of mass; O, the intersection of OA and OB, is the instantaneous rolling center of the vehicle, with segments OA and OB perpendicular to the directions of the two tires. δr is the rear-wheel steering angle, δf is the front-wheel steering angle, Lr is the distance from the rear wheel to the center of mass, and Lf is the distance from the front wheel to the center of mass. The predicted path of the preceding vehicle can then be expressed by equation (2), where ψ is the heading angle, β is the slip angle, i.e., the angle between the vehicle's direction of travel and the direction in which the wheels point, v is the vehicle speed, and r is the wheel angular velocity.
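Equation (2) is likewise only referenced here. A standard kinematic bicycle model written in the symbols defined above (ψ, β, v, δf, δr, Lf, Lr) would take the following form; treating this as the patent's equation (2) is an assumption.

```latex
\dot{x} = v\cos(\psi+\beta), \qquad
\dot{y} = v\sin(\psi+\beta), \qquad
\dot{\psi} = \frac{v\cos\beta}{L_f+L_r}\left(\tan\delta_f-\tan\delta_r\right),
\qquad
\beta = \tan^{-1}\!\left(\frac{L_f\tan\delta_r+L_r\tan\delta_f}{L_f+L_r}\right) \tag{2}
```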
In step S24 above, a cubic equation is used to find the final path of the own vehicle, expressed by equation (3): r(s) = [x(s), y(s), θ(s), k(s)]', where x is the x-axis coordinate, y is the y-axis coordinate, θ is the heading angle of the own vehicle, and k is the curvature of the curve at the intersection. In the scenario without a preceding vehicle, the lane curvature can be obtained from the lane edge and substituted into the cubic equation through equations (4) to (8), yielding the final path of the own vehicle in the form of equation (3).
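Equations (4) to (8) are not reproduced in the extracted text. One common way to realize a cubic path of the form r(s) = [x(s), y(s), θ(s), k(s)]' is a cubic curvature polynomial in the arc length s whose integrals give the heading and position; this is offered only as an illustrative reading, not as the patent's actual equations.

```latex
k(s) = a + b\,s + c\,s^{2} + d\,s^{3}, \qquad
\theta(s) = \theta_0 + \int_0^{s} k(\sigma)\,d\sigma,
\qquad
x(s) = x_0 + \int_0^{s} \cos\theta(\sigma)\,d\sigma, \qquad
y(s) = y_0 + \int_0^{s} \sin\theta(\sigma)\,d\sigma
```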
FIGS. 4 to 6 are schematic diagrams of an embodiment of the present invention applied at a T-junction. Taking a lidar sensor as an example, the point cloud is first converted into the bird's-eye view shown in FIG. 4 using the axis-rotation formula, with the color image rendered in grayscale. The objects in the surrounding environment are then classified according to their echo intensity values to find the lane lines, lane edges, and all vehicles, as shown in FIG. 5, where the long dashed lines are lane lines 20, the short dashed lines are lane edges 22, the rectangular boxes are other vehicles 24, and the black dot is the position of the own vehicle 26. The lane center point 28 between two lane lines 20, or between a lane line 20 and a lane edge 22, is found, as shown by the triangular mark in FIG. 5. Note that this lane center point 28 is the first center point after passing through the intersection; as the own vehicle 26 moves, the lane center point 28 at each time t also moves, and multiple lane center points 28 can be connected into a lane center line. In FIG. 6, whether there is a vehicle ahead is determined from the width range of the own vehicle 26. If there is a preceding vehicle, its driving behavior is predicted through vehicle kinematics; otherwise, the curvature of the nearest lane edge 22 is extracted and used to compute the final cornering path, shown as the light gray arc-shaped arrow in FIG. 6, which is the cornering path of the own vehicle 26.
FIGS. 7 to 9 are schematic diagrams of an embodiment of the present invention applied at a crossroads. Taking a lidar sensor as an example, the point cloud is first converted into the bird's-eye view shown in FIG. 7 using the axis-rotation formula, with the color image rendered in grayscale. The objects in the surrounding environment are then classified according to their echo intensity values to find the lane lines, lane edges, and all vehicles, as shown in FIG. 8, where the short dashed lines are lane edges 22, the rectangular boxes are other vehicles 24, and the black dot is the position of the own vehicle 26. Next, the lane center point 28 between two lane edges 22 is found, as shown by the triangular marks in FIG. 8. Since the own vehicle 26 may go straight or turn right, the lane center points 28 of both the straight path and the right-turn path are found, producing the two triangular marks in FIG. 8. As in FIG. 6, these two triangular marks are the first lane center points 28 of the two paths after passing through the intersection. In FIG. 9, whether there is a vehicle ahead is determined from the width range of the own vehicle 26. If there is a preceding vehicle, its driving behavior is predicted through vehicle kinematics; otherwise, the curvature of the nearest lane edge 22 is extracted and used to compute the final path, shown as the light gray straight arrow and the light gray arc-shaped arrow in FIG. 9, both of which are paths of the own vehicle 26 through the crossroads.
FIGS. 10 to 12 are schematic diagrams of an embodiment of the present invention applied in an underground parking lot. Taking a lidar sensor as an example, the point cloud is first converted into the bird's-eye view shown in FIG. 10 using the axis-rotation formula, with the color image rendered in grayscale. The objects in the surrounding environment are then classified according to their echo intensity values to find the lane lines, lane edges, and all vehicles, as shown in FIG. 11, where the short dashed lines are lane edges 22, the rectangular boxes are other vehicles 24, and the black dot is the position of the own vehicle 26. The lane center point 28 between two lane edges 22 is found, as shown by the triangular mark in FIG. 11. In FIG. 12, whether there is a vehicle ahead is determined from the width range of the own vehicle 26. If there is a preceding vehicle, its driving behavior is predicted through vehicle kinematics; otherwise, the curvature of the nearest lane edge 22 is extracted and used to compute the final path of the own vehicle 26, shown as the light gray straight arrow in FIG. 12.
In summary, the present invention provides a path planning system and method for a self-driving vehicle. The point cloud obtained by the lidar is converted into a bird's-eye view through a conversion formula, the object categories in the surrounding environment are identified, the lane lines and lane edges are found to compute the drivable range, and the lane lines of the target lane (i.e., the lane the own vehicle will occupy after going straight or turning) are used to find the lane center point as the goal. If there is a preceding vehicle at the lane center point and it is predicted to follow the same path as the own vehicle, the predicted path of the preceding vehicle is used as the path reference point of the own vehicle; otherwise, the lane edges of the environment are referenced to calculate the final path of the own vehicle. As a result, the own vehicle needs neither high-precision maps nor GPS: the final path of the own vehicle can be calculated from the echo intensity values of the lidar point cloud data, which greatly reduces the cost of recording high-precision maps and the storage space occupied by the data, and the system of the present invention still works normally in basements without GPS.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of its implementation. All equivalent changes or modifications made according to the features and spirit described in the claims of the present invention shall be included within the scope of the patent application of the present invention.
10: path planning system for a self-driving vehicle
11: on-board host
12: sensor
13: bird's-eye view generation module
14: category detection module
15: lane center calculation module
16: preceding-vehicle prediction module
17: path planning module
20: lane line
22: lane edge
24: other vehicle
26: own vehicle
28: lane center point
FIG. 1 is a block diagram of the path planning system for a self-driving vehicle of the present invention.
FIG. 2 is a flow chart of the path planning method for a self-driving vehicle of the present invention.
FIG. 3 is a coordinate diagram for predicting the path of the preceding vehicle.
FIGS. 4 to 6 are schematic diagrams of an embodiment of the present invention applied at a T-junction.
FIGS. 7 to 9 are schematic diagrams of an embodiment of the present invention applied at a crossroads.
FIGS. 10 to 12 are schematic diagrams of an embodiment of the present invention applied in an underground parking lot.
Claims (23)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW111139032A (TWI824773B) | 2022-10-14 | 2022-10-14 | Self-driving route planning system and method |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| TWI824773B (en) | 2023-12-01 |
| TW202416216A (en) | 2024-04-16 |
Family

ID=90052997

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW111139032A (TWI824773B) | Self-driving route planning system and method | 2022-10-14 | 2022-10-14 |

Country Status (1)

| Country | Link |
|---|---|
| TW (1) | TWI824773B (en) |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107063276A (en) * | 2016-12-12 | 2017-08-18 | 成都育芽科技有限公司 | One kind is without the high-precision unmanned vehicle on-vehicle navigation apparatus of delay and method |
CN108955692A (en) * | 2018-08-02 | 2018-12-07 | 德清知域信息科技有限公司 | It is a kind of by the vehicle-mounted air navigation aid being connect with pedestrian's scene |
TWI711804B (en) * | 2019-05-15 | 2020-12-01 | 宜陞有限公司 | Vehicle navigation device for self-driving cars |
TWI725611B (en) * | 2019-11-12 | 2021-04-21 | 亞慶股份有限公司 | Vehicle navigation switching device for golf course self-driving cars |
-
2022
- 2022-10-14 TW TW111139032A patent/TWI824773B/en active
Also Published As
Publication number | Publication date |
---|---|
TWI824773B (en) | 2023-12-01 |