TWI750762B - Hybrid planning method in autonomous vehicles and system thereof - Google Patents
- Publication number
- TWI750762B TW109126729A
- Authority
- TW
- Taiwan
- Prior art keywords
- obstacle
- vehicle
- scene
- learned
- learning
- Prior art date
Landscapes
- Traffic Control Systems (AREA)
- Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
Abstract
Description
The present invention relates to a decision-making method for an autonomous vehicle and a system thereof, and more particularly to a hybrid decision-making method for an autonomous vehicle and a system thereof.
Autonomous vehicles have developed rapidly in recent years. Many automakers have invested substantial resources to prepare for the arrival of the self-driving era and plan to operate transportation systems with driverless vehicles, and experimental autonomous vehicles have already been permitted on the road.
Autonomous vehicles use a variety of sensors, such as lidar and radar, to perform continuous wide-range sensing. While driving, an autonomous vehicle requires reference information on the vehicle body and the dynamic environment as system input in order to plan a safe driving trajectory.
Current obstacle-avoidance decision-making for autonomous vehicles mostly adopts one of two approaches: a rule-based model or a learning-based model (artificial-intelligence-based model; AI-based model). A rule-based model must evaluate every candidate result and applies only to scenes within its predefined constraints, whereas a learning-based model can produce discontinuous trajectories and lacks stability in the generated trajectory and speed. The market therefore lacks a hybrid decision-making method and system for autonomous vehicles that can handle multi-dimensional variables simultaneously, possesses learning ability, offers high stability, and satisfies the continuity of trajectory planning and vehicle dynamic constraints, and practitioners in the field have been seeking such a solution.
Therefore, an object of the present invention is to provide a hybrid decision-making method and system for autonomous vehicles, which first uses a learning-based model to learn obstacle-avoidance driving behavior and then fuses it with rule-based path planning to construct a hybrid decision. In this hybrid decision, the trajectory that the rule-based model plans from a specific scene category and a specific key parameter set is already the optimal trajectory, which eliminates the extra screening step of the prior art, in which multiple candidate trajectories must be generated and one selected.
According to one embodiment of the method aspect of the present invention, a hybrid decision-making method for an autonomous vehicle is provided for deciding an optimal trajectory function of the ego vehicle. The method includes a parameter obtaining step, a learning-based scene decision step, a learning-based parameter optimization step, and a rule-based path planning step. In the parameter obtaining step, a sensing unit is driven to sense the surrounding scene of the ego vehicle to obtain a parameter set to be learned, and the parameter set is stored in a memory. In the learning-based scene decision step, an arithmetic processing unit is driven to receive the parameter set from the memory and, according to the parameter set and a learning-based model, to identify from a plurality of scene categories the one that matches the surrounding scene of the ego vehicle. In the learning-based parameter optimization step, the arithmetic processing unit is driven to process the parameter set with the learning-based model to generate a key parameter set. In the rule-based path planning step, the arithmetic processing unit is driven to apply a rule-based model to the identified scene category and the key parameter set to plan the optimal trajectory function.
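The four steps above can be sketched as a minimal function pipeline. This is an illustrative sketch only: the function and argument names are assumptions, and the patent does not prescribe any particular implementation of the two learning-based models or the rule-based planner.

```python
def hybrid_decision(learned_params, scene_model, param_model, rule_planner):
    """Hybrid decision: two learning-based stages feed one rule-based planner.

    learned_params: the parameter set sensed and stored in memory (parameter obtaining step).
    scene_model / param_model: learning-based models (scene decision, parameter optimization).
    rule_planner: rule-based path planner returning the optimal trajectory function.
    """
    scene = scene_model(learned_params)        # learning-based scene decision
    key_params = param_model(learned_params)   # learning-based parameter optimization
    return rule_planner(scene, key_params)     # rule-based planning -> optimal trajectory
```

Any concrete models can be slotted in; the structure only fixes the order of the stages and the data handed between them.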
Thereby, the hybrid decision-making method for an autonomous vehicle of the present invention learns obstacle-avoidance driving behavior with a learning-based model and then fuses it with rule-based path planning to construct a hybrid decision, which not only handles multi-dimensional variables simultaneously but also gives the system learning ability while conforming to the continuity of trajectory planning and to vehicle dynamic constraints.
In other examples of the foregoing embodiment, the parameter set to be learned may include an ego road width, a relative distance, an obstacle length, and an obstacle lateral distance. The ego road width represents the width of the road on which the ego vehicle is located; the relative distance represents the distance between the ego vehicle and an obstacle; the obstacle length represents the length of the obstacle; and the obstacle lateral distance represents the distance of the obstacle from the lane centerline.
In other examples of the foregoing embodiment, the parameter obtaining step may include an information sensing step, which includes a vehicle dynamics sensing step, an obstacle sensing step, and a lane sensing step. In the vehicle dynamics sensing step, a vehicle dynamics sensing device is driven to locate the current position of the ego vehicle and the intersection stop line according to map information, and to sense the current heading angle, current speed, and current acceleration of the ego vehicle. In the obstacle sensing step, an obstacle sensing device is driven to sense obstacles within a predetermined distance of the ego vehicle, so as to generate obstacle information corresponding to the obstacles and a plurality of drivable-space coordinate points corresponding to the ego vehicle; the obstacle information includes, for each obstacle, its current position, speed, and acceleration. In the lane sensing step, a lane sensing device is driven to sense the ego lane-line spacing and the road curvature.
In other examples of the foregoing embodiment, the parameter obtaining step may further include a data processing step implemented by the arithmetic processing unit. The data processing step includes a clipping step, which clips the data corresponding to the ego vehicle's current position, current heading angle, current speed, current acceleration, the obstacle information, the drivable-space coordinate points, the ego lane-line spacing, and the road curvature according to a preset time interval and a preset yaw-rate change, thereby producing clipped data. Moreover, a collision time interval exists between the ego vehicle and the obstacle, and the ego vehicle has a yaw rate; the clipping step starts when the collision time interval is less than or equal to the preset time interval, and stops when the change in yaw rate is less than or equal to the preset yaw-rate change.
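The start/stop logic of the clipping step can be sketched as a small state update. The default thresholds below (3 s, 0.5, five consecutive samples) follow the values given in the detailed embodiment later in the description; the function name and the consecutive-window formulation are assumptions.

```python
def update_clipping(active, ttc, yaw_rate_changes,
                    ttc_max=3.0, yaw_change_max=0.5, window=5):
    """Decide whether data clipping should be active for the current sample.

    active: whether clipping is currently running.
    ttc: collision time interval between ego vehicle and obstacle [s].
    yaw_rate_changes: recent yaw-rate changes, newest last.
    Clipping starts when ttc <= ttc_max, and stops once the last `window`
    yaw-rate changes are all <= yaw_change_max (the maneuver has settled).
    """
    if not active:
        return ttc <= ttc_max
    recent = yaw_rate_changes[-window:]
    settled = len(recent) == window and all(abs(c) <= yaw_change_max for c in recent)
    return not settled
```

The caller would run this once per sample and record sensor data whenever it returns True.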
In other examples of the foregoing embodiment, the data processing step may further include a grouping step, which groups the clipped data into a plurality of groups according to a plurality of preset acceleration ranges and a plurality of items of oncoming-obstacle information. The preset acceleration ranges include a conservative preset acceleration range and a normal preset acceleration range; the oncoming-obstacle information includes oncoming-obstacle-present information and oncoming-obstacle-absent information; and the groups include a conservative group and a normal group. The conservative preset acceleration range and the oncoming-obstacle-absent information correspond to the conservative group, and the normal preset acceleration range and the oncoming-obstacle-present information correspond to the normal group.
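One plausible reading of this pairing, sketched below, requires both conditions of a group to hold. The function name is an assumption, and the numeric ranges (within 0.1 g conservative, 0.2 g to 0.3 g normal) come from the detailed embodiment later in the description.

```python
G = 9.81  # gravitational acceleration [m/s^2]

def classify_group(peak_accel, oncoming_obstacle):
    """Assign a clipped data segment to the conservative or normal group.

    peak_accel: peak longitudinal acceleration of the segment [m/s^2].
    oncoming_obstacle: True if an obstacle is present in the oncoming lane.
    Conservative group: |a| <= 0.1 g and no oncoming obstacle.
    Normal group: 0.2 g <= |a| <= 0.3 g and an oncoming obstacle present.
    Returns None when the segment fits neither group.
    """
    a = abs(peak_accel)
    if a <= 0.1 * G and not oncoming_obstacle:
        return "conservative"
    if 0.2 * G <= a <= 0.3 * G and oncoming_obstacle:
        return "normal"
    return None
```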
In other examples of the foregoing embodiment, the data processing step may further include a mirroring step, which mirrors the ego trajectory function along the direction of vehicle travel for each scene category to produce a mirrored ego trajectory function; the parameter set to be learned includes the mirrored ego trajectory function.
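Mirroring a trajectory about the direction of travel amounts to negating the lateral coordinate. A minimal sketch, assuming the trajectory is given as (lateral, longitudinal) sample points with the longitudinal axis as the travel direction:

```python
def mirror_trajectory(points):
    """Mirror an ego trajectory across the travel axis.

    points: sequence of (x, y) pairs, x lateral, y longitudinal [m].
    A left-avoidance maneuver becomes the corresponding right-avoidance
    one, doubling the diversity of the collected driving data.
    """
    return [(-x, y) for x, y in points]
```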
In other examples of the foregoing embodiment, the learning-based parameter optimization step may include a learning-based driving behavior generation step and a key parameter generation step. The learning-based driving behavior generation step trains on the parameter set to be learned with the learning-based model to produce a learned behavior parameter set; the parameter set to be learned includes a driving route parameter set and a driving acceleration/deceleration behavior parameter set. The key parameter generation step computes the system actuation time point from the system actuation parameter set of the learned behavior parameter set, and combines the system actuation time point, the target-point longitudinal distance, the target-point lateral distance, the target-point curvature, the ego speed, and the target speed into the key parameter set.
In other examples of the foregoing embodiment, the learned behavior parameter set may include a system actuation parameter set, the target-point longitudinal distance, the target-point lateral distance, the target-point curvature, the ego speed, and the target speed. The system actuation parameter set includes the ego speed, the ego acceleration, the steering-wheel angle, the yaw rate, the relative distance, and the obstacle lateral distance.
In other examples of the foregoing embodiment, the optimal trajectory function may include a plane-coordinate curve equation, a tangential speed, and a tangential acceleration. The plane-coordinate curve equation represents the optimal trajectory of the ego vehicle in plane coordinates; the tangential speed represents the speed of the ego vehicle at the tangent point of the curve; and the tangential acceleration represents the acceleration of the ego vehicle at the tangent point. The optimal trajectory function is updated at each sampling time of the arithmetic processing unit.
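As an illustration of such a trajectory function, a planner can hand the controller a plane-coordinate curve together with tangential speed and acceleration, re-evaluated every sampling period. The patent does not fix the curve family; a polynomial curve y(x) and a constant tangential acceleration are assumed here purely for the sketch.

```python
def make_trajectory(coeffs, v0, a_t):
    """Build a trajectory function from a polynomial plane-coordinate curve.

    coeffs: polynomial coefficients of y(x), highest order first.
    v0: initial tangential speed [m/s]; a_t: constant tangential accel [m/s^2].
    Returns f(x, t) -> (lateral offset, tangential speed, tangential accel).
    """
    def trajectory(x, t):
        y = 0.0
        for c in coeffs:             # Horner evaluation of the curve equation
            y = y * x + c
        return y, v0 + a_t * t, a_t  # tangential speed v(t) = v0 + a_t * t
    return trajectory
```

A new `trajectory` would be produced at every sampling time of the processing unit as the scene category and key parameters are refreshed.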
In other examples of the foregoing embodiment, the scene categories may include an obstacle-occupied scene, an intersection scene, and a station entry/exit scene. The obstacle-occupied scene carries an obstacle occupancy percentage: the scene represents a surrounding scene containing an obstacle and a road, and the occupancy percentage represents the proportion of the road occupied by the obstacle. The intersection scene represents a surrounding scene containing an intersection, and the station entry/exit scene represents a surrounding scene containing a station entrance or exit.
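The occupancy percentage attached to the obstacle-occupied scene can be computed in one line once the obstacle's lateral extent on the road is known. The definition below (occupied width over road width) is one plausible reading, not stated explicitly in the text.

```python
SCENE_CATEGORIES = ("obstacle_occupied", "intersection", "station_entry_exit")

def occupancy_percent(occupied_width, road_width):
    """Percentage of the road width occupied by the obstacle
    (an assumed concrete definition of the occupancy percentage)."""
    return 100.0 * occupied_width / road_width
```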
According to one embodiment of the structural aspect of the present invention, a hybrid decision-making system for an autonomous vehicle is provided for deciding an optimal trajectory function of the ego vehicle. The system includes a sensing unit, a memory, and an arithmetic processing unit. The sensing unit senses the surrounding scene of the ego vehicle to obtain a parameter set to be learned. The memory stores the parameter set to be learned, a plurality of scene categories, a learning-based model, and a rule-based model. The arithmetic processing unit is electrically connected to the memory and the sensing unit and is configured to perform operations including a learning-based scene decision step, a learning-based parameter optimization step, and a rule-based path planning step. The learning-based scene decision step identifies, according to the parameter set to be learned and the learning-based model, the scene category among the plurality of scene categories that matches the surrounding scene of the ego vehicle. The learning-based parameter optimization step processes the parameter set to be learned with the learning-based model to generate a key parameter set. The rule-based path planning step applies the rule-based model to the identified scene category and the key parameter set to plan the optimal trajectory function.
Thereby, the hybrid decision-making system for an autonomous vehicle of the present invention learns obstacle-avoidance driving behavior with a learning-based model and then fuses it with rule-based path planning to construct a hybrid decision, which can both handle multi-dimensional variables simultaneously and give the system learning ability while conforming to the continuity of trajectory planning and to vehicle dynamic constraints.
In other examples of the foregoing embodiment, the parameter set to be learned may include an ego road width, a relative distance, an obstacle length, and an obstacle lateral distance. The ego road width represents the width of the road on which the ego vehicle is located; the relative distance represents the distance between the ego vehicle and an obstacle; the obstacle length represents the length of the obstacle; and the obstacle lateral distance represents the distance of the obstacle from the lane centerline.
In other examples of the foregoing embodiment, the memory may store map information related to the route traveled by the ego vehicle. The sensing unit includes a vehicle dynamics sensing device, an obstacle sensing device, and a lane sensing device. The vehicle dynamics sensing device locates the current position of the ego vehicle according to the map information and senses the current heading angle, current speed, and current acceleration of the ego vehicle. The obstacle sensing device senses obstacles within a predetermined distance of the ego vehicle to generate obstacle information corresponding to the obstacles and a plurality of drivable-space coordinate points corresponding to the ego vehicle; the obstacle information includes, for each obstacle, its current position, speed, and acceleration. The lane sensing device senses the ego lane-line spacing and the road curvature.
In other examples of the foregoing embodiment, the arithmetic processing unit is configured to perform a data processing step that includes a clipping step. The clipping step clips the data corresponding to the ego vehicle's current position, current heading angle, current speed, current acceleration, the obstacle information, the drivable-space coordinate points, the ego lane-line spacing, and the road curvature according to a preset time interval and a preset yaw-rate change, thereby producing clipped data. Moreover, a collision time interval exists between the ego vehicle and the obstacle, and the ego vehicle has a yaw rate. The clipping step starts when the collision time interval is less than or equal to the preset time interval, and stops when the change in yaw rate is less than or equal to the preset yaw-rate change.
In other examples of the foregoing embodiment, the data processing step may further include a grouping step, which groups the clipped data into a plurality of groups according to a plurality of preset acceleration ranges and a plurality of items of oncoming-obstacle information. The preset acceleration ranges include a conservative preset acceleration range and a normal preset acceleration range; the oncoming-obstacle information includes oncoming-obstacle-present information and oncoming-obstacle-absent information; and the groups include a conservative group and a normal group. The conservative preset acceleration range and the oncoming-obstacle-absent information correspond to the conservative group, and the normal preset acceleration range and the oncoming-obstacle-present information correspond to the normal group.
In other examples of the foregoing embodiment, the data processing step may further include a mirroring step, which mirrors the ego trajectory function along the direction of vehicle travel for each scene category to produce a mirrored ego trajectory function. The parameter set to be learned includes the mirrored ego trajectory function.
In other examples of the foregoing embodiment, the learning-based parameter optimization step may include a learning-based driving behavior generation step and a key parameter generation step. The learning-based driving behavior generation step trains on the parameter set to be learned with the learning-based model to produce a learned behavior parameter set; the parameter set to be learned includes a driving route parameter set and a driving acceleration/deceleration behavior parameter set. The key parameter generation step computes the system actuation time point from the system actuation parameter set of the learned behavior parameter set, and combines the system actuation time point, the target-point longitudinal distance, the target-point lateral distance, the target-point curvature, the ego speed, and the target speed into the key parameter set.
In other examples of the foregoing embodiment, the learned behavior parameter set may include a system actuation parameter set, the target-point longitudinal distance, the target-point lateral distance, the target-point curvature, the ego speed, and the target speed. The system actuation parameter set includes the ego speed, the ego acceleration, the steering-wheel angle, the yaw rate, the relative distance, and the obstacle lateral distance.
In other examples of the foregoing embodiment, the optimal trajectory function may include a plane-coordinate curve equation, a tangential speed, and a tangential acceleration. The plane-coordinate curve equation represents the optimal trajectory of the ego vehicle in plane coordinates; the tangential speed represents the speed of the ego vehicle at the tangent point of the curve; and the tangential acceleration represents the acceleration of the ego vehicle at the tangent point. The optimal trajectory function is updated at each sampling time of the arithmetic processing unit.
In other examples of the foregoing embodiment, the scene categories may include an obstacle-occupied scene, an intersection scene, and a station entry/exit scene. The obstacle-occupied scene carries an obstacle occupancy percentage: the scene represents a surrounding scene containing an obstacle and a road, and the occupancy percentage represents the proportion of the road occupied by the obstacle. The intersection scene represents a surrounding scene containing an intersection, and the station entry/exit scene represents a surrounding scene containing a station entrance or exit.
Several embodiments of the present invention will be described below with reference to the drawings. For clarity, many practical details are explained together in the following description. It should be understood, however, that these practical details are not intended to limit the present invention; that is, in some embodiments of the present invention they are unnecessary. In addition, to simplify the drawings, some well-known structures and elements are shown schematically, and repeated elements may be denoted by the same reference numerals.
In addition, when an element (or unit, module, etc.) is described herein as being "connected" to another element, it may be directly connected to the other element or indirectly connected to it, that is, with other elements interposed between the two. Only when an element is expressly described as "directly connected" to another element is no other element interposed between them. The terms first, second, third, and so on are used only to distinguish different elements and impose no limitation on the elements themselves; thus a first element may also be renamed a second element. Moreover, the combinations of elements/units/circuits herein are not combinations that are generally known, conventional, or customary in the field, and whether an element/unit/circuit is itself known cannot determine whether its combination would be easily accomplished by a person of ordinary skill in the art.
Please refer to Fig. 1, which is a flowchart of a hybrid decision-making method 100 for an autonomous vehicle according to a first embodiment of the present invention. The hybrid decision-making method 100 is used to decide an optimal trajectory function 108 of the ego vehicle and includes a parameter obtaining step S02, a learning-based (AI-based) scene decision step S04, a learning-based parameter optimization step S06, and a rule-based path planning step S08.
The parameter obtaining step S02 drives the sensing unit to sense the surrounding scene of the ego vehicle to obtain a parameter set 102 to be learned, and stores the parameter set 102 in the memory. The learning-based scene decision step S04 drives the arithmetic processing unit to receive the parameter set 102 from the memory and, according to the parameter set 102 and a learning-based model (AI-based model), to identify from a plurality of scene categories 104 the scene category 104 that matches the surrounding scene of the ego vehicle. The learning-based parameter optimization step S06 drives the arithmetic processing unit to process the parameter set 102 with the learning-based model to generate a key parameter set 106. The rule-based path planning step S08 drives the arithmetic processing unit to apply a rule-based model to the identified scene category 104 and the key parameter set 106 to plan the optimal trajectory function 108. Thereby, the hybrid decision-making method 100 of the present invention learns obstacle-avoidance driving behavior with a learning-based model and then fuses it with rule-based path planning to construct a hybrid decision, which not only handles multi-dimensional variables simultaneously but also gives the system learning ability while conforming to the continuity of trajectory planning and to vehicle dynamic constraints. The details of the above steps are described below through more detailed embodiments.
Please refer to Figs. 2 to 9 together. Fig. 2 is a flowchart of a hybrid decision-making method 100a for an autonomous vehicle according to a second embodiment of the present invention; Fig. 3 is a schematic diagram of the information sensing step S122 of the method 100a; Fig. 4 is a schematic diagram of the input data and the output data 101 of the information sensing step S122; Fig. 5 is a schematic diagram of the data processing step S124; Fig. 6 is a schematic diagram of the method 100a applied to same-lane obstacle avoidance; Fig. 7 is a schematic diagram of the method 100a applied to an obstacle-occupied scene; Fig. 8 is a schematic diagram of the method 100a applied to a lane change; and Fig. 9 is a schematic diagram of the rule-based path planning step S18 of the method 100a. As shown in the figures, the hybrid decision-making method 100a is used to decide an optimal trajectory function 108 of the ego vehicle HV and includes a parameter obtaining step S12, a learning-based scene decision step S14, a learning-based parameter optimization step S16, a rule-based path planning step S18, a diagnosis step S20, and a control step S22.
The parameter obtaining step S12 drives the sensing unit to sense the surrounding scene of the ego vehicle HV to obtain a parameter set 102 to be learned, and stores the parameter set 102 in the memory. In detail, the parameter set 102 includes an ego road width LD, a relative distance RD, an obstacle length L_obj, and an obstacle lateral distance D_obj. The ego road width LD represents the width of the road on which the ego vehicle HV is located; the relative distance RD represents the distance between the ego vehicle HV and the obstacle Obj; the obstacle length L_obj represents the length of the obstacle Obj; and the obstacle lateral distance D_obj represents the distance of the obstacle Obj from the lane centerline. Furthermore, the parameter obtaining step S12 includes an information sensing step S122 and a data processing step S124.
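The four quantities of the parameter set 102 can be collected in a small record. The field names and units (meters) are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class LearnedParamSet:
    """Parameter set 102 to be learned (units assumed to be meters)."""
    road_width: float         # LD: width of the road the ego vehicle HV is on
    relative_distance: float  # RD: distance between ego vehicle HV and obstacle Obj
    obstacle_length: float    # L_obj: length of the obstacle Obj
    obstacle_lateral: float   # D_obj: obstacle offset from the lane centerline
```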
The information sensing step S122 includes a vehicle dynamics sensing step S1222, an obstacle sensing step S1224, and a lane sensing step S1226. The vehicle dynamics sensing step S1222 drives the vehicle dynamics sensing device to locate the current position of the ego vehicle HV and the intersection stop line according to the map information, and to sense the current heading angle, current speed, and current acceleration of the ego vehicle HV. The obstacle sensing step S1224 drives the obstacle sensing device to sense obstacles Obj within a predetermined distance of the ego vehicle HV, so as to generate obstacle information corresponding to the obstacle Obj and a plurality of drivable-space coordinate points corresponding to the ego vehicle HV; the obstacle information includes the current position, speed v_obj, and acceleration of the obstacle Obj. The lane sensing step S1226 drives the lane sensing device to sense the ego lane-line spacing and the road curvature. In addition, as shown in Fig. 4, the input data of the information sensing step S122 include map information, Global Positioning System (GPS) data, image data, lidar data, radar data, and inertial measurement unit (IMU) data. The output data 101 include the current position, the current heading angle, the intersection stop line, the obstacle current position, the obstacle speed v_obj, the obstacle acceleration, the drivable-space coordinate points, the ego lane-line spacing, and the road curvature.
The data processing step S124 is implemented by the arithmetic processing unit and includes a cropping step S1242, a grouping step S1244 and a mirroring step S1246. The cropping step S1242 crops the current position, current heading angle, current speed and current acceleration of the vehicle HV, the obstacle information, the drivable space coordinate points, the lane line spacing and the road curvature according to a preset time interval and a preset yaw change rate, thereby generating cropped data. A collision time interval exists between the vehicle HV and the obstacle Obj, and the vehicle HV has a yaw rate. When the collision time interval is less than or equal to the preset time interval, the cropping step S1242 is started; when the change of the yaw rate is less than or equal to the preset yaw change rate, the cropping step S1242 is stopped. The preset time interval may be 3 seconds and the preset yaw change rate may be 0.5, and the change of the yaw rate may be judged comprehensively over several consecutive samples (for example, the yaw-rate change of 5 consecutive samples is less than or equal to 0.5), but the present invention is not limited thereto. In addition, the grouping step S1244 groups the cropped data into a plurality of groups according to a plurality of preset acceleration ranges and a plurality of oncoming-obstacle information items. The preset acceleration ranges include a conservative preset acceleration range and a normal preset acceleration range; the oncoming-obstacle information includes oncoming-obstacle-present information and oncoming-obstacle-absent information; and the groups include a conservative group and a normal group. The conservative preset acceleration range and the oncoming-obstacle-absent information correspond to the conservative group, while the normal preset acceleration range and the oncoming-obstacle-present information correspond to the normal group. The conservative preset acceleration range may be -0.1g to 0.1g, and the normal preset acceleration range may be -0.3g to -0.2g and 0.2g to 0.3g, where g represents the gravitational acceleration, but the present invention is not limited thereto. The purpose of the grouping step S1244 is to distinguish differences in driving behavior (conservative or normal), which improves the subsequent training of the learning-based model; it also facilitates switching models or parameters, and allows the system to switch the acceleration level within an executable range or to avoid the obstacle Obj. In addition, the mirroring step S1246 mirrors a vehicle trajectory function along a vehicle traveling direction (for example, the Y axis) for each scene category 104 to generate a mirrored vehicle trajectory function. The to-be-learned parameter set 102 includes the mirrored vehicle trajectory function. The vehicle trajectory function is the trajectory traveled by the vehicle HV and represents driving behavior data. Both the vehicle trajectory function and the mirrored vehicle trajectory function are used in subsequent learning-based training to increase the diversity of the collected data, thereby preventing the learning-based model from failing to distinguish the scene categories 104 due to insufficient data diversity.
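The cropping trigger and the acceleration-based grouping described above can be sketched as simple predicates. The thresholds (3 s, 0.5, ±0.1g, ±0.2g to ±0.3g) come from the text, while the function names and the 5-sample window are illustrative assumptions:

```python
def should_start_cropping(time_to_collision, preset_interval=3.0):
    # Cropping step S1242 starts when the collision time interval between
    # the vehicle HV and the obstacle drops to the preset interval or below.
    return time_to_collision <= preset_interval

def should_stop_cropping(yaw_rate_changes, preset_rate=0.5, window=5):
    # Cropping stops when the yaw-rate change stays at or below the preset
    # rate; the text suggests judging several consecutive samples (e.g. 5).
    recent = yaw_rate_changes[-window:]
    return len(recent) == window and all(abs(c) <= preset_rate for c in recent)

def classify_group(longitudinal_accel_g, oncoming_obstacle):
    # Grouping step S1244: the conservative range is -0.1g..0.1g with no
    # oncoming obstacle; the normal range is +-0.2g..0.3g with one present.
    if abs(longitudinal_accel_g) <= 0.1 and not oncoming_obstacle:
        return "conservative"
    if 0.2 <= abs(longitudinal_accel_g) <= 0.3 and oncoming_obstacle:
        return "normal"
    return "unclassified"
```

The "unclassified" fallback is an assumption; the patent only names the conservative and normal groups.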
The learning-based scene decision step S14 drives the arithmetic processing unit to receive the to-be-learned parameter set 102 from the memory and, according to the to-be-learned parameter set 102 and the learning-based model, to identify which one of a plurality of scene categories 104 matches the surrounding scene of the vehicle HV. In detail, the learning-based model is based on probabilistic-statistical methods and is trained by collecting human driving behavior data; it may include end-to-end learning or sampling-based planning. The scene categories 104 may include an obstacle-occupancy scene, an intersection scene and a station entry/exit scene. The obstacle-occupancy scene includes an obstacle occupancy percentage; it represents a surrounding scene containing the obstacle Obj and a road, and the obstacle occupancy percentage represents the proportion of the road occupied by the obstacle Obj. Taking FIG. 7 as an example, this scene category 104 is the obstacle-occupancy scene, which may include a first scene 1041, a second scene 1042, a third scene 1043, a fourth scene 1044 and a fifth scene 1045. The first scene 1041 represents the obstacle Obj not occupying the lane (obstacle occupancy percentage = 0%); the second scene 1042 represents one third of the obstacle body occupying the lane (obstacle occupancy percentage = 33.3%, one third of the body being 0.7 m); the third scene 1043 represents one half of the obstacle body occupying the lane (obstacle occupancy percentage = 50%, one half of the body being 1.05 m); the fourth scene 1044 represents two thirds of the obstacle body occupying the lane (obstacle occupancy percentage = 66.7%, two thirds of the body being 1.4 m); and the fifth scene 1045 represents the full obstacle body occupying the lane (obstacle occupancy percentage = 100%, the full body being 2.1 m). In addition, the intersection scene represents a surrounding scene containing an intersection; when one of the scene categories 104 is an intersection, the vehicle dynamic sensing device obtains an intersection stop line from the map information. The station entry/exit scene represents a surrounding scene containing a station entry/exit. Thereby, the learning-based scene decision step S14 obtains the scene category 104 matching the surrounding scene for use in the subsequent rule-based path planning step S18.
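The obstacle occupancy percentage maps directly onto the 2.1 m full body width given for the FIG. 7 example. A minimal sketch follows; the function names are assumptions, and the nearest-scene lookup is only one plausible way to discretize the percentage:

```python
def occupancy_percentage(lateral_overlap_m, vehicle_width_m=2.1):
    # Fraction of the obstacle body intruding into the lane, where 100%
    # means the full 2.1 m body occupies the lane (fifth scene 1045).
    ratio = max(0.0, min(lateral_overlap_m, vehicle_width_m)) / vehicle_width_m
    return round(100.0 * ratio, 1)

def occupancy_scene(pct):
    # Map the percentage onto the five example scenes of FIG. 7 by picking
    # the nearest reference percentage (an illustrative discretization).
    references = [(0.0, "scene 1041"), (33.3, "scene 1042"),
                  (50.0, "scene 1043"), (66.7, "scene 1044"),
                  (100.0, "scene 1045")]
    return min(references, key=lambda r: abs(r[0] - pct))[1]
```

For example, a 0.7 m overlap yields 33.3%, matching the second scene 1042.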
The learning-based parameter optimization step S16 drives the arithmetic processing unit to apply the learning-based model to the to-be-learned parameter set 102 to generate the key parameter set 106. In detail, the learning-based parameter optimization step S16 includes a learning-based driving behavior generation step S162 and a key parameter generation step S164. The learning-based driving behavior generation step S162 learns the to-be-learned parameter set 102 according to the learning-based model to generate a learned behavior parameter set 103, which includes a system actuation parameter set, a target point longitudinal distance, a target point lateral distance, a target point curvature, the vehicle speed v_h and a target speed. A driving route parameter set (x_i, y_j) and a driving acceleration/deceleration behavior parameter set can be obtained from the information sensing step S122; in other words, the to-be-learned parameter set 102 includes the driving route parameter set (x_i, y_j) and the driving acceleration/deceleration behavior parameter set. Furthermore, the key parameter generation step S164 computes a system actuation time point from the system actuation parameter set of the learned behavior parameter set 103, and combines the system actuation time point, the target point longitudinal distance, the target point lateral distance, the target point curvature, the vehicle speed v_h and the target speed into the key parameter set 106. The system actuation parameter set includes the vehicle speed v_h, the vehicle acceleration, the steering wheel angle, the yaw rate, the relative distance RD and the obstacle lateral distance D_obj.
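A sketch of assembling the key parameter set 106 from a learned behavior parameter set follows. The field names follow the text, but the actuation-time formula below is a placeholder assumption, not the patent's computation:

```python
from dataclasses import dataclass

@dataclass
class KeyParameters:
    # Key parameter set 106 as assembled in step S164.
    actuation_time: float            # system actuation time point [s]
    target_longitudinal_dist: float  # target point longitudinal distance [m]
    target_lateral_dist: float       # target point lateral distance [m]
    target_curvature: float          # target point curvature [1/m]
    ego_speed: float                 # v_h [m/s]
    target_speed: float              # [m/s]

def build_key_parameters(learned, margin_s=1.0):
    # 'learned' mimics the learned behavior parameter set 103.  Assumed
    # placeholder rule: actuate one margin second before the relative
    # distance RD would be closed at the current ego speed v_h.
    actuation_time = max(0.0, learned["RD"] / max(learned["v_h"], 1e-6) - margin_s)
    return KeyParameters(actuation_time,
                         learned["target_longitudinal_dist"],
                         learned["target_lateral_dist"],
                         learned["target_curvature"],
                         learned["v_h"],
                         learned["target_speed"])
```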
The rule-based path planning step S18 drives the arithmetic processing unit to execute the rule-based model on the matched scene category 104 and the key parameter set 106 to plan the optimal trajectory function 108. In detail, this scene category 104 is the one matching the current surrounding scene of the vehicle HV. The rule-based model formulates rules based on deterministic behavior, and its decision result depends on the sensor information; it includes polynomials or interpolation curves. The rule-based path planning step S18 includes a target point generation step S182, a coordinate transformation step S184 and a trajectory generation step S186. The target point generation step S182 drives the arithmetic processing unit to generate a plurality of target points TP according to the scene category 104 and the key parameter set 106. The coordinate transformation step S184 drives the arithmetic processing unit to transform the target points TP into a plurality of target two-dimensional coordinates according to the drivable space coordinate points. The trajectory generation step S186 drives the arithmetic processing unit to connect the target two-dimensional coordinates to generate the optimal trajectory function 108. Taking FIG. 9 as an example, the target point generation step S182 generates three target points TP, the coordinate transformation step S184 then produces three target two-dimensional coordinates corresponding to the three target points TP, and finally the trajectory generation step S186 generates the optimal trajectory function 108 from the target two-dimensional coordinates. The optimal trajectory function 108 includes a plane coordinate curve equation BTF, a tangential velocity and a tangential acceleration, where the plane coordinate curve equation BTF represents the optimal trajectory of the vehicle HV in plane coordinates, that is, the coordinate equation of the optimal trajectory function 108; the tangential velocity represents the speed of the vehicle HV at a tangent point of the plane coordinate curve equation BTF; and the tangential acceleration represents the acceleration of the vehicle HV at the tangent point. It is also worth mentioning that the to-be-learned parameter set 102 can be updated according to the sampling time of the arithmetic processing unit, thereby updating the optimal trajectory function 108; in other words, the optimal trajectory function 108 can be updated according to the sampling time of the arithmetic processing unit.
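One concrete instance of the "polynomial or interpolation curve" idea is fitting a quadratic exactly through the three target two-dimensional coordinates of the FIG. 9 example. This sketch is an illustrative assumption, not the patent's actual curve model:

```python
def quadratic_trajectory(points):
    # Fit y = a*x^2 + b*x + c exactly through three target points TP with
    # distinct x coordinates, solved via Cramer's rule (dependency-free).
    (x0, y0), (x1, y1), (x2, y2) = points
    det = x0*x0*(x1 - x2) - x1*x1*(x0 - x2) + x2*x2*(x0 - x1)
    a = (y0*(x1 - x2) - y1*(x0 - x2) + y2*(x0 - x1)) / det
    b = (x0*x0*(y1 - y2) - x1*x1*(y0 - y2) + x2*x2*(y0 - y1)) / det
    c = (x0*x0*(x1*y2 - x2*y1) - x1*x1*(x0*y2 - x2*y0)
         + x2*x2*(x0*y1 - x1*y0)) / det
    return lambda x: a*x*x + b*x + c
```

A higher-order polynomial or spline would be used in practice to also match heading and curvature at the target points; the quadratic only shows the interpolation mechanism.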
The diagnosis step S20 diagnoses whether the future driving trajectory of the vehicle HV and the current surrounding scene (for example, the current road curvature, the lane line spacing of the vehicle or the relative distance RD) all remain within a safe tolerance, and generates a diagnosis result, thereby judging whether the automated driving route is safe. At the same time, the judgment equation can directly identify the parameters of the future driving trajectory that require correction and correct them, improving the safety of automated driving.
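A minimal sketch of such a tolerance check, assuming the monitored quantities and tolerance values are supplied as dictionaries (the keys and numbers used below are illustrative, not from the patent):

```python
def diagnose(trajectory_vals, scene_vals, tolerances):
    # Diagnosis step S20 sketch: compare each monitored quantity of the
    # planned trajectory against the current scene value and flag any that
    # leaves its safety tolerance, so it can be corrected.
    out_of_tolerance = {}
    for key, tol in tolerances.items():
        err = abs(trajectory_vals[key] - scene_vals[key])
        if err > tol:
            out_of_tolerance[key] = err
    return {"safe": not out_of_tolerance, "correct": out_of_tolerance}
```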
The control step S22 controls the automated driving parameters of the vehicle HV according to the diagnosis result; its details are known in the art and are not repeated here.
Thereby, the hybrid planning method 100a for an autonomous vehicle of the present invention learns driving obstacle-avoidance behavior through the learning-based model and then fuses it with rule-based path planning to construct a hybrid decision, which can process multi-dimensional variables simultaneously, gives the system learning capability, and conforms to the continuity of trajectory planning and the dynamic constraints of the vehicle.
Please refer to FIG. 2 to FIG. 10 together, in which FIG. 10 is a block diagram illustrating a hybrid planning system 200 for an autonomous vehicle according to a third embodiment of the present invention. The hybrid planning system 200 is used to decide the optimal trajectory function 108 of the vehicle HV and includes a sensing unit 300, a memory 400 and an arithmetic processing unit 500.
The sensing unit 300 senses the surrounding scene of the vehicle HV to obtain the to-be-learned parameter set 102. In detail, the sensing unit 300 includes a vehicle dynamic sensing device 310, an obstacle sensing device 320 and a lane sensing device 330, all of which are disposed on the vehicle HV. The vehicle dynamic sensing device 310 locates the current position of the vehicle HV according to the map information and senses the current heading angle, current speed and current acceleration of the vehicle HV; it includes a GPS, a gyroscope, an odometer, a speedometer and an inertial measurement unit (IMU). The obstacle sensing device 320 senses the obstacle Obj within a predetermined distance from the vehicle HV to generate the obstacle information corresponding to the obstacle Obj and the drivable space coordinate points corresponding to the vehicle HV; the obstacle information includes the current position, speed and acceleration of the obstacle Obj. The lane sensing device 330 senses the lane line spacing and the road curvature of the vehicle. The obstacle sensing device 320 and the lane sensing device 330 include Lidar, Radar and cameras; their structural details are known in the art and are not repeated here.
The memory 400 stores the to-be-learned parameter set 102, the plurality of scene categories 104, the learning-based model and the rule-based model, and also stores the map information, which relates to the route traveled by the vehicle HV.
The arithmetic processing unit 500 is electrically connected to the memory 400 and the sensing unit 300 and is configured to implement the hybrid planning methods 100 and 100a; it may be a microprocessor, an electronic control unit (ECU), a computer, a mobile device or another arithmetic processor.
Thereby, the hybrid planning system 200 for an autonomous vehicle of the present invention learns driving obstacle-avoidance behavior through the learning-based model and then fuses it with rule-based path planning to construct a hybrid decision, which can process multi-dimensional variables simultaneously, provides learning capability, and conforms to the dynamic constraints of the vehicle and the continuity of trajectory planning.
From the above embodiments, the present invention has the following advantages. First, by learning driving obstacle-avoidance behavior through the learning-based model and then fusing it with rule-based path planning to construct a hybrid decision, it can not only process multi-dimensional variables simultaneously, but also has learning capability and conforms to the continuity of trajectory planning and the dynamic constraints of the vehicle. Second, the trajectory planned by the rule-based model from a specific scene category and specific key parameters is already the optimal trajectory, which avoids the additional screening action of generating multiple trajectories and selecting one, as required in the prior art. Third, the to-be-learned parameter set can be updated at any time according to the sampling time of the arithmetic processing unit, and the optimal trajectory function can thus be updated at any time, greatly improving the safety and practicality of automated driving.
Although the present invention has been disclosed in the above embodiments, they are not intended to limit the present invention. Anyone skilled in the art can make various changes and modifications without departing from the spirit and scope of the present invention; therefore, the protection scope of the present invention shall be determined by the scope of the appended claims.
100, 100a: hybrid planning method for an autonomous vehicle; 101: output data; 102: to-be-learned parameter set; 103: learned behavior parameter set; 104: scene category; 1041: first scene; 1042: second scene; 1043: third scene; 1044: fourth scene; 1045: fifth scene; 106: key parameter set; 108: optimal trajectory function; 200: hybrid planning system for an autonomous vehicle; 300: sensing unit; 310: vehicle dynamic sensing device; 320: obstacle sensing device; 330: lane sensing device; 400: memory; 500: arithmetic processing unit; S02, S12: parameter obtaining step; S04, S14: learning-based scene decision step; S06, S16: learning-based parameter optimization step; S08, S18: rule-based path planning step; S122: information sensing step; S1222: vehicle dynamic sensing step; S1224: obstacle sensing step; S1226: lane sensing step; S124: data processing step; S1242: cropping step; S1244: grouping step; S1246: mirroring step; S162: learning-based driving behavior generation step; S164: key parameter generation step; S182: target point generation step; S184: coordinate transformation step; S186: trajectory generation step; S20: diagnosis step; S22: control step; BTF: plane coordinate curve equation; D_obj: obstacle lateral distance; HV: own vehicle; LD: road width of the own lane; L_obj: obstacle length; Obj: obstacle; RD: relative distance; TP: target point; v_h: own vehicle speed; v_obj: obstacle speed; (x_i, y_j): driving route parameter set
FIG. 1 is a flow chart of the hybrid planning method for an autonomous vehicle according to the first embodiment of the present invention; FIG. 2 is a flow chart of the hybrid planning method for an autonomous vehicle according to the second embodiment of the present invention; FIG. 3 is a schematic diagram of the information sensing step of the hybrid planning method of FIG. 2; FIG. 4 is a schematic diagram of the input data and output data of the information sensing step of the hybrid planning method of FIG. 2; FIG. 5 is a schematic diagram of the data processing step of the hybrid planning method of FIG. 2; FIG. 6 is a schematic diagram of the hybrid planning method of FIG. 2 applied to same-lane obstacle avoidance; FIG. 7 is a schematic diagram of the hybrid planning method of FIG. 2 applied to an obstacle-occupancy scene; FIG. 8 is a schematic diagram of the hybrid planning method of FIG. 2 applied to lane changing; FIG. 9 is a schematic diagram of the rule-based path planning step of the hybrid planning method of FIG. 2; and FIG. 10 is a block diagram of the hybrid planning system for an autonomous vehicle according to the third embodiment of the present invention.
100: hybrid planning method for an autonomous vehicle
102: to-be-learned parameter set
104: scene category
106: key parameter set
108: optimal trajectory function
S02: parameter obtaining step
S04: learning-based scene decision step
S06: learning-based parameter optimization step
S08: rule-based path planning step
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW109126729A TWI750762B (en) | 2020-08-06 | 2020-08-06 | Hybrid planniing method in autonomous vehicles and system thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
TWI750762B true TWI750762B (en) | 2021-12-21 |
TW202206956A TW202206956A (en) | 2022-02-16 |
Family
ID=80681482
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW109126729A TWI750762B (en) | 2020-08-06 | 2020-08-06 | Hybrid planniing method in autonomous vehicles and system thereof |
Country Status (1)
Country | Link |
---|---|
TW (1) | TWI750762B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103646298A (en) * | 2013-12-13 | 2014-03-19 | 中国科学院深圳先进技术研究院 | Automatic driving method and automatic driving system |
CN107797555A (en) * | 2017-10-30 | 2018-03-13 | 奇瑞汽车股份有限公司 | A kind of tourist coach automatic Pilot control method and device |
CN108973990A (en) * | 2017-05-31 | 2018-12-11 | 百度(美国)有限责任公司 | Method, medium and system for automatic Pilot control |
US20180362032A1 (en) * | 2016-02-29 | 2018-12-20 | Huawei Technologies Co., Ltd. | Self-driving method, and apparatus |
CN109598934A (en) * | 2018-12-13 | 2019-04-09 | 清华大学 | A kind of rule-based method for sailing out of high speed with learning model pilotless automobile |
CN110406530A (en) * | 2019-07-02 | 2019-11-05 | 宁波吉利汽车研究开发有限公司 | A kind of automatic Pilot method, apparatus, equipment and vehicle |
2020-08-06: TW application TW109126729A, patent TWI750762B (active)
Also Published As
Publication number | Publication date |
---|---|
TW202206956A (en) | 2022-02-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3699048B1 (en) | Travelling track prediction method and device for vehicle | |
Van Brummelen et al. | Autonomous vehicle perception: The technology of today and tomorrow | |
CN113165652B (en) | Verifying predicted trajectories using a mesh-based approach | |
EP3814909B1 (en) | Using divergence to conduct log-based simulations | |
JP7466396B2 (en) | Vehicle control device | |
JP7043295B2 (en) | Vehicle control devices, vehicle control methods, and programs | |
CN114074681B (en) | Probability-based lane change decision and motion planning system and method thereof | |
US20220121213A1 (en) | Hybrid planning method in autonomous vehicle and system thereof | |
JP2019131077A (en) | Vehicle control device, vehicle control method, and program | |
US11529951B2 (en) | Safety system, automated driving system, and methods thereof | |
CN113460080B (en) | Vehicle control device, vehicle control method, and storage medium | |
CN112590778B (en) | Vehicle control method and device, controller and intelligent automobile | |
WO2023097874A1 (en) | Method and device for planning driving track | |
CN117885764B (en) | Vehicle track planning method and device, vehicle and storage medium | |
CN114834443A (en) | Vehicle control method and device, controller and intelligent automobile | |
CN118235180A (en) | Method and device for predicting drivable lane | |
US11429843B2 (en) | Vehicle operation labeling | |
CN113899378A (en) | Lane changing processing method and device, storage medium and electronic equipment | |
KR20220095365A (en) | Vehicle and method of controlling cut-in response | |
CN117022262A (en) | Unmanned vehicle speed planning control method and device, electronic equipment and storage medium | |
JP2021160531A (en) | Vehicle control device, vehicle control method, and program | |
TWI750762B (en) | Hybrid planniing method in autonomous vehicles and system thereof | |
CN114217601B (en) | Hybrid decision method and system for self-driving | |
Goswami | Trajectory generation for lane-change maneuver of autonomous vehicles | |
RU2790105C2 (en) | Method and electronic device for control of self-driving car |