TWI453697B - The detection system of the driving space and its detection method - Google Patents
- Publication number: TWI453697B
- Application number: TW100149546A
- Authority: TW (Taiwan)
- Prior art keywords: image, processing unit, obstacle, detection, value
- Prior art date: 2011-12-29
Landscapes
- Traffic Control Systems (AREA)
- Image Processing (AREA)
Description
The present invention relates to a detection system for drivable space and a detection method thereof, and more particularly to a detection system and a detection method that use stereo vision to determine the drivable space of a road.

Transportation vehicles are closely tied to people's lives, and the automobile is the primary personal vehicle. Because drivers differ in skill, experience, and attentiveness, traffic accidents can occur; modern cars are therefore equipped with many electronic aids that assist and alert the driver in order to avoid such accidents.

Obstacle detection systems are one of the areas the automotive industry is actively developing. Common obstacle detection systems, such as parking sensors, have become standard equipment on many cars and effectively help drivers evaluate a parking space and complete the parking maneuver. More recently, attention has focused on detecting obstacles ahead of a stopped or moving car; such detection can prevent collisions caused by a driver overlooking an obstacle ahead and can even, in certain situations, decelerate or brake the car automatically, greatly improving driving safety. However, current techniques for detecting obstacles effectively still have many shortcomings to be improved.
Taking US Patent No. 5,937,079, "Method for Stereo Image Object Detection," as an example, that invention mainly uses horizontal edge features to determine obstacle templates in an input reference image, and then obtains the three-dimensional positions of obstacles through histogram statistics. However, because it uses horizontal features of a two-dimensional image as templates for obstacle matching, it is easily affected by the design and the number of obstacle templates.

In US Patent No. 6,801,244, "Obstacle Detection Apparatus and Method," the invention uses an offline lane transformation matrix between the left and right camera images and exploits the fact that objects with height exhibit disparity between the left and right images to distinguish the road surface from obstacles. Although this left/right viewpoint-transformation approach estimates the drivable space of the road, it is sensitive to variations in the road environment; for example, when a vehicle transitions from a flat road to a sloped road, the slope is misjudged as an obstacle.

Furthermore, in US Patent Application Publication No. 2006/0095207, "Obstacle Detection Using Stereo Vision," the invention detects obstacles using the edge and color information of a two-dimensional image, then estimates the obstacles' three-dimensional spatial information through stereo vision, and finally estimates the drivable space and a safe driving path. However, because obstacles are detected from two-dimensional image features, the method is limited by the choice of those features; that is, if the two-dimensional features of an obstacle are not properly specified in advance, the obstacle cannot be detected.

From the prior art above it can be seen that previous algorithms for detecting drivable road space mostly detect obstacles first and then estimate the drivable space. Most obstacle detection algorithms rely on image texture information (color/edge/shadow), shape information (length/width/aspect ratio), or template matching; they are easily affected by the environment and have low applicability, and occlusion and the number of obstacles also degrade the result, which in turn introduces errors into the estimated drivable path.

In addition, while the vehicle is moving, the reaction time the system needs to recognize an obstacle must be nearly real-time. The system's algorithm must not only be immune to interference from other environmental factors but also provide a certain degree of reliability. Using two cameras to build three-dimensional stereo-vision information can overcome environmental influences, but the enormous computational load is the biggest bottleneck of such a system and the key factor in whether it is practical.
Accordingly, a primary object of the present invention is to provide a detection system that uses stereo vision to determine the drivable space of a road.

Thus, the drivable-space detection system of the present invention is mounted on a transportation vehicle and faces the vehicle's direction of travel. The detection system comprises two image capture units, a processing unit, and a memory unit. The image capture units are mounted on the vehicle at an interval from each other and face the direction in which the vehicle travels, to record a first image and a second image. The processing unit is electrically connected to the image capture units. The memory unit is electrically connected to the processing unit and stores the first image, the second image, and a detection program related to drivable-space detection that is executed by the processing unit; the image capture units, the processing unit, and the memory unit cooperate to perform drivable-space detection. The detection program causes the processing unit to: first, perform a stereo image reconstruction operation that converts the first and second images into a third image comprising a plurality of pixels, each pixel having a disparity value; next, convert the third image into a distance array comprising a plurality of cells according to a road function; then, execute a cost function that estimates a plurality of obstacle values corresponding to the cells using an obstacle term and a road-plane term, both of which are derived from the disparity values in each column of the distance array; then, execute an optimized boundary estimation function to compute a smoothness value; and finally, execute an optimization algorithm based on the smoothness value to compute a plurality of optimal drivable-space boundary values.

Another object of the present invention is to provide a detection method that uses stereo vision to determine the drivable space of a road.

Thus, the drivable-space detection method of the present invention operates on a detection system comprising two spaced-apart image capture units, a memory unit, and a processing unit, and comprises the following steps. First, the two image capture units record a first image and a second image in the memory unit. Next, the processing unit performs a stereo image reconstruction operation that converts the first and second images into a third image comprising a plurality of pixels, each pixel having a disparity value. Then, the processing unit converts the third image into a distance array comprising a plurality of cells according to a road function. The processing unit then executes a cost function that estimates a plurality of obstacle values corresponding to the cells using an obstacle term and a road-plane term, both of which are derived from the disparity values in each column of the distance array. Next, the processing unit executes an optimized boundary estimation function to compute a smoothness value. Finally, an optimization algorithm is executed according to the smoothness value to compute a plurality of optimal drivable-space boundary values.

The advantage of the present invention is that the obstacle term and the road-plane term of the cost function are derived from the disparity values of each column of the distance array, so the method applies to different road situations; whether the road is flat, uphill, or downhill, obstacle detection works well.

The above and other technical contents, features, and effects of the present invention will become clear in the following detailed description of a preferred embodiment with reference to the drawings.
Referring to Figs. 1 and 2, the drivable-space detection system of the present invention is installed on a transportation vehicle 11; in the preferred embodiment the vehicle 11 is an automobile, but it is not limited thereto. The detection system comprises two spaced-apart image capture units 21, a memory unit 22, a processing unit 23, a detection unit 24, and a playback unit 25. In the preferred embodiment the image capture units 21 are two cameras; each image capture unit 21 captures an image of the scene in front of the vehicle 11 at a set viewing angle, such as 30 degrees. The resolution of each image is set to 640 × 480 pixels, i.e., each image has 640 pixel columns and 480 pixel rows, but the resolution is not limited to this. The image capture units 21 are mounted at suitable locations on the vehicle 11, spaced apart like a pair of headlights, for example at the front bumper, on the roof, above the dashboard, or attached to the windshield by a suction mount, and their shooting direction faces the direction to be detected. In the preferred embodiment the shooting direction, indicated by arrow 12, is the direction in which the vehicle 11 moves forward, so that as the vehicle 11 moves forward the system detects whether any obstacle lies ahead and estimates the drivable space.

Referring to Figs. 1 and 3, the left image capture unit 21 captures a first image 211 and the right image capture unit 21 captures a second image 212. Because the two images are taken from slightly different angles, parallax arises between them: the closer an obstacle in the first image 211 and the second image 212 is to the image capture units 21, the more pronounced the parallax; conversely, the farther the obstacle is from the image capture units 21, the less pronounced the parallax. In the preferred embodiment, the obstacles ahead of the moving vehicle 11 are a high wall 31 on the left of the vehicle 11, a motorcycle 32 at the front left, and a bus 33 at the front right.

Referring to Figs. 1 and 2, the memory unit 22 stores the first image 211 and the second image 212 shown in Fig. 3 and a detection program related to drivable-space detection that is executed by the processing unit 23; it can also temporarily store the image files and computation data the detection program needs, for access by the program. In the preferred embodiment the memory unit 22 is a memory module.

The processing unit 23 is electrically connected to the image capture units 21 and the memory unit 22. In the preferred embodiment the processing unit 23 is a mainboard module that includes a central processing unit. The memory unit 22 and the processing unit 23 of the detection system are not limited to implementation as a car PC; they can also be built as a dedicated chip or a standalone board integrated into the vehicle's electronic control system.

The detection unit 24 is electrically connected to the processing unit 23. In the preferred embodiment the detection unit 24 is connected to a turn-signal module and a vehicle-speed module of the vehicle 11 (neither is shown). The turn-signal module generates turning information, including a corresponding left-turn signal or right-turn signal, according to whether the left or right turn indicator is on, and the speed module generates speed information, such as 60 km/h, according to the current vehicle speed. The turning information and the speed information are sent to the detection unit 24, which transmits a plurality of detection signals corresponding to that information to the processing unit 23. Note that the turning information including the left-turn and right-turn signals is not limited to being provided by the turn-signal module; it may also be provided by the steering wheel, with the corresponding turning information generated when the wheel is rotated counterclockwise or clockwise by a specific angle.

The playback unit 25 is electrically connected to the processing unit 23. In the preferred embodiment the playback unit 25 is mounted in the dashboard of the vehicle 11 and is a liquid-crystal display with a speaker, providing the driver with graphical information about the drive and audible warnings.

The image capture units 21, the memory unit 22, the processing unit 23, the detection unit 24, and the playback unit 25 described above cooperate to perform drivable-space detection; the steps the detection program causes the processing unit 23 to execute are described below.

Referring to Figs. 2 and 3, the drivable-space detection method of the present invention operates on the detection system described above, which comprises the image capture units 21, the memory unit 22 containing the detection program, the processing unit 23 that executes the detection program, the detection unit 24, and the playback unit 25. The detection method comprises the following steps. Referring to Figs. 2, 3, and 4, first, as shown in step 401, the image capture units 21 record the first image 211 and the second image 212 in the memory unit 22. Because of the parallax described above, the first image 211 and the second image 212 serve as the material for generating a stereo image and analyzing distances.
Next, as shown in step 402, the processing unit 23 performs a stereo image reconstruction operation, using the first image 211 and the second image 212 to produce a third image that represents the distances to obstacles in the scene. In the preferred embodiment, the stereo image reconstruction converts the first image 211 and the second image 212 into a third image comprising a plurality of pixels by means of a feature-point matching method, each pixel having a disparity value. Feature-point matching means finding a plurality of identical feature points (such as the motorcycle 32) in the first image 211 and the second image 212 and then determining the disparity value of each pixel of the third image from those feature points. The third image has the same resolution as the first image 211 and the second image 212, i.e., 640 columns by 480 rows, but it displays each pixel's disparity value as a 16-level grayscale: the darker a pixel in the third image, the smaller the disparity, indicating that the pixel is farther from the image capture units 21; conversely, the lighter a pixel, the larger the disparity, indicating that the pixel is closer to the image capture units 21. The regions formed by clusters of pixels with similar disparity values (gray levels) correspond, in the preferred embodiment, to the motorcycle 32, the bus 33, the sky 34, and so on. The third image is therefore essentially a three-dimensional coordinate image composed of three kinds of information: image column, image row, and disparity value. In the field of computer graphics, generating the third image is not limited to feature-point matching; other algorithms can also be used to obtain it.
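Below is a minimal sketch of step 402 in Python, assuming OpenCV's block-matching stereo matcher as a stand-in for the feature-point matching method; the patent does not prescribe any particular library, so the function name `build_disparity_image` and the quantization details are illustrative only.

```python
# Sketch of step 402 (assumption: OpenCV StereoBM in place of the
# unspecified feature-point matching method).
import cv2
import numpy as np

def build_disparity_image(first_path: str, second_path: str) -> np.ndarray:
    """Return a disparity map quantized to 16 gray levels (the third image)."""
    left = cv2.imread(first_path, cv2.IMREAD_GRAYSCALE)    # first image (211)
    right = cv2.imread(second_path, cv2.IMREAD_GRAYSCALE)  # second image (212)

    # 16 disparity levels, matching the embodiment's 16-level grayscale.
    matcher = cv2.StereoBM_create(numDisparities=16, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

    # Clamp to 0..15 so each pixel carries one of 16 disparity levels.
    return np.clip(np.round(disparity), 0, 15).astype(np.uint8)
```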
Referring to Figs. 2 and 4, as shown in step 403, the processing unit 23 computes a road function from all pixels in the same row of the third image and their disparity values. In the preferred embodiment the road function is computed as follows: the processing unit 23 sets up a two-dimensional coordinate system whose horizontal axis is the disparity value of the third image and whose vertical axis is the row of the third image; each pixel is plotted in these coordinates according to its row and its disparity value, yielding an irregular curve formed by the pixels; a curve-fitting method then computes the curve that best approximates this irregular curve, capturing the relationship between each pixel's row and disparity value. The formula expressing this relationship is the road function: row of the third image = (disparity value × road constant A) + road constant B. In the preferred embodiment, road constant A is 0.6173 and road constant B is 246.0254.
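A least-squares line fit is one straightforward way to realize the curve-fitting step; the sketch below, built around the hypothetical helper `fit_road_function`, recovers the road constants A and B from a disparity image under that assumption.

```python
# Sketch of step 403 (assumption: a simple least-squares line fit stands in
# for the patent's unspecified curve-fitting method).
import numpy as np

def fit_road_function(third_image: np.ndarray) -> tuple[float, float]:
    """Fit row = A * disparity + B over the pixels of the disparity image."""
    rows, cols = third_image.shape
    row_idx = np.repeat(np.arange(rows), cols).astype(np.float64)  # row of each pixel
    disparity = third_image.reshape(-1).astype(np.float64)         # disparity of each pixel

    # Ignore zero-disparity pixels (e.g., sky) so they do not bias the fit.
    mask = disparity > 0
    A, B = np.polyfit(disparity[mask], row_idx[mask], deg=1)
    return A, B  # e.g., roughly A = 0.6173 and B = 246.0254 in the embodiment
```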
Then, as shown in step 404, the processing unit 23 converts the third image according to the road function described above, in order to obtain the mapping between the rows of the third image and disparity values for subsequent computation. In the original third image the vertical axis is the image row and the horizontal axis is the image column; the processing unit 23 uses the road function to convert the rows of the third image into the corresponding disparity values, while the horizontal axis remains the column, and the pixels of the original third image are rearranged according to the new coordinate system to produce distance information.

Referring to Figs. 2, 4, and 5, as shown in step 405, the processing unit 23 performs an occupancy-grid conversion on the distance information to compute a distance array 5 comprising a plurality of cells, substantially reducing the amount of data in the distance information and increasing the computational efficiency of the processing unit 23 to achieve real-time processing. Occupancy-grid conversion means converting the original higher-precision two-dimensional data into a lower-precision two-dimensional grid array. In the preferred embodiment the distance information has 640 columns (corresponding to the columns of the original third image) and 16 disparity levels on the vertical axis, for a total of 10,240 column-disparity pairs. If each cell 51 of the distance array 5 is set to a width of 40 pixels and a height of one disparity level, the distance array 5 is 16 cells wide and 16 cells high, for a total of 256 column-disparity pairs; this greatly reduced amount of data helps lighten the computational load on the processing unit 23. Note that the purpose of the occupancy-grid conversion is to reduce the amount of data to be processed, and it is not limited to the method described above.
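The sketch below illustrates one possible occupancy-grid reduction, assuming each cell simply counts how many pixels of a 40-column band fall at each of the 16 disparity levels; the exact aggregation rule is not fixed by the text.

```python
# Sketch of step 405 (assumption: each cell is a per-band disparity histogram).
import numpy as np

def occupancy_grid(third_image: np.ndarray,
                   cell_width: int = 40, levels: int = 16) -> np.ndarray:
    """Reduce a 480x640 disparity image to a 16x16 occupancy grid.

    grid[d, u] counts the pixels in column band u whose disparity equals d.
    """
    rows, cols = third_image.shape
    bands = cols // cell_width                     # 640 / 40 = 16 column bands
    grid = np.zeros((levels, bands), dtype=np.int32)
    for u in range(bands):
        band = third_image[:, u * cell_width:(u + 1) * cell_width]
        # Histogram of disparity levels within this band.
        grid[:, u] = np.bincount(band.reshape(-1), minlength=levels)[:levels]
    return grid
```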
The distance array 5 can be understood as a top view of the space in front of the vehicle 11 shown in Fig. 1: the closer an obstacle ahead of the vehicle 11, the larger the disparity values of the pixels corresponding to that obstacle; conversely, the farther it is from the vehicle 11, the smaller the corresponding pixel values, approaching zero. A cell marked "×" in the distance array 5 indicates that, within a given column, most of the pixel disparity values cluster there; in other words, an obstacle exists in that column at a specific distance.

Referring to Figs. 2, 3, and 5, as shown in step 406, to further reduce the actual computational load of the processing unit, the processing unit 23 processes the data of the distance array 5 corresponding to the detection regions 61, 62, 63 shown in Fig. 6, according to the detection signals from the detection unit 24. In the preferred embodiment the detection signals change according to whether the aforementioned speed information exceeds a preset speed, such as 30 km/h, and according to the turning information. The detection signals are shown in Table 1:

Table 1

| Detection signal | Vehicle speed | Turning information | Detection regions processed |
|---|---|---|---|
| 1 | above preset speed | none | 62 |
| 2 | above preset speed | right-turn signal | 62, 63 |
| 3 | above preset speed | left-turn signal | 61, 62 |
| 4 | below preset speed | any | 61, 62, 63 |

In the case of detection signal 1, the vehicle 11 is driving straight ahead at a speed above the preset speed, so any obstacle affecting travel must lie directly ahead of the vehicle 11; therefore only the portion shown as detection region 62 needs to be examined. In the case of detection signal 2, the right turn indicator of the vehicle 11 is on and the speed is above the preset speed, indicating that the vehicle is about to change to the right lane, so the portions shown as detection regions 62 and 63 must be examined. In the case of detection signal 3, the left turn indicator of the vehicle 11 is on and the speed is above the preset speed, indicating that the vehicle 11 is about to change to the left lane, so the portions shown as detection regions 61 and 62 must be examined. In the case of detection signal 4, the vehicle is mainly driving in a congested area such as a city and the speed is below the preset speed, so the portions shown as detection regions 61, 62, and 63 must all be examined to guard comprehensively against possible hazards. In the preferred embodiment it is assumed that the vehicle 11 is in the situation of detection signal 4, so all data in the distance array 5 must be processed.

Note that in the preferred embodiment the turning information for detection signals 2 and 3 is determined from the left-turn and right-turn signals corresponding to the left and right turn indicators, but it is not limited to this; for example, the system may instead detect whether the steering wheel of the vehicle 11 has been rotated clockwise or counterclockwise beyond a specific angle and generate the corresponding left-turn or right-turn signal accordingly.
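The region selection of step 406 amounts to a small decision rule; the sketch below encodes Table 1, assuming region identifiers 61, 62, 63 and the 30 km/h preset speed of the embodiment.

```python
# Sketch of the region selection in step 406 (region ids and the preset
# speed follow the preferred embodiment; the interface is illustrative).
def select_regions(speed_kph: float, left_turn: bool, right_turn: bool,
                   preset_kph: float = 30.0) -> set[int]:
    """Return the detection regions the processing unit should examine."""
    if speed_kph < preset_kph:        # detection signal 4: congested traffic
        return {61, 62, 63}
    if right_turn:                    # detection signal 2: changing to the right lane
        return {62, 63}
    if left_turn:                     # detection signal 3: changing to the left lane
        return {61, 62}
    return {62}                       # detection signal 1: driving straight ahead
```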
Referring to Figs. 2, 4, and 5, as shown in step 407, the processing unit 23 executes a cost function to estimate an obstacle value for each disparity value of each column of the distance array 5; the higher the obstacle value, the more likely an obstacle is present there. The cost function is:

v(d, u) = ω₁ × Object(d, u) + ω₂ × Road(d, u)

where v(d, u) is the obstacle value at column u and disparity value d of the distance array 5, ω₁ is an obstacle-term weight constant, and ω₂ is a road-plane-term weight constant. In the preferred embodiment the two weights are set to 30 and 50, respectively, to obtain good detection results, but they are not limited to these values and may be adjusted flexibly according to actual test results.

Object(d, u) is the obstacle term, i.e., the variation of the disparity values from the image capture units 21 up to the obstacle within column u of the distance array 5. In its function, v = v_min to v(d) means that v_min is the starting position at which the cost function evaluates the obstacle term in column u of the distance array 5; in the distance array 5, v_min corresponds to the highest disparity value, i.e., the bottom row (row = 0) of the first image 211 or the second image 212 shown in Fig. 3. ω denotes a binary decision function with arg = d_{u,v} − d: if |arg| is less than a preset threshold, ω(arg) = 1; if |arg| is greater than or equal to the preset threshold, ω(arg) = 0. In the preferred embodiment the preset threshold is 20, but it is not limited to this value.

Road(d, u) is the road-plane term, i.e., the variation of the disparity values from the obstacle out to the farthest point within column u of the distance array 5. In its function, v = v(d) to v_max means that v_max is the end position at which the cost function evaluates the term in column u; in the distance array 5, v_max corresponds to the lowest disparity value, i.e., the top row (row = image height) of the first image 211 or the second image 212 shown in Fig. 3. ω is the same binary decision function as above and is evaluated against the same preset threshold.

It is particularly noted that in the present invention the road-plane term takes disparity values as its parameters, so even if the road ahead of the vehicle 11 in Fig. 1 is uphill or downhill, the cost function does not treat the slope as an obstacle and does not raise the obstacle value.
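The sketch below shows one plausible reading of the cost function. Because the Object and Road expressions appear only as figures in the original, the way the road-plane samples are compared against a fitted road disparity profile here is an assumption, as are the helper names.

```python
# Sketch of the cost function of step 407 (the exact Object/Road sums are
# assumed; only the weights, the threshold, and the binary function omega
# are taken from the text).
import numpy as np

W1, W2 = 30.0, 50.0        # obstacle-term and road-plane-term weights
THRESHOLD = 20.0           # preset threshold of the binary function omega

def omega(arg: float) -> int:
    """Binary decision function: 1 if |arg| is below the preset threshold."""
    return 1 if abs(arg) < THRESHOLD else 0

def cost(column_disp: np.ndarray, road_disp: np.ndarray,
         d: float, v_of_d: int) -> float:
    """Obstacle value v(d, u) for one column u of the distance array.

    column_disp[v] is the measured disparity d_{u,v}; road_disp[v] is the
    disparity the fitted road function predicts at height v (an assumed
    reading of the Road term); v_of_d separates the two spans.
    """
    obstacle = sum(omega(column_disp[v] - d) for v in range(0, v_of_d))
    road = sum(omega(column_disp[v] - road_disp[v])
               for v in range(v_of_d, len(column_disp)))
    return W1 * obstacle + W2 * road
```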
Then, as shown in step 408, the processing unit 23 uses the obstacle values to compute, according to an initial boundary estimation function, an initial drivable-space boundary value I(u) for each column of the distance array 5.

When the initial drivable-space boundary values of all columns u of the distance array 5 are concatenated, they form roughly a curve. Although these per-column initial boundary values are not precise enough, they narrow the search region for the subsequent drivable-space boundary estimation, which in turn reduces the computational burden on the processing unit 23 and increases the speed of obstacle recognition.
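A minimal sketch of step 408 follows, assuming the initial boundary I(u) of each column is simply the disparity with the largest obstacle value; the original expression for I(u) is given only as a figure, so this choice is illustrative.

```python
# Sketch of step 408 (assumption: I(u) = argmax over d of v(d, u)).
import numpy as np

def initial_boundaries(v_values: np.ndarray) -> np.ndarray:
    """v_values[d, u] holds the obstacle value v(d, u); return I(u) per column."""
    return np.argmax(v_values, axis=0)
```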
Next, as shown in step 409, the processing unit 23 uses the initial drivable-space boundary values to compute a smoothness value according to an optimized boundary estimation function. The purpose is to determine whether the initial drivable-space boundary values of the columns of the distance array 5 change insufficiently smoothly from one column to the next; if so, the initial boundary value of some column may have been affected by noise or similar factors, causing it to differ greatly from the initial boundary value of an adjacent column. The optimized boundary estimation function is:

C(u, d, k) = E₁(u, d) + E₂(u, d, k)

where the higher the smoothness value C(u, d, k), the more likely the problem of a large difference between adjacent columns exists. E₁(u, d) = v(d, u) is the energy (cost) of the initial drivable-space boundary at column u and disparity d, and E₂(u, d, k) = ω₃ × (d − k) is the difference between the energy of the initial drivable-space boundary at column u and disparity d and that of the initial boundary of the adjacent column u + 1 at disparity k; ω₃ is a constant, set to 0.5 in the preferred embodiment.

As shown in step 410, the processing unit 23 determines whether the smoothness value C(u, d, k) exceeds a preset value, in order to decide whether to execute an optimization algorithm. If it does, the curve formed by the initial drivable-space boundary values is not smooth; that is, the initial boundary value of at least one column of the distance array 5 may have been disturbed by noise or the like, producing a value that differs greatly from those of its neighbors. In that case, as shown in step 411, the processing unit 23 computes, according to the optimization algorithm, an optimal drivable-space boundary value for each initial drivable-space boundary value. In the preferred embodiment the optimization algorithm is dynamic programming, but it is not limited to this; other mathematical methods can be used. Then, as shown in step 412, each optimal drivable-space boundary value of the distance array 5 is stored in the memory unit 22. The optimal drivable-space boundary value of each column of the distance array 5 corresponds to an obstacle line segment 52 as shown in Fig. 7; these obstacle line segments 52 represent the distance between the obstacles and the vehicle 11 shown in Fig. 1.

Returning to step 410, if the processing unit 23 determines that the smoothness value C(u, d, k) does not exceed the preset value, the processing unit 23 does not need to execute the optimization algorithm described above, and, as shown in step 412, it stores each optimal drivable-space boundary value of the distance array 5 in the memory unit 22.
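The dynamic-programming recursion of step 411 is not spelled out in the text; the sketch below shows one standard column-by-column formulation over the energies E₁ and E₂ described above, with the transition penalty taken as an absolute disparity difference.

```python
# Sketch of step 411 (assumption: a standard DP over columns; the absolute
# value in the transition penalty is a choice, the text writes (d - k)).
import numpy as np

W3 = 0.5  # smoothness weight from the preferred embodiment

def optimal_boundaries(v_values: np.ndarray) -> np.ndarray:
    """v_values[d, u]: obstacle values; return the best disparity per column."""
    levels, cols = v_values.shape
    score = np.zeros((levels, cols))
    back = np.zeros((levels, cols), dtype=np.int32)

    score[:, 0] = v_values[:, 0]
    for u in range(1, cols):
        for d in range(levels):
            # Accumulated score of the previous column minus the jump penalty.
            trans = score[:, u - 1] - W3 * np.abs(d - np.arange(levels))
            back[d, u] = int(np.argmax(trans))
            score[d, u] = v_values[d, u] + trans[back[d, u]]

    # Trace back the path with the highest accumulated score.
    best = np.zeros(cols, dtype=np.int32)
    best[-1] = int(np.argmax(score[:, -1]))
    for u in range(cols - 1, 0, -1):
        best[u - 1] = back[best[u], u]
    return best
```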
By step 412, the information the processing unit 23 has collected about the distance array 5 is sufficient to determine the distance between the vehicle 11 shown in Fig. 1 and the obstacles; if an obstacle is too close, the speaker of the playback unit 25 can emit an audible warning to alert the driver. However, although the distance array 5 provides the distance between the vehicle 11 and the obstacles, information presented in this form cannot be understood directly by the driver. Therefore the coordinate system of the distance array 5 must be changed, i.e., mapped back to the presentation of the first image 211 and the second image 212 shown in Fig. 2.

Referring to Figs. 2, 7, and 8, as shown in step 413 of Fig. 4, the processing unit converts the distance array 5 into an obstacle detection layer 7 comprising a plurality of detection cells 71: the coordinate system of the distance array 5 (horizontal axis = column, vertical axis = disparity value) is converted into the coordinate system of the obstacle detection layer 7 (horizontal axis = the same columns as the distance array, vertical axis = row), and the optimal drivable-space boundary values are correspondingly converted into a plurality of obstacle indicator line segments 72. The conversion from disparity value to row can use the road function computed in step 403, finally yielding the obstacle detection layer 7 with its detection cells 71 as shown in Fig. 8. In particular, in the obstacle detection layer 7 the optimal drivable-space boundary values serve as the dividing line: rows below the optimal boundary value, i.e., below the obstacle indicator line segment 72, form a drivable space (detection cells 71 marked "○"), while rows above the optimal boundary value form an obstacle region (detection cells 71 marked "×"). In addition, in the preferred embodiment the processing unit 23 processes only the detection cells 71 of the obstacle detection layer 7 that correspond to the actual road area; in most cases these road-area detection cells 71 form a trapezoidal region.
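Mapping a boundary disparity back to an image row reuses the road function fitted in step 403; a minimal sketch:

```python
# Sketch of the disparity-to-row conversion in step 413, using the road
# function row = A * disparity + B with the embodiment's constants.
def disparity_to_row(disparity: float,
                     A: float = 0.6173, B: float = 246.0254) -> int:
    """Map a boundary disparity back to an image row for the overlay layer."""
    return int(round(A * disparity + B))
```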
Referring to Figs. 2, 8, and 9, as shown in step 414 of Fig. 4, the processing unit takes one of the first image 211 and the second image 212 shown in Fig. 2 as a base image 8, overlays the obstacle detection layer 7 and the optimal drivable-space boundary on the base image 8, and displays a composite image 9 on the playback unit 25, giving the driver of the vehicle 11 cues about the size and distance of the obstacles. For example, in the composite image 9, light bars 91 representing the optimal drivable-space boundary and corresponding to the obstacle indicator line segments 72 of Fig. 7 appear below the high wall 31, the motorcycle 32, and the bus 33; the semi-transparent mask presented by the obstacle detection layer 7 below the light bars 91 represents the drivable area, corresponding to the detection cells 71 marked "○" in Fig. 7, whereas the unmasked portion represents an obstacle area that cannot be driven on, corresponding to the detection cells 71 marked "×" in Fig. 7.
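One way to render the composite image is an alpha blend of a semi-transparent mask onto the base image; the sketch below assumes OpenCV and a green tint, neither of which is specified in the text.

```python
# Sketch of the overlay in step 414 (assumption: OpenCV alpha blending with
# a green tint for the drivable area).
import cv2
import numpy as np

def compose(base_image: np.ndarray, drivable_mask: np.ndarray,
            alpha: float = 0.4) -> np.ndarray:
    """Blend a drivable-area mask (same HxW as base_image) into the frame."""
    overlay = base_image.copy()
    overlay[drivable_mask > 0] = (0, 255, 0)  # tint drivable cells green (BGR)
    return cv2.addWeighted(overlay, alpha, base_image, 1.0 - alpha, 0.0)
```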
Referring to Figs. 2 and 4, in the preferred embodiment the processing unit 23 executes steps 401 to 414 once every preset interval, such as one second, so that the state of obstacles in the direction of travel can be tracked every second; however, this is not limiting, and the interval may also be adjusted flexibly according to the current speed of the vehicle 11, or set, according to the computing speed of the processing unit 23, to a preset interval that meets efficiency requirements.

Referring to Figs. 1 and 2, because the present invention not only senses obstacles but also determines the drivable space, it has the following applications:

1. When the vehicle 11 encounters temporary roadside obstacles while driving (vehicles stopped on the shoulder, construction fences) or is overtaken by another vehicle that cuts into its lane, the system immediately determines the obstacle's position and provides corresponding driving guidance, suggesting an appropriate direction of travel to the driver.

2. The invention can be applied when parking the vehicle 11, such as roadside parallel parking or backing into a space. Because the processing unit 23 can compare the available parking space with the pre-stored dimensions of the vehicle 11, it can determine whether the vehicle 11 can be parked in that space.

In summary, in the present invention the road-plane term of step 407 is computed from disparity values, so the method applies to different road situations, such as flat roads, uphill roads, and downhill roads. In addition, step 408 computes an initial drivable-space boundary value for each column of the distance array 5, improving the efficiency of searching for the optimal drivable-space boundary values and overcoming the shortcomings of existing techniques for detecting drivable space and obstacles. The objects of the present invention are therefore indeed achieved.

The foregoing is merely a preferred embodiment of the present invention and does not limit the scope of the invention; all simple equivalent changes and modifications made according to the claims and the description of the invention remain within the scope covered by this patent.
11 ... Transportation vehicle
12 ... Arrow
21 ... Image capture unit
211 ... First image
212 ... Second image
22 ... Memory unit
23 ... Processing unit
24 ... Detection unit
25 ... Playback unit
31 ... High wall
32 ... Motorcycle
33 ... Bus
34 ... Sky
401~414 ... Steps
5 ... Distance array
51 ... Cell
52 ... Obstacle line segment
61~63 ... Detection regions
7 ... Obstacle detection layer
71 ... Detection cell
72 ... Obstacle indicator line segment
8 ... Base image
9 ... Composite image
91 ... Light bar
Fig. 1 is a top view of a scene, illustrating the two image capture units of the drivable-space detection system of the present invention;
Fig. 2 is a system block diagram illustrating a preferred embodiment of the drivable-space detection system of the present invention;
Fig. 3 is a schematic image showing the first image and the second image captured by the image capture units in the preferred embodiment;
Fig. 4 is a system flowchart illustrating a preferred embodiment of the drivable-space detection method of the present invention;
Fig. 5 is a two-dimensional data diagram illustrating the distance array in the preferred embodiment;
Fig. 6 is a top view of a scene illustrating the detection regions in the preferred embodiment;
Fig. 7 is a two-dimensional data diagram illustrating the obstacle line segments corresponding to the optimal drivable-space boundary values in the preferred embodiment;
Fig. 8 is a schematic image illustrating the obstacle detection layer in the preferred embodiment; and
Fig. 9 is a schematic image illustrating that, in the preferred embodiment, the obstacle detection layer and the obstacle indicator line segments are overlaid on a base image and a composite image is displayed.
401 ... The image capture units record the first image and the second image
402 ... The processing unit performs the stereo image reconstruction operation
403 ... The processing unit computes the road function
404 ... The processing unit converts the third image into distance information
405 ... The processing unit performs the occupancy-grid conversion on the distance information
406 ... The processing unit processes the distance array according to the driving conditions
407 ... The processing unit executes the cost function
408 ... The processing unit computes the initial drivable-space boundary values according to the initial boundary estimation function
409 ... The processing unit computes the smoothness value according to the optimized boundary estimation function
410 ... The processing unit determines whether the smoothness value exceeds the preset value
411 ... The processing unit computes the optimal drivable-space boundary values according to the optimization algorithm
412 ... The processing unit stores the optimal drivable-space boundary values in the memory unit
413 ... The processing unit converts the distance array into the obstacle detection layer
414 ... The processing unit outputs the base image, the obstacle detection layer, and the optimal drivable-space boundary as a composite image
Claims (10)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW100149546A | 2011-12-29 | 2011-12-29 | The detection system of the driving space and its detection method |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| TWI453697B | 2014-09-21 |
| TW201327473A | 2013-07-01 |
Family ID: 49225132
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW100149546A | The detection system of the driving space and its detection method | 2011-12-29 | 2011-12-29 |

Country Status (1)

| Country | Link |
|---|---|
| TW | TWI453697B (en) |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI737437B * | 2020-08-07 | 2021-08-21 | Automotive Research & Testing Center | Trajectory determination method |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5937079A * | 1996-09-05 | 1999-08-10 | Daimler-Benz AG | Method for stereo image object detection |
| US6801244B2 * | 2000-02-29 | 2004-10-05 | Kabushiki Kaisha Toshiba | Obstacle detection apparatus and method |
| TW200604959A * | 2004-07-30 | 2006-02-01 | Jia-Bin Wang | 3D space simulation action detection system |
| TWI292830B * | 2005-04-18 | 2008-01-21 | Matsushita Electric Works Ltd | Spatial information detecting system |
| TWI337251B * | 2006-10-18 | 2011-02-11 | Panasonic Electric Works Co Ltd | Spatial information detecting device |
Also Published As

| Publication number | Publication date |
|---|---|
| TW201327473A | 2013-07-01 |