TWI268878B - A computer vision based lane detection method and system - Google Patents

A computer vision based lane detection method and system

Info

Publication number
TWI268878B
Authority
TW
Taiwan
Prior art keywords
lane
image
unit
warning
actual position
Prior art date
Application number
TW93107691A
Other languages
Chinese (zh)
Other versions
TW200531861A (en)
Inventor
Li-Chen Fu
Shih-Shinh Huang
Pei-Yung Hsiao
Chung-Jen Chen
Chun-Che Wang
Original Assignee
Aetex Biometric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aetex Biometric Corp filed Critical Aetex Biometric Corp
Priority to TW93107691A priority Critical patent/TWI268878B/en
Publication of TW200531861A publication Critical patent/TW200531861A/en
Application granted granted Critical
Publication of TWI268878B publication Critical patent/TWI268878B/en

Landscapes

  • Traffic Control Systems (AREA)
  • Image Processing (AREA)

Abstract

A computer vision based lane detection method and system for hazard warning during driving includes a digital camera sensing unit, an image preprocessing unit, a lane detection unit, and a warning mechanism unit. The digital camera sensing unit continuously captures a sequence of on-road lane images and transmits them to the image preprocessing unit. The image preprocessing unit then removes the noise caused by camera vibration due to the vehicle's movement or by variable weather conditions. By exploiting three salient properties of the lane image (brightness, slenderness, and proximity), the lane detection unit detects the lane location in the images. After the lanes are successfully detected, the warning mechanism unit decides whether the current driving is hazardous based on the developed "hazard driving rules" and issues a warning signal if hazardous driving is confirmed.

Description

[Technical Field]

The present invention relates to a computer-vision-based lane detection and warning method and system, applicable in particular to ordinary highways and usable under a variety of weather conditions.

[Prior Art]

According to announcements by the National Police Agency of the Ministry of the Interior, about three thousand people are killed or injured in traffic accidents in Taiwan every year, and statistics show that 97% of these accidents are caused by driver negligence. The relative change in the position of the lane markings while a vehicle is being driven reflects the driver's driving behaviour and mental state. In driving-safety warning applications it is therefore very important to recognize the lane and to issue warning signals against dangerous driving. In addition, detecting vehicles ahead in the same lane, for vehicle following or collision avoidance, is a key issue in intelligent transportation systems. Lane detection is thus an indispensable technology both for driving warning systems and for intelligent transportation systems.

Conventional computer-vision-based lane detection and warning systems require a complex camera calibration procedure; the resulting camera parameters are used to compute the relative distance to the lane as the basis for warning signals. Because the calibration process is cumbersome, such systems are not suitable for general use. Other kinds of lane detection systems try to simplify or avoid the calibration procedure, but often raise the computational complexity of lane detection, so that lanes cannot be detected effectively and quickly. The conventional techniques therefore still have shortcomings and leave room for improvement. In view of the deficiencies of these earlier lane detection systems, and after a detailed analysis of the properties of lane images, the inventors developed an efficient and convenient lane-detection safety warning system that can be applied under various weather conditions and driving environments.

[Summary of the Invention]

The computer-vision-based lane detection safety warning system of the present invention mainly consists of a digital camera sensing unit, an image data preprocessing unit, a lane detection unit, a warning activation mechanism computation unit, a screen display unit, and a warning speaker unit. The digital camera sensing unit is mounted on the back of the rear-view mirror or fixed to the front windshield or the dashboard; it continuously captures images of the scene in front of the vehicle and transmits them to the image data preprocessing unit for further processing. The image data preprocessing unit removes the image noise caused by the driving environment, the weather, or the camera itself, to obtain a clearer image. The lane detection unit recognizes the lane in the preprocessed image according to three properties that the lane exhibits in the image, identified by prior analysis: brightness, slenderness, and proximity (continuous adjacency), and thereby detects the actual position of the lane in the image. The screen display unit presents the lane detection result graphically.

After the lane has been detected, the warning activation mechanism computation unit decides, from the detected sequence of lane images and a set of analyzed hazardous-driving rules, whether the current driving exhibits hazardous characteristics, and uses this decision as the basis for issuing a warning signal. If the result is that the driver is in a hazardous driving situation, the warning speaker unit generates a warning signal to remind the driver to pay attention to driving safety; moreover, to distinguish the various kinds of hazardous driving, the warning speaker unit issues different warning signals to inform the driver of the different hazardous driving situations.

[Embodiments]

The present invention discloses a computer-vision-based lane detection safety warning system comprising: an image sensing unit for continuously capturing digital images of the scene in front of the vehicle; an image data preprocessing unit, which receives the digital images and processes them mathematically to remove the image noise caused by external factors or by the image sensing unit itself; and a lane detection unit, which receives the noise-removed images from the image data preprocessing unit and, according to the three properties the lane exhibits in the image (brightness, slenderness, and proximity), detects the actual position of the lane in the image, so that this position can be used as the basis for providing the driver with safety warning signals.

Preferably, the image data preprocessing unit includes a Gaussian filter.

Preferably, the lane detection unit extracts feature points according to the brightness in the image, connects the feature points into lane segments, and computes the actual position of the lane in the image.

Preferably, the lane detection safety warning system of the present invention further comprises a warning activation mechanism computation unit, which computes the difference between the actual lane positions in two adjacent images and generates a warning signal when this difference exceeds a preset threshold; the image data preprocessing unit, the lane detection unit, and the warning activation mechanism computation unit may be implemented independently or integrated into a single integrated-circuit chip.

Preferably, the lane detection safety warning system of the present invention further comprises a warning speaker unit, which issues an alarm according to the warning signal.

Preferably, the lane detection safety warning system of the present invention further comprises a screen display unit for displaying the digital image and the lane position.

The present invention also discloses a computer-vision-based lane detection safety warning method comprising the following steps: continuously capturing digital images of the scene in front of the vehicle; processing the digital images mathematically to remove the image noise caused by external factors or by the image sensing unit itself; and, according to the three properties the lane exhibits in the image (brightness, slenderness, and proximity), detecting the actual position of the lane in the noise-removed image, including extracting feature points according to the brightness in the image, connecting the feature points into lane segments, and computing the actual position of the lane in the image, so that this position can be used as the basis for providing the driver with safety warning signals.

Preferably, the mathematical processing includes a Gaussian filtering operation.

Preferably, the method further comprises a warning activation mechanism operation, which computes the difference between the actual lane positions in two adjacent images, judges whether this difference exceeds a preset threshold, and generates a warning signal when it does.

Preferably, the method further comprises issuing an alarm according to the warning signal.

Preferably, the method further comprises displaying the digital image and the lane position on a screen display unit.

The present invention provides a computer-vision-based lane detection safety warning system that reminds the driver of hazardous driving situations under different weather conditions (sunny, rainy, cloudy, or foggy) and in different driving environments (straight or curved roads, climbing or uphill roads, tunnels, and roads under viaducts), so as to reduce the occurrence of traffic accidents. Another objective of the invention is to provide the lane recognition function needed in intelligent transportation systems, so that vehicles ahead in the same lane can be detected to prevent collisions and to support vehicle-following systems.

Figure 1 is a block diagram of the computer-vision-based lane detection safety warning system provided by the present invention. It mainly comprises a digital camera sensing unit 1, an image data preprocessing unit 2, a lane detection unit 3, a lane and scene display unit 4, a warning activation mechanism computation unit 5, and a warning speaker unit 6. The digital camera sensing unit 1, mounted behind the rear-view mirror or fixed to the front windshield or the dashboard, continuously captures road images and transmits them to the image preprocessing unit 2. The image preprocessing unit 2 mainly uses a Gaussian filter to remove the noise in the image caused by vehicle vibration or by ambient lighting. The continuous Gaussian function is first digitized, with sigma set to a suitable coefficient (for example 0.6), yielding a digitized array [d1, d2, d3, d4, d5] (for example [2, 4, 5, 4, 2]). The intensity of each image pixel is multiplied by the corresponding values and the result is divided by d1 + d2 + d3 + d4 + d5 (for example 17) to normalize it. The image processed in this way has its noise effectively removed and is then sent to the lane detection unit for lane detection.

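The smoothing just described is a one-dimensional convolution along each image row. The short Python sketch below (not part of the patent text) shows one way to apply the digitized kernel [2, 4, 5, 4, 2] and normalize by its sum, 17; the function name and the use of NumPy are assumptions made for illustration.

    import numpy as np

    # Digitized Gaussian kernel from the example above; its sum is 17.
    KERNEL = np.array([2, 4, 5, 4, 2], dtype=float)

    def smooth_rows(gray):
        """Smooth each row of a grayscale image with the digitized Gaussian kernel."""
        out = np.empty_like(gray, dtype=float)
        for r in range(gray.shape[0]):
            # mode="same" keeps the row length; the two pixels at each border are
            # zero-padded, so they come out slightly darker than interior pixels.
            out[r] = np.convolve(gray[r], KERNEL, mode="same") / KERNEL.sum()
        return out

Calling smooth_rows on an 8-bit grayscale frame returns a float image in which isolated bright noise pixels are attenuated before feature extraction.
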
According to a detailed analysis of lane images, a lane exhibits three properties in an image: brightness, slenderness, and proximity (continuous adjacency). Brightness means that the intensity of the lane markings in the image is clearly higher than that of the ordinary road surface; slenderness means that a lane marking appears in the image as a long straight line or a slightly bent arc close to a straight line; proximity means that the image points of a road marking are connected to one another in sequence. Based on these properties of the lane image, the lane detection unit 3 performs the recognition steps shown in Figure 2 to detect the presence of the lane in the image. The first step of the road recognition procedure is feature point extraction 31: the intensity values of each row of the image (Figure 3, upper left) are traced from left to right (Figure 3, lower left), and, based on the brightness of the lane image, the feature points are defined as the brighter image points in each row. The feature points are selected as follows. The intensity values of the image points are examined from left to right; when the intensity starts to rise, that point is defined as the left low point B_L and its intensity is called the left-low intensity; when the intensity reaches its maximum, that point is defined as the peak B_T and its intensity is the peak intensity; as the intensity then decreases, the point where it reaches its minimum is defined as the right low point B_R and its intensity is called the right-low intensity. After the left low point, the right low point, and the peak have been defined, and because lane feature points are bright and slender, a peak B_T is taken as a feature point of the lane image when the following three conditions are satisfied: the difference between the peak intensity and the left-low intensity is greater than a threshold (for example 60); the difference between the peak intensity and the right-low intensity is also greater than a threshold (for example 60); and the distance in the image between the left low point and the right low point is smaller than a threshold (for example 10). A detailed illustration is given in the upper right of Figure 3. Searching every row of the image for such feature points yields all the lane feature points in the image; the corresponding feature-point image is shown in the lower right of Figure 3.

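As an illustration of the row scan described above (again, not the patent's own code), the following function marks the qualifying peaks in a single row; the function and parameter names are invented, the handling of intensity plateaus is an assumption, and the thresholds 60 and 10 are simply the example values from the text.

    def row_feature_points(row, rise_thresh=60, width_thresh=10):
        """Return the column indices of lane feature points found in one image row."""
        points = []
        n = len(row)
        c = 1
        while c < n - 1:
            if row[c] > row[c - 1]:                 # intensity starts to rise: left low point B_L
                left = c - 1
                while c < n - 1 and row[c + 1] >= row[c]:
                    c += 1                          # climb to the local maximum B_T
                peak = c
                while c < n - 1 and row[c + 1] <= row[c]:
                    c += 1                          # descend to the right low point B_R
                right = c
                if (row[peak] - row[left] > rise_thresh and
                        row[peak] - row[right] > rise_thresh and
                        right - left < width_thresh):
                    points.append(peak)             # peak satisfies all three conditions
            c += 1
        return points
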
The lane combining unit 32 exploits the proximity of the lane image to group the detected lane feature points into lane segments. Suppose p and q are feature points found by the feature point extraction unit, with image coordinates (u_p, v_p) and (u_q, v_q); when |u_p - u_q| and |v_p - v_q| are both smaller than a suitably chosen threshold (for example 3), the two feature points are classified into the same lane segment. The lane combining unit first takes one lane feature point as the starting point of the first lane segment, then finds, by the rule above, a second feature point that satisfies the threshold, then starts from this second feature point and finds a third in the same way, and so on until no remaining feature point satisfies the threshold, at which point the first lane segment is complete. The second lane segment is built in the same way from the feature points that do not belong to the first, and the process continues until no further lane segments can be formed. All the lane segments obtained after processing by the segment combining unit are shown in Figure 4.

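A possible reading of this proximity rule is the greedy chaining sketched below; the (row, column) point layout, the function name, and the order in which candidates are tried are assumptions, while the distance threshold 3 is the example value given above.

    def group_into_segments(points, dist_thresh=3):
        """Chain nearby feature points into lane segments.

        `points` is a list of (row, col) tuples. A point joins the segment being
        built when both of its coordinate differences to the segment's last point
        are below `dist_thresh`; otherwise a new segment is started.
        """
        remaining = list(points)
        segments = []
        while remaining:
            segment = [remaining.pop(0)]            # seed a new lane segment
            grew = True
            while grew:
                grew = False
                last_r, last_c = segment[-1]
                for p in remaining:
                    if abs(p[0] - last_r) < dist_thresh and abs(p[1] - last_c) < dist_thresh:
                        segment.append(p)
                        remaining.remove(p)
                        grew = True
                        break                       # continue the search from the new tail point
            segments.append(segment)
        return segments
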
The lane reconstruction unit 33 uses the computed lane segments, after removing the noise segments created by the environment, to recompute and draw the position of the lane in the image. For all the feature points of each lane segment, the least-squares method is used to obtain the straight-line equation y = ax + b of the segment, where a represents the slope of the lane segment. A lane segment can therefore be represented by its fitted line together with the lowest and the highest point of the segment. The lane reconstruction procedure is as follows: for two lane segments L1 and L2, if segment L2 lies above segment L1 and the line equation of segment L1 passes through the lowest point of segment L2, the two segments L1 and L2 are regarded as belonging to the same lane line. Finally, the two longest lane lines are selected as the recognition result 34 of the whole lane detection; detailed results are shown in Figure 5.

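The fit-and-merge step could look roughly like the sketch below. It fits the column coordinate as a function of the row coordinate, which is convenient for near-vertical lane markings, and it introduces a tolerance `tol` for the "passes through the lowest point" test; both choices, together with the names used, are assumptions rather than details given in the patent.

    import numpy as np

    def fit_segment(segment):
        """Least-squares line through a segment's (row, col) feature points (at least two)."""
        rows = np.array([p[0] for p in segment], dtype=float)
        cols = np.array([p[1] for p in segment], dtype=float)
        a, b = np.polyfit(rows, cols, 1)            # col is approximately a * row + b
        low = max(segment)                          # largest row index: lowest point in the image
        high = min(segment)                         # smallest row index: highest point in the image
        return a, b, low, high

    def same_lane_line(lower_fit, upper_lowest_point, tol=5.0):
        """True if the upper segment's lowest point lies on the lower segment's fitted line."""
        a, b, _, _ = lower_fit
        r, c = upper_lowest_point
        return abs(a * r + b - c) < tol

After merging, keeping the two longest resulting lane lines reproduces the final selection step described above.
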
This lane detection technique can be applied widely under different weather conditions (for example sunny, rainy, cloudy, or foggy weather) and in different driving environments (for example ordinary straight or curved roads, climbing or uphill roads, tunnels, and roads under viaducts). Figure 6 shows the lane detection results of this technique under different weather and driving conditions: the top two images are the results under ordinary driving conditions and under the shadow cast when passing under a viaduct; the middle two images are the results when the lighting changes sharply while entering and leaving a tunnel; the last two are the results when driving toward the sun and in foggy weather.

After the position of the lane in the image has been detected, the warning activation mechanism computation unit 5 uses the recognition result 34 to decide whether the driver is in a hazardous driving situation and, when appropriate, generates a warning signal; the detailed flow is shown in Figure 7. First, the coordinates of the intersection of the two lane lines in the image are computed 52, and it is checked whether this intersection lies within a predefined center range 53. If the intersection lies within the center range, the driver is in the center of the lane and no warning signal needs to be issued; the system records the position of the intersection 54 and proceeds to the recognition of the next image. If the intersection lies outside the preset center range, the proportion by which the lane crosses the central axis is computed 55. If this proportion is smaller than a certain threshold 56, the driver is performing a lane change and the system issues no warning signal; otherwise the system computes the variance 57 of the recorded lane intersection positions. If the variance is greater than a suitable threshold 58, the driver's path is a weaving curve, and in this situation the system generates the various human-machine-interaction safety warning signals 59 to alert the driver. With this warning mechanism, hazardous driving situations can be identified correctly and the rate of false warnings can be greatly reduced.

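Finally, the decision flow of Figure 7 can be condensed into a function like the one below; the threshold values, the way the crossing proportion is supplied by the caller, and the use of a simple variance over the recorded intersection positions are placeholders for design parameters the patent leaves open.

    import numpy as np

    def hazard_warning(history, intersection_x, center_lo, center_hi,
                       cross_ratio, ratio_thresh=0.5, var_thresh=100.0):
        """Return True when a warning signal should be issued for the current frame.

        `history` holds the x-coordinates of previously recorded lane-line
        intersections (step 54); `cross_ratio` is the proportion by which the lane
        crosses the central axis (step 55), computed by the caller.
        """
        if center_lo <= intersection_x <= center_hi:
            history.append(intersection_x)          # vehicle centred in the lane (steps 53-54)
            return False
        if cross_ratio < ratio_thresh:              # treated as an ordinary lane change (step 56)
            return False
        variance = float(np.var(history)) if history else 0.0
        return variance > var_thresh                # a weaving path shows up as high variance (steps 57-59)
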
[Brief Description of the Drawings]

Figure 1 is a block diagram of the architecture of the computer-vision-based lane detection safety warning system of the present invention.

Figure 2 is a flowchart of the detection performed by the lane detection unit of the computer-vision-based lane detection safety warning system of the present invention.

Figure 3 illustrates the feature point extraction applied to a road image: the upper left shows the image to be processed; the lower left shows the intensity values of the points of one row of the image; the upper right shows, for the same row, the left low point B_L, the peak B_T, and the right low point B_R; the lower right shows the feature points in the image.

Figure 4: the left image shows the feature points in the image; the right image shows the segments obtained after the feature points have been processed by the lane segment combining unit.

Figure 5 shows the processing of the lane segment image by the lane reconstruction unit and its result: the upper left is a schematic diagram of the combination of segments L1 and L2; the upper right shows lane segments A and B reconstructed by the method of the present invention; the lower image is the road image with the lane segments drawn on it.

Figure 6 shows actual lane recognition results under various weather conditions and driving environments: the top two images are the lane detection results under ordinary driving conditions and under the shadow cast when passing under a viaduct; the middle two images are the results when the lighting changes sharply in a tunnel; the last two are the results when driving toward the sun and in foggy weather.

Figure 7 is a flowchart of the warning activation mechanism unit.

[Reference Numerals]

1: digital camera sensing unit
2: image data preprocessing unit
3: lane detection unit
4: display unit
5: warning activation mechanism computation unit
6: warning speaker unit
31: feature point extraction
32: lane segment combination
33: lane reconstruction
34: lane recognition result
52: compute the lane intersection
53: is the intersection within the center range
54: record the lane intersection
55: compute the crossing proportion
56: is the proportion greater than the threshold
57: compute the variance
58: is the variance greater than the threshold
59: generate a warning signal
B_L: left low intensity point
B_T: intensity peak
B_R: right low intensity point

Claims (11)

1. A computer-vision-based lane detection safety warning system, comprising:
an image sensing unit for continuously capturing digital images of the scene in front of a vehicle;
an image data preprocessing unit, which receives the digital images and processes them mathematically to remove the image noise caused by external factors or by the image sensing unit itself; and
a lane detection unit, which receives the noise-removed images from the image data preprocessing unit and, according to three properties that the lane exhibits in the image, namely brightness, slenderness, and proximity, detects the actual position of the lane in the image, so that the actual position of the lane in the image can be used as the basis for providing the driver with safety warning signals.

2. The lane detection safety warning system of claim 1, wherein the image data preprocessing unit comprises a Gaussian filter.

3. The lane detection safety warning system of claim 1, wherein the lane detection unit extracts feature points according to the brightness in the image, connects the feature points into lane segments, and computes the actual position of the lane in the image.

4. The lane detection safety warning system of claim 1, further comprising a warning activation mechanism computation unit, which computes the difference between the actual lane positions in two adjacent images and generates a warning signal when this difference is greater than a preset threshold, wherein the image data preprocessing unit, the lane detection unit, and the warning activation mechanism computation unit can be provided independently or integrated into a single integrated-circuit chip.

5. The lane detection safety warning system of claim 4, further comprising a warning speaker unit, which issues an alarm according to the warning signal.

6. The lane detection safety warning system of claim 1, further comprising a screen display unit for displaying the digital image and the lane position.

7. A computer-vision-based lane detection safety warning method, comprising the following steps:
continuously capturing digital images of the scene in front of a vehicle;
processing the digital images mathematically to remove the image noise caused by external factors or by the image sensing unit itself; and
according to three properties that the lane exhibits in the image, namely brightness, slenderness, and proximity, detecting the actual position of the lane in the image after the image noise has been removed, including extracting feature points according to the brightness in the image, connecting the feature points into lane segments, and computing the actual position of the lane in the image, so that the actual position of the lane in the image can be used as the basis for providing the driver with safety warning signals.

8. The method of claim 7, wherein the mathematical processing comprises a Gaussian filtering operation.

9. The method of claim 7, further comprising performing a warning activation mechanism operation, which includes computing the difference between the actual lane positions in two adjacent images, judging whether this difference is greater than a preset threshold, and generating a warning signal when this difference is greater than the preset threshold.

10. The method of claim 9, further comprising issuing an alarm according to the warning signal.

11. The method of claim 7, further comprising displaying the digital image and the lane position on a screen display unit.

VII. Designated representative figure:
(1) The designated representative figure of this case is Figure 1.
(2) Brief description of the reference symbols in the representative figure:
1: digital camera sensing unit
2: image data preprocessing unit
3: lane detection unit
4: display unit
5: warning activation mechanism computation unit
6: warning speaker unit

VIII. If there is a chemical formula in this case, the chemical formula that best shows the features of the invention:

TW93107691A 2004-03-22 2004-03-22 A computer vision based lane detection method and system TWI268878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW93107691A TWI268878B (en) 2004-03-22 2004-03-22 A computer vision based lane detection method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW93107691A TWI268878B (en) 2004-03-22 2004-03-22 A computer vision based lane detection method and system

Publications (2)

Publication Number Publication Date
TW200531861A TW200531861A (en) 2005-10-01
TWI268878B true TWI268878B (en) 2006-12-21

Family

ID=38291371

Family Applications (1)

Application Number Title Priority Date Filing Date
TW93107691A TWI268878B (en) 2004-03-22 2004-03-22 A computer vision based lane detection method and system

Country Status (1)

Country Link
TW (1) TWI268878B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2163428A1 (en) 2008-09-10 2010-03-17 National Chiao Tung University Intelligent driving assistant systems
TWI641516B (en) * 2018-03-06 2018-11-21 國立交通大學 Lane line detection method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113212298A (en) * 2021-05-07 2021-08-06 宝能(广州)汽车研究院有限公司 Vehicle abnormity prompting method, prompting system and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2163428A1 (en) 2008-09-10 2010-03-17 National Chiao Tung University Intelligent driving assistant systems
TWI641516B (en) * 2018-03-06 2018-11-21 國立交通大學 Lane line detection method
US10726277B2 (en) 2018-03-06 2020-07-28 National Chiao Tung University Lane line detection method

Also Published As

Publication number Publication date
TW200531861A (en) 2005-10-01

Similar Documents

Publication Publication Date Title
JP7332726B2 (en) Detecting Driver Attention Using Heatmaps
JP5198835B2 (en) Method and system for presenting video images
US8810653B2 (en) Vehicle surroundings monitoring apparatus
US8452103B2 (en) Scene matching reference data generation system and position measurement system
US7482916B2 (en) Automatic signaling systems for vehicles
JP4872245B2 (en) Pedestrian recognition device
JP4888761B2 (en) Virtual lane display device
TW201226234A (en) Image type obstacle detection car reversing warning system and method
JP5547160B2 (en) Vehicle periphery monitoring device
CN204309672U (en) A kind of anti-car rear-end prior-warning device based on image recognition
EP2741234B1 (en) Object localization using vertical symmetry
JP4991384B2 (en) Approaching object detection device and approaching object detection program
CN108482367A (en) A kind of method, apparatus and system driven based on intelligent back vision mirror auxiliary
CN112896159A (en) Driving safety early warning method and system
Cualain et al. Multiple-camera lane departure warning system for the automotive environment
JP2007058805A (en) Forward environment recognition device
JP2014056295A (en) Vehicle periphery monitoring equipment
TWI268878B (en) A computer vision based lane detection method and system
JP5345992B2 (en) Vehicle periphery monitoring device
TWI245715B (en) A computer vision based vehicle detection and warning system
CN114120250B (en) Video-based motor vehicle illegal manned detection method
JP6877651B1 (en) Visual load value estimation device, visual load value estimation system, visual load value estimation method, and visual load value estimation program
JP2008028478A (en) Obstacle detection system, and obstacle detecting method
JP5173991B2 (en) Vehicle periphery monitoring device
KR101197740B1 (en) Front side monitoring system

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees