TWI253998B - Method and apparatus for obstacle avoidance with camera vision - Google Patents


Info

Publication number
TWI253998B
TWI253998B · TW93135791A
Authority
TW
Taiwan
Prior art keywords
obstacle
image sensor
image
distance
Prior art date
Application number
TW93135791A
Other languages
Chinese (zh)
Other versions
TW200616816A (en)
Inventor
Jiun-Yuan Tseng
Original Assignee
Jiun-Yuan Tseng
Priority date
Filing date
Publication date
Application filed by Jiun-Yuan Tseng
Priority to TW93135791A (TWI253998B)
Priority to US11/260,723 (US20060111841A1)
Priority to JP2005332937A (JP2006184276A)
Application granted
Publication of TWI253998B
Publication of TW200616816A

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present invention relates to a method and an apparatus for operating an obstacle-avoidance system with camera vision. The invention works both day and night and provides an obstacle-avoidance strategy for safe driving without complicated fuzzy inference. The method includes the following steps: capturing and analyzing plural images of an obstacle, positioning an image sensor, running an obstacle-recognition flow, obtaining the absolute velocity of a system carrier, obtaining the relative velocity and relative distance of the system carrier with respect to the obstacle, and executing an obstacle-avoidance strategy.
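As a rough illustration, the six steps of the abstract can be arranged as one processing cycle. Everything below — the function name, the hook structure, and the warning rule in the usage example — is hypothetical scaffolding for orientation, not code from the patent.

```python
def avoidance_cycle(frames, dt, hooks):
    """One pass of the six-step flow (hypothetical structure).

    frames : two consecutive images of the scene (first and second moment)
    dt     : capture interval in seconds
    hooks  : dict of placeholder callables, one per stage
    """
    images = hooks["capture_and_analyze"](frames)           # step (a)
    pose = hooks["position_sensor"](images)                 # step (b): depression angle, height
    obstacle = hooks["recognize_obstacle"](images, pose)    # step (c)
    v_abs = hooks["carrier_speed"](images, dt)              # step (d)
    dist, v_rel = hooks["relative_state"](obstacle, dt)     # step (e)
    return hooks["avoidance_strategy"](dist, v_rel, v_abs)  # step (f)
```

A trivial strategy hook might warn when the time-to-contact implied by the relative state falls below a threshold; the real strategy of the patent is the subject of the description below.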

Description

BACKGROUND OF THE INVENTION

1.
Field of the Invention

The present invention relates to an obstacle collision-avoidance device and a method of implementing it, and in particular to a vision-based anti-collision device and method especially suited to vehicles.

2. Description of the Prior Art

Many academic research units in Taiwan are studying vehicle collision avoidance. The collision-warning subsystem of the Intelligent Transportation Systems (ITS) integration project at National Chiao Tung University, for example, measures the distance between vehicles with ultrasonic sensors. Abroad, research on automotive safety systems has been under way for years and has been merged with related information systems into ITS. An Automotive Collision Avoidance System (ACAS) has been completed: it uses infrared to measure the distance between the driver's own car and the car ahead, derives the relative speed of the two cars, and finally reminds the driver through a human-machine interface to take safety measures. The ACAS architecture can be described by three flows: collecting environmental information with sensors, identifying vehicles from the captured images, and establishing a collision-avoidance strategy.

A sensor's role is to capture information about the external environment. Sensors used in related experiments at home and abroad include ultrasonics (Kawabata, latest ultrasonic engineering), radio waves and lasers/infrared (Health Physics; International Commission on Non-Ionizing Radiation Protection: Guidelines for limiting exposure to time-varying electric, magnetic and electromagnetic fields), GPS three-point positioning (Wann, Position tracking and velocity estimation for mobile positioning systems), and CCD cameras (Kearney, Camera calibration using geometric constraints). Their characteristics are summarized in Table 1.

Table 1. Sensor characteristics
  • Ultrasonic — principle: Doppler-effect speed measurement; advantages: harmless to people, cheap, easy to build; drawbacks: short range, only about 0–10 m, and no complete road information; common uses: reversing collision warning, automated guided vehicles, vehicle collision avoidance.
  • Radio wave — principle: Doppler-effect speed measurement; advantages: measures medium-to-long distances of about 100–200 m; drawbacks: electromagnetic-wave exposure concerns, and less complete road information than images provide; common uses: police speed enforcement, vehicle collision avoidance.
  • Laser (infrared) — principle: infrared speed measurement; advantages: long range, up to 500–600 m, with accurate readings; drawbacks: greater hazard to the human body (especially the eyes) than radio waves, and no complete road information; common uses: police speed enforcement, vehicle collision avoidance.
  • Satellite positioning (GPS) — principle: GPS positioning; advantages: supports car navigation; drawbacks: expensive, positioning error of roughly 10 m, so it cannot by itself provide collision avoidance, and other obstacles must also carry GPS receivers to be located; common uses: satellite navigation.
  • CCD camera — principle: conversion of image-plane coordinates to three-dimensional spatial coordinates plus intelligent image recognition; advantages: range up to about 100 m and the most complete road information, including lane-edge detection, headway distance, and speed; drawbacks: affected by weather and lighting, though this can be handled properly with intelligent signal processing; common uses: industrial image inspection, robot-arm vision, automated guided vehicles, vehicle collision avoidance.

As Table 1 shows, capturing images with a CCD camera provides the most complete road information, but the camera is easily disturbed by lighting and, without further processing, cannot be used for obstacle recognition at night.

Many image-based vehicle-identification methods are in use at home and abroad, including license-plate recognition (Yamaguchi, A Method for Identifying Specific Vehicles Using Template Matching), three front-mounted markers of known relative position (Marmoiton, Location and relative speed estimation of vehicles by monocular vision), pattern recognition (Kato, Preceding Vehicle Recognition Based on Learning From Sample Images), optical flow (Kruger, Real-time estimation and tracking of optical flow vectors for obstacle detection), and matching of vehicle image boundary combinations (Lutzeler, EMS-vision: recognition of intersections on unmarked road networks). Table 2 compares them.

Table 2. Image-based vehicle-identification methods
  • License-plate recognition — basis: a high-pass filter can isolate the plate number, and plate size and format are standardized, so the plate's pixel extent gives the distance to the car ahead; application: parking-lot management; algorithm: high-pass filter; CPU load: medium (a single CCD camera, but whole-frame processing); prerequisites: the filter coefficients; difficulty: hard — the background must not be too complex, and the method works only within about 10 m; range: under 10 m; accuracy: low; efficiency: medium; deployment cost: low.
  • Three known front markers — basis: three easily recognized markers whose relative positions are known; application: active safe-driving assistance; algorithm: two-point precise perspective; CPU load: medium (single camera, whole-frame processing); prerequisites: the relative coordinates of the three markers; difficulty: medium; range: about 100 m; accuracy: high; efficiency: medium; deployment cost: medium, and commercialization requires supporting government road-marking works.
  • Pattern recognition — basis: extract the vehicle's feature vector and train a neural network; applications: steel-plate flaw inspection, face recognition, and the like; algorithm: neural-network training; CPU load: high, and the quality of the training data determines the quality of recognition; prerequisites: a template database and the trained network; difficulty: hard — building a vehicle and road template library that is representative in both quality and quantity is costly in time and labor; range: about 100 m; accuracy: low; efficiency: fast; deployment cost: high.
  • Vehicle image boundary combinations — basis: the distribution of vehicle boundaries in the image which, although not fixed, differs markedly from the other boundary groups found on a road; application: active safe-driving assistance; algorithm: robust boundary search (HCDFCPvI); CPU load: low — only the color levels of a single image line (at most 720 pixels) are needed; prerequisites: the boundary distributions in the image; difficulty: easy; range: about 100 m; accuracy: high; efficiency: fast; deployment cost: low, with little computation and hardware.

A collision-avoidance reaction strategy mainly imitates what a human does before a rear-end collision: by watching the distance and relative speed to the car ahead, a driver responds appropriately from experience and intuition. Many such strategies have been proposed for active driving-safety systems. Among them, Mar J.'s car-following collision prevention system (CFCPS) and "An ANFIS controller for the car-following collision prevention system" achieved excellent anti-collision performance in comparisons with several related strategies. CFCPS takes as inputs the relative speed of the two cars and the inter-vehicle distance minus the safety distance, uses a fuzzy-inference engine of 25 fuzzy rules as its computing core, and finally derives a basis for vehicle acceleration or deceleration. Regarding the time the system needs to bring the vehicle to a safe, stable state, CFCPS takes 7–8 seconds, while comparable experiments such as the GM (General Motors) model take about 10 seconds and the Kikuchi model 12–14 seconds.

SUMMARY OF THE INVENTION

The main object of the present invention is an all-weather obstacle collision-avoidance method and apparatus that can recognize obstacles both day and night and derive a collision-avoidance strategy without elaborate fuzzy-rule inference, for the driver of a system carrier to rely on while driving.
Another object of the present invention is an all-weather obstacle collision-avoidance method and apparatus in which, should the positioning of the image sensor change because the system carrier is struck, the sensor recovers its positioning by itself, without field measurement.

To achieve these objects, the present invention discloses a vision-based obstacle collision-avoidance method applied to an obstacle and a moving system carrier on which an image sensor is mounted. The method comprises the following steps (a)–(f): (a) capture and analyze a plurality of images of the obstacle at a first and a second moment; (b) position the image sensor; (c) run an obstacle-recognition flow; (d) obtain the absolute speed of the system carrier; (e) obtain the relative distance and relative speed between the system carrier and the obstacle; and (f) execute a collision-avoidance strategy. The collision-avoidance method above may be implemented by a vision-based obstacle collision-avoidance apparatus.

The apparatus is mounted on a system carrier and mainly comprises an image sensor, a computing unit, and a warning device; when an obstacle is found, the warning device emits sound and light or vibrates to give a warning.

DESCRIPTION OF THE EMBODIMENTS

FIG. 1 shows a vision-based obstacle collision-avoidance apparatus 20 of the invention, mounted on a system carrier 24. The apparatus mainly comprises an image sensor 22, a computing unit 26, and a warning device 25. The image sensor 22 scans and captures a plurality of images of the obstacle at a first and a second moment; the computing unit 26 analyzes these images, and if the analysis concludes that an obstacle 21 exists, the warning device 25 emits sound and light or vibrates to warn.

FIG. 2 shows the flow of the vision-based obstacle collision-avoidance method 10 of the invention.
The method comprises the following steps 11 to 16: step 11 captures and analyzes a plurality of images; step 12 positions the image sensor; step 13 runs an obstacle-recognition flow; step 14 obtains the absolute speed of the system carrier; step 15 obtains the relative distance and relative speed between the system carrier and the obstacle; and step 16 executes a collision-avoidance strategy. The steps are detailed below.

Step 11 captures and analyzes the plurality of images and comprises the following sub-steps (see FIG. 3):

(a) Measure the depth distance 111, i.e., the relative distance between the system carrier 24 and the obstacle 21. The imaging geometry of depth measurement is shown in FIG. 4, which involves two coordinate systems: the two-dimensional image-plane coordinates and the three-dimensional real-space coordinates. The origin of the former is the center O′ of the image plane 50; the origin Ow of the latter is the physical optical center of the lens of the image sensor 22. Hc (height of image sensor) is the vertical height of Ow above the ground, and f is the focal length of the image sensor 22. The optical axis 52 of the image sensor 22 meets the ground at point C, and point A lies on a ray through Ow parallel to the ground. Suppose a target point D lies on the ground a distance L straight ahead, with E its corresponding point on the image plane. Let θ1 = ∠AOwC, θ2 = ∠COwD, l be the pixel offset of E from the image center, Lc the ground distance from the sensor to C, and P∞ the image row of the vanishing point of a straight road. The following relations then hold:

θ1 = tan⁻¹(Hc / Lc)                    (1)
θ1 = tan⁻¹(Δp · |c − P∞| / f)          (2)
θ2 = tan⁻¹(Δp · l / f)                 (3)
L = Hc / tan(θ1 + θ2)                  (4)
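Under the flat-road pinhole assumptions of the FIG. 4 geometry, the depth relations of Eqs. (1)–(4) can be sketched numerically as below. The function names and argument conventions are this sketch's own assumptions, and the sign of the row offset should be checked against the actual image layout.

```python
import math

def depression_angle(hc, lc):
    # Eq. (1): depression angle from the camera height Hc and the ground
    # distance Lc to the point where the optical axis meets the road.
    return math.atan(hc / lc)

def row_angle(row, c, dp, f):
    # Eq. (3): angle between the optical axis and the ray through image
    # row `row`; c is the image-centre row, dp the pixel pitch, and f the
    # focal length (dp and f in the same length units).
    return math.atan(dp * abs(row - c) / f)

def depth(hc, theta1, theta2):
    # Eq. (4): ground distance to a point imaged below the optical axis.
    return hc / math.tan(theta1 + theta2)
```

For a point on the optical axis itself (theta2 = 0), `depth` simply returns the measured distance Lc back, which is a quick sanity check on a calibration.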

l = |p_i − c|                                                  (5)
p_i = c − (f / Δp) · tan( tan⁻¹(Hc / L) − θ1 )                 (6)

Here the focal length f of the image sensor 22 is known; c is taken as half the vertical image coordinate (c = 120 for a 240 × 320 image); Hc and Lc can be obtained by direct measurement; and P∞, the position of the end of a straight road in the image, is quickly judged by eye. θ1, also called the depression angle (Depression Angle, DA) of the image sensor 22, is an important parameter of the coordinate mapping, and Eqs. (1) and (2) are two simple image-calibration methods that yield θ1 without a separate angle gauge. The l of Eq. (3) is obtained through image processing together with Eqs. (5) and (6), where p_i is the image row of the target (a pixel length, i.e., the number of pixels the segment in FIG. 4 occupies) and Δp is the pixel pitch on the image plane. The L obtained from Eq. (4) is then the true distance between the image sensor 22 and the obstacle 21 ahead.

Measuring Δp involves some knowledge of the hardware of the image sensor 22. Taking the photosensitive array of a CCD camera as an example, its architecture is shown in FIG. 5: the pixel resolution is 640 × 480, the array receives the external light-color signal, and the diagonal Ds of the image sensor 22 is 1/3 inch, so the pixel pitch Δp (cm) can be converted by Eq. (7).

Δp = Ds / √(Wy² + Py²)                                         (7)

where Wy and Py are the pixel counts along the two axes; this gives Δp ≈ 9.77 × 10⁻⁴ cm. In addition, Δp can be obtained from the image itself: rearranging Eqs. (3) and (4) gives Eq. (8),

Δp = (f / l) · tan( tan⁻¹(Hc / L) − θ1 )                       (8)

When the focal length f of the image sensor 22 is known and Hc, L, and l are obtained by direct measurement, Δp follows. For a more representative value, different rows p_i correspond to different l, so several points can be taken to obtain several Δp values and their average formed, or several simultaneous equations in Δp and l can be solved. Experiment gave Δp = 8.31 × 10⁻⁴ cm, an accuracy of about 85%.

(b) Measure the lateral distance 112. If the geometry of FIG. 4 is lifted out with its internal relations unchanged, FIG. 6 shows the lateral measurement more clearly. Moving point D a distance W in the −x direction gives point K, whose image on the image plane is point G at plane coordinates (−w, l). With r⃗ the vector from Ow to K and s⃗ the vector from Ow to D, relations (9) and (10) follow:

θ3 = cos⁻¹( r⃗ · s⃗ / (|r⃗| |s⃗|) )                              (9)
W = Hc · csc(θ1 + θ2) · tan θ3                                 (10)
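A numeric sketch of Eq. (7) and the lateral relation of Eq. (10) follows. The angle θ3 is computed here from the horizontal pixel offset through the same pinhole mapping as Eq. (3) — an assumption of this sketch, since the original derivation goes through the vector form of Eq. (9).

```python
import math

def pixel_pitch(diagonal, nx, ny):
    # Eq. (7): pitch of a square pixel from the sensor diagonal and the
    # pixel counts along each axis (640 x 480 gives diagonal / 800).
    return diagonal / math.hypot(nx, ny)

def lateral_offset(hc, theta1, theta2, w_px, dp, f):
    # Eq. (10): theta3 is the horizontal angle subtended by w_px pixels;
    # the slant range Hc * csc(theta1 + theta2) scales it onto the road
    # plane, giving the lateral distance W of the target from the axis.
    theta3 = math.atan(dp * w_px / f)
    return hc / math.sin(theta1 + theta2) * math.tan(theta3)
```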

(c) Measure the height of the obstacle 113. FIG. 7 illustrates the height measurement with a vehicle as the embodiment of the obstacle 21. Within the image region a vehicle can occupy — the rectangular frame shown — the pixel length l_dw (length of detection window) is obtained from Eq. (11). In Eq. (11), C is half the vertical image coordinate (C = 240/2 = 120 for a 240 × 320 image) and i is the row of the vehicle tail in the image plane, counted upward from the bottom. The p_i′ of Eq. (11) is obtained from Eq. (12), in which Hv is the vehicle height, Wv the vehicle width, and L_pi the depth of the real-space point onto which the row maps. As FIGS. 8(a)–8(d) show, the same car presents a different l_dw at different depths while the image sensor 22 stays fixed; L_pi is obtained from Eq. (13), with θ1 the depression angle of the image sensor 22 of Eq. (2) (θ1 = ∠AOwC, see FIG. 4).

Idw = C + p丨’-i ,合 tandtan'^p^i))若凡叫Idw = C + p丨’-i , combined with tandtan'^p^i))

p_i′ = (f / Δp) · tan( θ1 − tan⁻¹( (Hv − Hc) / L_pi ) ),  if Hv > Hc,
       with the sign of the inner term reversed when Hv ≤ Hc               (12)
L_pi = Hc / tan( θ1 + tan⁻¹( Δp · (C − i) / f ) ),  i ∈ [0, 239]           (13)

Table 3 lists four embodiments, with Hv = 114 cm, Wv = 183.6 cm, and Hc = 129 cm, to show that Eqs. (11)–(13) are practicable. The average error is about 7.21%, i.e., the accuracy exceeds 90%, so Eqs. (11)–(13) are usable in practice.

Table 3. Verifying the feasibility of Eqs. (11)–(13)
  • FIG. 8(a): tail image row i = 38, tail depth L = 6.8 m, l_dw from Eqs. (11)–(13) = 135, measured l_dw = 140, error 3.57%
  • FIG. 8(b): i = 96, L = 12.4 m, computed l_dw = 75, measured 79, error 5.06%
  • FIG. 8(c): i = 130, L = 23.4 m, computed l_dw = 40, measured 44, error 9.09%
  • FIG. 8(d): i = 157, L = 78.5 m, computed l_dw = 12, measured 13.5, error 11.11%

Step 12 positions the image sensor and comprises the following sub-steps (see FIG. 9):

(a) Scan line line1 moves from the bottom of the image upward.
Every 3–5 m of depth it makes a horizontal sweep. Suppose that when the scan reaches line1′, points with road-edge features are found: P on the central divider line segment 32 and P′ on the road edge line 31.

(b) From P, search upward and downward along the central divider segment 32 on the left of the figure (normally a white segment) for its two endpoints p1 and p2, and from them form line3 and line2; p1′ and p2′ are the intersections of line3 and line2 with the road edge line 31 on the right of the figure.

(c) Find the intersection point P∞ of the two rays.

(d) Substitute P∞ into Eq. (2) to obtain the depression angle θ1 of the image sensor 22.

(e) From FIG. 9 and Eq. (4), Eq. (14) can be derived, where La and La′ are the depths of line3 and line2 from the image sensor 22, and θ2 and θ2′ are the corresponding angles, defined like ∠COwD, for the rows of line3 and line2:

La′ − La = Hc / tan(θ1 + θ2′) − Hc / tan(θ1 + θ2)              (14)

From Eq. (14) follows Eq. (15), where q is the length of a road-surface line segment (La′ − La = q):

Hc = q / ( cot(θ1 + θ2′) − cot(θ1 + θ2) )                      (15)

Once θ1 (the depression angle of the image sensor 22) and Hc (the height of the image sensor 22 above the ground) have been found, the image sensor 22 is positioned. Because the obstacle collision-avoidance method and apparatus of the invention obtain the sensor's depression angle and height directly by image analysis, the sensor repositions itself automatically, without field measurement, even when its positioning changes because the system carrier is struck.

Step 13 runs an obstacle-recognition flow comprising the following sub-steps (see FIG. 10):

(a) Set a scan-line pattern 131, chosen from any of the following aspects; the frames in FIG. 11 show the resulting images.

Aspect 1: a single-line scan line, as in the sub-figure FIG. 11(a).
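The self-calibration of Eqs. (14)–(15) can be sketched directly: a lane-marking segment of known length q, seen at two rows of the image, pins down the camera height once the depression angle is known. Argument names are this sketch's own.

```python
import math

def calibrate_height(q, a_near, a_far):
    # Eq. (15): Hc = q / (cot(a_far) - cot(a_near)), where a_near and
    # a_far are the total angles below the horizontal (theta1 + theta2)
    # to the near and far endpoints of a road segment of true length q.
    return q / (1.0 / math.tan(a_far) - 1.0 / math.tan(a_near))
```

As a round-trip check, a camera at height 1.3 m looking at a 4 m segment spanning ground distances 10 m to 14 m should calibrate back to 1.3 m.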
Aspect 2: a zigzag scan line, FIG. 11(b). The two edge lines 33 bound a region several meters wide ahead of the position of the image sensor 22, with the scan width set as required. The scan line 40 zigzags upward from the bottom of the image, examining pixels one by one and changing direction after each advance of a few meters of depth; the advance per leg is likewise set as required.

Aspect 3: three straight scan lines, FIG. 11(c), covering a region directly ahead of the system carrier 24 about 1.5 times the carrier's width; the advance of the scan lines is again set as required.

Aspect 4: five scan lines, FIG. 11(d) — the three scan lines of FIG. 11(c) extended by two further scan lines 40.

Aspect 5: a turning scan line, FIG. 11(e). Its main difference from FIG. 11(c) is that the left and right scan lines 40 are widened, for use while the vehicle is turning.

Aspect 6: a transverse scan line, FIG. 11(f).

With aspect 4, oncoming obstacles, vehicles cutting in suddenly at an intersection or after overtaking, and hard-stopping obstacles can all be detected.
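A minimal sketch of how the three-line pattern of aspect 3 might be laid out; the 1.5× span is from the text, while the column arithmetic and the pixel-width input are assumptions of this sketch.

```python
def three_line_columns(image_width, carrier_width_px):
    # Aspect 3 (FIG. 11(c)): three vertical scan lines covering a region
    # about 1.5x the carrier's width, centred ahead of the carrier.
    # carrier_width_px is the carrier's width in pixels at the scan depth.
    span = 1.5 * carrier_width_px
    centre = image_width // 2
    return [int(centre - span / 2), centre, int(centre + span / 2)]
```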

When an oncoming obstacle is detected, this can serve at night as the basis for automatically switching between low and high beams and for adjusting speed when meeting oncoming traffic: when the measured distance between the oncoming obstacle and the system carrier is smaller than a set distance in meters, the headlights are adjusted to low beam, and otherwise they can be adjusted to high beam.

(b) An edge point identification 132 is provided, detailed as follows: compute the Euclidean distance in tone between adjacent pixels on the scan line. If the image is a color image and R(k) denotes the Euclidean distance between the k-th and (k+1)-th pixels, then R(k) is defined as

R(k) = sqrt( (r(k+1) - r(k))^2 + (g(k+1) - g(k))^2 + (b(k+1) - b(k))^2 )

If R(k) is greater than c2, the k-th pixel is regarded as an edge point, where r(k), g(k) and b(k) denote the red, green and blue tone values of the k-th pixel, and c2 is a critical constant that can be set from experience. If the image is a black-and-white grayscale image, R(k) is defined as R(k) = |Gray(k+1) - Gray(k)|, and if R(k) is greater than c3 the k-th pixel is regarded as an edge point, where Gray(k) denotes the grayscale tone value of the k-th pixel and c3 is a critical constant.
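The edge-point test of the edge point identification 132 can be sketched as follows (a minimal sketch; the scan-line data and thresholds are illustrative, not the patent's empirically set constants c2 and c3):

```python
import math

def edge_points_color(scanline, c2):
    """Return indices k on a color scan line where the Euclidean distance
    in tone between pixel k and pixel k+1 exceeds the critical constant c2.
    scanline: list of (r, g, b) tone values along the scan line."""
    edges = []
    for k in range(len(scanline) - 1):
        (r1, g1, b1), (r2, g2, b2) = scanline[k], scanline[k + 1]
        dist = math.sqrt((r2 - r1) ** 2 + (g2 - g1) ** 2 + (b2 - b1) ** 2)
        if dist > c2:
            edges.append(k)
    return edges

def edge_points_gray(scanline, c3):
    """Grayscale variant: |Gray(k+1) - Gray(k)| > c3 marks an edge point."""
    return [k for k in range(len(scanline) - 1)
            if abs(scanline[k + 1] - scanline[k]) > c3]

# A road-gray run followed by a dark vehicle underside yields one edge point:
line = [(120, 120, 120), (122, 121, 119), (30, 28, 25)]
print(edge_points_color(line, c2=50))  # [1]
```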
(c) A scan mode 133 is set; either of the following may be chosen:

(c.1) Detection-interval scanning: the image is scanned from bottom to top, and when an edge point is found it is assumed to be the position of a vehicle rear in the image; a detection interval is set up accordingly, and the pixel data of the scan line within that interval are analyzed. The detection-interval length differs when the obstacle 21 lies at different depth distances from the image sensor 22. FIGS. 8(a)-8(d) take a car as an example, showing the different interval lengths at different depth distances. Taking car recognition as an example, the scan end point of this mode is, for the image of FIG. 8(a), the interval length formed when the image position of the current vehicle rear is at i = 0 (the very bottom of the image).

(c.2) Stepwise scanning: the image pixels are scanned and analyzed step by step from bottom to top without setting up a detection interval; the scan end point is generally the image position of the end of the road.

(d) The true/false values of two Boolean variables are provided 134, as follows:

(d.1) Use of the shadow beneath the obstacle 21: a solid object casts a shadow, whereas a non-solid feature such as a road marking cannot, so the shadow can serve as a basis for distinguishing the obstacle 21.
A Boolean variable a is provided, whose truth value is decided by expressions (16) and (17):

if N_shadow_pixel / L_dw >= c4, then a is true    (16)
if N_shadow_pixel / L_dw < c4, then a is false    (17)

where L_dw is the detection-interval length and N_shadow_pixel is the number of pixels matching the shadow feature, usually taken from roughly the bottom c5 x L_dw pixels of the detection interval; c4 and c5 are constant values.

In addition, the shadow at the bottom of a vehicle (shadow_pixel) should satisfy relation (18):

shadow_pixel:  R < c6 x R_r (color image), or Gray < c7 x Gray_r (grayscale image)    (18)

The symbols in expression (18) are as follows: when analyzing a color image, R denotes the red tone value of the pixel data and R_r the corresponding tone value of the gray road; when analyzing a black-and-white grayscale image, Gray denotes the tone value of the pixel data and Gray_r that of the road. The road tone values are usually obtained by taking the pixel group in the image that best matches the gray-road characteristic and averaging its color. The average color of that pixel group can in addition be used to judge the ambient brightness at the location of the system carrier and serve as a basis for automatically adjusting headlight brightness: the brighter the ambient light, the more the headlights can be dimmed, and vice versa. c6 and c7 are constant values.

(d.2) Use of the decreasing-brightness property of light projected or reflected by the obstacle 21: when the sky is dark, as in night-time recognition, brightness can be used to judge the position of an obstacle in the image.
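A minimal sketch of the shadow Boolean of expressions (16)-(18) follows; applying the c6 test to all three color channels, and the value of c6 itself, are assumptions, while c4 = 0.1 follows the setting reported in Table 4:

```python
def is_shadow_pixel(r, g, b, road_rgb, c6):
    """Expression (18), color case: a pixel counts as shadow when its tone
    values fall below c6 times the road tone values. (Extending the R test
    to G and B is an assumption; the patent spells out the red channel.)"""
    rr, gr, br = road_rgb
    return r < c6 * rr and g < c6 * gr and b < c6 * br

def boolean_a(interval_pixels, road_rgb, c4=0.1, c6=0.5):
    """Expressions (16)/(17): a is true when the fraction of shadow pixels
    in the detection interval reaches c4 (0.1 as in Table 4; c6 is an
    illustrative constant)."""
    l_dw = len(interval_pixels)
    n_shadow = sum(1 for (r, g, b) in interval_pixels
                   if is_shadow_pixel(r, g, b, road_rgb, c6))
    return n_shadow / l_dw >= c4

road = (120, 120, 120)
interval = [(118, 119, 121)] * 5 + [(20, 22, 25)] * 5  # half road, half shadow
print(boolean_a(interval, road))  # True: 5/10 >= 0.1
```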
Because brightness is distributed over many tone levels, using the full computed brightness distribution as the basis for identifying obstacles wastes computation, and the position found is not precise. A Boolean variable b is therefore provided as the basis for judging whether the obstacle is present; its truth value is decided by expression (19):

if R >= c8 or Gray >= c9, then b is true; otherwise b is false    (19)

where R denotes, when analyzing a color image, the tone value of red as the dominant of the three primary colors of the pixel data (the green or blue tone values may additionally be consulted as appropriate), Gray denotes, when analyzing a black-and-white grayscale image, the grayscale tone value of the pixel data, and c8 and c9 are critical constants. Observation of many color and grayscale images shows that when the tone value R or Gray of the analyzed pixel group rises to the critical constant, the location is generally that of an obstacle in the image.

(e) The obstacle type is judged 135. The two Boolean variables concerning the shadow characteristic of the obstacle and the decreasing-brightness characteristic of light projected or reflected by it are denoted a and b, respectively.
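The brightness Boolean of expression (19) reduces to a single threshold comparison; a sketch with c8 = c9 = 200, the values used in the Table 4 experiments:

```python
def boolean_b(tone, c8=200):
    """Expression (19): b is true when the dominant tone value (R for color
    images, Gray for grayscale images) reaches the critical constant
    (c8 = c9 = 200 in the Table 4 experiments)."""
    return tone >= c8

# Night-time tone values taken from Table 4: a car ahead vs. a road marking
print(boolean_b(212))  # True  -> solid obstacle (class O3)
print(boolean_b(158))  # False -> road marking / no obstacle
```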
The identification rules differ between daytime and night-time recognition, and the time point separating day from night can be preset in the system. The determination steps are:

(i) in daytime recognition, if a is true, the obstacle is identified as a road vehicle such as a car, a motorcycle or a bicycle, i.e. an obstacle whose bottom has dark-colored pixels;
(ii) in daytime recognition, if a is false, the obstacle is identified as one whose bottom has no dark-colored pixels, such as a road marking, a tree shadow, a guardrail, a mountain wall, a house, a traffic island or a person;
(iii) in night-time recognition, if b is true, the obstacle is identified as a solid obstacle such as a car or motorcycle, a guardrail, a mountain wall, a house, a traffic island or a person; and
(iv) in night-time recognition, if b is false, the obstacle is identified as a road marking, or there is no obstacle.

FIGS. 13(a), 13(b) and 13(c) contain seventeen sub-figures, (a) through (q), exemplifying identification data obtained with the rules described under the obstacle-type judgment 135. A single-line scan line is used here for scanning, with obstacles on the road as the main targets to be identified, in order to verify the feasibility of the obstacle identification rules; the experimental data obtained are collected in Table 4. Sub-figures (a)-(k) of FIGS. 13(a), 13(b) and 13(c) are daytime identification data, based mainly on Boolean variable a; sub-figures (l)-(q) are night-time identification data, based mainly on Boolean variable b.
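The four determination rules (i)-(iv) can be sketched as a single decision function (a minimal sketch; the class descriptions are abbreviations of the lists above):

```python
def classify(daytime, a=None, b=None):
    """Rules (i)-(iv): by day the shadow Boolean a separates vehicles from
    flat features; by night the brightness Boolean b separates solid
    obstacles from road markings."""
    if daytime:
        return ("vehicle with dark bottom pixels (car/motorcycle/bicycle)"
                if a else
                "non-vehicle (road marking, tree shadow, guardrail, ...)")
    return ("solid obstacle (vehicle, guardrail, wall, house, person, ...)"
            if b else
            "road marking or no obstacle")

print(classify(daytime=True, a=True))    # rule (i)
print(classify(daytime=False, b=False))  # rule (iv)
```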
In sub-figures (a)-(q) of FIGS. 13(a), 13(b) and 13(c), L1 marks the range scanned by the single-line scan line, and L2 marks the preset boundary threshold given by experience (here set to 25); any point whose coordinate exceeds L2 is regarded as a true boundary, i.e. the boundary segments distributed to the left of L2 in the figures. Daytime recognition is judged mainly by Boolean variable a: L3 marks the position of obstacles judged to have dark-colored bottom pixels, such as cars and motorcycles, classified here as class O1, while L4 marks the position of obstacles whose bottoms have no dark-colored pixels, such as road markings, tree shadows, guardrails, mountain walls, houses, traffic islands and people, classified here as class O2. Night-time recognition is judged mainly by Boolean variable b: L5 marks the position of solid obstacles such as cars and motorcycles, guardrails, mountain walls, houses, traffic islands and people; such solid obstacles have the function or property of emitting or reflecting light, and are classified here as class O3.
Table 4: identification rules and data for daytime and night-time.

Daytime — sub-figures (a)-(k) of FIGS. 13(a), 13(b) and 13(c); rule: N_shadow_pixel / L_dw of expressions (16) and (17), with c4 set to 0.1.

Sub-figure (scene)             | N_shadow_pixel / L_dw    | a           | Identification result
(a)                            | 0.416                    | true        | L3 marked as O1
(b) (car; tree shadow)         | 0.588 (car); 0 (shadow)  | true; false | L3 as O1; L4 as O2
(c) (car; road marking)        | 0.612 (car); 0 (marking) | true; false | L3 as O1; L4 as O2
(d) (motorcycle; road marking) | 0.313 (motorcycle); 0    | true; false | L3 as O1; L4 as O2
(e) (bicycle; road marking)    | 0.24 (bicycle); 0        | true; false | L3 as O1; L4 as O2
(f) (guardrail)                | 0                        | false       | L4 marked as O2
(g) (mountain wall)            | 0                        | false       | L4 marked as O2
(h) (house)                    | 0                        | false       | L4 marked as O2
(i) (traffic island)           | 0                        | false       | L4 marked as O2
(j) (person)                   | 0                        | false       | L4 marked as O2
(k) (car in grayscale image)   | 0.416                    | true        | L3 marked as O1

Night-time — sub-figures (l)-(q) of FIGS. 13(a), 13(b) and 13(c); rule: R or Gray tone value of expression (19), with c8 and c9 both set to 200.

Sub-figure (scene)             | R or Gray tone value     | b           | Identification result
(l) (car ahead)                | 212                      | true        | L5 marked as O3
(m) (oncoming car ahead)       | 219                      | true        | L5 marked as O3
(n) (person and motorcycle)    | 207                      | true        | L5 marked as O3
(o) (house)                    | 205                      | true        | L5 marked as O3
(p) (car in grayscale image)   | 234                      | true        | L5 marked as O3
(q) (car ahead; road marking)  | 209 (car); 158 (marking) | true; false | L5 marked as O3, unaffected by the road marking

Table 4 and sub-figures (a)-(q) of FIGS. 13(a), 13(b) and 13(c) show that the Boolean variables a and b can accurately and stably identify, around the clock, many kinds of obstacles that may affect traffic safety.
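The daytime rows of Table 4 can be replayed directly with the rule of expressions (16)/(17) and the threshold c4 = 0.1 stated in the table (a minimal sketch; the row labels are abbreviations of the sub-figure scenes):

```python
C4 = 0.1  # threshold used for the daytime rows of Table 4

rows = {  # sub-figure scene: measured N_shadow_pixel / L_dw
    "(b) car": 0.588,
    "(b) tree shadow": 0.0,
    "(d) motorcycle": 0.313,
    "(e) bicycle": 0.24,
    "(f) guardrail": 0.0,
    "(j) person": 0.0,
}
results = {name: ("O1" if ratio >= C4 else "O2") for name, ratio in rows.items()}
for name, cls in results.items():
    print(f"{name}: {cls}")
# the vehicles come out as class O1; the flat features as class O2
```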
Referring to FIG. 9, step 14 obtains the absolute speed of the system carrier, detailed as follows:

(a) Find point p1 in FIG. 9; p1 is the end point of the road center dividing line segment 32. Then find the position of p1 in the next image. The road center dividing line segment 32 is assumed here to be a white line segment.
(b) In the next image, p1 is usually closer, so the line1 scan line in FIG. 9 can be moved down 3 to 5 meters for a lateral scan, or the end point of the white line segment can be searched for downward along the slope shown in FIG. 9.
(c) Comparing the two successive images, the change in position of the end point p1 of the white line segment yields its actual movement distance, which is the distance moved by the system carrier 24 on which the image sensor 22 is mounted; dividing this by the capture-time difference between the two images gives the absolute speed of the system carrier 24.

Alternatively, the absolute speed of the system carrier 24 can be obtained directly from the speedometer on the system carrier 24 via an analog-to-digital converter.

Step 15 obtains the relative distance and relative speed between the system carrier and the obstacle, detailed as follows: after the position of the obstacle 21 in the image is recognized, the relative distance L between the system carrier 24 and the obstacle 21 can be obtained from expressions (1) to (6), as shown in expression (20).

L = Hc / tan(θ + θv)    (20)

where the height Hc of the image sensor 22, the depression angle θ, the focal length f and the pixel pitch dp are known, and θv can be obtained from the image position of the vehicle. The relative velocity (RV) of the system carrier 24 with respect to the obstacle 21 can then be obtained according to expression (21):

RV = ΔL(t) / Δt    (21)

Here Δt and ΔL(t) respectively denote the capture-time difference between successive images and the difference in the distance at which the vehicle is recognized.

Step 16 executes a collision-avoidance strategy, comprising the following steps (see FIG. 12):

(a) Provide an equivalent speed 161, defined as the greater of the absolute speed of the system carrier 24 and the relative speed at which the system carrier 24 and the obstacle approach each other.
(b) Provide a safe distance 162, approximately between one two-thousandth of the equivalent speed and one two-thousandth of it plus 10 meters. In a preferred embodiment, the safe distance, in meters, is defined as half the numerical value of the equivalent speed in kilometers per hour, plus five.
(c) Provide a safety coefficient 163, defined as the ratio of the relative distance to the safe distance, its value lying between 0 and 1.
(d) Provide an alarm level 164, defined as 1 minus the safety coefficient.
(e) Emit sound and light or produce vibration 165: according to the magnitude of the alarm level,

the warning device 25 emits sound and light or produces vibration to warn the driver of the system carrier 24, and can also warn people around the system carrier 24 with sound and light; and
(f) Provide a secondary absolute speed 166, defined as the product of the current absolute speed of the system carrier 24 and the safety coefficient.

The collision-avoidance strategy further includes an instant-recording step: when the safety coefficient is smaller than a certain constant value, this function is turned on to record the scene before a hazard occurs.
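The arithmetic of the collision-avoidance strategy of step 16 can be sketched as follows (a minimal sketch using the preferred safe-distance rule, half the km/h value plus five; treating the closing relative speed as a positive km/h value and clamping the safety coefficient to [0, 1] are assumptions made explicit here):

```python
def collision_alarm(abs_speed_kmh, rel_speed_kmh, rel_distance_m):
    """Steps (a)-(d) and (f) of the collision-avoidance strategy 16."""
    # (a) equivalent speed: greater of absolute and closing relative speed
    v_eq = max(abs_speed_kmh, rel_speed_kmh)
    # (b) preferred safe distance: half the km/h value plus five, in meters
    safe_dist = v_eq / 2 + 5
    # (c) safety coefficient, kept between 0 and 1
    coeff = min(max(rel_distance_m / safe_dist, 0.0), 1.0)
    # (d) alarm level
    alarm = 1 - coeff
    # (f) secondary absolute speed: current absolute speed x safety coefficient
    v_secondary = abs_speed_kmh * coeff
    return safe_dist, coeff, alarm, v_secondary

# 100 km/h, closing at 60 km/h, obstacle 33 m ahead:
sd, c, a, v2 = collision_alarm(100, 60, 33)
print(sd, c, a, v2)  # 55.0 0.6 0.4 60.0
```

At an alarm level of 0.4 the warning device would already be active, and the secondary absolute speed of 60 km/h suggests how far the carrier would have to slow to restore the safe distance.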
Although the embodiment described above is a car, any obstacle having edge features can be identified by the method disclosed in the present invention, so the obstacles referred to in the present invention may include cars, motorcycles, trucks, vans, trains, people, dogs, guardrails, traffic islands, houses and the like.

The system carrier 24 above is described taking a car as an example, but practical applications are not limited to cars: the system carrier 24 may be any kind of vehicle, such as a motorcycle, a truck or a van.

In the embodiments described above, any device capable of capturing images can serve as the image sensor 22, so the image sensor 22 may be any of a charge-coupled device (CCD) camera, a complementary metal-oxide-semiconductor (CMOS) camera, a digital camera, a single strip camera, a digital camera on a handheld device, and the like.

The technical content and technical features of the present invention have been disclosed above; however, those familiar with the art may still, based on the teaching and disclosure of the present invention, make various substitutions and modifications that do not depart from its spirit. The scope of protection of the present invention should therefore not be limited to what the embodiments disclose, but should include the various substitutions and modifications that do not depart from the present invention, as covered by the following claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of the video-aware obstacle collision-avoidance apparatus of the present invention;
FIG. 2 is a flow chart of the video-aware obstacle collision-avoidance method of the present invention;
FIG. 3 is a flow chart of the step of analyzing plural images of the obstacle in FIG. 2;
FIG. 4 is the imaging geometry of depth-distance measurement;
FIG. 5 is a schematic diagram of the hardware architecture of the photosensitive panel;
FIG. 6 is the imaging geometry of lateral-distance measurement;
FIG. 7 is a schematic diagram of height measurement with a vehicle as the embodiment, detecting the pixel length D of the square frame in the image;
FIGS. 8(a)-8(d) are schematic diagrams of the different detection-interval lengths a vehicle presents in the image at four different depth distances;
FIG. 9 is the image geometry used to position the image sensor;
FIG. 10 is a flow chart of the obstacle-recognition flow step of FIG. 2;
FIG. 11 illustrates six scan-line patterns;
FIG. 12 is a flow chart of the collision-avoidance-strategy step of FIG. 2; and
FIGS. 13(a), 13(b) and 13(c) illustrate experimental data of obstacle identification using the Boolean variables.

[Key component symbol description]

10 video-aware obstacle collision-avoidance method
11-16 steps
20 video-aware obstacle collision-avoidance apparatus
21 obstacle
22 image sensor
24 system carrier
25 warning device
26 operation unit
31 road edge line
32 road center dividing line segment
33 road edge line
40 scan line
50 image plane
52 optical axis
111 measure the depth distance
112 measure the lateral distance
113 measure the height of the obstacle
131 set a scan-line pattern
132 provide an edge point identification
133 set a scan mode
134 provide the true/false values of two Boolean variables
135 determine the obstacle type
161 provide an equivalent speed
162 provide a safe distance
163 provide a safety coefficient
164 provide an alarm level
165 emit sound and light or produce vibration
166 provide a secondary absolute speed

Claims (1)

1253998 十、申請專利範圍: 1. 一種以視訊感知的障礙物防撞方法,其係應用於一系統載 體,且一影像感測器係架設於該系統載體上,該防撞方法 包含下列步驟·· 擷取及分析複數個影像; 定位該影像感測器; 執行一障礙物辨識流程; 獲取該系統載體之絕對速度; 獲取该系統載體與該障礙物之一相對距離和一相對速 度;以及 執行一防撞策略。 2·根據請求項1之以視訊感知的障礙物防撞方法,其中該定 位影像感測器之步驟係用以獲得該影像感測器之俯角及 該影像感測器與地面之距離。 3·根據請求項2之以視訊感知的障礙物防撞方法,其中該影 像感測器之俯角及該影像感測器與地面之距離之獲得係 包含以下步驟: 將一水平掃描線由下而上每間隔作橫向掃描; 辨識出具路面邊線特徵之一特徵點; 辨識出該特徵點所在的一特徵線段之兩第一端點; 將該兩第一端點經水平掃描得兩水平線,該二水平線 分別交於另一特徵線段於兩第二端點; 辨識該兩第一端點連線和該兩第二端點連線之交點; 求出該影像感測器之俯角;以及 96682補充修正之中說.doc 39c 求出該影像感測器至地面的距離。 根據清求項3之以視訊感知的障礙物防撞方法,其中該麥 像感測器之俯角係利用該影像上像素之間距、影像之縱向 長度一半的值、影像感測器的焦距及該交點而得。1253998 X. Patent application scope: 1. A video-aware obstacle collision prevention method, which is applied to a system carrier, and an image sensor is erected on the system carrier, and the collision avoidance method comprises the following steps: · capturing and analyzing a plurality of images; locating the image sensor; performing an obstacle recognition process; acquiring an absolute speed of the system carrier; obtaining a relative distance between the system carrier and the obstacle and a relative speed; and executing A collision avoidance strategy. 2. The video-aware obstacle collision avoidance method according to claim 1, wherein the step of positioning the image sensor is to obtain a depression angle of the image sensor and a distance between the image sensor and the ground. 3. 
The method according to claim 2, wherein the image sensor's depression angle and the distance between the image sensor and the ground comprise the following steps: Performing a horizontal scan on each interval; identifying a feature point having a feature of the roadside edge; identifying two first end points of a feature line segment where the feature point is located; and horizontally scanning the two first end points to obtain two horizontal lines, the second The horizontal lines are respectively assigned to the other characteristic line segments at the two second end points; the intersection of the two first end point lines and the two second end point lines is identified; the depression angle of the image sensor is obtained; and the 96682 supplementary correction is obtained It is said that .doc 39c finds the distance of the image sensor to the ground. According to the method 3 of the video-aware obstacle collision avoidance method, wherein the image sensor has a depression angle using a distance between pixels on the image, a value of half of a longitudinal length of the image, a focal length of the image sensor, and Get it at the intersection. 6. 
根據請求項3之以視訊感知的障礙物防撞方法,其中影像 感測器至地面的距離係利用該影像感測器之俯角及該兩 水平線與影像感測器之縱深距離求得。 根據請求項3之以視訊感知的障礙物防撞方法,其中該影 像感測器之俯角係根據下式求得: —丨 其中Θ為該影像感測器之俯角; Μ為該影像平面上像素的間距; e為影像之縱向長度一半的值; 乃為該交點的位置;以及 /為該影像感測器的焦距。 ‘根據清求項3之以視訊感知的障礙物防撞方法,其中今f 像感測器至地面的距離係根據下式求得: Hc:——;—~^~~一 tan ⑷+θ2) tai明 其中圮為該影像感測器至地面的距離,q為一路面線 段的長度值;0為該影像感測器之俯角;且&amp;、化分別篇 ,其中切與以分別為兩水平 96682補充修正之中說doc 1253998 線至該影像感測器之縱深距離。 8·根據請求項1之以視訊感知的障礙物防撞方法,其中該障 礙物辨識流程包含以下步驟: 又疋一掃描線態樣’該掃描線態樣選自:單線型掃描 線曲折型掃描線、二條線型掃描線、五條線型掃描線、 轉臀型掃描線及橫向型掃描線; 提供一邊緣點鑑定; 設定一掃描方式,該掃描方式係偵測區間式或逐步式; 提供至少兩個布林變數中之一,該兩個布林變數係分魯 別關於障礙物陰影特性、障礙物投射或反射光之亮度遞減 特性; 判斷該布林變數之真假值;以及 判定該障礙物種類。 9·根據請求項8之以視訊感知的障礙物防撞方法,其中該邊 緣點鐘定包含以下步驟·· 計算該水平掃描線上之一像素及其相鄰像素在色階上 之一歐幾里德距離;以及 馨 若該歐幾里德距離大於一臨界常數,則該像素被視為 一邊緣點。 10·根據請求項8之以視訊感知的障礙物防撞方法,其中關於 障礙物陰影特性之布林變數之真假值係由下式判斷·· 若^^L Μ*成立,則該布林變數為真; 若^^ 〈A成立,則該布林變數為假; 96682補充修正之中說d〇c 1253998 其中為一常數值; 匕為偵測區間的長度;以及 Nshadow 一 pixel 為符合陰影特徵的像素量。 11 ·根據請求項8之以視訊感知的障礙物防撞方法,其中關於 障礙物投射或反射光之亮度遞減特性之布林變數之真科 值係由下式判斷: ‘ 若或成立,該布林變數為真,否則為假; 其中〇8和C9為臨界常數; Λ代表分析彩色影像時,像素資料的紅綠藍三原色_ 中以紅色為主要顏色的色階值,Gr吵代表分析黑白 影像時,像素資料的灰階色階值。 12·根據清求項8之以视訊感知的障礙物防撞方》,其另包含 -曰間及夜間辨識法則轉換步驟,其中日間辨識法則係運 用障礙物陰影特性之布林變數’夜間辨識法則係運用障礙 物投射或反射光之亮度遞減特性之布林變數,該轉換步驟 之轉換時間係内定於設置在該系統載艚上之一運算單元 13‘根據清求項8之以視訊感知的障礙物防撞方法,其中關於 障礙物陰影特性之布林變數真假值若為真,則該障礙物被 底部具暗黑顏色像素的物體,否則該障礙物被辨 &quot;·為一底部不具暗黑顏色像素的物體。 14.根據清求項8之以視訊感知的障礙物防撞 =投射或反射光之亮度遞減特性之布林變I: 右一,則該障礙物被辨識為一立體障礙物,否則該障礙 96682補充修正之中說d〇c 1253998 物被辨識無障礙物。 15 16. 17. 18. 
•根據明求項8之以視訊感知的障礙物防撞方法 _ -近遠燈自動切換步驟,係藉由計算續栽二其另包含 .系‘裁體與對向直 勺距離是否小於-特定的距離作為依據進行切換。 根據請求項8之以視訊感知的障礙物防撞 、/ /次,具另包含 -車燈亮度自動調節步驟,係藉由攫取的道路像素並 其顏色色階平均值,得以判斷系統載體所在位置之天候亮 度’並作為自動調整車燈亮度的依據。 根據請求項i之以視訊感知的障礙物防撞方法,其中該系· 統載體之絕對速度之獲取包含以下步驛: 、 辨識一特徵線段之一端點於一第一影像中之位置; 辨識該端點於一第二影像之位置;以及 將該二端點之距離除以擷取該第一及第二影像的時間 差; 其中該第一及第二影像包含於該複數個影像,且第二 影像之擷取遲於該第一影像之擷取。 根據請求項1之以視訊感知的障礙物防撞方法,其中該防泰 撞策略包含以下步驟: 提供一等效速度,選自該絕對速度和該相對速度之較 大者; 提供一安全距離; 提供一安全係數,其大小定義為該相對距離與該安全 距離之比值’且該安全係數之大小位於〇和1之間; 提供一警報程度,其大小定義為1減去該安全係數; 96682補充修正之中說.doc 1253998 根據该警報程度之大小,以聲光或震動之方式警告該 系統載體之駕駛者或以聲光警告該系統載體周圍之人; 提供一次絕對速度,該次絕對速度之定義為該系統栽 艘目前之絕對速度與該安全係數之乘積;以及 提供一錄影功能。 19·根據請求項18之以視訊感知的障礙物防撞方法,其中該錄 影功能於該安全係數小於某一常數值時開啟。 20·根據請求項1之以視訊感知的障礙物防撞方法,其中該系 統載體之絕對速度可直接自該系統載體之速度表取得。镰 21 ·根據請求項1之以視訊感知的障礙物防撞方法,其中該影 像感測器係選自以下之一:一電荷耦合元作攝影機、一互 補式金屬氧化物半導體元作攝影機、一單條條狀攝影機及 一手持式通訊設備上之攝影機。 22·根據請求項1之以視訊感知的障礙物防撞方法,其中該系 統載體為一交通工具。 23· —種以視訊感知的障礙物防撞裝置,其係應用於一系統載 體上,其包含: Λ 一影像感測器,用以擷取一障礙物之複數個影像;以 及 一運算單元,其包含下列功能: (a) 分析該複數個影像; (b) 根據複數個影像之分析結果執行一障礙物辨識 流程,以判斷障礙物是否存在;以及 (c) 執行一防撞策略。 96682補充修正之中說doc 1253998 24.根據請求項23之以視訊感知的障礙物防撞裝置,其另包含 一警告器,當該複數個影像經分析判定有障礙物時,該警 告器將發出聲光或產生震動。 2 5 ·根據清求項2 3之以視訊感知的障礙物防撞裝置,其中該影 像感測器係選自以下之一 ··一電荷耦合元件攝影機、一互 補式金屬氧化物半導體元件攝影機、一單條條狀攝影機及 一手持式通訊設備上之攝影機。 26·根據請求項23之以視訊感知的障礙物防撞裝置,其中該系 統載體為一交通工具。6. The video-aware obstacle collision avoidance method according to claim 3, wherein the distance from the image sensor to the ground is obtained by using a depression angle of the image sensor and a depth distance between the two horizontal lines and the image sensor. According to claim 3, the image-aware obstacle collision avoidance method, wherein the image sensor's depression angle is obtained according to the following formula: - wherein Θ is the depression angle of the image sensor; Μ is the pixel on the image plane The spacing; e is the value of half the longitudinal length of the image; the position of the intersection; and / is the focal length of the image sensor. 
'According to the obstacle-obstacle-based obstacle collision avoidance method, the distance between the current sensor and the ground is obtained according to the following formula: Hc:——;—~^~~ a tan (4)+θ2 The tai is the distance from the image sensor to the ground, q is the length of a road segment; 0 is the depression angle of the image sensor; and &amp; Level 96682 supplemental correction says doc 1253998 line to the depth distance of the image sensor. 8. The video-aware obstacle collision avoidance method according to claim 1, wherein the obstacle recognition process comprises the following steps: a scan line state of the scan line selected from: a single-line scan line zigzag scan Line, two line scan lines, five line scan lines, hip type scan lines and horizontal type scan lines; provide an edge point identification; set a scan mode, the scan mode is to detect interval or stepwise; provide at least two One of the Boolean variables, the two Boolean variables are divided into the shadow characteristic of the obstacle, the projection of the obstacle or the brightness of the reflected light; determining the true and false value of the Boolean variable; and determining the type of the obstacle . 9. The video-aware obstacle collision avoidance method according to claim 8, wherein the edge point clock comprises the following steps: calculating one of the pixels on the horizontal scan line and its neighboring pixels in the color gradation De distance; and if the Euclidean distance is greater than a critical constant, the pixel is considered to be an edge point. 10. 
The method for preventing obstacle collision by video perception according to claim 8, wherein the true and false values of the Boolean variables of the shadow characteristic of the obstacle are judged by the following formula: If ^^L Μ* is established, the Bollinger The variable is true; if ^^ <A is established, the Boolean variable is false; 96682 supplementary correction says d〇c 1253998 where is a constant value; 匕 is the length of the detection interval; and Nshadow a pixel is the shadow The amount of pixels of the feature. 11. The method according to claim 8, wherein the true value of the Boolean variable of the diminishing characteristic of the obstacle projection or reflected light is determined by the following formula: 'If or not, the cloth The forest variable is true, otherwise it is false; 〇8 and C9 are critical constants; Λ represents the red, green and blue primary colors of the pixel data in the analysis of color images _ with red as the main color gradation value, Gr noisy represents the analysis of black and white images When the grayscale level value of the pixel data. 12. According to the item 8 of the obstacle-observing obstacles of video perception, which additionally includes the conversion procedure of the daytime and nighttime identification rules, wherein the daytime identification rule is the Boolean variable using the shadow characteristic of the obstacles. The law is a Boolean variable that uses a brightness reduction characteristic of an obstacle projection or reflected light, and the conversion time of the conversion step is determined by a video unit that is set on the system carrier unit 13' according to the clearing item 8 An obstacle collision avoidance method, wherein if the true or false value of the Boolean variable of the obstacle shadow characteristic is true, the obstacle is an object having a dark color pixel at the bottom, otherwise the obstacle is discriminated &quot;· is a bottom without dark Color pixel object. 14. 
The video-aware obstacle collision avoidance method according to claim 8, wherein if the truth value of the Boolean variable of the brightness-decreasing characteristic of the projected or reflected light is true, the obstacle is identified as a three-dimensional obstacle; otherwise, the obstacle is identified as a non-obstacle object.

15. The video-aware obstacle collision avoidance method according to claim 8, further comprising an automatic high-beam/low-beam switching procedure, which uses whether the continuously calculated relative distance is less than a specific distance as the basis for switching.

16. The video-aware obstacle collision avoidance method according to claim 8, further comprising an automatic lamp-brightness adjustment procedure, which determines the brightness of the weather at the system carrier's location by taking the road pixels and averaging their color gradations, and uses it as the basis for automatically adjusting the brightness of the lamps.

17. The video-aware obstacle collision avoidance method according to claim 1, wherein obtaining the absolute speed of the system carrier comprises the following steps: identifying the position of an endpoint of a characteristic line segment in a first image; locating the endpoint in a second image; and dividing the distance between the two endpoint positions by the time difference between the first and second images; wherein the first and second images are included in the plurality of images, and the second image is captured later than the first image.
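The speed computation of claim 17 reduces to a displacement over a time difference. A minimal sketch follows; the function name and the `meters_per_pixel` ground-plane scale are assumptions for illustration (a real system would derive the scale from the camera geometry of claim 7 rather than take it as a constant):

```python
def absolute_speed(p1, p2, t1, t2, meters_per_pixel):
    """Estimate the carrier's absolute speed from one tracked endpoint of a
    characteristic line segment seen in two images: the distance the endpoint
    moves, divided by the time difference between the captures.

    p1, p2 -- (x, y) pixel positions of the same endpoint in the first and
              second image; t1, t2 -- capture times in seconds, t2 > t1.
    """
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    pixel_distance = (dx * dx + dy * dy) ** 0.5   # Euclidean displacement in pixels
    return pixel_distance * meters_per_pixel / (t2 - t1)
```

For example, an endpoint that moves 40 pixels between frames 0.1 s apart, at an assumed 0.05 m per pixel, yields 20 m/s.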
18. The video-aware obstacle collision avoidance method according to claim 1, wherein the anti-collision strategy comprises the following steps: providing an equivalent speed, selected as the greater of the absolute speed and the relative speed; providing a safe distance; providing a safety factor, whose magnitude is defined as the ratio of the relative distance to the safe distance and lies between 0 and 1; providing an alarm level, whose magnitude is defined as 1 minus the safety factor; warning the driver of the system carrier by sound, light, or vibration, and warning the persons around the system carrier, according to the magnitude of the alarm level; providing a recommended absolute speed, defined as the product of the current absolute speed of the system carrier and the safety factor; and providing a video recording function.

19. The video-aware obstacle collision avoidance method of claim 18, wherein the video recording function is turned on when the safety factor is less than a specific constant value.

20. The video-aware obstacle collision avoidance method according to claim 1, wherein the absolute speed of the system carrier is obtained directly from a speedometer of the system carrier.

21. The video-aware obstacle collision avoidance method according to claim 1, wherein the image sensor is selected from one of the following: a charge-coupled device (CCD) camera, a complementary metal-oxide-semiconductor (CMOS) camera, a single-strip camera, and a camera on a handheld communication device.

22. The video-aware obstacle collision avoidance method of claim 1, wherein the system carrier is a vehicle.
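The arithmetic of claims 18 and 19 can be sketched in a few lines. The function name and the `record_threshold` default are assumptions for illustration; the patent only fixes the definitions of the safety factor, the alarm level, the speed product, and the recording trigger:

```python
def collision_strategy(relative_distance, safe_distance, current_speed,
                       record_threshold=0.5):
    """Sketch of the claim 18/19 anti-collision strategy.

    safety factor      = relative distance / safe distance, clamped to [0, 1]
    alarm level        = 1 - safety factor
    recommended speed  = current absolute speed * safety factor
    recording          = on when the safety factor drops below a constant
    """
    safety_factor = max(0.0, min(relative_distance / safe_distance, 1.0))
    alarm_level = 1.0 - safety_factor
    recommended_speed = current_speed * safety_factor
    recording_on = safety_factor < record_threshold
    return safety_factor, alarm_level, recommended_speed, recording_on
```

With a 15 m gap, a 50 m safe distance, and a 20 m/s current speed, the safety factor is 0.3, the alarm level 0.7, the recommended speed 6 m/s, and recording is switched on.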
23. A video-aware obstacle collision avoidance apparatus for use on a system carrier, comprising: an image sensor for capturing a plurality of images of an obstacle; and an arithmetic unit providing the following functions: (a) analyzing the plurality of images; (b) performing an obstacle identification process based on the analysis results of the plurality of images to determine whether the obstacle exists; and (c) performing an anti-collision strategy.

24. The video-aware obstacle collision avoidance apparatus according to claim 23, further comprising a warning unit whose warning is given by sound, light, or vibration.

25. The video-aware obstacle collision avoidance apparatus according to claim 23, wherein the image sensor is selected from one of the following: a charge-coupled device camera, a complementary metal-oxide-semiconductor camera, a single-strip camera, and a camera on a handheld communication device.

26. The video-aware obstacle collision avoidance apparatus according to claim 23, wherein the system carrier is a vehicle.

VII. Designated representative figure:
(1) The designated representative figure of this case is Figure 2.
(2) Brief description of the reference symbols of the representative figure:
10: method of obstacle collision avoidance with video perception
11-16: steps

VIII. If this case contains a chemical formula, please disclose the chemical formula that best characterizes the invention: (none)
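The two pixel-level tests used by the recognition process (the edge-point test of claim 9 and the daytime shadow Boolean of claim 10) can be sketched as below. The function names, the gray-level shadow test, and the `dark_threshold` parameter are assumptions for illustration; the patent fixes only the Euclidean-distance comparison and the Nshadow/Ld ≥ C7 ratio test:

```python
def is_edge_point(pixel, neighbor, threshold):
    """Claim 9 sketch: a pixel on a horizontal scan line is an edge point
    when its Euclidean distance to the neighboring pixel, computed over the
    RGB color gradations, exceeds a critical constant."""
    dist = sum((a - b) ** 2 for a, b in zip(pixel, neighbor)) ** 0.5
    return dist > threshold

def shadow_boolean(scanline, detection_length, dark_threshold, c7):
    """Claim 10 sketch (daytime rule): count the pixels in the detection
    interval whose gray level marks them as shadow, then compare the shadow
    ratio Nshadow / Ld against the critical constant C7."""
    n_shadow = sum(1 for g in scanline[:detection_length] if g < dark_threshold)
    return n_shadow / detection_length >= c7
```

For example, two RGB pixels (10, 10, 10) and (200, 200, 200) are about 329 gradation units apart, so with a threshold of 50 the first is flagged as an edge point; a scan interval whose shadow ratio reaches C7 sets the daytime Boolean true, which by claim 13 marks the obstacle as an object with dark pixels at its bottom.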
TW93135791A 2004-11-19 2004-11-19 Method and apparatus for obstacle avoidance with camera vision TWI253998B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
TW93135791A TWI253998B (en) 2004-11-19 2004-11-19 Method and apparatus for obstacle avoidance with camera vision
US11/260,723 US20060111841A1 (en) 2004-11-19 2005-10-27 Method and apparatus for obstacle avoidance with camera vision
JP2005332937A JP2006184276A (en) 2004-11-19 2005-11-17 All-weather obstacle collision preventing device by visual detection, and method therefor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW93135791A TWI253998B (en) 2004-11-19 2004-11-19 Method and apparatus for obstacle avoidance with camera vision

Publications (2)

Publication Number Publication Date
TWI253998B true TWI253998B (en) 2006-05-01
TW200616816A TW200616816A (en) 2006-06-01

Family

ID=37587177

Family Applications (1)

Application Number Title Priority Date Filing Date
TW93135791A TWI253998B (en) 2004-11-19 2004-11-19 Method and apparatus for obstacle avoidance with camera vision

Country Status (1)

Country Link
TW (1) TWI253998B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8244403B2 (en) 2007-11-05 2012-08-14 Industrial Technology Research Institute Visual navigation system and method based on structured light
TWI611961B (en) * 2016-06-01 2018-01-21 緯創資通股份有限公司 Device, method, and computer-readable medium for analyzing lane line image
TWI618647B (en) * 2016-02-02 2018-03-21 財團法人資訊工業策進會 System and method of detection, tracking and identification of evolutionary adaptation of vehicle lamp
TWI646306B (en) * 2017-12-15 2019-01-01 財團法人車輛研究測試中心 Method for analyzing error and existence probability of multi-sensor fusion of obstacle detection


Also Published As

Publication number Publication date
TW200616816A (en) 2006-06-01

Similar Documents

Publication Publication Date Title
WO2021226776A1 (en) Vehicle drivable area detection method, system, and automatic driving vehicle using system
CN108960183B (en) Curve target identification system and method based on multi-sensor fusion
CN110501018B (en) Traffic sign information acquisition method for high-precision map production
JP5820774B2 (en) Road boundary estimation apparatus and program
US7046822B1 (en) Method of detecting objects within a wide range of a road vehicle
JP6082802B2 (en) Object detection device
Kühnl et al. Spatial ray features for real-time ego-lane extraction
JP3049603B2 (en) 3D image-object detection method
JP2006184276A (en) All-weather obstacle collision preventing device by visual detection, and method therefor
CN107891808B (en) Driving reminding method and device and vehicle
CN106908783A (en) Obstacle detection method based on multi-sensor information fusion
JP4205825B2 (en) Object recognition device
US20050270286A1 (en) Method and apparatus for classifying an object
CN110531376A (en) Detection of obstacles and tracking for harbour automatic driving vehicle
CN101075376A (en) Intelligent video traffic monitoring system based on multi-viewpoints and its method
JP2007234019A (en) Vehicle image area specifying device and method for it
JP2007293627A (en) Periphery monitoring device for vehicle, vehicle, periphery monitoring method for vehicle and periphery monitoring program for vehicle
RU2635280C2 (en) Device for detecting three-dimensional objects
CN109635737A (en) Automobile navigation localization method is assisted based on pavement marker line visual identity
Deng et al. Semantic segmentation-based lane-level localization using around view monitoring system
CN108021849B (en) Pedestrian early warning method and device
JP3456339B2 (en) Object observation method, object observation device using the method, traffic flow measurement device and parking lot observation device using the device
Janda et al. Road boundary detection for run-off road prevention based on the fusion of video and radar
CN112749584B (en) Vehicle positioning method based on image detection and vehicle-mounted terminal
DE102021132199A1 (en) Determining object mobility parameters using an object sequence

Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees