JP4940177B2 - Traffic flow measuring device

Publication number: JP4940177B2
Application number: JP2008089182A (filed by Hitachi Ltd)
Also published as: JP2009245042A (application publication, Japanese)
Inventors: 竜 弓場, 一哉 高橋, 忠明 北村, 都 堀田, 茂寿 崎村, 徹也 山崎
Original and current assignee: Hitachi Ltd
Legal status: Expired - Fee Related (the listed status is an assumption and is not a legal conclusion)


Description

The present invention relates to a traffic flow measuring device that detects and tracks vehicles on a road to measure quantities such as vehicle count and speed.

A traffic flow measuring device is known that captures a bird's-eye video of a road and its vehicles with a camera mounted on a roadside pole and, by image processing, detects and then tracks each vehicle to measure the passing count and speed per lane. A vehicle detected in the camera image is mapped to a spatial coordinate system by a projective transformation that reflects the camera parameters, for example the camera height, pitch angle (depression angle), roll angle (rotation about the lens optical axis), yaw angle (angular offset between the road direction and the camera direction), and focal length (related to the camera zoom); converting the detected position and displacement of the vehicle on the image into road coordinates yields the lane assignment and the passing speed. Since the image coordinate system is two-dimensional while the spatial coordinate system is three-dimensional, the conversion from image to road requires a fixed assumption, for example that the height of the tracked position of the vehicle on the image is 0 m.

Among disclosed traffic measurement techniques, Patent Document 1 uses as a vehicle feature the rows of horizontal edges that appear at the front and rear contours of a vehicle and at boundaries such as the line between the roof and the windshield or rear-window frame: the image is divided into a grid of small blocks, and a connected region of blocks in which the horizontal edge content of the moving region exceeds a threshold is detected as a vehicle. Patent Document 2 represents the per-lane variation in vehicle appearance and shape with a single distribution model, and detects individual vehicles by convolving that model with the difference region between a background image and the captured image.

Patent Document 1: JP 2002-32747 A
Patent Document 2: JP 2001-118182 A
Patent Document 3: Japanese Patent No. 3435623
Patent Document 4: JP 2001-6089 A

Because Patent Document 1 takes a connected region of small blocks with high horizontal edge density as a vehicle, it is difficult to detect individual vehicles when vehicles in adjacent lanes, or successive vehicles in the same lane, appear to overlap. FIG. 4 is an example image looking down on a road from a camera mounted on a roadside pole; vehicles proceeding from the bottom to the top of the image are seen from behind. In FIG. 4, a medium-size bus 2 followed by a van 3 travel in the first lane, and a sedan 3 travels in the second lane. Since, in the measurement of the second lane in FIG. 4, the part of the medium-size bus 2 protruding into that lane apparently overlaps the sedan 3, the technique of Patent Document 1 has difficulty detecting the individual vehicles accurately and cannot measure an accurate count; this is the first problem.

Also, because Patent Document 1 takes a connected region of small blocks with high horizontal edge density as a vehicle, it is difficult to suppress false detections of roof and rear edges on vehicles whose roof and rear carry complex patterns, such as a large truck loaded with cargo; this is the second problem. When roof or rear edges are detected and tracked in this way, the speed accuracy of the traffic flow measuring device deteriorates.

FIG. 23 illustrates why the speed accuracy degrades when the roof or rear of a vehicle is tracked. In FIG. 23, 80 is the camera, VP the camera viewpoint, 85 the vehicle at time t, 81 the tracked position at the tail of vehicle 85, 82 the intersection of the ground plane with the line through VP and 81, 95 the vehicle at time t+Δt, 91 the tracked position at the tail of vehicle 95, and 92 the intersection of the ground plane with the line through VP and 91. The displacement in space from 81 to 91 between time t and time t+Δt is Lt, and the apparent displacement from 81 to 91 when the tracked position is assumed to be at height 0 m is La. Since the heights of 81 and 91 are close to 0 m, the difference between La and Lt is small, and the apparent speed La/Δt, obtained by dividing the displacement by the time difference Δt, is close to the actual speed Lt/Δt, so the accuracy is high.

On the other hand, in FIG. 23, 83 is the tracked position near the roof of vehicle 85 at time t, 84 the intersection of the ground plane with the line through VP and 83, 93 the tracked position near the roof of vehicle 95 at time t+Δt, 94 the intersection of the ground plane with the line through VP and 93, H the height of 83 and 93, θ the depression angle between the vertical direction and the line from VP through 83 and 84, and θ+Δθ the depression angle between the vertical direction and the line from VP through 93 and 94. Because of the height of vehicles 85 and 95, point 84 lies ahead of 83 by (Equation 7), and point 94 lies ahead of 93 by (Equation 8). Hence the apparent displacement Lb from 83 to 93, obtained under the assumption that the tracked position is at height 0 m, exceeds the actual displacement Lt by (Equation 9). The apparent speed Lb/Δt, obtained by dividing the displacement by the time difference Δt, is therefore larger than the actual speed Lt/Δt by (Equation 9) divided by Δt, and the accuracy decreases. Since (Equation 9) is proportional to H, the error in the apparent speed grows as the tracked position 83 captures a taller vehicle.

(Equation 7): H·tanθ

(Equation 8): H·tan(θ+Δθ)

(Equation 9): H·{tan(θ+Δθ) − tanθ}
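The overestimation described above can be checked numerically. The sketch below implements (Equation 9) divided by Δt under assumed camera geometry; all numeric values are hypothetical and chosen only for illustration.

```python
import math

def apparent_speed_error(H, theta, d_theta, dt):
    """Extra apparent speed when a tracked point at height H is
    back-projected onto the ground plane (Equation 9 divided by dt).
    theta: angle from vertical of the viewing ray at time t (radians);
    d_theta: its increase over the interval dt (seconds)."""
    extra = H * (math.tan(theta + d_theta) - math.tan(theta))  # Equation 9
    return extra / dt

# Hypothetical numbers: a roof point 3 m high, viewing ray 60 degrees
# from vertical opening to 65 degrees over 0.5 s.
err = apparent_speed_error(3.0, math.radians(60), math.radians(5), 0.5)
```

Doubling H doubles the error, which matches the statement that a taller tracked position yields a larger apparent-speed error.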

Since Patent Document 2 detects vehicles using a distribution model, false detections from the overlap of vehicles in adjacent lanes and from a vehicle's roof or rear are suppressed; however, because a single distribution model represents every shape from large vehicles down to ordinary cars, it remains difficult to detect individual vehicles when vehicles overlap front-to-back in the image. Vehicle sizes vary widely: large vehicles range from about 6 m to 12 m in length, while ordinary cars range from about 3 m to about 5 m. Consequently, when, as in FIG. 4, the van 3 and the medium-size bus 2 travel close together in the same lane, it is difficult to prevent the two vehicles from being misdetected as a single large vehicle; this is the third problem.

The first, second, and third problems described above appear most prominently in congested traffic, when the density of vehicles in the image is high.

To solve the above problems, the traffic flow measuring device of the present invention comprises: image input means for acquiring images captured by a roadside camera; coordinate conversion means for converting between image coordinates and road coordinates; horizontal edge detection means for extracting horizontal edges from the captured image; tail candidate detection means for detecting, among the horizontal edges extracted by the horizontal edge detection means, the horizontal edge corresponding to the tail of a rectangular parallelepiped model representing the vehicle shape; vehicle height/length range estimation means for estimating, when the width of that tail horizontal edge is taken as the vehicle width, the vehicle height and vehicle length from that width; pair edge detection means for detecting the horizontal edges corresponding to the upper rear edge and the front roof edge of the rectangular parallelepiped model dimensioned by the estimated height, the estimated length, and the width of the tail edge; vehicle tracking means for tracking the vehicle; and traffic index calculation means for measuring the count and speed of the tracked vehicles.

Further, to solve the above problems, the traffic flow measuring device of the present invention detects and tracks a vehicle on the condition that, when a horizontal edge of vehicle width among those extracted from the camera image is assumed to be the tail edge of the rectangular parallelepiped model, edges corresponding to the upper rear edge and the front roof edge of the model are found within the image regions implied by the vehicle width converted from that horizontal edge and by the vehicle height and length ranges estimated from that width.

Also, to solve the above problems, in the traffic flow measuring device of the present invention, the ranges of the vehicle height and the vehicle length are obtained from upper and lower bounds on the ratios of the vehicle height and the vehicle length to the vehicle width.

Also, to solve the above problems, in the traffic flow measuring device of the present invention, the ranges of the vehicle height and the vehicle length are obtained from characteristic curves giving the upper and lower bounds of the ratios of the vehicle height and the vehicle length to the vehicle width.

Also, to solve the above problems, in the traffic flow measuring device of the present invention, the region at the current time of a rectangular parallelepiped model extracted before the current time is computed, and when a rectangular parallelepiped model extracted at the current time contains that region, tracking of the model extracted before the current time is discontinued.

Also, to solve the above problems, in the traffic flow measuring device of the present invention, when an arbitrary vehicle detection means operates concurrently and the region detected by that means at the current time lies inside the rectangular parallelepiped model, tracking based on that detection is discontinued.

According to the traffic flow measuring device of the present invention, vehicle count and speed can be measured with high accuracy even in congested traffic with high vehicle density.

Specific embodiments of the traffic flow measuring device according to the present invention will now be described with reference to the drawings. In the present embodiments an automobile is used as an example of a vehicle, but the "vehicle" according to the invention is not limited to automobiles and includes every kind of moving body that travels on a road.

FIG. 1 is a block diagram outlining the functional configuration of Embodiment 1 of the present invention. The following description covers the functions for the nearest (first) lane, but the invention can handle multiple lanes by applying the same processing repeatedly to the other lanes.

In FIG. 1, the image input means 11 can use either live footage from a roadside camera or recorded footage; it captures road images at a frame rate between about 60 and 5 frames per second and outputs them to the vehicle-rear narrowing means 12 and the vehicle tracking means 16.

FIG. 4 shows an example of an image captured by the image input means 11, in which 1 is a detection region provided at the bottom of the image. The detection region 1 is set so that at least a vehicle of the maximum width, height, and length defined by the construction gauge (clearance limit) fits within it.

When the image captured by the image input means 11 is tilted as a whole, the image input means 11 may apply a two-dimensional rotation or similar transformation so that the whole image, or the horizontal parts of vehicles near the detection region 1, become close to horizontal in the image.

In FIG. 1, the horizontal edge detection means follows the flow of FIG. 5: it applies a horizontal Sobel filter to the image inside the detection region 1 to obtain the intensity gradient associated with horizontal edges (S101), then binarizes the gradient with a predetermined threshold (S102). Next it computes the frame difference between the stored history frame and the current frame inside the detection region 1 (S103) and binarizes the difference with a predetermined threshold (S104); the buffer holding the history-frame image is updated for processing of subsequent frames. It then outputs as horizontal edges the AND of the binary image of S102 and the binary image of S104 (S105). In the image from the image input means 11, intensity gradients arise at internal horizontal parts of a vehicle such as its front and rear contours and the borders between the roof and the windshield or rear-window frame, and near a vehicle the brightness changes between frames over a range that depends on its speed, so the horizontal edges of S105 concentrate on vehicle contours and horizontal parts. FIG. 5(a) shows the image of the detection region 1 of the first lane from FIG. 4, and in FIG. 5(b), 20 denotes examples of horizontal edges detected by the horizontal edge detection means 12. Among the horizontal edges 20, 20a corresponds to the bumper of the medium-size bus 2, 20b and 20c to its left and right lights, 20d and 20e to the top and bottom of its rear-window frame, 20f to the boundary between its roof and rear, and 20g to the front end of its roof.
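The S101-S105 flow above can be sketched with NumPy. The Sobel kernel, thresholds, and array shapes below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def horizontal_edges(curr, prev, grad_thresh, diff_thresh):
    """Sketch of S101-S105: Sobel response to horizontal edges,
    binarize, then AND with the binarized frame difference.
    curr, prev: 2-D grayscale float arrays of equal shape."""
    # Sobel kernel that responds to horizontal lines (vertical gradient).
    k = np.array([[-1, -2, -1],
                  [ 0,  0,  0],
                  [ 1,  2,  1]], dtype=float)
    pad = np.pad(curr, 1, mode="edge")
    grad = np.zeros_like(curr)
    for dy in range(3):                               # S101: convolution
        for dx in range(3):
            grad += k[dy, dx] * pad[dy:dy + curr.shape[0],
                                    dx:dx + curr.shape[1]]
    edge_bin = np.abs(grad) >= grad_thresh            # S102: binarize gradient
    motion_bin = np.abs(curr - prev) >= diff_thresh   # S103-S104: frame diff
    return edge_bin & motion_bin                      # S105: AND
```

For example, a bright horizontal stripe that shifts by one row between frames produces edge pixels only along its moving boundary.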

In FIG. 1, the coordinate conversion means 18 converts between image coordinates and road coordinates. FIG. 2 is a model diagram of the pinhole camera assumed by the coordinate conversion means 18: VP is the camera viewpoint, and the distance F between VP and the image plane is the focal length. (In reality the camera height is several metres above the road and the focal length F is several tens of millimetres; the figure is exaggerated for explanation.) In FIG. 2 the image coordinate system has its x axis pointing right, its y axis pointing up, and its origin o at the image centre. The spatial coordinate system has its X axis across the road, its Y axis along the road, its Z axis vertically upward, and its origin at the intersection O of the ground plane with the line through VP and o. The point p on the image corresponding to an arbitrary point P in space is the intersection of the image plane with the line joining P and VP. As in FIG. 3(a), the coordinate conversion means 18 computes by the known projective transformation the image coordinates (x, y) of the point p corresponding to the spatial coordinates (X, Y, Z) of an arbitrary point P. Under the projective transformation, the road point corresponding to an image point p is indeterminate along the line joining VP and p, as with the points P and P′ in FIG. 2; but when the Z (height) of the corresponding point is specified, as in FIG. 3(b), the spatial coordinates (X, Y, Z) of the point P corresponding to the image coordinates (x, y) of p are determined uniquely. As shown in FIG. 3(c), the coordinate conversion means 18 can also obtain, from the image coordinates of two points p1 and p2 that share a common Z, the spatial coordinates of the corresponding points P1 and P2, and from the Euclidean distance between those spatial coordinates the spatial distance between p1 and p2. Further, as shown in FIG. 3(d), when the image coordinates of a point p1, the Z coordinate of its corresponding spatial point, and the displacement of a point P2 from P1 are known, the coordinate conversion means 18 can obtain the image coordinates of the corresponding point p2 by computing in turn the spatial coordinates of P1 and of P2. The data needed for the projective transformation of the coordinate conversion means 18 are obtained before signal processing starts: either from the position, orientation, and focal length F of the camera from which the image input means 11 acquires images, expressed in spatial coordinates, or from six or more pairs of corresponding points with known coordinates in space and in the image. Approximate sources such as design information may also be used to obtain the data needed for the projective transformation.
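The conversions of FIG. 3 can be sketched with a simplified pinhole model. The code below assumes roll = yaw = 0, a focal length in pixel units, and illustrative function names; it is a sketch of the geometry, not the patent's implementation.

```python
import numpy as np

def make_camera(height, pitch, F):
    """Simplified pinhole camera: mounted at (0, 0, height), optical
    axis depressed by `pitch` radians, roll = yaw = 0 (assumed)."""
    c, s = np.cos(pitch), np.sin(pitch)
    # Rows: image x axis, image y axis, optical axis, in world coords.
    R = np.array([[1.0, 0.0, 0.0],
                  [0.0,   s,   c],
                  [0.0,   c,  -s]])
    return {"C": np.array([0.0, 0.0, height]), "R": R, "F": F}

def world_to_image(cam, P):
    """FIG. 3(a): project a spatial point (X, Y, Z) to image (x, y)."""
    v = cam["R"] @ (np.asarray(P, float) - cam["C"])
    return cam["F"] * v[0] / v[2], cam["F"] * v[1] / v[2]

def image_to_world(cam, x, y, Z=0.0):
    """FIG. 3(b): back-project an image point onto the plane of given
    height Z (the 'assume height 0 m' step in the text)."""
    d = cam["R"].T @ np.array([x, y, cam["F"]])
    t = (Z - cam["C"][2]) / d[2]
    return cam["C"] + t * d

def ground_distance(cam, p1, p2, Z=0.0):
    """FIG. 3(c): spatial distance between two image points assumed
    to share the same height Z (used later for edge widths)."""
    P1 = image_to_world(cam, p1[0], p1[1], Z)
    P2 = image_to_world(cam, p2[0], p2[1], Z)
    return float(np.linalg.norm(P1 - P2))
```

A round trip (project a ground point, then back-project it with Z = 0) recovers the original point, and two image points projected from ground points 2 m apart yield a ground distance of 2 m.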

In FIG. 1, the tail candidate detection means 13 follows the flow of FIG. 6 to detect, among the horizontal edges 20, the tail-candidate horizontal edge corresponding to the tail edge 41 obtained when the rectangular parallelepiped model 40 is fitted to the vehicle shape as in FIG. 13(a). In FIG. 6, the tail candidate detection means 13 loops over the horizontal edges 20 (S120 to S127). It first determines whether the left and right ends of horizontal edge 20[I] lie within the lane in the image (S121). If the determination of S121 is Y, it obtains the spatial width of horizontal edge 20[I] from the image coordinates of its left and right ends under the assumption that its Z coordinate is 0 m, using the coordinate conversion of the coordinate conversion means 18 shown in FIG. 3(c) (S122). If this width is at least a threshold based on the minimum vehicle width (S123), then, so that the lowest of the horizontal edges 20 satisfying S123 becomes the tail candidate, horizontal edge 20[I] is registered as the tail candidate (S126) when no tail candidate is registered yet (Y in S124) or when it lies lower in the image than the registered tail candidate (N in S124 and Y in S125). FIG. 7(b) shows an example in which the tail candidate detection means 13 extracts the tail candidate 21; the tail candidate 21 coincides with the horizontal edge 20a corresponding to the bumper of the medium-size bus 2 in FIG. 7(c).
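The S120-S127 loop can be sketched as below. The edge records and the precomputed `ground_width` field (the width after the Z = 0 m coordinate conversion) are illustrative simplifications, and image y is assumed to grow downward in the usual pixel convention, so "lowest in the image" means largest y.

```python
def find_tail_candidate(edges, lane_x_range, min_width):
    """Sketch of S120-S127. edges: list of dicts with image 'y',
    'x_left', 'x_right', and 'ground_width' in metres (assumed to be
    precomputed via the Z = 0 m conversion). Returns the lowest edge
    inside the lane whose ground width could be a vehicle width."""
    tail = None
    for e in edges:
        if not (lane_x_range[0] <= e["x_left"]
                and e["x_right"] <= lane_x_range[1]):
            continue                        # S121: outside the lane
        if e["ground_width"] < min_width:   # S123: too narrow for a vehicle
            continue
        if tail is None or e["y"] > tail["y"]:  # S124-S125: keep the lowest
            tail = e
    return tail
```

For example, among two sufficiently wide edges the one lower in the image wins, while an edge narrower than the minimum vehicle width is skipped.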

In FIG. 1, the vehicle height/length range estimation means 14 estimates the ranges of the vehicle height H and vehicle length L from the width W of the tail candidate 21 in spatial coordinates. FIG. 8 lists samples of the width W, height H, and length L of various vehicle types: one-box, RV, sedan, sports car, compact, kei car, kei truck, small truck, medium truck, and large truck. FIG. 8 shows that the kei car, which has the smallest width W, also has the smallest height H and length L. On the other hand, the kei truck has the same width as the kei car, but its height is greater than that of a sedan or compact car and comparable to a one-box. Thus FIG. 8 shows that width W, height H, and length L are correlated, and that the correlation is not unique but has a spread. FIGS. 9(a) and 9(b) are graphs of the ratios H/W and L/W computed from the data of FIG. 8; regardless of W, H/W is distributed between a lower bound α and an upper bound β, and L/W between a lower bound μ and an upper bound ν. Therefore, when the width W is known, the height lies in the range of (Equation 1) and the length in the range of (Equation 2). The vehicle height/length range estimation means 14 obtains the spatial width W of the tail candidate 21 from the image coordinates of its left and right ends under the assumption that its Z coordinate is 0 m, by the processing of the coordinate conversion means 18 shown in FIG. 3(c), and estimates the ranges of the height H and length L of the rectangular parallelepiped model from W using (Equation 1) and (Equation 2).

(Equation 1): αW ≤ H ≤ βW

(Equation 2): μW ≤ L ≤ νW
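Equations 1 and 2 amount to scaling the measured width by ratio bounds. The bound values below are hypothetical placeholders; in practice they would be read off distributions like those of FIG. 9.

```python
# Hypothetical ratio bounds (alpha, beta for H/W; mu, nu for L/W).
ALPHA, BETA = 0.8, 2.0
MU, NU = 2.0, 5.0

def height_length_range(W):
    """Equations 1 and 2: ranges of height H and length L implied by
    a measured vehicle width W (metres). Returns ((H_lo, H_hi),
    (L_lo, L_hi))."""
    return (ALPHA * W, BETA * W), (MU * W, NU * W)
```

For a measured width of 2.0 m these bounds give a height range of 1.6-4.0 m and a length range of 4.0-10.0 m.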

When the tail candidate 21 is assumed to correspond to the tail edge 41 of the vehicle's rectangular parallelepiped model 40, the pair edge detection means 15 detects, by the flows of FIGS. 10 and 11, the horizontal edges corresponding to the upper rear edge 42 and the front roof edge 43 of the model 40.

ペアエッジ検出手段15はまず図10(a)に示すフローによって、画像上における背面上の辺42の存在範囲を求める。図10(a)に於いてペアエッジ検出手段15は末尾候補21のZを0mと置いたときに空間中において末尾候補の中央に対応する点Pの座標(S150)、点PからZ軸方向に車高の下限αWだけ離れた点Pαの空間座標(S151)、点Pαの画像上の対応点pαの画像y座標yαを求める(S152)。同様に、点PからZ軸方向に車高の上限βWだけ離れた点Pβの空間座標(S153)および点Pβの画像上の対応点pβの画像y座標yβを求める(S154)。そして、背面上の辺42の画像y座標は[yα,yβ]にあるとみなす。次にペアエッジ検出手段15は図10(b)に示すフローによって、画像上における屋根前の辺43の存在範囲を求める。ペアエッジ検出手段15は、S150の点PからZ軸方向に車高の下限αWかつY軸方向に車長の下限μWだけ移動した点Pμの空間座標(S155)から点Pμの画像上の対応点pμの画像y座標yμを計算する(S156)とともに、点PからZ軸方向に車高の上限βWかつY軸方向に車長の上限νWだけ移動した点Pνの空間座標(S157)から画像上の対応点pνの画像y座標yνを求め(S158)、屋根前の辺43の画像y座標は[yμ,yν]にあるとみなす。 First, following the flow of FIG. 10(a), the pair edge detection means 15 obtains the range on the image where the rear-top side 42 can exist. In FIG. 10(a), the pair edge detection means 15 computes the spatial coordinates of the point P corresponding to the center of the tail candidate 21 with its Z set to 0 m (S150), the spatial coordinates of the point Pα offset from P in the Z-axis direction by the vehicle-height lower bound αW (S151), and the image y coordinate yα of the point pα corresponding to Pα on the image (S152). Likewise it computes the spatial coordinates of the point Pβ offset from P in the Z-axis direction by the vehicle-height upper bound βW (S153) and the image y coordinate yβ of the corresponding image point pβ (S154). The image y coordinate of the rear-top side 42 is then taken to lie in [yα, yβ]. Next, following the flow of FIG. 10(b), the pair edge detection means 15 obtains the range on the image where the roof-front side 43 can exist. From the spatial coordinates (S155) of the point Pμ obtained by moving the point P of S150 by the vehicle-height lower bound αW in the Z-axis direction and the vehicle-length lower bound μW in the Y-axis direction, it computes the image y coordinate yμ of the corresponding image point pμ (S156); from the spatial coordinates (S157) of the point Pν obtained by moving P by the vehicle-height upper bound βW in the Z-axis direction and the vehicle-length upper bound νW in the Y-axis direction, it computes the image y coordinate yν of the corresponding image point pν (S158). The image y coordinate of the roof-front side 43 is taken to lie in [yμ, yν].
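The y-range computation of FIG. 10 can be sketched as follows. This is a minimal illustration, not the patented implementation: it stands in for the coordinate conversion means 18 with a simplified pinhole camera (zero roll and yaw) whose height, pitch, and focal length are illustrative values chosen here.

```python
import numpy as np

def project_to_image_y(point_xyz, cam_height=10.0, pitch_deg=45.0,
                       f_pix=400.0, cy=240.0):
    """Image y coordinate of a road-coordinate point (X, Y, Z in metres).

    Simplified stand-in for the coordinate conversion means 18: a pinhole
    camera at height cam_height above the road origin, pitched down by
    pitch_deg, with zero roll and yaw. All parameter values are illustrative.
    """
    _, Y, Z = point_xyz
    pitch = np.deg2rad(pitch_deg)
    # Depth along the optical axis and vertical offset from it.
    depth = Y * np.cos(pitch) + (cam_height - Z) * np.sin(pitch)
    vert = (cam_height - Z) * np.cos(pitch) - Y * np.sin(pitch)
    return cy + f_pix * vert / depth

def back_edge_y_range(p_tail, alpha_w, beta_w):
    """S150-S154: image y range [y_alpha, y_beta] of the rear-top side 42,
    given the tail centre P (Z taken as 0 m) and the vehicle-height bounds
    alpha_w < beta_w derived from the width W."""
    x, y, _ = p_tail
    y_alpha = project_to_image_y((x, y, alpha_w))  # lower height bound
    y_beta = project_to_image_y((x, y, beta_w))    # upper height bound
    return tuple(sorted((y_alpha, y_beta)))
```

The roof-front range [yμ, yν] of S155-S158 follows the same pattern, with the projected point additionally offset along the road Y axis by the vehicle-length bounds μW and νW.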

次に、ペアエッジ検出手段15は図11に示すフローにより、水平エッジ20の中から(S130からS135)、y座標が図10(a)のフローで求めた背面上の辺42の範囲にあり(S131でY)かつ図12(a)に図示する水平エッジ20と末尾候補21の画像x軸方向の重なり30が所定のしきい値T以上ならば(S132でY)、水平エッジ20[I]を背面上の辺に相当する背面上ペアとみなし、背面上ペアのフラグを1にする(S133)。また、背面上ペアの条件を満たす水平エッジ20[I]のうち、画像上において一番上にあるものを更新する(S134)。なお、S132の判定のしきい値Tは、サイドのピラーが傾いたセダン車やまるみを帯びた車両のように、実際の車両の形状が直方体モデルからずれる分および水平エッジ20の端が撮像系のノイズや照明変動により欠落することを考慮して、末尾候補21の幅に1より小さく0より大きな所定の比率を乗じて求める。 Next, following the flow shown in FIG. 11, the pair edge detection means 15 examines each horizontal edge 20 (S130 to S135): if its y coordinate lies within the range of the rear-top side 42 obtained by the flow of FIG. 10(a) (Y in S131), and the overlap 30 along the image x axis between the horizontal edge 20 and the tail candidate 21, illustrated in FIG. 12(a), is at least a predetermined threshold T (Y in S132), the horizontal edge 20[I] is regarded as a rear-top pair corresponding to the rear-top side, and its rear-top pair flag is set to 1 (S133). Among the horizontal edges 20[I] satisfying the rear-top pair condition, the one highest on the image is kept updated (S134). The threshold T for the decision in S132 is obtained by multiplying the width of the tail candidate 21 by a predetermined ratio greater than 0 and smaller than 1, allowing for the deviation of actual vehicle shapes from the cuboid model (such as sedans with slanted side pillars or rounded vehicles) and for the ends of the horizontal edge 20 being lost to imaging noise or illumination changes.
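The S132 overlap test above can be sketched in a few lines. The ratio value 0.6 is only an illustrative choice; the patent requires only that it lie strictly between 0 and 1.

```python
def x_overlap(a, b):
    """Length of the overlap of two image x intervals (left, right)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def is_rear_top_pair(edge_x, tail_x, ratio=0.6):
    """S132 sketch: a horizontal edge qualifies as a rear-top pair candidate
    when its x overlap with the tail candidate 21 is at least
    T = ratio * (tail width), with 0 < ratio < 1 (0.6 is illustrative)."""
    t = ratio * (tail_x[1] - tail_x[0])
    return x_overlap(edge_x, tail_x) >= t
```

For example, an edge spanning x = 10..90 overlaps a tail candidate spanning 0..100 by 80 pixels, which exceeds T = 60 and is accepted.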

図7(d)は背面上ペアの抽出の一例であり、25は図10(a)のフローで求めた背面上の辺42の画像y座標の範囲、22は背面上ペアである。背面上ペア22は、図5(b)において屋根と背面の境界に対応した水平エッジ20eおよびリアガラス窓枠の上端に対応した水平エッジ20fと一致している。 FIG. 7(d) is an example of extraction of the rear-top pair: 25 is the range of the image y coordinate of the rear-top side 42 obtained by the flow of FIG. 10(a), and 22 is the rear-top pair. The rear-top pair 22 coincides with the horizontal edge 20e corresponding to the boundary between the roof and the rear surface and the horizontal edge 20f corresponding to the upper end of the rear window frame in FIG. 5(b).

S135の後ペアエッジ検出手段15は水平エッジ20の中から(S140からS145)、y座標が図10(b)で求めた屋根前の辺43の範囲にあるときには(S141でY)、水平エッジ20[I]を道路Y軸方向に沿って背面上の辺42まで移動したときに画像x軸方向の重なりがしきい値T以上であれば(S142でY)、水平エッジ20[I]を屋根前の辺43に相当した水平エッジとして屋根前ペアのフラグを1にする(S143)。S142のしきい値Tは、S132のしきい値Tと同一である。また、屋根前ペアの条件を満たす水平エッジ20[I]のうち、画像上において一番上にあるものを更新する(S144)。   After S135, the pair edge detection means 15 examines each horizontal edge 20 (S140 to S145): when its y coordinate lies within the range of the roof-front side 43 obtained in FIG. 10(b) (Y in S141), and the overlap along the image x axis when the horizontal edge 20[I] is moved along the road Y-axis direction to the rear-top side 42 is at least the threshold T (Y in S142), the horizontal edge 20[I] is taken as the horizontal edge corresponding to the roof-front side 43 and its roof-front pair flag is set to 1 (S143). The threshold T of S142 is the same as that of S132. Among the horizontal edges 20[I] satisfying the roof-front pair condition, the one highest on the image is kept updated (S144).

図12(b)を用いてS142を補足すると、25は背面上の辺42の画像y座標の範囲である。32および33は図13(a)に示す直方体モデル40の屋根の左右の辺45および46に相当する直線であり、水平エッジ20[I]の左端の点lおよび右端の点rの空間での対応点のZ座標を近似的に(数1)の下限と(数2)の上限の平均(α+β)/2×Wとしたときの空間中の対応する点Lおよび点Rを通り空間Y軸方向と平行な直線の座標変換手段18による画像上の像である。35は点lおよび点rを直線32および33に沿って同一y座標に移動した点mおよび点nを結ぶ線分、31は末尾候補21と線分35の画像x軸方向の重なりである。S142では車高が正確にはわからないものとして範囲25内でy座標を変えながら線分35および重なり31を求め、重なり31の最大値がしきい値Tを超えれば、判定をYとする。図7(e)は屋根前ペアの抽出の一例であり、26は図10(b)のフローで求めた屋根前の辺43の画像y座標の範囲、23は屋根前ペアである。屋根前ペア23は、図5(b)において中型バス2の屋根の前端に対応した水平エッジ20gと一致している。   Supplementing S142 with FIG. 12(b): 25 is the range of the image y coordinate of the rear-top side 42. 32 and 33 are straight lines corresponding to the left and right roof sides 45 and 46 of the cuboid model 40 shown in FIG. 13(a): each is the image, produced by the coordinate conversion means 18, of the straight line parallel to the spatial Y axis passing through the spatial point L or R corresponding to the left end l or right end r of the horizontal edge 20[I], where the Z coordinate of the corresponding spatial point is approximated as the average (α+β)/2×W of the lower bound of (Eq. 1) and the upper bound of (Eq. 2). 35 is the segment joining the points m and n obtained by moving the points l and r along the lines 32 and 33 to the same y coordinate, and 31 is the overlap along the image x axis between the tail candidate 21 and the segment 35. Since the vehicle height is not known exactly, S142 computes the segment 35 and the overlap 31 while varying the y coordinate within the range 25, and the decision is Y if the maximum of the overlap 31 exceeds the threshold T. FIG. 7(e) is an example of extraction of the roof-front pair: 26 is the range of the image y coordinate of the roof-front side 43 obtained by the flow of FIG. 10(b), and 23 is the roof-front pair. The roof-front pair 23 coincides with the horizontal edge 20g corresponding to the front end of the roof of the medium-sized bus 2 in FIG. 5(b).
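The sliding-overlap test of S142 can be sketched as below. In the patent the side lines 32 and 33 come from the coordinate transform at the mean height (α+β)/2×W; in this simplified sketch their image-space slopes are supplied directly as assumed inputs.

```python
def roof_front_matches(edge, side_slopes, tail_x, back_y_range, ratio=0.6):
    """S142 sketch: slide the roof-front edge candidate to each image y in
    the rear-top range 25 along the roof side lines 32 and 33, and accept
    it when the maximum x overlap of the shifted segment 35 with the tail
    candidate 21 reaches T = ratio * (tail width).

    edge: (x_left, x_right, y) of the candidate horizontal edge.
    side_slopes: image-space dx/dy of lines 32 and 33 (assumed inputs here).
    """
    xl, xr, y0 = edge
    sl, sr = side_slopes
    need = ratio * (tail_x[1] - tail_x[0])
    best = 0.0
    y_lo, y_hi = back_y_range
    for y in range(int(y_lo), int(y_hi) + 1):
        dy = y - y0
        ml, mr = xl + sl * dy, xr + sr * dy  # points m and n (segment 35)
        best = max(best, max(0.0, min(mr, tail_x[1]) - max(ml, tail_x[0])))
    return best >= need
```

Slopes that spread the segment toward the tail candidate as it slides down raise the overlap; slopes that shrink it lower the overlap, which is how perspective is accounted for.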

ペアエッジ検出手段15はS133およびS144で背面上ペアのフラグおよび屋根前ペアのフラグが1になったときには、末尾候補21,背面上ペア22,屋根前ペア23の3つの水平エッジ20が直方体モデル40に適合するとして、図13(b)のように末尾候補21,背面上ペア22,屋根前ペア23を3辺とする直方体モデル40を車両の領域とする。なお、背面上ペア22あるいは屋根前ペア23が複数ある場合には、S134あるいはS144で求めた画像上で一番上にある背面上ペア22および屋根前ペア23を直方体モデル40の辺とする。以上説明した処理によってペアエッジ検出手段15は、中型バス2の周囲のバン3やセダン車4の水平エッジ20、および中型バス2の内部の水平エッジ20の影響を受けることなく、頑強に中型バス2に適合した直方体モデル40を抽出することができる。   When the rear-top pair flag and the roof-front pair flag become 1 in S133 and S144, the pair edge detection means 15 judges that the three horizontal edges 20 (the tail candidate 21, the rear-top pair 22, and the roof-front pair 23) fit the cuboid model 40, and, as in FIG. 13(b), takes the cuboid model 40 having the tail candidate 21, the rear-top pair 22, and the roof-front pair 23 as three of its sides to be the vehicle region. When there are multiple rear-top pairs 22 or roof-front pairs 23, the ones highest on the image, obtained in S134 or S144, are used as the sides of the cuboid model 40. By the processing described above, the pair edge detection means 15 can robustly extract the cuboid model 40 fitted to the medium-sized bus 2 without being affected by the horizontal edges 20 of the van 3 and the sedan 4 around the medium-sized bus 2 or by the horizontal edges 20 inside the medium-sized bus 2.

図14(a)は第2車線を対象としたときの検知領域1内の映像であり、検知領域1にはセダン車4および第1車線からはみ出した中型バス2が存在している。図14(a)において、セダン車4の側面は第1車線からはみ出した中型バス2により遮蔽されている。図14(b)は図14(a)から12,13,14,15の機能により抽出した結果であり、水平エッジ20,末尾候補21,背面上の辺42の画像y座標の範囲25,背面上ペア22を示している。図14(c)は図14(a)から14,15の機能により抽出した結果であり、屋根前の辺43の画像y座標の範囲26,屋根前ペア23を示している。図14(d)は図14(a)から抽出した末尾候補21,背面上ペア22,屋根前ペア23により構成する直方体モデル40を示している。図14(b)と図14(c)はセダン車4より抽出された末尾候補21の幅に応じた範囲から背面上ペア22および屋根前ペア23が抽出された例、図14(d)は第1車線からはみ出した中型バス2の水平エッジ20の影響を受けず、セダン車4の直方体モデル40が抽出された例を示している。   FIG. 14(a) is the image within the detection region 1 when the second lane is targeted; the detection region 1 contains the sedan 4 and the medium-sized bus 2 protruding from the first lane. In FIG. 14(a), the side surface of the sedan 4 is occluded by the medium-sized bus 2 protruding from the first lane. FIG. 14(b) is the result extracted from FIG. 14(a) by the functions 12, 13, 14, and 15, showing the horizontal edges 20, the tail candidate 21, the range 25 of the image y coordinate of the rear-top side 42, and the rear-top pair 22. FIG. 14(c) is the result extracted from FIG. 14(a) by the functions 14 and 15, showing the range 26 of the image y coordinate of the roof-front side 43 and the roof-front pair 23. FIG. 14(d) shows the cuboid model 40 composed of the tail candidate 21, the rear-top pair 22, and the roof-front pair 23 extracted from FIG. 14(a). FIGS. 14(b) and 14(c) show an example in which the rear-top pair 22 and the roof-front pair 23 are extracted from ranges according to the width of the tail candidate 21 extracted from the sedan 4, and FIG. 14(d) shows an example in which the cuboid model 40 of the sedan 4 is extracted without being affected by the horizontal edges 20 of the medium-sized bus 2 protruding from the first lane.

ペアエッジ検出手段15が直方体モデル40を抽出したとき、車両追跡手段16の追跡処理を開始する。図15は車両追跡手段16のフローである。車両追跡手段16は、まずペアエッジ検出手段15が抽出した直方体モデル40の末尾の辺41の中心位置を追跡体の初期位置として求める(S201)。次に、画像入力手段11の画像からS201の初期位置付近を切り取って初期の追跡テンプレートとする(S202)。追跡テンプレートの大きさは〔特許文献3〕に開示されているように、当該画像の切り出し位置における車線の幅に比例して定めることができる。S202で初期のテンプレートを設定した次のフレームでは、S202にて切り取ったテンプレート内の画像の次のフレームの画像における照合位置を特許文献3に開示されているように正規化相関演算によるパタン照合により探索する(S203)。パタン照合により該当領域がある場合(S204でY)は、S203で照合したテンプレートの中心から追跡体の位置を求めた後(S205)、次フレームの画像からS205で更新した追跡体の位置付近の画像を切り出すことによりテンプレートを更新する(S206)。追跡テンプレートの更新後は処理周期ごとにS203の処理に戻り、S203はS206で更新したテンプレートの次フレームでの照合処理を繰り返す。パタン照合により該当領域がない場合は(S204でN)、追跡テンプレートの最終座標を求めて(S207)追跡を終了する(S208)。   When the pair edge detection means 15 extracts a cuboid model 40, the tracking process of the vehicle tracking means 16 starts. FIG. 15 is the flow of the vehicle tracking means 16. The vehicle tracking means 16 first obtains the center position of the tail side 41 of the cuboid model 40 extracted by the pair edge detection means 15 as the initial position of the tracking body (S201). Next, it cuts out the vicinity of the initial position of S201 from the image of the image input means 11 to form the initial tracking template (S202). As disclosed in [Patent Document 3], the size of the tracking template can be set in proportion to the width of the lane at the cut-out position of the image. In the frame following the one in which the initial template was set in S202, the matching position of the template image cut out in S202 is searched for in the next frame's image by pattern matching using the normalized correlation operation, as disclosed in Patent Document 3 (S203). If a matching region is found by the pattern matching (Y in S204), the position of the tracking body is obtained from the center of the template matched in S203 (S205), and the template is then updated by cutting out the image near the tracking body position updated in S205 from the next frame's image (S206). After the tracking template is updated, the processing returns to S203 every processing cycle, and S203 repeats the matching of the template updated in S206 in the next frame. If no matching region is found (N in S204), the final coordinates of the tracking template are obtained (S207) and the tracking ends (S208).
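The normalized-correlation search of S203 can be sketched with a brute-force NumPy implementation. This is a didactic stand-in, not the patented implementation: real systems would restrict the search window around the predicted position and use an optimized routine.

```python
import numpy as np

def ncc_match(image, template):
    """Brute-force normalized cross-correlation template search (S203 sketch).
    Returns ((top, left), score) of the best match; score is in [-1, 1]."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum()) + 1e-9
    best, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz * wz).sum()) * t_norm + 1e-9
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

Because both the template and each window are mean-subtracted and normalized, the score is insensitive to uniform brightness and contrast changes between frames, which is why this measure suits frame-to-frame template tracking.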

なお、車両追跡手段16のS203のパタン照合は、前記の正規化相関のほか、絶対差分のようなほかの類似度、差分の演算を使っても同様の効果が得られる。また、車両追跡手段16は図15に示したフロー以外でも、S201,S202,S204,S205,S207の処理を有する任意の追跡手法を適用しても、同様の効果を得ることができる。   Note that the pattern matching of S203 of the vehicle tracking means 16 obtains the same effect when other similarity or difference measures, such as the absolute difference, are used in place of the normalized correlation. The vehicle tracking means 16 also obtains the same effect when any tracking method including the processing of S201, S202, S204, S205, and S207 is applied instead of the flow shown in FIG. 15.

交通指標計算手段17は画像上におけるS201の座標、S207の座標から、図3(b)に示す座標変換手段18の処理によって、S201の座標の点とS207の座標の点の空間での対応点のZ座標を0mとしたときの座標をそれぞれ求めて、2時刻のY座標の移動量とS201の時刻とS207の時刻の差から速度を計測する。交通指標計算手段17は、所定周期毎に追跡が終了した追跡体の数の合計から台数、前記計測した速度の平均から平均速度を計算する。なお、交通指標計算手段17は前記速度の計測において、S207の座標と時刻の代わりに、追跡途中のS205の時刻を用いても同様の効果が得られる。   From the image coordinates of S201 and S207, the traffic index calculation means 17 obtains, through the processing of the coordinate conversion means 18 shown in FIG. 3(b), the spatial coordinates of the points corresponding to the S201 point and the S207 point with their Z coordinate set to 0 m, and measures the speed from the Y-coordinate displacement between the two times and the difference between the time of S201 and the time of S207. The traffic index calculation means 17 computes, for every predetermined period, the vehicle count from the total number of tracked bodies whose tracking has finished, and the average speed from the mean of the measured speeds. In the speed measurement, the traffic index calculation means 17 obtains the same effect when the times of S205 during tracking are used in place of the coordinates and time of S207.
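The speed computation above reduces to a displacement over a time difference. A minimal sketch, assuming the two road-coordinate Y positions (with Z taken as 0 m) and their timestamps are already available from the coordinate conversion:

```python
def measure_speed(y_start_m, t_start_s, y_end_m, t_end_s):
    """Speed in km/h from two road-coordinate Y positions (metres, Z = 0 m
    assumed) and their timestamps (seconds)."""
    dt = t_end_s - t_start_s
    if dt <= 0:
        raise ValueError("non-positive time interval")
    return abs(y_end_m - y_start_m) / dt * 3.6  # m/s -> km/h
```

For example, a vehicle that advances 25 m along the road in 1.8 s is measured at 50 km/h.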

本発明の実施例1では以上説明した機能構成により、混雑した交通でも車両の大きさを問わず個別の車両を精度よく抽出し、台数と速度を計測することが可能になる。   In the first embodiment of the present invention, with the functional configuration described above, it is possible to accurately extract individual vehicles and measure the number and speed even in congested traffic regardless of the size of the vehicle.

図16は本発明の実施例2の機能構成の概略を示すブロック図である。図16において、16の機能を除いた11,12,13,14,15,17,18,19の各機能は図1に示した実施例1と同様の機能を果たす。図16において車両追跡手段16は、実施例1の車両追跡手段16の機能に加えてペアエッジ検出手段15が検出した直方体モデル40のデータを追跡体と組にして保持する機能を持つ。   FIG. 16 is a block diagram showing an outline of the functional configuration of Embodiment 2 of the present invention. In FIG. 16, the functions 11, 12, 13, 14, 15, 17, 18, and 19, that is, all except 16, perform the same functions as in Embodiment 1 shown in FIG. 1. In addition to the functions of the vehicle tracking means 16 of Embodiment 1, the vehicle tracking means 16 in FIG. 16 has the function of holding the data of the cuboid model 40 detected by the pair edge detection means 15 paired with the tracking body.

図16において包含手段10は、図17にフローを示す処理を行う。図18(a)と図18(b)は、包含手段10を説明する図であり、図18(a)は時刻tのフレームにて、バン3の末尾が検知領域1内に進入するより前に、誤ってリアウインドの下端を末尾の辺41として直方体モデル40aを抽出した例を示している。図18(a)において、51はS202で求めた追跡体の初期位置である。図18(b)において、52は時刻t+Δtにおいて更新された追跡体の位置であり、50は追跡位置51から追跡位置52への変位と同じだけ直方体モデル40aを移動した直方体モデルである(S151)。包含手段10は、直方体モデル50が直方体モデル40の内部にある場合には(S152でY)追跡体52の追跡を中断して交通指標計算手段17の対象外とする(S153)。包含手段10は、S151,S152,S153の処理を、車両追跡手段16内の追跡体の数だけ繰り返す(S150,S154)。   In FIG. 16, the inclusion means 10 performs the processing whose flow is shown in FIG. 17. FIGS. 18(a) and 18(b) illustrate the inclusion means 10: FIG. 18(a) shows an example in which, in the frame at time t, before the tail of the van 3 entered the detection region 1, the cuboid model 40a was erroneously extracted with the lower end of the rear window as the tail side 41. In FIG. 18(a), 51 is the initial position of the tracking body obtained in S202. In FIG. 18(b), 52 is the position of the tracking body updated at time t+Δt, and 50 is the cuboid model obtained by moving the cuboid model 40a by the same displacement as from the tracking position 51 to the tracking position 52 (S151). If the cuboid model 50 is inside the cuboid model 40 (Y in S152), the inclusion means 10 interrupts the tracking of the tracking body 52 and excludes it from the traffic index calculation means 17 (S153). The inclusion means 10 repeats the processing of S151, S152, and S153 for the number of tracking bodies in the vehicle tracking means 16 (S150, S154).
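The S151-S153 containment check can be sketched with axis-aligned image-space boxes. This is a simplification under an assumption stated here: the cuboid models are represented by their bounding boxes on the image.

```python
def shift_box(box, dx, dy):
    """Move an image-space box (x0, y0, x1, y1) by the tracked displacement
    (S151: cuboid model 50 = model 40a shifted with the tracking body)."""
    x0, y0, x1, y1 = box
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

def contains(outer, inner):
    """True if box `inner` lies entirely inside box `outer` (S152)."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def should_cancel_track(model_at_detection, track_displacement, new_model):
    """S151-S153 sketch: shift the old model's box by the displacement of
    its tracking body; cancel the track if the newly extracted model
    contains the shifted box."""
    dx, dy = track_displacement
    return contains(new_model, shift_box(model_at_detection, dx, dy))
```

A partial model extracted from a rear window thus gets cancelled as soon as a later detection captures the whole vehicle around it.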

実施例2では包含手段10により、末尾候補検出手段13やペアエッジ検出手段15の誤判定により車両内部の局所的なエッジから直方体モデル40を抽出してしまった場合でも、以降の時刻で車両全体をとらえた直方体モデル40を抽出することによって追跡を中断することで、実施例1以上に高い台数や速度の計測精度を実現する。   In Embodiment 2, even when a cuboid model 40 has been extracted from local edges inside a vehicle due to a misjudgment by the tail candidate detection means 13 or the pair edge detection means 15, the inclusion means 10 interrupts that tracking once a cuboid model 40 capturing the whole vehicle is extracted at a later time, achieving count and speed measurement accuracy higher than in Embodiment 1.

図19は本発明の実施例3の機能構成の概略を示すブロック図である。図19において、10と16の機能を除いた11,12,13,14,15,17,18,19の各機能は実施例1と同様の機能を果たす。   FIG. 19 is a block diagram showing an outline of a functional configuration according to the third embodiment of the present invention. In FIG. 19, the functions of 11, 12, 13, 14, 15, 17, 18, 19 excluding the functions of 10 and 16 perform the same functions as in the first embodiment.

図19において、追加検知手段9は画像入力手段11が取り込んだ画像を入力として、本発明以外の方式により車両を検知し、追跡の初期位置を計算する。追加検知手段9の車両検知方式は〔特許文献3〕をはじめとして任意である。また、追加検知手段9の数は1つ以上の複数であってよい。   In FIG. 19, the additional detection means 9 receives the image captured by the image input means 11 as an input, detects the vehicle by a method other than the present invention, and calculates the initial position of tracking. The vehicle detection method of the additional detection means 9 is arbitrary including [Patent Document 3]. Moreover, the number of the additional detection means 9 may be one or more.

図19において、車両追跡手段16は車両を検知したのがペアエッジ検出手段15であれば直方体モデル40のデータを追跡体と組にして保持し、車両を検知したのが追加検知手段9であれば直方体モデル40のデータを空にする。   In FIG. 19, the vehicle tracking means 16 holds the data of the cuboid model 40 paired with the tracking body when the vehicle was detected by the pair edge detection means 15, and leaves the cuboid model 40 data empty when the vehicle was detected by the additional detection means 9.

図19において包含手段10は図20のフローに従い、追跡体が直方体モデル40のデータを持つ場合には(S161でY)、実施例2の包含手段10と同様にS151,S152,S153の処理を行う。直方体モデル40のデータを持たない場合には(S161でN)、図21(a)に示すように時刻t+Δtにおける追跡体の位置52の近傍領域60を求め(S162)、近傍領域60が時刻tで検知した直方体モデル40の内部にあれば(S163でY)追跡を中断する(S153)。近傍領域60はS205で求めるテンプレートの領域で求めることができる。近傍領域60は他にも、図21(b)のように追跡位置52付近の時刻t+Δtの水平エッジ20(図21(b)において20h,20i,20j)の外接矩形62で求めてもよい。あるいは、近傍領域60は追加検知手段9で特定した車両の領域に応じて定めてもよい。   In FIG. 19, the inclusion means 10 follows the flow of FIG. 20: if the tracking body holds the data of a cuboid model 40 (Y in S161), it performs the processing of S151, S152, and S153 as in the inclusion means 10 of Embodiment 2. If the tracking body holds no cuboid model 40 data (N in S161), it obtains the neighborhood region 60 of the tracking body position 52 at time t+Δt as shown in FIG. 21(a) (S162), and interrupts the tracking (S153) if the neighborhood region 60 is inside a cuboid model 40 detected at time t (Y in S163). The neighborhood region 60 can be taken as the template region obtained in S205. Alternatively, it can be obtained as the circumscribed rectangle 62 of the horizontal edges 20 at time t+Δt near the tracking position 52 (20h, 20i, 20j in FIG. 21(b)), or determined according to the vehicle region specified by the additional detection means 9.
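The circumscribed-rectangle option for the neighborhood region 60 can be sketched as follows, assuming each horizontal edge is represented as (x_left, x_right, y) as in the earlier sketches:

```python
def edge_bounding_box(edges):
    """Circumscribed rectangle 62 of the horizontal edges 20 near the
    tracking position; each edge is (x_left, x_right, y).
    Returns (x0, y0, x1, y1)."""
    x_lefts, x_rights, ys = zip(*edges)
    return (min(x_lefts), min(ys), max(x_rights), max(ys))

def box_inside(outer, inner):
    """Containment test of the FIG. 20 flow: True if `inner` lies
    entirely within `outer`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])
```

If the rectangle of edges around a template-only track falls inside a detected cuboid model, the redundant track is interrupted.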

実施例3では、追加検知手段9に直方体モデル40の抽出が困難な状況でも良好に検知する車両の検知方式を導入することで、広い環境条件をカバーして車両を検知することができる。また、直方体モデル40が良好に動作する環境下では、追加検知手段9が車両の背面や屋根を検知した場合あるいはペアエッジ検出手段15と同一箇所を検知した場合でも、包含手段10により余分な追跡テンプレートの追跡を中断することで台数と速度を良好に計測することができる。   In Embodiment 3, by adopting for the additional detection means 9 a vehicle detection method that performs well even in situations where extracting the cuboid model 40 is difficult, vehicles can be detected over a wide range of environmental conditions. Conversely, in environments where the cuboid model 40 works well, even if the additional detection means 9 detects the back or roof of a vehicle, or the same spot as the pair edge detection means 15, the inclusion means 10 interrupts the tracking of the redundant tracking template, so that counts and speeds are measured well.

例えば〔特許文献4〕に開示されたテールランプを検知する検知方式のように、コントラストが極端に低くて水平エッジ20の検出が困難な夜間の環境条件でも良好に検知する車両検知の方式を導入すれば、夜間をカバーすると共に混雑時において台数および速度の計測精度が良好な交通流計測が可能になる。 For example, by introducing a vehicle detection method that performs well even under nighttime conditions where the contrast is extremely low and the horizontal edges 20 are hard to detect, such as the tail-lamp detection method disclosed in [Patent Document 4], traffic flow measurement that covers nighttime while retaining good count and speed accuracy in congestion becomes possible.

実施例4の装置構成は、図1の機能構成において、14と15の機能を除いた11,12,13,16,17,18の各機能が実施例1と同一である。   In the apparatus configuration of Embodiment 4, the functions 11, 12, 13, 16, 17, and 18 of the functional configuration in FIG. 1, that is, all except the functions 14 and 15, are identical to those of Embodiment 1.

実施例4の車高車長範囲推定手段14は、図22(a)に示すように車高H/車幅Wの下限を曲線Fα、上限を曲線Fβで保持し、図3(b)に示すように座標変換手段18が計算する末尾候補21のZ座標を0mとしたときの空間での幅W毎に(数3)および(数4)を用いて車高H/車幅Wの下限αおよび上限βを計算する。同様に前記末尾候補21のZ座標を0mとしたときの空間での幅W毎に、(数5)および(数6)を用いて車長L/車幅Wの上限νおよび下限μを計算する。実施例4のペアエッジ検出手段15では、S131,S141,S142,S151,S152,S153,S154,S155,S156の各処理にて、(数3)および(数4)および(数5)および(数6)の関数で求めたαおよびβおよびνおよびμを用いる。 As shown in FIG. 22(a), the vehicle height/length range estimation means 14 of Embodiment 4 holds the lower bound of the ratio vehicle height H / vehicle width W as the curve Fα and the upper bound as the curve Fβ, and, for each width W in space obtained when the Z coordinate of the tail candidate 21 computed by the coordinate conversion means 18 is set to 0 m as shown in FIG. 3(b), computes the lower bound α and upper bound β of vehicle height H / vehicle width W using (Eq. 3) and (Eq. 4). Likewise, for each width W in space with the Z coordinate of the tail candidate 21 set to 0 m, it computes the upper bound ν and lower bound μ of vehicle length L / vehicle width W using (Eq. 5) and (Eq. 6). The pair edge detection means 15 of Embodiment 4 uses the α, β, ν, and μ obtained from the functions of (Eq. 3), (Eq. 4), (Eq. 5), and (Eq. 6) in the processing of S131, S141, S142, S151, S152, S153, S154, S155, and S156.

（数3）　α = Fα(W)

（数4）　β = Fβ(W)

（数5）　ν = Fν(W)

（数6）　μ = Fμ(W)

図22(a)および図22(b)の曲線は、車幅Wを適当な刻みで離散化し（Wi=W1,W2,W3…）、離散化した各Wiの区間[Wi,Wi+1]内の車幅をもつ車両を図8のような一覧データから全て抽出したときの車高H/車幅W、車長L/車幅Wの上限、下限それぞれを、Wiを網羅的に換えて計算することで取得できる。なお、Wiの区間に一つも車両がない場合には、前後区間Wi-1,Wi+1の平均により取得する。また、以上の手順で求めた曲線FαおよびFβおよびFνおよびFμを平滑などのフィルタ処理をしてもよい。以上の曲線FαおよびFβおよびFνおよびFμの求め方は一例であり、スプライン曲線のように他の方法で求めても同様の効果が得られる。 The curves in FIGS. 22(a) and 22(b) can be obtained by discretizing the vehicle width W into suitable steps (Wi = W1, W2, W3, ...), extracting from list data such as FIG. 8 all vehicles whose width falls within each discretized interval [Wi, Wi+1], and computing the upper and lower bounds of vehicle height H / vehicle width W and vehicle length L / vehicle width W, exhaustively over the Wi. When an interval Wi contains no vehicle, the bounds are taken as the average of the adjacent intervals Wi-1 and Wi+1. The curves Fα, Fβ, Fν, and Fμ obtained by this procedure may further be smoothed or otherwise filtered. This way of obtaining the curves Fα, Fβ, Fν, and Fμ is only an example; the same effect is obtained with other methods such as spline curves.
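The per-width binning described above can be sketched as follows. The vehicle list, the bin width, and the curve names are illustrative; the neighbor-averaging for empty bins and the optional smoothing are omitted for brevity.

```python
from collections import defaultdict

def ratio_bound_curves(vehicles, bin_w=0.2):
    """Build lookup tables for the bound curves from a vehicle spec list
    [(width, height, length), ...] (a list like FIG. 8), binning widths in
    steps of bin_w metres.  For each bin: alpha/beta are the min/max of
    height/width (F_alpha, F_beta), mu/nu the min/max of length/width
    (F_mu, F_nu).  Empty-bin filling from neighbours is omitted here."""
    bins = defaultdict(list)
    for w, h, l in vehicles:
        bins[int(w / bin_w)].append((h / w, l / w))
    curves = {}
    for i, ratios in bins.items():
        hw = [r[0] for r in ratios]
        lw = [r[1] for r in ratios]
        curves[i] = {"alpha": min(hw), "beta": max(hw),
                     "mu": min(lw), "nu": max(lw)}
    return curves
```

At query time, the tail candidate's width W is mapped to its bin and the four bounds are read out, giving width-dependent search ranges instead of the single global ratios of Embodiment 1.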

ここで図22(a)および図22(b)をみると、車高H/車幅Wと車長L/車幅Wの上限,下限ともに、車幅Wに応じて異なることがわかる。特に、図22(b)に着目すると、車長L/車幅Wの比率は車幅Wが大きなものほど上限Fν(W),下限Fμ(W)ともに高くなること、また上限Fν(W)と下限Fμ(W)の差が大きくなることがわかる。実施例4では実施例1と比べると、FαおよびFβおよびFνおよびFμの4曲線を作成する手間がかかるが、車幅Wに応じて最適なペアエッジ検出手段15の背面上ペア22および屋根前ペア23の範囲25および26を計算することで、ペアエッジ検出手段15が抽出する直方体モデル40の精度を向上できる。 Looking at FIGS. 22(a) and 22(b), both the upper and lower bounds of vehicle height H / vehicle width W and of vehicle length L / vehicle width W vary with the vehicle width W. Focusing on FIG. 22(b) in particular, for the ratio vehicle length L / vehicle width W, both the upper bound Fν(W) and the lower bound Fμ(W) increase with larger vehicle width W, and the gap between them widens. Compared with Embodiment 1, Embodiment 4 requires the extra effort of creating the four curves Fα, Fβ, Fν, and Fμ, but by computing the ranges 25 and 26 for the rear-top pair 22 and the roof-front pair 23 of the pair edge detection means 15 optimally for each vehicle width W, the accuracy of the cuboid model 40 extracted by the pair edge detection means 15 can be improved.

なお実施例4の車高車長範囲推定手段14を、図16に示す実施例2および図19に示す実施例3の機能構成に置き換えることで、実施例1と同様にペアエッジ検出手段15が抽出する直方体モデル40の精度を向上できる。   By substituting the vehicle height/length range estimation means 14 of Embodiment 4 into the functional configurations of Embodiment 2 shown in FIG. 16 and Embodiment 3 shown in FIG. 19, the accuracy of the cuboid model 40 extracted by the pair edge detection means 15 can likewise be improved.

図1および図16および図19において、交通流DB(DBはデータベースの略記)19は、9,10,11,12,13,14,15,16,17,18の各機能と図示しない線により接続され、各機能が信号処理に必要なデータを保持する(なお略記したが、図1には9と10、図16には9の機能は存在しない)。   In FIGS. 1, 16, and 19, the traffic flow DB (DB is short for database) 19 is connected to the functions 9, 10, 11, 12, 13, 14, 15, 16, 17, and 18 by lines not shown, and holds the data each function needs for signal processing (for brevity: the functions 9 and 10 do not exist in FIG. 1, and the function 9 does not exist in FIG. 16).

実施例1あるいは実施例2あるいは実施例3あるいは実施例4において、9,10,11,12,13,14,15,16,17,18,19の各機能は路側に設置された計算機上の信号処理にて実現される。あるいは、路側のカメラを伝送するネットワークあるいはカメラの映像を記録した記憶媒体と接続された計算機上の信号処理にて実現される。   In the first embodiment, the second embodiment, the third embodiment, or the fourth embodiment, the functions 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, and 19 are performed on a computer installed on the roadside. Realized by signal processing. Alternatively, it is realized by signal processing on a computer connected to a network that transmits a roadside camera or a storage medium that records video of the camera.

なお、実施例1の説明において画像入力手段11が取り込んだ画像の例として、図4に画像上を下から上の向きに進む車両を後方から撮影した画像を示したが、車両を前面から撮影した場合でも車両の輪郭や車両の屋根とフロントガラス窓枠との間の境界線といった内部に水平方向の濃度勾配が生じることおよび、車両の進行方向を画像上の上から下としても画像上の車両付近はフレーム間で速度に応じた範囲が明度変化することより、水平エッジ検出手段12は図5のフローにより同様の水平エッジ20を抽出できる。よって、実施例1および実施例2および実施例3および実施例4は、車両の撮影方向や進行方向によらず同様の効果を発揮できる。   As an example of the image captured by the image input means 11 in the description of Embodiment 1, FIG. 4 shows a vehicle photographed from behind, traveling from the bottom toward the top of the image. However, even when a vehicle is photographed from the front, horizontal intensity gradients arise at the vehicle outline and at boundaries such as that between the vehicle roof and the windshield frame, and even when the vehicle travels from the top toward the bottom of the image, the brightness around the vehicle changes between frames over a range according to its speed, so the horizontal edge detection means 12 can extract the same horizontal edges 20 by the flow of FIG. 5. Therefore, Embodiments 1, 2, 3, and 4 exhibit the same effects regardless of the photographing direction and the traveling direction of the vehicle.

本発明は俯瞰したカメラの画像から車両を検知,追跡する用途に広く一般に適用することができる。   The present invention can be widely applied to a purpose of detecting and tracking a vehicle from an image of a camera viewed from above.

本発明の実施例1のブロック図。Block diagram of Embodiment 1 of the present invention.
座標変換手段18の処理の説明。Explanation of the processing of the coordinate conversion means 18.
車両後部絞込み手段の処理画面の例。Example of a processing screen of the vehicle rear narrowing means.
路上カメラにより撮影した画像の例。Example of an image photographed by a roadside camera.
水平エッジ検出手段12のフロー。Flow of the horizontal edge detection means 12.
末尾候補検出手段13のフロー。Flow of the tail candidate detection means 13.
水平エッジの検出の例。Example of horizontal edge detection.
車幅,車高,車長の表の例。Example of a table of vehicle width, vehicle height, and vehicle length.
車高/車幅,車長/車幅の分布グラフ。Distribution graphs of vehicle height/width and vehicle length/width.
実施例1において車幅から車高の範囲,車幅から車長の範囲を推定するフロー。Flow of estimating the vehicle height range and the vehicle length range from the vehicle width in Embodiment 1.
ペアエッジ検出手段15のフロー。Flow of the pair edge detection means 15.
ペアエッジ検出手段15のフローにおける水平エッジ20の重なりを説明する図。Diagram explaining the overlap of horizontal edges 20 in the flow of the pair edge detection means 15.
(a)画像中の車両に直方体モデル40を当てはめた例および(b)水平エッジ20に直方体モデル40を当てはめた例。(a) Example of fitting the cuboid model 40 to a vehicle in an image and (b) example of fitting the cuboid model 40 to horizontal edges 20.
画像中の車両から水平エッジ20および直方体モデル40を抽出したもうひとつの例。Another example of extracting horizontal edges 20 and a cuboid model 40 from a vehicle in an image.
車両追跡手段16の処理フロー。Processing flow of the vehicle tracking means 16.
本発明の実施例2のブロック図。Block diagram of Embodiment 2 of the present invention.
本発明の実施例2の包含手段10のフロー。Flow of the inclusion means 10 of Embodiment 2 of the present invention.
本発明の実施例2の包含手段10を説明する図。Diagram explaining the inclusion means 10 of Embodiment 2 of the present invention.
本発明の実施例3のブロック図。Block diagram of Embodiment 3 of the present invention.
本発明の実施例3の包含手段10のフロー。Flow of the inclusion means 10 of Embodiment 3 of the present invention.
本発明の実施例3の包含手段10を説明する図。Diagram explaining the inclusion means 10 of Embodiment 3 of the present invention.
本発明の実施例4の車高車長範囲推定手段14を説明する図。Diagram explaining the vehicle height/length range estimation means 14 of Embodiment 4 of the present invention.
速度計測の誤差を説明する図。Diagram explaining errors in speed measurement.

符号の説明Explanation of symbols

11 画像入力手段
12 水平エッジ検出手段
13 末尾候補検出手段
14 車高車長範囲推定手段
15 ペアエッジ検出手段
16 車両追跡手段
17 交通指標計算手段
18 座標変換手段
20 水平エッジ
21 末尾候補
22 背面上ペア
23 屋根前ペア
40 直方体モデル
DESCRIPTION OF SYMBOLS 11 Image input means 12 Horizontal edge detection means 13 End candidate detection means 14 Vehicle height vehicle length range estimation means 15 Pair edge detection means 16 Vehicle tracking means 17 Traffic index calculation means 18 Coordinate conversion means 20 Horizontal edge 21 End candidate 22 Back upper pair 23 Pair 40 in front of the roof

Claims (6)

路上のカメラの撮影画像を取得する画像入力手段と、画像座標と道路座標とを変換する座標変換手段と、前記撮影画像中の水平エッジを抽出する水平エッジ検出手段と、前記水平エッジ検出手段が抽出した前記撮影画像中の水平エッジから車両の形状をあらわす直方体モデルの末尾にあたる水平エッジを検出する末尾候補検出手段と、前記直方体モデルの末尾にあたる水平エッジの幅を当該車両の車幅としたとき、当該車幅に応じて当該車両の車高と車長とを推定する車高車長範囲推定手段と、前記車高車長範囲推定手段が推定した前記車高と前記車長と前記直方体モデルの末尾にあたる水平エッジの幅とを寸法とした前記直方体モデルの背面上と屋根前の辺とに当たる水平エッジを検出するペアエッジ検出手段と、当該車両を追跡する車両追跡手段と、追跡する当該車両の台数と速度とを計測する交通指標計算手段と、から構成されることを特徴とする交通流計測装置。 A traffic flow measuring device comprising: image input means for acquiring an image captured by a camera on a road; coordinate conversion means for converting between image coordinates and road coordinates; horizontal edge detection means for extracting horizontal edges in the captured image; tail candidate detection means for detecting, from the horizontal edges in the captured image extracted by the horizontal edge detection means, the horizontal edge corresponding to the tail of a cuboid model representing the shape of a vehicle; vehicle height/length range estimation means for estimating, taking the width of the horizontal edge corresponding to the tail of the cuboid model as the vehicle width, the height and length of the vehicle according to that vehicle width; pair edge detection means for detecting the horizontal edges corresponding to the rear-top and roof-front sides of the cuboid model dimensioned by the vehicle height and vehicle length estimated by the vehicle height/length range estimation means and by the width of the horizontal edge corresponding to the tail; vehicle tracking means for tracking the vehicle; and traffic index calculation means for measuring the number and speed of the tracked vehicles.
2. A traffic flow measuring device that detects and tracks a vehicle on the condition that, when a horizontal edge corresponding to a vehicle width among the horizontal edges extracted from an image captured by a camera over a road is assumed to be the end edge of a cuboid model, edges corresponding to the rear-top and roof-front edges of the cuboid model are detected within the image region determined by the vehicle width converted from the width of that horizontal edge and by the ranges of vehicle height and vehicle length estimated from that vehicle width.
3. The traffic flow measuring device according to claim 1 or claim 2, wherein the ranges of the vehicle height and the vehicle length are obtained from upper and lower limits on the ratios of the vehicle height and the vehicle length to the vehicle width.
4. The traffic flow measuring device according to claim 1 or claim 2, wherein the ranges of the vehicle height and the vehicle length are obtained from characteristic curves of the upper and lower limits of the ratios of the vehicle height and the vehicle length to the vehicle width.
5. The traffic flow measuring device according to claim 1 or claim 2, wherein the region occupied at the current time by a cuboid model extracted before the current time is computed, and, when a cuboid model extracted at the current time contains that region, tracking of the cuboid model extracted before the current time is interrupted.
6. The traffic flow measuring device according to claim 1, claim 2, or claim 4, wherein, when an arbitrary vehicle detection means operates simultaneously and the region detected by that vehicle detection means at the current time lies inside the cuboid model, tracking based on detection by that arbitrary vehicle detection means is interrupted.
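The coordinate conversion means of claim 1 converts between image coordinates and road coordinates; as the description notes, this requires a fixed assumption such as a point height of 0 m on the road plane. A minimal sketch of such a ground-plane conversion, assuming a precomputed 3x3 homography (the matrix values and the function name below are hypothetical placeholders, not calibration data from this patent):

```python
def image_to_road(u, v, H):
    """Map pixel (u, v) to road-plane coordinates (X, Y) via a 3x3 homography H."""
    x = [H[i][0] * u + H[i][1] * v + H[i][2] for i in range(3)]
    return x[0] / x[2], x[1] / x[2]  # divide out the projective scale

# Hypothetical homography for illustration only (not real calibration):
H = [[0.01, 0.0, -3.2],
     [0.0, 0.02, -4.8],
     [0.0, 0.001, 1.0]]
X, Y = image_to_road(320.0, 240.0, H)  # this pixel lands near the road origin
```

In practice the homography would be derived from the camera parameters listed in the description (height, pitch, roll, yaw, focal length).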
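Claims 3 and 4 derive the vehicle height and length ranges from upper and lower limits on their ratios to the vehicle width. A hedged sketch of that step, with illustrative ratio bounds (the numeric limits are assumptions, not values taken from this patent):

```python
def height_length_range(width_m,
                        h_ratio=(0.5, 1.2),   # assumed height/width bounds
                        l_ratio=(1.5, 4.0)):  # assumed length/width bounds
    """Return ((h_min, h_max), (l_min, l_max)) in metres for a given vehicle width."""
    h_min, h_max = h_ratio[0] * width_m, h_ratio[1] * width_m
    l_min, l_max = l_ratio[0] * width_m, l_ratio[1] * width_m
    return (h_min, h_max), (l_min, l_max)

# For a 1.7 m wide end-edge candidate, the cuboid search range would be:
(h_lo, h_hi), (l_lo, l_hi) = height_length_range(1.7)
```

The pair edge detection means would then look for the rear-top and roof-front edges only inside the image region that these height and length bounds project to.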
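Claim 5 interrupts an earlier track when a cuboid model extracted at the current time contains the earlier model's predicted current-time region. A simplified sketch of that containment test, with regions reduced to axis-aligned image rectangles (x_min, y_min, x_max, y_max); the representation and the function names are illustrative assumptions:

```python
def contains(outer, inner):
    """True if rectangle `outer` fully contains rectangle `inner`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1] and
            outer[2] >= inner[2] and outer[3] >= inner[3])

def suppress_contained_tracks(new_boxes, old_tracks):
    """Drop earlier tracks whose predicted region lies inside a new detection."""
    return [t for t in old_tracks
            if not any(contains(b, t) for b in new_boxes)]

# The old track (12, 12, 20, 20) lies inside the new box (10, 10, 30, 30)
# and is suppressed; the track at (40, 40, 50, 50) survives.
kept = suppress_contained_tracks([(10, 10, 30, 30)],
                                 [(12, 12, 20, 20), (40, 40, 50, 50)])
```

Claim 6 applies the same idea across detectors: a detection from another vehicle detection means is dropped when its region falls inside an already-tracked cuboid model.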
JP2008089182A 2008-03-31 2008-03-31 Traffic flow measuring device Expired - Fee Related JP4940177B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008089182A JP4940177B2 (en) 2008-03-31 2008-03-31 Traffic flow measuring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008089182A JP4940177B2 (en) 2008-03-31 2008-03-31 Traffic flow measuring device

Publications (2)

Publication Number Publication Date
JP2009245042A JP2009245042A (en) 2009-10-22
JP4940177B2 true JP4940177B2 (en) 2012-05-30

Family

ID=41306873

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2008089182A Expired - Fee Related JP4940177B2 (en) 2008-03-31 2008-03-31 Traffic flow measuring device

Country Status (1)

Country Link
JP (1) JP4940177B2 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8878927B2 (en) * 2010-10-28 2014-11-04 Raytheon Company Method and apparatus for generating infrastructure-based basic safety message data
JP6413318B2 (en) * 2014-04-22 2018-10-31 サクサ株式会社 Vehicle detection device, system, and program
JP6413319B2 (en) * 2014-04-22 2018-10-31 サクサ株式会社 Vehicle detection device, system, and program
CN105894825A (en) * 2015-06-03 2016-08-24 杭州远眺科技有限公司 Flow sensor-based urban road occupancy calculating method
CN110033479B (en) * 2019-04-15 2023-10-27 四川九洲视讯科技有限责任公司 Traffic flow parameter real-time detection method based on traffic monitoring video
JP7290531B2 (en) 2019-09-27 2023-06-13 鹿島建設株式会社 TRAFFIC CONTROL SYSTEM AND METHOD FOR CONSTRUCTION ROAD
US11814080B2 (en) 2020-02-28 2023-11-14 International Business Machines Corporation Autonomous driving evaluation using data analysis
US11702101B2 (en) 2020-02-28 2023-07-18 International Business Machines Corporation Automatic scenario generator using a computer for autonomous driving
US11644331B2 (en) 2020-02-28 2023-05-09 International Business Machines Corporation Probe data generating system for simulator

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3453952B2 (en) * 1995-09-29 2003-10-06 株式会社日立製作所 Traffic flow measurement device
JPH11252587A (en) * 1998-03-03 1999-09-17 Matsushita Electric Ind Co Ltd Object tracking device
JPH11353581A (en) * 1998-06-09 1999-12-24 Anritsu Corp Method and device for discriminating vehicle kind in the daytime
JP2000163691A (en) * 1998-11-30 2000-06-16 Oki Electric Ind Co Ltd Traffic flow measuring instrument
JP2007164565A (en) * 2005-12-15 2007-06-28 Sumitomo Electric Ind Ltd System and device of vehicle sensing for traffic-actuated control
JP5105400B2 (en) * 2006-08-24 2012-12-26 コイト電工株式会社 Traffic measuring method and traffic measuring device

Also Published As

Publication number Publication date
JP2009245042A (en) 2009-10-22

Similar Documents

Publication Publication Date Title
JP4940177B2 (en) Traffic flow measuring device
JP6657789B2 (en) Image processing device, imaging device, device control system, frequency distribution image generation method, and program
US11620837B2 (en) Systems and methods for augmenting upright object detection
EP2549457B1 (en) Vehicle-mounting vehicle-surroundings recognition apparatus and vehicle-mounting vehicle-surroundings recognition system
US11670087B2 (en) Training data generating method for image processing, image processing method, and devices thereof
JP6705496B2 (en) Image processing device, imaging device, mobile device control system, mobile device, image processing method, and program
JP5714940B2 (en) Moving body position measuring device
WO2018016394A1 (en) Traveling road boundary estimation apparatus and traveling assistance system using same
JP6583527B2 (en) Image processing apparatus, imaging apparatus, mobile device control system, image processing method, and program
JP6171612B2 (en) Virtual lane generation apparatus and program
JP6702340B2 (en) Image processing device, imaging device, mobile device control system, image processing method, and program
KR20160123668A (en) Device and method for recognition of obstacles and parking slots for unmanned autonomous parking
JP2008168811A (en) Traffic lane recognition device, vehicle, traffic lane recognition method, and traffic lane recognition program
JP2013232091A (en) Approaching object detection device, approaching object detection method and approaching object detection computer program
JP6705497B2 (en) Image processing device, imaging device, mobile device control system, image processing method, program, and mobile device
JP6038422B1 (en) Vehicle determination device, vehicle determination method, and vehicle determination program
JP2010132056A (en) Sensing device, sensing method, and vehicle control device
Liu et al. Vehicle detection and ranging using two different focal length cameras
JP6815963B2 (en) External recognition device for vehicles
JP2007249257A (en) Apparatus and method for detecting movable element
JP2004355139A (en) Vehicle recognition system
JP5974923B2 (en) Road edge detection system, method and program
JP5888275B2 (en) Road edge detection system, method and program
KR102368262B1 (en) Method for estimating traffic light arrangement information using multiple observation information
JP2018088234A (en) Information processing device, imaging device, apparatus control system, movable body, information processing method, and program

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20100325

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110811

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110816

RD02 Notification of acceptance of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7422

Effective date: 20110929

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20111013

RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20111024

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120131

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120227

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20150302

Year of fee payment: 3

R150 Certificate of patent or registration of utility model

Ref document number: 4940177

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

LAPS Cancellation because of no payment of annual fees