JP2008052517A - Traffic measurement method and traffic measurement apparatus - Google Patents

Traffic measurement method and traffic measurement apparatus

Info

Publication number
JP2008052517A
Authority
JP
Japan
Prior art keywords
vehicle
image
detected
pair
dimensional shape
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2006228340A
Other languages
Japanese (ja)
Other versions
JP5105400B2 (en)
Inventor
Naomi Takaoka
直実 高岡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koito Industries Ltd
Original Assignee
Koito Industries Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koito Industries Ltd filed Critical Koito Industries Ltd
Priority to JP2006228340A priority Critical patent/JP5105400B2/en
Publication of JP2008052517A publication Critical patent/JP2008052517A/en
Application granted granted Critical
Publication of JP5105400B2 publication Critical patent/JP5105400B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To count traffic accurately, in a traffic measurement apparatus that measures traffic by analyzing images of vehicles traveling on a road captured by a camera, even when vehicles traveling in parallel are imaged overlapping one another or when a vehicle at night carries light sources resembling headlights.
SOLUTION: A processing unit 12 detects the position (base point) of a specific part of a vehicle (the front end of the vehicle in the daytime, a pair of headlights at night) in an image captured by a camera 11. It calculates the three-dimensional shape, as seen from the camera 11, of a prescribed standard vehicle placed along the road so that its specific part is located at the real-world position corresponding to the base point, and superimposes the shape on the image. It determines that one vehicle exists in the region covered by the three-dimensional shape on the image, and judges whether another vehicle exists outside the three-dimensional shape.
COPYRIGHT: (C)2008,JPO&INPIT

Description

The present invention relates to a traffic volume measuring method and a traffic volume measuring apparatus that measure traffic volume by analyzing images obtained by photographing vehicles traveling on a road with a camera.

In a measuring apparatus that photographs road conditions with a camera and analyzes the images to measure traffic volume or to determine the type and traveling speed of passing vehicles, installing the camera on a support frame that spans the road requires large-scale equipment. For this reason, the camera is often mounted on a pole standing at the roadside so that vehicles are photographed diagonally from the front.

However, when vehicles are photographed obliquely in this way, vehicles A and B running side by side in adjacent lanes overlap in the image, as shown in FIG. 2, so a plurality of parallel-running vehicles may be mistaken for a single vehicle. To address this, a method is used in which, for example, the image is extracted for each lane region and vehicles are detected from the image of each lane, thereby separating the overlap of vehicles traveling in adjacent lanes before analyzing the image.

At night, on the other hand, the outline of a vehicle becomes unclear, so image analysis is performed by extracting high-luminance parts such as headlights. Since headlights usually have characteristic features, such as forming a left-right pair, these features are used to extract a pair of headlights while distinguishing them from other lamps, and the traffic volume and the vehicle type are determined with reference to the extracted headlight pair (see, for example, Patent Document 1).

Patent Document 1: JP 11-353580 A

In the method of extracting and analyzing the image for each lane region, vehicles running in parallel in adjacent lanes can be recognized separately; conversely, however, the roof of a large vehicle extends into the adjacent lane, so a single large vehicle may be misidentified as two vehicles. In addition, a vehicle traveling between lanes cannot be detected correctly. Furthermore, even when the camera is installed so as to photograph vehicles from the front, a preceding vehicle and a following vehicle traveling at a short inter-vehicle distance may overlap in the image, and in such a case the front and rear vehicles cannot be separated even if the image is extracted for each lane region.

Moreover, when detecting vehicles at night on the basis of a headlight pair, methods that distinguish the headlight pair from other light sources using headlight characteristics (for example, that they form a pair and that the spacing of the pair is roughly the vehicle width) have limited accuracy, and light sources other than headlights, or reflections on the road surface, may be misidentified as a headlight pair. For example, large trucks often carry various lamps on their tops and sides, and if any of these light sources is mistaken for a headlight pair, one truck is counted as a plurality of vehicles and the correct traffic volume cannot be obtained.

The present invention is intended to solve the above problems, and an object thereof is to provide a traffic volume measuring method and a traffic volume measuring apparatus that can accurately measure the traffic volume of vehicles traveling on a road even when parallel-running vehicles or preceding and following vehicles are photographed overlapping one another, or when a vehicle at night carries light sources that are easily confused with headlights.

The gist of the present invention for achieving the above object lies in the inventions described in the following items.

[1] A traffic volume measuring method for measuring traffic volume by analyzing an image obtained by photographing vehicles traveling on a road with a camera, the method comprising:
a detection step of detecting the position of a vehicle on the image obtained with the camera;
a superimposing step of obtaining the three-dimensional shape of a predetermined standard vehicle placed on the road as seen from the camera, and superimposing the three-dimensional shape on the image in accordance with the detected position of the vehicle; and
a determination step of determining that one vehicle exists in the range covered by the three-dimensional shape on the image.

In the above invention, the position of a vehicle is detected in the image photographed by the camera, the three-dimensional shape of a predetermined standard vehicle traveling on the road as seen from the camera is obtained, and this three-dimensional shape is aligned on the basis of the previously detected vehicle position and superimposed on the image. It is then determined that one vehicle exists in the range covered by the three-dimensional shape on the image.

As a result, even when parallel-running vehicles overlap, the position of, for example, the nearer vehicle is detected and the three-dimensional shape of the standard vehicle is superimposed at that position, so the area occupied by that one vehicle in the image can be identified and two overlapping parallel-running vehicles can be recognized separately. At night, for example, the position of the vehicle is detected on the basis of the position of its headlights, and the three-dimensional shape of the standard vehicle as seen from the camera is superimposed at that position. Thus, even at night, when the outline of a vehicle appears unclear, the area occupied by one vehicle can be recognized in the image, and even if other confusing high-luminance parts exist within this area, they are integrated and judged to belong to a single vehicle.

The standard vehicle is a model of the standard three-dimensional shape of a vehicle traveling on the road; for example, a vehicle modeled as a rectangular parallelepiped is used. The three-dimensional shape of the standard vehicle may be calculated according to the vehicle position detected in the image, or a fixed three-dimensional shape calculated and stored in advance may be used regardless of the detected vehicle position. Alternatively, a plurality of representative vehicle positions in the image may be set, a three-dimensional shape corresponding to each position calculated and stored in advance, and the three-dimensional shape corresponding to the detected vehicle position selected from among them and superimposed.

[2] The traffic volume measuring method according to [1], wherein, in the superimposing step, the three-dimensional shape is calculated for the standard vehicle placed at the position in the real world corresponding to the position of the vehicle detected in the image, as seen from the camera.

In the above invention, whether the vehicle appears large near the front of the image or relatively small toward the back, the three-dimensional shape is calculated according to the position of that vehicle, so the area occupied by the vehicle in the image can be identified with little error regardless of its position.

[3] The traffic volume measuring method according to [1] or [2], wherein it is determined whether another vehicle exists in the region outside the range covered by the three-dimensional shape on the image.

In the above invention, since the range covered by the three-dimensional shape is taken to be the range occupied by one vehicle, it is determined whether another vehicle exists outside this range; that is, the second of the parallel-running vehicles is detected.

[4] The traffic volume measuring method according to any one of [1] to [3], wherein a plurality of types of standard vehicles of different vehicle classes, such as a large vehicle and a small vehicle, are set as the standard vehicle; a vehicle type determination step of determining the type of the vehicle whose position was detected in the detection step is further provided; and the three-dimensional shape of the standard vehicle corresponding to the vehicle type indicated by the result of the vehicle type determination step is superimposed on the image.

In the above invention, the vehicle type is determined and the three-dimensional shape of the standard vehicle corresponding to that type is obtained and superimposed on the image. The vehicle type can be determined from, for example, the vehicle width or the spacing of the headlight pair. Because the vehicle class, such as large or small, is determined and the three-dimensional shape of the corresponding standard vehicle is superimposed on the image, the area occupied by one vehicle can be identified with higher accuracy. Note that the vehicle types are not limited to large and small vehicles and may be set as appropriate.

[5] The traffic volume measuring method according to any one of [1] to [4], wherein, in the detection step, the position of the vehicle is detected by detecting a base point at which a specific part of the vehicle exists in the image.

In the above invention, the position of the vehicle is detected with reference to a specific part of the vehicle. The specific part may be any part whose position can be identified in the image, such as the front end of the vehicle or a pair of headlights.

[6] The traffic volume measuring method according to [5], wherein the specific part is the front end of the vehicle, and in the detection step, the front end of the vehicle located furthest toward the vehicle traveling direction in the image is detected as the base point.

In the above invention, even when a plurality of vehicles overlap, the outline of the front end of the vehicle located furthest toward the vehicle traveling direction in the image (lowest in the image) can be reliably recognized, so it is suitable as the base point.

[7] The traffic volume measuring method according to [5], wherein the specific part is a pair of headlights, and in the detection step, paired high-luminance parts having features corresponding to a pair of headlights are detected as headlight pair candidates, and among the detected candidates, the headlight pair candidate located furthest toward the vehicle traveling direction in the image is detected as the base point.

In the above invention, when a vehicle is photographed from the front, its headlights are always located further toward the vehicle traveling direction in the image than lamps on the roof or sides of a large vehicle, so using them as the base point makes it possible to correctly recognize where the vehicle is and fit the three-dimensional shape.

[8] The traffic volume measuring method according to [5], comprising a day/night determination step of determining whether it is day or night, wherein
when the result of the day/night determination step is day, the specific part is the front end of the vehicle, and in the detection step the front end of the vehicle located furthest toward the vehicle traveling direction in the image is detected as the base point; and
when the result of the day/night determination step is night, the specific part is a pair of headlights, and in the detection step paired high-luminance parts having features corresponding to a pair of headlights are detected as headlight pair candidates, and among the detected candidates the headlight pair candidate located furthest toward the vehicle traveling direction in the image is detected as the base point.

In the above invention, day and night are distinguished: in the daytime the outline of the vehicle can be recognized, so the base point is detected with reference to the front end of the vehicle, while at night the base point is detected with reference to a pair of headlights. Day and night can be distinguished, for example, from the ambient illuminance at the time of shooting or from whether the average brightness of the image is above a predetermined threshold.

[9] A traffic volume measuring apparatus for measuring traffic volume by analyzing an image obtained by photographing vehicles traveling on a road with a camera, the apparatus comprising a processing unit that executes the traffic volume measuring method according to any one of [1] to [8] on the image obtained with the camera.

According to the traffic volume measuring method and the traffic volume measuring apparatus of the present invention, the position of a vehicle is detected in the image obtained with the camera, the three-dimensional shape of a predetermined standard vehicle placed on the road as seen from the camera is obtained, this three-dimensional shape is superimposed on the image in accordance with the previously detected vehicle position, and it is determined that one vehicle exists in the range covered by the three-dimensional shape. Consequently, even when a plurality of vehicles are photographed overlapping one another, or when a vehicle at night carries light sources that are easily confused with headlights, the area occupied by each vehicle in the image can be recognized one vehicle at a time and the traffic volume can be measured correctly.

Hereinafter, embodiments of the present invention will be described with reference to the drawings.

FIG. 1 is a block diagram showing the schematic configuration of a traffic volume measuring apparatus 10 according to the present invention. The traffic volume measuring apparatus 10 measures traffic volume by analyzing images obtained by photographing road conditions with a camera 11, and includes a processing unit 12 that processes the images obtained by the camera 11. The camera 11 is fixed to a pole or the like standing at the roadside and is installed so as to photograph vehicles traveling on the road diagonally from the front and obtain images such as the one illustrated in FIG. 2.

The processing unit 12 includes an image acquisition unit 13, an image memory 14, a preprocessing unit 15, a vehicle position / vehicle type determination unit 16, a coordinate conversion unit 18, a vehicle region separation/integration unit 19, a vehicle tracking unit 21, and a traffic volume measuring unit 22.

The camera 11 captures several tens of image frames per second, and the image acquisition unit 13 takes in the image data from the camera 11. The image memory 14 temporarily stores the image data taken in by the image acquisition unit 13. The preprocessing unit 15 determines day or night on the basis of the ambient brightness at the time of shooting; when it determines daytime, it extracts moving objects from the image stored in the image memory 14 as vehicle regions, and when it determines nighttime, it extracts high-luminance parts having at least a certain brightness from the image stored in the image memory 14. The ambient brightness may be detected with a separate sensor, or may be judged from the average brightness of the image captured by the camera 11 or from the brightness of a predetermined specific area (for example, part of the roadside). Techniques such as frame differencing or background subtraction are used to extract moving objects.
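As a rough illustration of these two extraction modes, the following Python sketch uses background subtraction for the daytime case and simple luminance thresholding for the nighttime case; the threshold values and the NumPy-based masks are assumptions for illustration, since the specification fixes neither concrete values nor an implementation.

```python
import numpy as np

def extract_vehicle_regions(frame, background, diff_threshold=30):
    """Daytime: mark pixels that differ from the background image (background subtraction)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > diff_threshold          # boolean mask of moving-object (vehicle) pixels

def extract_high_luminance(frame, luminance_threshold=200):
    """Nighttime: mark pixels brighter than a fixed luminance threshold (headlights, lamps)."""
    return frame > luminance_threshold    # boolean mask of high-luminance pixels

# Example with dummy 8-bit grayscale frames
background = np.zeros((240, 320), dtype=np.uint8)
frame = background.copy()
frame[100:140, 50:120] = 180              # a bright moving blob
day_mask = extract_vehicle_regions(frame, background)
night_mask = extract_high_luminance(frame)
print(day_mask.sum(), night_mask.sum())
```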

The vehicle position / vehicle type determination unit 16 detects the position of a vehicle by detecting, as a base point, the position where a specific part of the vehicle exists in the image after the preprocessing unit 15 has extracted the vehicle regions or high-luminance parts, and also determines the vehicle type of the vehicle whose position (base point) has been detected. Here, when the preprocessing unit 15 has determined daytime, the front end of the vehicle is used as the specific part, and when it has determined nighttime, a pair of headlights is used as the specific part. The vehicle width and vehicle length are used to determine the vehicle type.

The coordinate conversion unit 18 places a predetermined standard vehicle along the road so that its specific part (the front end of the vehicle or the pair of headlights) is located at the position in the real world (the actual world photographed by the camera 11) corresponding to the base point detected by the vehicle position / vehicle type determination unit 16, and calculates the three-dimensional shape of that standard vehicle as seen from the camera 11.

Several kinds of standard vehicles are set according to vehicle size; here, two types are used, a large vehicle and a small vehicle. A vehicle modeled as a rectangular parallelepiped is used as the standard vehicle. The size of the large standard vehicle may, for example, be determined according to the maximum vehicle width and other limits defined by law. The coordinate conversion unit 18 calculates the three-dimensional shape of the standard vehicle of the type indicated by the vehicle type determination result of the vehicle position / vehicle type determination unit 16. The calculation uses the attitude information and angle-of-view information of the camera 11, which are set in advance.
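A minimal sketch of how the per-type standard vehicle data could be held, assuming illustrative cuboid dimensions; the specification only states that width, length, and height are stored per type, without giving numbers.

```python
from dataclasses import dataclass

@dataclass
class StandardVehicle:
    width_m: float   # vehicle width in metres
    length_m: float  # vehicle length
    height_m: float  # vehicle height

# Illustrative cuboid dimensions only; the patent does not give concrete numbers.
STANDARD_VEHICLES = {
    "small": StandardVehicle(width_m=1.7, length_m=4.5, height_m=1.5),
    "large": StandardVehicle(width_m=2.5, length_m=12.0, height_m=3.8),
}

def standard_vehicle_for(vehicle_type: str) -> StandardVehicle:
    """Return the cuboid model matching the vehicle-type decision (small / large)."""
    return STANDARD_VEHICLES[vehicle_type]
```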

The vehicle region separation/integration unit 19 superimposes the three-dimensional shape calculated by the coordinate conversion unit 18 on the image used for detecting the base point, and determines that one vehicle exists in the range covered by the three-dimensional shape. After the three-dimensional shape has been superimposed on the image in this way, the vehicle position / vehicle type determination unit 16, the coordinate conversion unit 18, and the vehicle region separation/integration unit 19 further determine whether another vehicle exists in the remaining region (the outer region) obtained by excluding the range covered by the three-dimensional shape.

The vehicle tracking unit 21 recognizes and tracks moving vehicles, and the traffic volume measuring unit 22 counts the number of vehicles passing along the road on the basis of the determination results of the vehicle region separation/integration unit 19 and the tracking status of the vehicle tracking unit 21, and measures quantities such as the traffic volume per unit time.

Next, the formulas for converting between coordinates on the image photographed by the camera 11 (camera coordinates) and coordinates in the real world (world coordinates) are described.

FIG. 3 shows the relationship between camera coordinates and world coordinates. In this figure, the position, attitude, and characteristics of the camera are defined as follows.
Lens focal length: λ
Camera platform position: W0 = (X0, Y0, Z0)
Vector from the camera platform to the imaging plane: r = (r1, r2, r3)
Rotations of the camera attitude about the X, Y, and Z axes: α, β, and θ, respectively.
The point in the camera coordinate system (x, y) onto which a position (X, Y, Z) in the world coordinate system is projected can then be obtained from equation (1) below.

[Equation (1) appears as an image in the original publication.]

Conversely, the position in the world coordinate system (X, Y, Z) corresponding to a point (x, y) in the camera coordinate system can be obtained from equation (2) below by treating Z in the world coordinate system as a constant.

[Equation (2) appears as an image in the original publication.]
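The formulas themselves appear only as images in the publication. Purely as an illustrative stand-in, the following Python sketch implements a generic pinhole model built from the quantities defined above (focal length λ, camera position W0, rotations α, β, θ about the world axes); it is not a reproduction of the patent's equations (1) and (2), and the rotation order, axis conventions, and example numbers are assumptions.

```python
import numpy as np

def rotation_matrix(alpha, beta, theta):
    """Combined rotation about the X, Y and Z axes (angles in radians)."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha),  np.cos(alpha)]])
    ry = np.array([[ np.cos(beta), 0, np.sin(beta)],
                   [0, 1, 0],
                   [-np.sin(beta), 0, np.cos(beta)]])
    rz = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

def world_to_camera(P, W0, R, lam):
    """Project a world point P to image coordinates (x, y) with a pinhole model (cf. equation (1))."""
    pc = R @ (np.asarray(P, float) - np.asarray(W0, float))   # point in the camera frame
    return lam * pc[0] / pc[2], lam * pc[1] / pc[2]           # perspective division

def camera_to_world(x, y, W0, R, lam, Z):
    """Back-project an image point (x, y) onto the world plane of constant height Z (cf. equation (2))."""
    d = R.T @ np.array([x, y, lam])                            # viewing ray in the world frame
    t = (Z - W0[2]) / d[2]                                     # scale so the ray reaches height Z
    return np.asarray(W0, float) + t * d

# Round-trip check: project a world point and recover it on its known height plane
W0 = [0.0, 0.0, 6.0]                                           # camera 6 m above the road (assumed)
R = rotation_matrix(np.deg2rad(30), 0.0, 0.0)
P = np.array([2.0, 25.0, 0.0])
x, y = world_to_camera(P, W0, R, lam=800.0)
print(np.allclose(camera_to_world(x, y, W0, R, lam=800.0, Z=0.0), P))  # True
```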

FIG. 4 shows the overall flow of the processing performed by the traffic volume measuring apparatus 10. First, an image is captured with the camera 11 (step S101). Next, it is determined whether the ambient illuminance at the time of shooting is at or above a predetermined threshold (step S102). When the brightness is at or above the threshold and daytime is determined (step S102; high), the feature quantities of the entire vehicle can be obtained (the outline of the vehicle can be recognized), so the vehicle regions are extracted and a vehicle separation process is performed to separate overlapping vehicles such as parallel-running vehicles (step S103). On the other hand, when the brightness is below the threshold and nighttime is determined (step S102; low), high-luminance parts such as headlights and lamps on the vehicle sides are extracted, and a light integration process is performed to integrate the extracted high-luminance parts into vehicle units (step S104). The vehicle separation process and the light integration process are described in detail later.

After the region in which each vehicle exists in the image has been recognized by the vehicle separation process or the light integration process, a vehicle tracking process (step S105) and a traffic volume measuring process (step S106) are performed. By repeating this processing, the traffic volume of the vehicles passing along the road and the types of the passing vehicles are measured.

FIG. 5 shows the flow of the vehicle separation process performed in S103 of FIG. 4, and FIGS. 6 and 7 show images from the course of this process. In this example, it is assumed that the image 30 shown in FIG. 2 has been captured by the camera 11, and the process of separating the two vehicles A and B photographed overlapping on this screen is described. The preprocessing unit 15 extracts moving objects from the image 30 taken in from the camera 11 (FIG. 5, step S201). FIG. 6(a) shows an extracted image 31 obtained by extracting the moving object (vehicle region 32) from the image 30 of FIG. 2.

Next, as shown in FIG. 6(b), the vehicle position / vehicle type determination unit 16 performs a vertical (y-direction) projection on the lower part of the extracted vehicle region 32 (the part below the broken line L in the figure) (step S202) to obtain the distribution 35 shown in FIG. 6(c). The distribution 35 indicates, for each coordinate position in the horizontal direction (x direction) of the image, the number of pixels of the vehicle region 32 present below the broken line L (the projected pixel count). In FIG. 6(c), the sum of the projected pixel counts within a range W corresponding to the vehicle width (the area of the hatched part in the figure) is computed while the position of the width-W window is moved in the horizontal direction (x direction) (steps S203, S204), and the position of the range W at which the sum of the projected pixel counts is largest is taken as the position where the front end of the vehicle exists (step S205). In the example shown in FIG. 6(c), the position where the right end of the front end of the vehicle exists is detected as the base point K.
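A minimal NumPy sketch of steps S202 to S205: project the lower part of the vehicle-region mask onto the x axis and slide a window of the assumed vehicle width; following the FIG. 6(c) example, the right end of the best window is returned as the base point K. The mask layout and the width value are assumptions for illustration.

```python
import numpy as np

def find_base_point(vehicle_mask, line_y, width_px):
    """Steps S202-S205: vertical projection below row line_y, then a sliding
    window of width_px columns; the window position with the largest projected
    pixel count marks where the vehicle front end exists."""
    lower = vehicle_mask[line_y:, :]                    # part below the broken line L
    projection = lower.sum(axis=0)                      # projected pixel count per x column
    window_sums = np.convolve(projection, np.ones(width_px, dtype=int), mode="valid")
    left = int(np.argmax(window_sums))                  # left edge of the best window
    return left + width_px - 1                          # base point K: right end of the front part

# Dummy example: a blob occupying columns 40..99 below the line
mask = np.zeros((120, 200), dtype=int)
mask[80:, 40:100] = 1
print(find_base_point(mask, line_y=60, width_px=60))    # -> 99
```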

Further, the vehicle type is determined from the size (vehicle width, vehicle length, and so on) of the vehicle region 32 in the vicinity of the detected vehicle position (step S206). This determination is made in several classes (here, two classes: small vehicle and large vehicle).

Next, the coordinate conversion unit 18 places the standard vehicle corresponding to the vehicle type indicated by the determination result of the vehicle position / vehicle type determination unit 16 so that its specific part is located at the real-world position corresponding to the base point, and calculates the three-dimensional shape (three-dimensional rectangular parallelepiped model) of the standard vehicle as seen from the camera 11 (step S207). Here, real-world vehicle width, length, and height data are determined and stored in advance for each standard vehicle type, and the standard vehicle data corresponding to the determined vehicle type are used for the calculation.
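A sketch of step S207 under stated assumptions: place a cuboid with the stored standard-vehicle dimensions so that its front corner sits at the real-world position of the base point and project its eight corners into the image. The `project_to_image` callable stands in for the world-to-camera conversion of equation (1), and the corner convention and example numbers are assumptions.

```python
def cuboid_image_region(base_world, size_wlh, project_to_image):
    """Step S207 (sketch): build the cuboid of the standard vehicle with its
    front-right corner on the road at base_world and project all eight corners
    into the image; the projected corners outline the range covered by the vehicle."""
    bx, by, bz = base_world
    w, l, h = size_wlh                                   # stored real-world width, length, height
    corners = [(bx - dx, by + dy, bz + dz)               # extend left, backwards and upwards (assumed)
               for dx in (0.0, w) for dy in (0.0, l) for dz in (0.0, h)]
    return [project_to_image(c) for c in corners]

# Example call with a dummy orthographic projection standing in for equation (1)
outline = cuboid_image_region(base_world=(3.0, 20.0, 0.0),
                              size_wlh=(2.5, 12.0, 3.8),  # illustrative "large vehicle" size
                              project_to_image=lambda p: (40.0 * p[0], 40.0 * p[1]))
```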

As shown in FIG. 7(a), the vehicle region separation/integration unit 19 superimposes the three-dimensional shape 33 calculated by the coordinate conversion unit 18 on the image used for detecting the base point K, determines that one vehicle exists in the range covered by this three-dimensional shape 33, and determines that the outside of the range covered by the three-dimensional shape 33 is the range in which other vehicles can exist (step S208). It then determines whether an extracted part that may be a vehicle exists outside the range covered by the three-dimensional shape 33 (step S209). Here, as shown in FIG. 7(b), the range covered by the three-dimensional shape 33 is deleted from the image, and the presence or absence of another vehicle is judged for the vehicle region 34 remaining after the deletion. For example, whether another vehicle exists in the vehicle region 34 is determined from the area, shape, and so on of the vehicle region 34 remaining after the deletion.
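A sketch of steps S208 and S209 under simple assumptions: the range covered by the projected shape is approximated here by its bounding box, which is deleted from the vehicle mask, and the remaining area is compared with a threshold to decide whether another vehicle may exist; the specification also mentions judging by shape, which is not modeled here.

```python
import numpy as np

def remaining_vehicle_region(vehicle_mask, shape_corners_2d, min_area_px=500):
    """Steps S208-S209 (simplified): blank out the bounding box of the projected
    3-D shape and report whether enough foreground remains for another vehicle."""
    xs = [int(x) for x, _ in shape_corners_2d]
    ys = [int(y) for _, y in shape_corners_2d]
    remaining = vehicle_mask.copy()
    remaining[min(ys):max(ys) + 1, min(xs):max(xs) + 1] = False   # delete the covered range
    return remaining, remaining.sum() >= min_area_px              # True -> repeat S202-S208

# Dummy example: two blobs, one covered by the projected shape
mask = np.zeros((100, 160), dtype=bool)
mask[40:90, 10:70] = True                               # vehicle A (covered below)
mask[30:60, 90:140] = True                              # vehicle B (left over)
corners = [(5, 35), (75, 35), (75, 95), (5, 95)]        # projected shape corners (x, y)
rest, another = remaining_vehicle_region(mask, corners)
print(another)                                          # -> True: a second vehicle may exist
```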

When it is determined that another vehicle exists (step S209; Y), the process returns to step S202 and the same processing is performed on this vehicle region 34 (S202 to S208); when it is determined that there are no more vehicles (step S209; N), the vehicle separation process ends.

Through this processing, a plurality of vehicles photographed overlapping on the screen can be separated and detected one by one, improving the accuracy of traffic volume measurement.

Next, the light integration process is described.

FIG. 8 shows the flow of the light integration process, and FIGS. 9 and 10 show examples of images from the course of that process. FIG. 9(a) shows an example of an image of road conditions photographed with the camera 11 at night. In FIG. 9(a), the vehicle C and the background road are drawn clearly for convenience of explanation, but in reality the ambient illuminance is low and outline information of the vehicle C is difficult to obtain. Attention is therefore paid to high-luminance parts such as headlights, and the vehicle position is detected using a pair of headlights as the specific part of the vehicle.

First, high-luminance parts are extracted from the captured image (FIG. 8: step S301, FIG. 9(b)). In FIG. 9(b), in addition to the headlights, lamps installed on the roof and sides of the vehicle C are extracted as high-luminance parts.

Next, paired high-luminance parts having features corresponding to a pair of headlights are searched for among the extracted high-luminance parts and detected as candidates for a pair of headlights (hereinafter called a light pair) (step S302). Light pair candidates are pairs for which the distances between the lights in the X direction and in the Y direction in world coordinates are at or below predetermined thresholds. In this way, pairs of high-luminance parts that are separated by no more than a predetermined light spacing and are aligned roughly horizontally are detected as light pair candidates. For the X direction, a minimum and a maximum of the X-direction distance in world coordinates may be set, and pairs falling between the minimum and the maximum extracted as light pair candidates; this makes it possible to exclude pairs of high-luminance parts that are too close together to be headlights.
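A minimal sketch of step S302, assuming that each detected lamp has already been converted to a world-coordinate point (for example with the camera-to-world conversion at road height): lamps are paired when their X spacing lies within a minimum-maximum band and their Y offset is small. The concrete threshold values are placeholders, not values from the specification.

```python
from itertools import combinations

# Assumed world-coordinate thresholds in metres (the patent leaves the values open)
MIN_PAIR_SPACING_X = 0.8     # pairs closer than this are too narrow to be headlights
MAX_PAIR_SPACING_X = 2.6     # roughly the maximum vehicle width
MAX_PAIR_OFFSET_Y = 0.5      # lights of one pair lie at nearly the same distance along the road

def light_pair_candidates(lights_world):
    """Step S302: return index pairs of lights whose world-coordinate spacing
    matches a headlight pair (X spacing within the band, small Y offset)."""
    pairs = []
    for (i, a), (j, b) in combinations(enumerate(lights_world), 2):
        dx = abs(a[0] - b[0])
        dy = abs(a[1] - b[1])
        if MIN_PAIR_SPACING_X <= dx <= MAX_PAIR_SPACING_X and dy <= MAX_PAIR_OFFSET_Y:
            pairs.append((i, j))
    return pairs

# Dummy lamps (X across the road, Y along the road): two headlights plus a roof lamp
lights = [(0.2, 20.0), (1.8, 20.1), (1.0, 22.5)]
print(light_pair_candidates(lights))   # -> [(0, 1)]
```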

In FIG. 10(a), the parts detected as light pair candidates are shown enclosed by broken lines. In this example, three light pair candidates P1, P2, and P3 have been found. Of these light pair candidates P1 to P3, the light pair P1 whose camera coordinates are lowest (furthest toward the vehicle traveling direction in the image) is selected as the base point for determining the vehicle position.

The vehicle type is determined from the X-direction distance of the light pair P1 selected as the base point, that is, from the vehicle width (step S303). The three-dimensional shape of the standard vehicle corresponding to that vehicle type is then calculated (step S304). That is, the standard vehicle of the type indicated by the determination result is placed along the road so that its specific part (in this example, the pair of headlights) is located at the position in the real world corresponding to the previously detected base point, and the three-dimensional shape visible when this standard vehicle is viewed from the camera 11 is calculated.

As shown in FIG. 10(b), the vehicle region separation/integration unit 19 superimposes the calculated three-dimensional shape 41 on the image used for detecting the base point P1 and determines that one vehicle exists in the range covered by the three-dimensional shape 41. That is, the light pair candidates P2 and P3 and the other high-luminance parts captured inside the three-dimensional shape 41 are all judged to belong to the same vehicle as the light pair P1. As a result, the light pair candidates P2 and P3 are excluded from the base point candidates for detecting other vehicles. In this example, the many high-luminance parts present on the roof, sides, and so on of the large vehicle C all fall inside the three-dimensional shape (three-dimensional rectangular parallelepiped model) 41 of the large vehicle, so all of these high-luminance parts are integrated and counted as one vehicle.
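A sketch of the selection and integration logic around this step, assuming image-coordinate light-pair centres and approximating the range covered by the fitted three-dimensional shape with a bounding box: the lowest pair in the image becomes the base point, and candidates falling inside the covered range are absorbed into the same vehicle.

```python
def integrate_lights(pair_candidates, covered_box):
    """Keep only light pair candidates outside the image range covered by the
    fitted 3-D shape; candidates inside it belong to the already counted vehicle."""
    x0, y0, x1, y1 = covered_box                        # bounding box of the projected shape
    def inside(p):
        x, y = p
        return x0 <= x <= x1 and y0 <= y <= y1
    return [pair for pair in pair_candidates
            if not (inside(pair[0]) and inside(pair[1]))]

# Image-coordinate centres of the light pairs P1 (headlights), P2 and P3 (roof/side lamps)
p1 = ((120, 300), (200, 300))
p2 = ((130, 180), (190, 180))
p3 = ((125, 220), (195, 220))
base = max([p1, p2, p3], key=lambda pair: pair[0][1])   # lowest pair in the image -> base point
remaining = integrate_lights([p2, p3], covered_box=(100, 150, 220, 320))
print(base is p1, remaining)                            # -> True []  (P2 and P3 are absorbed)
```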

Thereafter, it is determined whether there are other light pair candidates outside the range covered by the three-dimensional shape 41 (step S306). When there are other light pair candidates (step S306; Y), the process returns to step S304, and detection of the next vehicle position and fitting of a three-dimensional shape are performed for those other light pair candidates (steps S304, S305). When no other light pairs remain (step S306; N), the process ends.

By performing such a light integration process, a single vehicle is prevented from being mistakenly counted as a plurality of vehicles in nighttime measurement.

Although embodiments of the present invention have been described above with reference to the drawings, the specific configuration is not limited to that shown in the embodiments, and changes and additions within a range not departing from the gist of the present invention are also included in the present invention.

For example, standard vehicles of types other than large and small vehicles may be provided, and a long trailer or the like may be set. Although a rectangular parallelepiped model was used as the standard vehicle, the shape of the standard vehicle is not limited to this and may be set appropriately according to the shape of the target vehicles.

Detection of the vehicle position is not limited to detection based on a specific part such as the front end of the vehicle or a headlight pair. In the embodiment, the three-dimensional shape of the standard vehicle is calculated according to the position of the base point (vehicle position) detected in the image, but a fixed three-dimensional shape may be used regardless of the detected base point position. For example, when vehicles are photographed at a roughly constant position in the image, the three-dimensional shape of the standard vehicle corresponding to that position may be calculated and stored in advance and superimposed on the image. Alternatively, a plurality of representative base point positions may be set, the three-dimensional shape corresponding to each base point calculated and stored in advance, and the three-dimensional shape corresponding to the base point actually detected in the image selected from among them and used. This reduces the processing load compared with calculating the three-dimensional shape every time.

In the embodiment, the camera 11 is installed so as to photograph vehicles diagonally from the front, but the installation of the camera 11 is not limited to this, and the present invention also works effectively when the camera 11 is installed so as to photograph vehicles from the front or to photograph passing vehicles from behind. For example, even when the camera 11 is installed so as to photograph vehicles from the front, a preceding vehicle and a following vehicle traveling in the same lane at a short inter-vehicle distance may overlap in the image. In this case, the present invention identifies the range occupied by one vehicle (for example, the front one) by fitting the three-dimensional shape, so the front and rear vehicles can be recognized separately.

FIG. 1 is a block diagram showing the schematic configuration of a traffic volume measuring apparatus according to an embodiment of the present invention.
FIG. 2 is an explanatory diagram showing an example of an image photographed by the camera of the traffic volume measuring apparatus according to the embodiment of the present invention.
FIG. 3 is an explanatory diagram showing the relationship between camera coordinates and world coordinates.
FIG. 4 is a flowchart showing the overall flow of the processing performed by the traffic volume measuring apparatus according to the embodiment of the present invention.
FIG. 5 is a flowchart showing the flow of the vehicle separation process.
FIG. 6 is an explanatory diagram illustrating images from the vehicle separation process (vehicle region extraction, base point detection).
FIG. 7 is an explanatory diagram illustrating images from the vehicle separation process (superimposition of the three-dimensional shape, region outside the three-dimensional shape).
FIG. 8 is a flowchart showing the flow of the light integration process.
FIG. 9 is an explanatory diagram illustrating images from the light integration process (captured image, high-luminance part extraction image).
FIG. 10 is an explanatory diagram illustrating images from the light integration process (light pair candidate detection, superimposition of the three-dimensional shape).

Explanation of symbols

10 … Traffic volume measuring apparatus
11 … Camera
12 … Processing unit
13 … Image acquisition unit
14 … Image memory
15 … Preprocessing unit
16 … Vehicle position / vehicle type determination unit
18 … Coordinate conversion unit
19 … Vehicle region separation/integration unit
21 … Vehicle tracking unit
22 … Traffic volume measuring unit
32 … Vehicle region
33, 41 … Calculated three-dimensional shape
34 … Vehicle region remaining after deletion
A, B, C … Vehicles
P1, P2, P3 … Light pair candidates
K … Base point

Claims (9)

1. A traffic volume measuring method for measuring traffic volume by analyzing an image obtained by photographing vehicles traveling on a road with a camera, the method comprising:
a detection step of detecting the position of a vehicle on the image obtained with the camera;
a superimposing step of obtaining the three-dimensional shape of a predetermined standard vehicle placed on the road as seen from the camera, and superimposing the three-dimensional shape on the image in accordance with the detected position of the vehicle; and
a determination step of determining that one vehicle exists in the range covered by the three-dimensional shape on the image.
2. The traffic volume measuring method according to claim 1, wherein, in the superimposing step, the three-dimensional shape is calculated for the standard vehicle placed at the position in the real world corresponding to the position of the vehicle detected in the image, as seen from the camera.
3. The traffic volume measuring method according to claim 1 or 2, wherein it is determined whether another vehicle exists in the region outside the range covered by the three-dimensional shape on the image.
4. The traffic volume measuring method according to any one of claims 1 to 3, wherein a plurality of types of standard vehicles of different vehicle classes, such as a large vehicle and a small vehicle, are set as the standard vehicle; a vehicle type determination step of determining the type of the vehicle whose position was detected in the detection step is further provided; and the three-dimensional shape of the standard vehicle corresponding to the vehicle type indicated by the result of the vehicle type determination step is superimposed on the image.
5. The traffic volume measuring method according to any one of claims 1 to 4, wherein, in the detection step, the position of the vehicle is detected by detecting a base point at which a specific part of the vehicle exists in the image.
6. The traffic volume measuring method according to claim 5, wherein the specific part is the front end of the vehicle, and in the detection step, the front end of the vehicle located furthest toward the vehicle traveling direction in the image is detected as the base point.
7. The traffic volume measuring method according to claim 5, wherein the specific part is a pair of headlights, and in the detection step, paired high-luminance parts having features corresponding to a pair of headlights are detected as headlight pair candidates, and among the detected candidates, the headlight pair candidate located furthest toward the vehicle traveling direction in the image is detected as the base point.
8. The traffic volume measuring method according to claim 5, comprising a day/night determination step of determining whether it is day or night, wherein
when the result of the day/night determination step is day, the specific part is the front end of the vehicle, and in the detection step the front end of the vehicle located furthest toward the vehicle traveling direction in the image is detected as the base point; and
when the result of the day/night determination step is night, the specific part is a pair of headlights, and in the detection step paired high-luminance parts having features corresponding to a pair of headlights are detected as headlight pair candidates, and among the detected candidates the headlight pair candidate located furthest toward the vehicle traveling direction in the image is detected as the base point.
9. A traffic volume measuring apparatus for measuring traffic volume by analyzing an image obtained by photographing vehicles traveling on a road with a camera, the apparatus comprising a processing unit that executes the traffic volume measuring method according to any one of claims 1 to 8 on the image obtained with the camera.
JP2006228340A 2006-08-24 2006-08-24 Traffic measuring method and traffic measuring device Active JP5105400B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2006228340A JP5105400B2 (en) 2006-08-24 2006-08-24 Traffic measuring method and traffic measuring device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2006228340A JP5105400B2 (en) 2006-08-24 2006-08-24 Traffic measuring method and traffic measuring device

Publications (2)

Publication Number Publication Date
JP2008052517A true JP2008052517A (en) 2008-03-06
JP5105400B2 JP5105400B2 (en) 2012-12-26

Family

ID=39236517

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2006228340A Active JP5105400B2 (en) 2006-08-24 2006-08-24 Traffic measuring method and traffic measuring device

Country Status (1)

Country Link
JP (1) JP5105400B2 (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0997335A (en) * 1995-09-29 1997-04-08 Hitachi Ltd Vehicle recognition device and traffic flow measuring instrument
JP2002190023A (en) * 2000-12-21 2002-07-05 Toshiba Corp Device and method for discriminating car model, and storage medium storing car model discriminating program readable in computer

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009245042A (en) * 2008-03-31 2009-10-22 Hitachi Ltd Traffic flow measurement device and program
JP2010041322A (en) * 2008-08-04 2010-02-18 Sumitomo Electric Ind Ltd Mobile object identification device, image processing apparatus, computer program and method of specifying optical axis direction
JP2012037965A (en) * 2010-08-04 2012-02-23 Toyota Motor Corp Inter-vehicle distance detecting device and inter-vehicle distance detecting method
JP2012037964A (en) * 2010-08-04 2012-02-23 Toyota Motor Corp Vehicle information acquisition device and vehicle information acquisition method
JP2013025489A (en) * 2011-07-19 2013-02-04 Sumitomo Electric Ind Ltd Moving body detecting apparatus, moving body detecting system, and computer program
JP2013134667A (en) * 2011-12-27 2013-07-08 Toshiba Corp Vehicle detection device
JP2016143264A (en) * 2015-02-03 2016-08-08 富士重工業株式会社 Outside of car environment recognizing device
CN105427626B (en) * 2015-12-19 2018-03-02 长安大学 A kind of statistical method of traffic flow based on video analysis
CN105427626A (en) * 2015-12-19 2016-03-23 长安大学 Vehicle flow statistics method based on video analysis
JP2020077367A (en) * 2018-08-31 2020-05-21 バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド Intelligent roadside unit and information processing method of the same
US11145194B2 (en) 2018-08-31 2021-10-12 Baidu Online Network Technology (Beijing) Co., Ltd. Smart roadside unit and method for processing information by smart roadside unit
JP7077282B2 (en) 2018-08-31 2022-05-30 アポロ インテリジェント ドライビング テクノロジー(ペキン)カンパニー リミテッド Intelligent roadside unit and its information processing method
KR101995012B1 (en) * 2018-09-03 2019-07-02 렉스젠(주) Apparatus for detecting vehicle and method thereof
CN110969055A (en) * 2018-09-29 2020-04-07 百度在线网络技术(北京)有限公司 Method, apparatus, device and computer-readable storage medium for vehicle localization
JP2020057387A (en) * 2018-09-29 2020-04-09 バイドゥ オンライン ネットワーク テクノロジー (ベイジン) カンパニー リミテッド Vehicle positioning method, vehicle positioning device, electronic apparatus, and computer-readable storage medium
US11144770B2 (en) 2018-09-29 2021-10-12 Baidu Online Network Technology (Beijing) Co., Ltd. Method and device for positioning vehicle, device, and computer readable storage medium
CN110969055B (en) * 2018-09-29 2023-12-19 阿波罗智能技术(北京)有限公司 Method, apparatus, device and computer readable storage medium for vehicle positioning
KR20230039865A (en) * 2021-09-14 2023-03-22 (주) 하나텍시스템 Day and night driving detection system using cosssntrol camera
KR102646288B1 (en) * 2021-09-14 2024-03-12 (주) 하나텍시스템 Day and night driving detection system using cosssntrol camera

Also Published As

Publication number Publication date
JP5105400B2 (en) 2012-12-26

Similar Documents

Publication Publication Date Title
JP5105400B2 (en) Traffic measuring method and traffic measuring device
US11087148B2 (en) Barrier and guardrail detection using a single camera
KR100682624B1 (en) Tunnel detecting device for vehicle and light control device for vehicle
US8184859B2 (en) Road marking recognition apparatus and method
EP2608536B1 (en) Method for counting objects and apparatus using a plurality of sensors
CN108197523B (en) Night vehicle detection method and system based on image conversion and contour neighborhood difference
US20070127778A1 (en) Object detecting system and object detecting method
JP2004112144A (en) Front car tracking system and method for tracking front car
JP2003346278A (en) Apparatus and method for measuring queue length of vehicles
CN105825495A (en) Object detection apparatus and object detection method
JP2015090679A (en) Vehicle trajectory extraction method, vehicle region extraction method, vehicle speed estimation method, vehicle trajectory extraction program, vehicle region extraction program, vehicle speed estimation program, vehicle trajectory extraction system, vehicle region extraction system, and vehicle speed estimation system
KR101134857B1 (en) Apparatus and method for detecting a navigation vehicle in day and night according to luminous state
JP4296287B2 (en) Vehicle recognition device
JP5200861B2 (en) Sign judging device and sign judging method
JP2004086417A (en) Method and device for detecting pedestrian on zebra crossing
JP3914447B2 (en) Image-type vehicle detection system and image-type vehicle detection method
JP2014164461A (en) Pedestrian detector and pedestrian detection method
JP3605955B2 (en) Vehicle identification device
JP4947592B2 (en) Vehicle detection device
JPH11205663A (en) Exposure amount controller for image pickup device for vehicle
JP2002163645A (en) Device and method for detecting vehicle
JP4334744B2 (en) Traffic flow measuring device
US9430707B2 (en) Filtering device and environment recognition system
JP7460393B2 (en) Vehicle external environment recognition device
JP7344776B2 (en) Traffic light recognition method and traffic light recognition device

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20090811

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20110406

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110412

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110610

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20110719

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20110913

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20111018

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20111213

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120111

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120207

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20120926

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20120926

R150 Certificate of patent or registration of utility model

Ref document number: 5105400

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20151012

Year of fee payment: 3

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250