JP2020087180A - Movable body tracking method and image processing device used therefor

Info

Publication number
JP2020087180A
Authority
JP
Japan
Prior art keywords
moving body
person
point
visual field
cameras
Prior art date
Legal status
Granted
Application number
JP2018223497A
Other languages
Japanese (ja)
Other versions
JP6831117B2 (en)
Inventor
泰祐 岡部 (Taisuke Okabe)
覚 渋谷 (Satoru Shibuya)
Current Assignee
Giken Trastem Co Ltd
Original Assignee
Giken Trastem Co Ltd
Priority date
Filing date
Publication date
Application filed by Giken Trastem Co Ltd filed Critical Giken Trastem Co Ltd
Priority to JP2018223497A
Publication of JP2020087180A
Application granted
Publication of JP6831117B2
Legal status: Active

Abstract

PROBLEM TO BE SOLVED: To provide a moving body tracking method capable of accurately identifying and tracking a moving body using only the relationship between adjacent cameras, without an integrating server.

SOLUTION: In the overlapping visual field region of two adjacent cameras, common points treated as the same position in each camera's visual field region are provided. When only one common point is provided, that point is taken as a reference point; when a plurality of common points are provided, one of them is taken as the reference point. Each camera collates a moving body detected in its visual field against a moving body model base and represents the moving body position by a representative point for tracking. For moving bodies detected by both adjacent cameras in the overlapping region, those whose real-space distance and direction from the reference point to the moving body position given by the representative point agree are determined to be the same moving body. For moving bodies so determined, the tracking information on one camera side is transmitted to the adjacent camera side, so that the two adjacent cameras cooperate.

SELECTED DRAWING: Figure 3

Description

The present invention relates to a moving body tracking method and an image processing apparatus for tracking the behavior of the same moving body over a wide area by linking a plurality of cameras. Here, the moving body includes various moving bodies such as persons, other animals, and vehicles such as automobiles.

A person detection system that detects a person as a moving body from camera images is used, for example, for counting customers in a store and for analyzing store visitor patterns. As a person detection method, a method is known in which a person model is set in advance and a person is detected by finding an image in the camera image that matches this person model (see, for example, Patent Document 1). Techniques are also known for converting a position in the image coordinate system into a position in a three-dimensional world coordinate system and, conversely, for converting a position in the world coordinate system into the image coordinate system by assuming a height above the ground (see, for example, Patent Document 2).

Patent Document 1: Japanese Patent No. 3406587
Patent Document 2: JP 2006-338123 A
Patent Document 3: JP 2016-162306 A
Patent Document 4: U.S. Patent No. 7,319,479

When a person is tracked over an entire monitoring area using a plurality of cameras, identifying persons (determining that two detections are the same person) based on the distance from positions directly below two adjacent cameras whose mutual distance is known in advance is problematic: the distance between the camera position and the person position becomes long, errors grow because of factors such as camera installation conditions, and person identification can become difficult. Furthermore, to detect a person's movement record by receiving person position information from each camera and mapping it onto one large planar map representing the monitoring area, an overarching server that consolidates the cameras has been required, as in Patent Document 4, for example.

In the above method, the installation angles, orientations, and installation distance of the two adjacent cameras must be measured in advance or kept constant; in practice, measuring camera tilt is especially difficult, and the setup work at installation time is cumbersome.
If the camera installation conditions were measured perfectly, the error in the person distance estimated from the camera image would be small. In reality, however, it is difficult to completely remove error factors such as the camera installation angle and the inclination of the ground, and when the person distance is estimated in the camera image from the distance to the position directly below the camera, the error accumulates and grows as the person position gets farther from that position.

The above problems are not limited to persons as targets; the same applies to various moving bodies such as automobiles.
The present invention has been made in view of the above circumstances, and its object is to provide a moving body tracking method, and an image processing apparatus used therefor, that require no overarching server element and can accurately identify and track a moving body using only the relationship between two adjacent cameras.

The moving body tracking method according to the present invention is a method of tracking a moving body in a monitoring area covered by the visual field regions of a plurality of cameras, in which:
the visual field regions of two adjacent cameras have an overlapping visual field region where they partially overlap;
in the overlapping visual field region, common points treated as the same position between the visual field regions of the two adjacent cameras are placed;
when there is only one common point, that point is taken as a reference point, and when there are a plurality of common points, one of them is taken as the reference point;
each camera collates a moving body detected in its own visual field region against a moving body model base, represents the moving body by a representative point, and tracks it;
for moving bodies detected by each of the two adjacent cameras within the overlapping visual field region, those for which the real-space distance and direction from the reference point to the moving body position represented by the representative point agree are determined to be the same moving body; and
for moving bodies determined to be the same, the tracking information on one camera side is transmitted to the adjacent camera side so that the two adjacent cameras cooperate.

The image processing apparatus according to the present invention is an image processing apparatus commonly used in the above moving body tracking method, having a camera and a processor, the processor comprising:
a common point recording unit that records common points treated as the same position between visual field regions in the overlapping visual field region where the visual field region of its own camera partially overlaps that of an adjacent camera, and that records one such point as a reference point when there is only one common point, or one of the points as the reference point when there are a plurality of common points;
an image processing process unit that collates a moving body detected by its own camera in the visual field region against a moving body model base and represents the moving body by a representative point;
a tracking processing process unit that tracks the moving body captured by the image processing process unit;
a communication process unit that communicates the position information of the tracked moving body with the image processing apparatus of the adjacent camera; and
a coincidence determination process unit that, for moving bodies detected by its own camera and by the adjacent camera within the overlapping visual field region, determines that moving bodies for which the real-space distance and direction from the reference point to the moving body position represented by the representative point agree are the same moving body.

According to the present invention, no overarching server element is required, and since the system operates simply by registering common points, setup is easy and a moving body can be accurately identified and tracked using only the relationship between two adjacent cameras. In addition, the number of cameras linked for moving body tracking can be increased with virtually no limit, so large-scale moving body tracking over a wide area becomes possible.

FIG. 1 is an overall configuration diagram showing the person tracking system of the embodiment.
FIG. 2 is a schematic diagram showing a state in which two common points and one midpoint are placed in the overlapping visual field region of two cameras.
FIG. 3 is a schematic diagram showing each visual field region with the distance and direction between the reference point and the person position, as a method of identifying a person recognized in the overlapping visual field region of two cameras.
FIG. 4 is a schematic diagram showing each visual field region in the overlapping visual field region, for explaining a method of calculating a midpoint at ground height 0 from common points having a ground height.
FIG. 5 is a block diagram showing the configuration of the image processing apparatus in the embodiment.
FIG. 6 is a schematic diagram showing a state in which the coordinate systems of two camera images have an angle difference.

Embodiments of the present invention are described below with reference to the accompanying drawings.
In this embodiment, a person is the target moving body. However, the present invention is not limited to persons and can target various moving bodies, such as animals other than humans and vehicles such as automobiles.
As shown in FIGS. 1 and 2, a person tracking system 1, as a moving body tracking system, has a plurality of image processing apparatuses 2 (2a, 2b, 2c, etc.), each including a camera 11 (11a, 11b, 11c, etc.), and these image processing apparatuses 2 are communicatively connected by a network 3 such as a LAN. A totaling device 4 is also connected to the network 3; the person tracking information of a person 5 tracked by the image processing apparatuses 2 in cooperation is finally output to it. Each image processing apparatus 2 includes a camera 11 and a processor 12. The image processing apparatuses 2 have the same structure, run the same program, and cooperate to perform person tracking.

It is assumed that camera images are synchronized between adjacent image processing apparatuses 2, so that the simultaneity of the camera images is ensured. The network 3 is therefore preferably one capable of high-speed communication, such as Ethernet (registered trademark).

The camera 11 is imaging means such as a CMOS camera; it constantly images the monitoring area 6 within its visual field region 13 (13a, 13b, 13c, etc.) and continuously sends the camera image (visual field region 13) as a video signal to the processor 12 (12a, 12b, 12c, etc.). Each camera 11 is installed on a ceiling, wall, or the like so as to image the floor surface of the monitoring area 6. The cameras 11 are installed so that the visual field regions 13 (13a, 13b, 13c, etc.) of two adjacent cameras 11 have an overlapping visual field region 30 (30ab, 30bc, etc.) where they partially overlap. In FIG. 1, the cameras 11 are installed so that the visual field regions 13 adjoin in a single horizontal row, but they may also be installed so that other visual field regions 13 adjoin at arbitrary positions in the four directions around a visual field region 13.

The processor 12 captures and tracks the person 5 in the visual field region 13 imaged by the camera 11, transmits person tracking information of the same person to the adjacent image processing apparatus 2, and links the tracking of the person 5 between the two adjacent image processing apparatuses 2. When the person 5 moves from one visual field region 13 to the other, the processor 12 performs person identification, determining whether the detections are the same person. In this person identification method, for persons 5 imaged on each camera 11 side in the overlapping visual field region 30 where the visual field regions 13 of the two adjacent cameras 11 partially overlap, persons 5a and 5b whose actual distances dp1, dp2 and directions α1, α2 from the reference point 18 to the person position 51 agree are determined to be the same person, as shown in FIG. 3. Details of the person identification method are described later.

As shown in FIG. 5, the processor 12 includes a common point recording unit 21, a person tracking history recording unit 22, an image processing process unit 23, a tracking processing process unit 24, a coordinate conversion process unit 25, a communication process unit 26, a coincidence determination process unit 27, and so on. The person tracking method is realized by this processor 12.

The common point recording unit 21 places two marks by markers in the overlapping visual field region 30 where the visual field regions 13 of two adjacent cameras 11 partially overlap, and records these two points as common points 15 and 16. These two common points 15 and 16 are treated as the same position in the visual field regions 13 of the two adjacent cameras 11. The common point recording unit 21 also records, as a common point, a midpoint 17 placed at the real-space intermediate position between the two common points 15 and 16. The midpoint 17 is likewise treated as the same position in each visual field region 13 of the two adjacent cameras 11. That is, the common points 15 and 16 are points determined using markers as marks, and the midpoint 17 is a point determined from those marked points, but all of the points 15 to 17 serve as common points treated as the same position in the visual field regions 13 of the two adjacent cameras 11. From the common points 15 and 16 and the midpoint 17 registered as a common point, the point closest to the person position 51 of the person 5 represented by the representative point 14 is selected and used as the reference point 18 when performing person identification within the overlapping visual field region 30. At the time of person identification, the actual distances dp1, dp2 and the directions α1, α2 from this reference point 18 to the person position 51 are obtained. Compared with, for example, identifying a person from the actual distance between the position directly below a camera and the person position, the distances involved are shorter, so the distance error is also smaller; moreover, since the registered common points 15 to 17 are defined as the same point in the visual field regions 13 of the two adjacent cameras 11, the reference point 18 is effectively calibrated and the error can be reduced. As a result, person identification can be performed accurately. Since the midpoint 17 registered as a common point is located near the center of the overlapping visual field region 30, the midpoint 17 may also be used as the reference point 18; in that case, the distances dp1 and dp2 to the person position 51 within the overlapping visual field region 30 are easily kept relatively short, and the determination of the reference point 18 is also simple.
The two common points 15 and 16 can be placed at arbitrary positions in the overlapping visual field region 30, but in a rectangular overlapping visual field region 30 it is preferable to place one on each short side, near the center of the short side. In this way, the three points, namely the two common points 15 and 16 and the one midpoint 17, can be distributed as widely as possible within the overlapping visual field region 30.
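As an illustration of the reference point selection described above, a minimal Python sketch follows; the function name and the point representation are hypothetical, not taken from the patent. Given the registered common points (including midpoints), it picks the one nearest to a detected person position in real-space coordinates as the reference point 18.

import math

def nearest_reference_point(person_pos, common_points):
    # person_pos: (x, y) real-space person position 51.
    # common_points: (x, y) real-space positions of the registered common
    # points 15 and 16 and the midpoint 17.
    # Returns the registered point to use as the reference point 18.
    return min(common_points,
               key=lambda p: math.hypot(p[0] - person_pos[0],
                                        p[1] - person_pos[1]))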

Although the markers for determining the common points 15 and 16 are described in FIG. 3 as placed on the ground surface (ground height 0), the ground height of a marker can in fact be arbitrary.

Markers are placed as targets for determining the common points 15 and 16, but if the overlapping visual field region 30 contains a feature that can be judged to be the same point or object from both cameras, the common points 15 and 16 can be placed based on that feature, and no marker needs to be placed as a mark.

The common point recording unit 21 records the ground height information and the position information on the camera image. This makes it possible, by a known coordinate conversion technique (for example, the one described in paragraph 0002), to calculate the position on the camera image when the ground height of the common points 15 and 16 is changed. With the coordinate conversion technique, for example, as shown in FIG. 4, the positions on the camera image of the common points 15 and 16 at ground height 0 are registered as points 52 and 53, and the midpoint 17 at ground height 0 can then be calculated from these ground-height-0 common points 52 and 53. However, since this calculation is cumbersome and markers are in practice usually placed on the ground surface, it is preferable to acquire the common points 15 and 16 with the ground height fixed at 0.

When the overlapping visual field region 30 is wide, placing three or more common points allows the common points to be distributed within the overlapping visual field region 30 and stabilizes accuracy. Furthermore, by finding the midpoint at the real-space intermediate position of each pair of common points and recording it as a common point, the number of common points can be increased further, shortening the distance from a common point to the person position and stabilizing the accuracy of person identification.

In FIG. 3, the midpoint 17 placed at the real-space intermediate position between the common points 15 and 16 is added as a common point, but added common points are not limited to such a midpoint 17. For example, one or more points may be selected from any number of points located on the real-space line connecting the common points 15 and 16, or one or more points whose real-space positions calculated from the common points 15 and 16 are treated as identical may be selected and added. In particular, by arranging, at initialization, points whose real-space positions are treated as identical so as to correspond to every point on the image, the computational load of the coincidence determination process unit 27 can be reduced.

Although the common points determined from marks are described as the two points 15 and 16 in FIG. 3, when it is known in advance that the coordinate systems 19a and 19b on the images of the visual field regions 13 of the two adjacent cameras 11 are the same coordinate system, only one common point need be placed and recorded; by treating this single common point as the reference point 18, the effort of recording marks (markers) can be reduced.

The person tracking history recording unit 22 records the person tracking information of persons 5 tracked by its own image processing apparatus 2, as well as person tracking information transmitted from adjacent image processing apparatuses 2.

The image processing process unit 23 recognizes and captures the person 5 in the camera image (visual field region 13) taken by the camera 11. As the person recognition method of the image processing process unit 23, the vector focus method (Japanese Patent No. 3406587) is used to recognize and detect the person 5 in the camera image by collating it with a standardized person model base (for example, a model shaped like the outline of a person 1600 mm tall in real space). A recognized person 5 is represented on the camera image by a representative point 14. Using a coordinate conversion technique (for example, the one described in paragraph 0002), the representative point 14 of the person 5 on the image, obtained from the person model base, is converted into the point at ground height 0 in real space, which is calculated as the real-space person position 51. The person position 51 can thus be indicated in the visual field region 13 with little error.

If, as a person recognition method, the centroid of the person image were used as the person position, the centroid would change with the shape appearing in the camera image, so the person position would fluctuate and large detection errors would arise; moreover, it would be unknown at what ground height the centroid itself lies. As a result, the risk of mistaking the same person for a different one during identification would be high. In contrast, by collating the person with a standardized person model base as in the vector focus method (Japanese Patent No. 3406587) and normalizing the foot position of the model base (the position at ground height 0) as the person position 51, the person position captured in the camera image can be represented with little error, and person identification can be performed accurately.
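The projection of the representative point to the ground plane can be sketched, for example, as a plane-to-plane homography. This is a generic stand-in under the assumption of a pre-calibrated mapping, with hypothetical names; it is not the specific conversion technique of Patent Document 2.

import numpy as np

def image_to_ground(point_uv, H):
    # H: 3x3 homography assumed to map image coordinates to the ground
    # plane (height 0) in world coordinates; point_uv: representative
    # point 14 on the image.
    u, v = point_uv
    x, y, w = H @ np.array([u, v, 1.0])
    return (x / w, y / w)  # real-space person position 51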

The tracking processing process unit 24 tracks the person 5 captured by the image processing process unit 23 in time series.

The coordinate conversion process unit 25 converts the position of the tracked person 5 in the two-dimensional coordinate system of the camera image. As shown in FIG. 6, when there is an angle difference θ between the two-dimensional coordinate system 19a of the camera image (13a) of its own camera 11 and the two-dimensional coordinate system 19b of the camera image (13b) of the adjacent camera 11, the angle difference θ can be obtained from the line connecting the two common points 15 and 16, and the person position 51 is coordinate-converted using the rotation matrix calculated from θ, as in Equation 1 below. Thus, even when the two-dimensional coordinate systems of the camera images of the two cameras 11 differ by an angle θ, the person positions 51 in the visual field regions 13a and 13b can be treated in the same two-dimensional coordinate system.

In the example of FIG. 6, the angle difference θ is obtained by combining the angles θa and θb, each measured between a coordinate axis (for example, the y-axis) of the respective visual field region 13a or 13b and the straight line connecting the common points 15 and 16. That is, the angle difference θ can be obtained as θa − θb in the image processing apparatus 2a on the visual field region 13a side and as θb − θa in the image processing apparatus 2b on the visual field region 13b side, so the angle difference θ is calculated between the image processing apparatuses 2a and 2b in advance.
Alternatively, the coordinate conversion process unit 25 of each image processing apparatus 2a, 2b may convert positions at transmission and reception using the rotation matrix calculated from its own angle θa or θb, so that the person positions 51 in the visual field regions 13a and 13b are treated in the same two-dimensional coordinate system.

Equation 1 (coordinate conversion of a person position (x, y) by the rotation matrix of the angle difference θ):

x' = x cos θ - y sin θ
y' = x sin θ + y cos θ
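As an illustration, a minimal Python sketch of Equation 1 and of measuring the angle of the shared-point line in each image follows; the helper names are hypothetical, and angles are in radians, measured from the y-axis as in the description of FIG. 6.

import math

def angle_to_shared_line(p15, p16):
    # Angle between the image coordinate system's y-axis and the line
    # connecting common points 15 and 16 (theta-a or theta-b in FIG. 6).
    dx, dy = p16[0] - p15[0], p16[1] - p15[1]
    return math.atan2(dx, dy)

def rotate_position(pos, theta):
    # Equation 1: rotate a person position 51 by the angle difference
    # theta so that both cameras share one two-dimensional coordinate system.
    x, y = pos
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

# For example: theta = angle_to_shared_line(p15_a, p16_a) - angle_to_shared_line(p15_b, p16_b)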

The communication process unit 26 transmits the position coordinates (person position information) of the person 5 output by the coordinate conversion process unit 25 to the adjacent image processing apparatus 2, and receives the position coordinates (person position information) of persons 5 transmitted from the adjacent image processing apparatus 2. The communication process unit 26 also exchanges, with the adjacent image processing apparatus 2, the person tracking information of persons 5 determined to be the same person. Furthermore, when there is no image processing apparatus 2 on a camera 11 side to which the person tracking information should next be transmitted, the communication process unit 26 transmits the person tracking information to the totaling device 4.

The coincidence determination process unit 27 performs person identification, determining whether the persons imaged by each of the two adjacent cameras 11 for a person 5 in the overlapping visual field region 30 are the same person. That is, for a person 5 in the overlapping visual field region 30 of two adjacent cameras 11, the real-space distance and direction from the reference point 18 to the person position 51 on one camera 11 side are compared with the real-space distance and direction from the reference point 18 to the person position 51 on the other camera 11 side, and persons that agree in this distance and direction are determined to be the same person. When a plurality of persons 5 are present in the overlapping visual field region 30, this comparison is performed individually for all of them to identify the same persons.

For the identity criterion in person identification, it is preferable to set the tolerance for the distance and direction comparison to a value within the model base width. For example, when the width of the person model base is about 42 cm, the tolerance may be set to 30 cm; if the difference in the compared distance and direction is within this tolerance, the persons can be determined to be the same person.
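A minimal sketch of this coincidence determination follows; the helper is hypothetical, and one reasonable way to combine the distance and direction criteria is to compare the two observations as points around the reference point.

import math

def is_same_person(dp1, alpha1, dp2, alpha2, tol=0.30):
    # dp1, alpha1: distance (m) and direction (rad) from the reference
    # point 18 to the person position seen by one camera; dp2, alpha2:
    # the same pair seen by the adjacent camera. tol = 0.30 m follows the
    # example tolerance above (person model base width about 0.42 m).
    x1, y1 = dp1 * math.cos(alpha1), dp1 * math.sin(alpha1)
    x2, y2 = dp2 * math.cos(alpha2), dp2 * math.sin(alpha2)
    return math.hypot(x1 - x2, y1 - y2) <= tol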

Next, a person tracking method using this person tracking system 1 is described.
The plurality of cameras 11 are installed so that the visual field regions 13 of each pair of adjacent cameras 11 have an overlapping visual field region 30 where their end regions overlap (see FIG. 1). As an initial setting, two marks are placed by markers in the overlapping visual field region 30, and these two points are taken as the common points 15 and 16 treated as the same position between the visual field regions 13 of the two adjacent cameras 11 (see FIG. 2). Through the common points 15 and 16, the visual field regions 13 of the two adjacent cameras 11 are associated and connected. Once the common points 15 and 16 are determined, the midpoint 17 is placed at their real-space intermediate position. The point selected from the common points 15 and 16 or the midpoint 17 for person identification is taken as the reference point 18; here, for example, the midpoint 17 is taken as the reference point 18 (see FIG. 3). Person tracking in the monitoring area 6 is performed using the plurality of image processing apparatuses 2 set up as above.

For example, as shown in FIG. 2, when a person 5 is newly detected in the visual field region 13a on the side of one camera 11a of two adjacent cameras 11a and 11b, a new person ID is assigned to this person 5, and the person 5 is represented by a representative point 14 and tracked. When the person 5 enters the overlapping visual field region 30ab, the other camera 11b detects the person 5 as a new person in its own visual field region 13b, so a new ID is likewise assigned and the person 5 is represented by a representative point 14 and tracked. That is, within the overlapping visual field region 30ab, each of the adjacent cameras 11a and 11b assigns its own person ID to the detected person 5 and tracks the person 5 within its own visual field region 13a or 13b.

Referring to FIG. 5, for a person 5 in the overlapping visual field region 30ab, the camera 11a side obtains the position coordinates of the person position 51 of the person 5 it detects and transmits them as person position coordinate data (person position information) to the camera 11b side; likewise, the camera 11b side obtains the position coordinates of the person position 51 of the person 5 it detects and transmits them as person position coordinate data to the camera 11a side (the coordinate conversion process unit 25 and communication process unit 26 shown in FIG. 5). If, as shown in FIG. 6, there is an angle difference θ between the coordinate systems 19a and 19b of the camera images (visual field regions 13a and 13b), the person position coordinate data converted using the rotation matrix obtained from the two common points (see Equation 1 above) are transmitted to the other camera side.

The adjacent image processing apparatuses 2a and 2b then compare the person position coordinate data received from the other side with the person position coordinate data they detect themselves, and perform person identification (see FIG. 3; the coincidence determination process unit 27 shown in FIG. 5). That is, the apparatus calculates the distance dp1 and direction α1 between the person position 51 it detects and the reference point 18 from its own person position coordinate data, and the distance dp2 and direction α2 between the person position 51 detected by the other side and the reference point 18 from the person position coordinate data received from the other image processing apparatus 2b. The distances dp1, dp2 and directions α1, α2 are compared to obtain a degree of coincidence, and the pair of persons with the highest degree of coincidence that is recognized as identical is determined to be the same person. Here the distances dp1, dp2 and directions α1, α2 between the person position 51 and the reference point 18 are expressed as distances (actual distances) and directions in the world coordinate system. Since person identification in the apparatus 2a and in the apparatus 2b is based on person position coordinate data shared through bidirectional communication and on the reference point 18, which is the same real-space world-coordinate position for both, both image processing apparatuses 2a and 2b obtain the same identification result.

For a person 5 determined to be the same person, the person IDs assigned by the image processing apparatuses 2a and 2b are unified into one of them, and when the person 5 leaves the overlapping visual field region 30ab and at the same time leaves the visual field region 13a, the person tracking information of that person 5 on the image processing apparatus 2a side is passed to and taken over by the other image processing apparatus 2b. In this way, the person tracking information of the same person is transmitted and linked between the two adjacent cameras 11a and 11b. Finally, when the person 5 moves out of the visual field region 13 and there is no image processing apparatus 2 on a camera 11 side to which the person tracking information should be transmitted, the image processing apparatus 2 of the last camera 11 outputs the accumulated person tracking information of that person to the totaling device 4.
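The exact contents of the person tracking information passed between devices are not enumerated here; as a hedged illustration, it might take a shape like the following, with all field names hypothetical.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PersonTrackingInfo:
    person_id: int  # unified ID after the identity merge
    trajectory: List[Tuple[float, float, float]] = field(default_factory=list)
    # trajectory entries: (timestamp, x, y) person positions 51 in world coordinates
    source_device: str = ""  # image processing apparatus that tracked the person so far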

Each of the image processing apparatuses 2 described above has the same configuration and runs the same program. Therefore, each camera 11 is easy to set up, and the person tracking system 1 can be constructed quickly and inexpensively for a wide monitoring area 6.

Two common points 15 and 16 and the midpoint 17 between them are placed, and person identification is performed based on the distances dp1, dp2 and directions α1, α2 from one point selected among the common points 15 and 16 or the midpoint 17 as the reference point 18 to the person position 51. Since the two cameras 11 are thereby associated and connected only through the common points 15 and 16, person identification can be performed stably and accurately. If person identification were instead based on the distance from the camera position to the person position, the evaluation would involve the distance between one camera position and the person position, the distance between the other camera position and the person position, and the distance between the cameras. In that case the camera-to-person and camera-to-camera distances are long, and errors in each distance grow because of factors such as the installation conditions of the cameras 11, making stable and accurate person identification difficult. In contrast, by placing the common points 15 and 16 in the overlapping visual field region 30 of the cameras 11 to link them and by placing the reference point 18, identification evaluates the distance between the person position 51 and the reference point 18 on one camera 11 side and on the other camera 11 side. Because each distance is short, any distance error remains small, and person identification can be performed stably and accurately based only on the tracking history of person positions, without needing to know the camera positions.

As described above, this person tracking system has the following features: the camera images of the plurality of image processing apparatuses 2 are associated and connected only through the common points 15 and 16, and the inter-apparatus communication needed for this connection consists only of person position coordinate data and person tracking information, with no image data transmitted, so the communication volume is small. Since a person 5 can be tracked using only the relationship between two adjacent cameras 11, no unified map information covering the whole monitoring area 6 is needed for mapping the person position information from each camera 11, no overarching server that consolidates the cameras 11 is needed, and the number of cameras 11 used for tracking is virtually unlimited. Large-scale person tracking over a wide monitoring area 6 thus becomes possible.

The present invention is not limited to the above embodiment, and necessary modifications can be made within the scope of the claims.

Although tracking of a person is described above as an example, by using a model base of a moving body other than a person (for example, an automobile), collating against that model base, and calculating the moving body position, moving body identification can be performed accurately in the same manner as in the above embodiment.

1 person tracking system
2 image processing apparatus
3 network
4 totaling device
5 person (moving body)
6 monitoring area
11 camera
12 processor
13 visual field region
14 representative point
15, 16 common points
17 midpoint (common point)
18 reference point
21 common point recording unit
22 person tracking history recording unit
23 image processing process unit
24 tracking processing process unit
25 coordinate conversion process unit
26 communication process unit
27 coincidence determination process unit
30 overlapping visual field region
51 person position (moving body position)
52, 53 positions of common points (ground height 0)
dp1, dp2 distances
α1, α2 directions
θ angle difference

Claims (6)

1. A method of tracking a moving body in a monitoring area covered by the visual field regions of a plurality of cameras, wherein:
the visual field regions of two adjacent cameras have an overlapping visual field region where they partially overlap;
in the overlapping visual field region, common points treated as the same position between the visual field regions of the two adjacent cameras are placed;
when there is only one common point, that point is taken as a reference point, and when there are a plurality of common points, one of them is taken as the reference point;
each camera collates a moving body detected in its own visual field region against a moving body model base, represents the moving body by a representative point, and tracks it;
for moving bodies detected by each of the two adjacent cameras within the overlapping visual field region, those for which the real-space distance and direction from the reference point to the moving body position represented by the representative point agree are determined to be the same moving body; and
for moving bodies determined to be the same, the moving body tracking information on one camera side is transmitted to the adjacent camera side so that the two adjacent cameras cooperate.
2. The moving body tracking method according to claim 1, wherein two or more marks are placed in the overlapping visual field region and taken as common points, arbitrary points that can be calculated from these marks are further added as common points, and any one of all these common points is taken as the reference point.
3. The moving body tracking method according to claim 1 or 2, wherein, when there is an angle difference between the coordinate systems on the images captured by the two adjacent cameras, at least two common points are placed, and the determination of the same moving body uses moving body position information converted with the rotation matrix obtained from the two common points.
4. The moving body tracking method according to claim 1, wherein, when there is no angle difference between the coordinate systems on the images captured by the two adjacent cameras, only one common point is placed and that point is taken as the reference point.
5. The moving body tracking method according to any one of claims 1 to 4, wherein, when a moving body detected on one camera side moves out of the visual field region and there is no other camera side to which the moving body tracking information should be transmitted, the moving body tracking information of that moving body is output to a totaling device.
6. An image processing apparatus commonly used in the moving body tracking method according to any one of claims 1 to 5, having a camera and a processor, the processor comprising:
a common point recording unit that records common points treated as the same position between visual field regions in the overlapping visual field region where the visual field region of its own camera partially overlaps that of an adjacent camera, and that records one such point as a reference point when there is only one common point, or one of the points as the reference point when there are a plurality of common points;
an image processing process unit that collates a moving body detected by its own camera in the visual field region against a moving body model base and represents the moving body by a representative point;
a tracking processing process unit that tracks the moving body captured by the image processing process unit;
a communication process unit that communicates the position information of the tracked moving body with the image processing apparatus of the adjacent camera; and
a coincidence determination process unit that, for moving bodies detected by its own camera and by the adjacent camera within the overlapping visual field region, determines that moving bodies for which the real-space distance and direction from the reference point to the moving body position represented by the representative point agree are the same moving body.
Application JP2018223497A, filed 2018-11-29: Moving object tracking method and image processing device used for this. Active; granted as JP6831117B2.

Priority Application (1)

Application Number: JP2018223497A; Priority/Filing Date: 2018-11-29; Title: Moving object tracking method and image processing device used for this

Publications (2)

JP2020087180A, published 2020-06-04
JP6831117B2, published 2021-02-17

Family ID: 70908418

Family Application (1)
JP2018223497A (Active): Moving object tracking method and image processing device used for this; granted as JP6831117B2

Country Status (1)
JP: JP6831117B2 (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011010490A1 (en) * 2009-07-22 2011-01-27 オムロン株式会社 Surveillance camera terminal
JP2012104022A (en) * 2010-11-12 2012-05-31 Omron Corp Monitoring system and monitoring server
JP2018205870A (en) * 2017-05-31 2018-12-27 Kddi株式会社 Object tracking method and device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022098433A (en) * 2020-12-21 2022-07-01 阿波▲羅▼智▲聯▼(北京)科技有限公司 Vehicle relating method, vehicle relating device, computer readable storage medium, computer program product, roadside apparatus, cloud control platform, and program
JP7280331B2 (en) 2020-12-21 2023-05-23 阿波▲羅▼智▲聯▼(北京)科技有限公司 Vehicle association method, vehicle association device, electronic device, computer readable storage medium, roadside device, cloud control platform and program
CN112885097A (en) * 2021-02-07 2021-06-01 启迪云控(上海)汽车科技有限公司 Road side fusion management method and system based on cross-point location
CN113012199A (en) * 2021-03-23 2021-06-22 北京灵汐科技有限公司 System and method for tracking moving object
WO2022199422A1 (en) * 2021-03-23 2022-09-29 北京灵汐科技有限公司 Moving target tracking system and method, and electronic device and readable storage medium
CN113012199B (en) * 2021-03-23 2024-01-12 北京灵汐科技有限公司 System and method for tracking moving target

Also Published As

JP6831117B2 (en), published 2021-02-17

Similar Documents

Publication Publication Date Title
US9911226B2 (en) Method for cleaning or processing a room by means of an autonomously mobile device
US9885573B2 (en) Method, device and computer programme for extracting information about one or more spatial objects
CN110142785A (en) A kind of crusing robot visual servo method based on target detection
CN106541404B (en) A kind of Robot visual location air navigation aid
JP2019057312A (en) Adaptive mapping with spatial summaries of sensor data
JP6831117B2 (en) Moving object tracking method and image processing device used for this
WO2015098971A1 (en) Calibration device, calibration method, and calibration program
KR101239532B1 (en) Apparatus and method for recognizing position of robot
Shim et al. A mobile robot localization using external surveillance cameras at indoor
WO2018228258A1 (en) Mobile electronic device and method therein
JP2015138333A (en) Display control program, display control apparatus, and display control system
JP2008209354A (en) Calibration method and device, and automatic detection device
JP2018185239A (en) Position attitude estimation device and program
Sogo et al. N-ocular stereo for real-time human tracking
JP6096601B2 (en) Station platform fall detection device
US20240085448A1 (en) Speed measurement method and apparatus based on multiple cameras
KR100593503B1 (en) Position recognition device and method of mobile robot
Godil et al. 3D ground-truth systems for object/human recognition and tracking
JP7130423B2 (en) Parts information management system and parts information management program
WO2020111053A1 (en) Monitoring device, monitoring system, monitoring method, and monitoring program
JP2020088840A (en) Monitoring device, monitoring system, monitoring method, and monitoring program
US20230375343A1 (en) Photogrammetry system
US20230131425A1 (en) System and method for navigating with the assistance of passive objects
JP2018014064A (en) Position measuring system of indoor self-propelled robot
JP4674316B2 (en) Position detection apparatus, position detection method, and position detection program

Legal Events

2019-08-01: A621 Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
2020-09-01: A131 Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
2020-10-08: A521 Request for written amendment filed (JAPANESE INTERMEDIATE CODE: A523)
2020-12-22: TRDD Decision of grant or rejection written; A01 Written decision to grant a patent or to grant a registration (utility model) (JAPANESE INTERMEDIATE CODE: A01)
2021-01-14: A61 First payment of annual fees (during grant procedure) (JAPANESE INTERMEDIATE CODE: A61)
R150 Certificate of patent or registration of utility model (Ref document number: 6831117; Country of ref document: JP; JAPANESE INTERMEDIATE CODE: R150)
R250 Receipt of annual fees (JAPANESE INTERMEDIATE CODE: R250)