JP2013205412A - Method for detecting size of object in space monitoring apparatus - Google Patents



Publication number
JP2013205412A
JP2013205412A (application JP2012091403A)
Authority: JP (Japan)
Prior art keywords: pixel, pixels, parallax, determined, value
Legal status: Pending
Application number: JP2012091403A
Other languages: Japanese (ja)
Inventors: Masashi Igarashi (五十嵐 政司), Tomohiro Sasaki (佐々木 智弘), Allen Chakma (チャクマ アレン)
Current Assignee: SYSTEM CONSULTANTS KK
Original Assignee: SYSTEM CONSULTANTS KK
Application filed by SYSTEM CONSULTANTS KK
Priority to JP2012091403A
Publication of JP2013205412A


Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a method for detecting the size of an object present in an arbitrarily defined monitoring space area.
SOLUTION: A space area containing the arbitrarily set monitoring space area is photographed by two digital cameras; parallax is computed from the resulting digital images; the spatial coordinate position of the object projected onto each pixel is calculated; sets of adjacent pixels are discriminated and aggregated; and the size of the object is detected from the number of pixels in each set. That is, the size occupied by the object in the monitoring space area is detected from the number of pixels, among those in a set of adjacent pixels, whose positions fall within the monitoring space area.

Description

Detailed Description of the Invention

Field of the Invention

The present invention relates to a method, in an apparatus for detecting whether an object is present within an arbitrarily set monitoring space area, for detecting how large the object is. More specifically, an arbitrarily set space area is photographed with two digital cameras, parallax is obtained from the resulting digital images, the spatial coordinate position of the object appearing in each pixel is calculated, sets of adjacent pixels are discriminated, and the size of the object is detected from the number of pixels contained in each set.

The basics of the space monitoring apparatus are explained with reference to Fig. 6 (functional diagram of the space monitoring apparatus), Fig. 7 (structural diagram of the space monitoring apparatus), and Fig. 8 (flow chart of the overall space monitoring process). As shown in Figs. 6 and 7, the apparatus comprises a standard-image camera 61, a reference-image camera 62, a main control device 66, and a spatial distance calculation device 67.

Control signals 65a and 65b from the main control device 66 instruct the standard-image camera 61 and the reference-image camera 62 to photograph repeatedly. The captured standard image data 63 and reference image data 64 are sent to the spatial distance calculation device 67, where various calculations are performed and an alarm signal 68 and an image signal 69 are output.

The calculation procedure of the spatial distance calculation device 67 is shown in Fig. 8.
In step S801, the vertex coordinates of the three-dimensional shape of the monitoring space area are set in order to define the monitoring space area. Alternatively, the monitoring space area can be specified by functions of its edges or faces, or as the set of all coordinates within the area as determined by the resolution of the monitoring apparatus.

In step S802, standard image data 63 and reference image data 64 are acquired from the standard-image camera 61 and the reference-image camera 62, respectively, and stored for stereo matching.

In step S803, stereo matching is performed using the reference image data 64 to obtain the parallax of each pixel in the standard image data 63. The parallax of each pixel is held as a parallax map, i.e., data giving the position and parallax value of each pixel of the standard image data 63.

In step S804, to check whether an object appearing in a pixel lies within the monitoring space area, the spatial coordinate position indicated by each pixel having at least a specified parallax value is calculated from its parallax value, the baseline length of the two cameras, and the focal length of the lens.
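The computation of step S804 can be sketched with the standard pinhole-stereo relations, where depth z = f·B/d (f: focal length in pixels, B: baseline length, d: disparity). The function and parameter names below are illustrative assumptions; the patent only names the parallax value, baseline length, and focal length as inputs:

```python
def pixel_to_space(u, v, d, f_px, baseline_mm, cx, cy):
    """Convert a pixel (u, v) with disparity d (in pixels) to spatial
    coordinates (mm) in the camera frame, assuming a rectified pinhole
    stereo pair with principal point (cx, cy)."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    z = f_px * baseline_mm / d      # depth along the optical axis: z = f * B / d
    x = (u - cx) * z / f_px         # horizontal offset from the optical axis
    y = (v - cy) * z / f_px         # vertical offset
    return x, y, z
```

For example, a pixel on the optical axis with disparity 10, a focal length of 1000 px, and a baseline of 100 mm maps to a depth of 10000 mm.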

In step S805, to judge whether an object is contained in the monitoring space area, the inclusion relation between the monitoring space area and the spatial coordinate position computed for each pixel in step S804 is checked. If any pixel lies within the monitoring space area, it is judged that an object is present there and an alarm signal is output in step S806; if no such pixel exists, it is judged that no object is present in the monitoring space area.
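The inclusion check of step S805 can be illustrated for the simplest case of a rectangular, axis-aligned monitoring space area; the patent itself allows arbitrarily shaped regions (vertex coordinates, edge functions, or face functions), and the names here are illustrative:

```python
def in_region(point, lower, upper):
    """True if a 3-D point (x, y, z) lies inside the axis-aligned box
    spanned by the corner coordinates lower and upper. A sketch of the
    step S805 inclusion test for the simplest region shape."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, lower, upper))
```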

In step S807, separately from the alarm, the standard image data 63, the reference image data 64, the parallax map, and the spatial coordinate positions of objects are output as image information, for real-time confirmation and subsequent analysis of the monitoring situation.

In step S808, it is checked whether an instruction to end monitoring has been issued, and monitoring is terminated if so.
The above is the basic flow of processing in a conventional space monitoring apparatus. Because this method outputs an alarm whenever an object exists in the monitoring area, regardless of the object's size, it cannot satisfy the common requirement that no alarm be raised for the intrusion of a small object such as an insect or a cat.

Problems to Be Solved by the Invention

An object of the present invention is to speed up processing by photographing an arbitrary space area with two digital cameras arranged to produce parallax, obtaining the parallax value of every pixel from the resulting digital images by stereo matching, calculating from those parallax values the spatial coordinate position of the object appearing in each pixel, discriminating pixel sets in which the number of pixels at adjacent spatial coordinate positions reaches at least a certain count, and detecting the size of the object to be detected by counting, among the pixels of each discriminated set, those located within the monitoring space area. A second object is to provide a space monitoring apparatus in which the size of a valid object, i.e., an object that should be detected, can be decided in advance.

Means for Solving the Problem

The invention of claim 1 is a method for detecting the size of an object in a space monitoring apparatus that obtains the parallax of every pixel by stereo matching from digital image data captured, for an arbitrary monitoring space area, by two digital cameras arranged to produce parallax, and thereby detects objects appearing in the pixels. The parallax value and spatial coordinate position of each pixel are obtained; a pixel having at least a specific parallax value is taken as a nucleus; the difference in parallax value between this nucleus pixel and the pixels above, to the right of, below, and to the left of it is computed; of these, the first pixel whose parallax difference is at most a predetermined value is added to the set. The added pixel then becomes the next adjacency-check target pixel, and the differences with its upper, right, lower, and left neighbours are computed in turn, again adding the first pixel whose difference is at most the predetermined value. When no further pixels can be added, the size of the object is detected from the number of pixels aggregated so far.

The invention of claim 2 is the method of claim 1 in which the average parallax value of all pixels aggregated under the parallax-difference condition is computed, and the area of the monitored face of the object is calculated from that average parallax value and the total number of aggregated pixels. If the calculated area is below the area threshold for a valid object, it is judged that no object is detected; if the calculated area is at or above that threshold, it is judged that an object is detected.

Effect of the Invention

In this invention, whether pixels are three-dimensionally adjacent is judged solely from the positional relationship of the pixels in the two-dimensional parallax map (up, right, down, left) and the differences between their parallax values, without computing for each pixel the camera-referenced three-dimensional coordinates of the object it images. Adjacent pixels can therefore be aggregated at high speed, shortening the object detection time.

Furthermore, the area of the monitored face of the object imaged by the aggregated pixels is calculated; if the calculated area is below the area threshold for a valid object, it is judged that no object is detected, and if it is at or above the threshold, it is judged that an object is detected. By changing the area threshold for a valid object, the sizes to be excluded from detection can therefore be set freely. If the threshold is made somewhat large, small objects such as insects and cats will not be detected.

Embodiments of the Invention

The flowchart in Fig. 1 (flow chart of the pixel discrimination process for space monitoring) details the pixel-set discrimination process. Pixel-set discrimination aggregates the pixels of the parallax map around nucleus pixels, calculates the size of the object from the average parallax value and the pixel count of each set, and extracts only those sets estimated to be at least a specified size. It is explained below together with Fig. 3 (explanatory diagram of aggregation by adjacency confirmation). Fig. 3 shows an example state of the whole parallax map. Each rectangle represents a pixel of the parallax map, and the number in it is that pixel's parallax value. A parallax value of 0 means no parallax: the pixel images no valid object, or an object at infinity. For example, the pixel at column 1, row 1 has parallax value 0 and so does not image an object, while the pixel at column 4, row 2 has parallax value 5 and does. The portions enclosed by thick lines are the pixel sets created by the aggregation process described below.

In step S101, the first pixel is taken as the first candidate for the nucleus of a pixel set. In Fig. 3, the pixel at column 1, row 1 is the first nucleus candidate, followed in order by column 2 row 1, column 3 row 1, ..., column 20 row 1, column 1 row 2, ..., column 20 row 2, ..., column 20 row 20. A nucleus pixel must have at least a specific parallax value, for example 5.

In step S102, to prevent a nucleus candidate from being registered in more than one pixel set, the pixels belonging to existing pixel sets are checked for the candidate. In Fig. 3, when pixel set 33 is to be created with the pixel at column 14, row 15 as nucleus candidate, pixel set 31 (nucleus at column 4, row 2) and pixel set 32 (nucleus at column 3, row 3) already exist, so it is confirmed that the pixel at column 14, row 15 is contained in neither.

If the candidate belongs to no pixel set, a new pixel set is created in step S103, the candidate is fixed as the nucleus pixel, and in step S104 it becomes the target pixel for adjacency confirmation (the adjacency-check target pixel). For the pixel at column 14, row 15 in Fig. 3, since it appears in neither pixel set 31 nor pixel set 32, a new pixel set 33 is created with it as nucleus, and pixels are aggregated by adjacency confirmation with it as the adjacency-check target. That is, step S104 examines the neighbours of the nucleus pixel at column 14, row 15 to estimate the size of the object it images, and builds pixel set 33. The details of how a set is built (pixel aggregation) are described later.

In step S105, to discriminate the size of a valid object, i.e., an object to be detected, the average parallax value of the pixels in the set is computed, and the area of the monitored face of the object imaged by the nucleus pixel is calculated from that average parallax value and the total pixel count of the set. If the calculated area is below the area threshold for a valid object, the set is marked in step S106 as a pixel set with no detected object; if it is at or above the threshold, the set is marked in step S107 as a pixel set with a detected object. For pixel set 33 in Fig. 3, the pixel count is 21 and the average parallax value is 30.5; if the distance corresponding to parallax value 30.5 is 3000 mm and one pixel at 3000 mm covers an area of 100 mm², the area is 21 × 100 = 2100 mm². If the area threshold for a valid object is, say, 1000 mm², pixel set 33 is a pixel set with a detected object.
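The step S105 area calculation can be sketched as follows. Under a pinhole model, depth is z = f·B/d and one pixel covers roughly (z/f)² on the object surface. The focal length (300 px) and baseline (305 mm) below are assumed values, chosen only so that an average disparity of 30.5 maps to the 3000 mm distance and 100 mm² per-pixel footprint of the worked example; they are not stated in the patent:

```python
def object_area_mm2(n_pixels, mean_disparity, f_px, baseline_mm):
    """Estimate the frontal area (mm^2) of the monitored face of an object
    from the pixel count and average disparity of a pixel set."""
    z = f_px * baseline_mm / mean_disparity   # depth from average disparity
    pixel_footprint = (z / f_px) ** 2         # area one pixel covers at depth z
    return n_pixels * pixel_footprint

# Pixel set 33: 21 pixels, average disparity 30.5 (assumed f = 300 px, B = 305 mm)
area = object_area_mm2(21, 30.5, f_px=300.0, baseline_mm=305.0)
detected = area >= 1000.0   # area threshold for a valid object
```

With these assumed parameters the 21-pixel set yields 21 × 100 = 2100 mm², at or above the 1000 mm² threshold, so the set counts as a detected object.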

In step S108, it is checked whether all pixels of the parallax map have been examined; if so, the confirmation of pixel sets is complete. In Fig. 3, this means checking whether every pixel from column 1, row 1 to column 20, row 20 has been examined.

In step S109, if pixels that could become nucleus candidates remain in the parallax map, one of them becomes the nucleus candidate of a new pixel set and the process returns to step S102 to repeat the aggregation. In Fig. 3, once the pixel at column 14, row 15 has been processed, aggregation continues from step S102 with the pixel at column 15, row 15.

The flowchart in Fig. 2 (flow chart of the pixel aggregation process for space monitoring) details the pixel aggregation performed in step S104. Pixel aggregation takes a pixel as nucleus, checks in turn whether the pixels above, to the right of, below, and to the left of it are three-dimensionally adjacent, adds adjacent pixels to the set, and repeats this recursively until no three-dimensional adjacency remains. The present invention exploits the fact that three-dimensional adjacency of pixels is roughly equivalent to the condition that pixels adjacent vertically or horizontally in the two-dimensional parallax map differ in parallax value by at most a predetermined amount.
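The recursive up/right/down/left confirmation of Fig. 2 is, in effect, a flood fill over the disparity map. A minimal sketch, using an explicit stack instead of recursion and assuming the map is given as a list of rows of disparity values (not the patented implementation):

```python
def cluster(disparity, seed_col, seed_row, max_diff):
    """Aggregate pixels that are 2-D adjacent in the disparity map and whose
    disparity values differ by at most max_diff, starting from a seed pixel.
    The seed should already satisfy the nucleus condition (e.g. disparity >= 5)."""
    rows, cols = len(disparity), len(disparity[0])
    visited = {(seed_col, seed_row)}
    stack = [(seed_col, seed_row)]
    members = []
    while stack:
        c, r = stack.pop()
        members.append((c, r))
        # neighbours in the order used in Fig. 2: up, right, down, left
        for dc, dr in ((0, -1), (1, 0), (0, 1), (-1, 0)):
            nc, nr = c + dc, r + dr
            if (0 <= nc < cols and 0 <= nr < rows
                    and (nc, nr) not in visited
                    and abs(disparity[nr][nc] - disparity[r][c]) <= max_diff):
                visited.add((nc, nr))
                stack.append((nc, nr))
    return members
```

Seeding cluster() at a pixel with a sufficiently large disparity value and max_diff = 1 reproduces the aggregation behaviour described below for pixel set 33.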

Fig. 4 (diagram of the positional relationship between the monitoring apparatus, the objects, and the parallax map) shows, viewed from above, the positions in the monitoring space of the monitoring apparatus, the objects, and the parallax map at the moment the parallax map of Fig. 3 was obtained. The parallax map 44 is as shown in Fig. 3. The object 41 drawn with a dash-dot line appears in the parallax map of Fig. 3 as pixel set 31; the object 42 drawn with a solid line appears as pixel set 32; and the object 43 drawn with a dotted line appears as pixel set 33. The dotted lines radiating from the monitoring apparatus 45 indicate the horizontal positions of the objects imaged by the pixels, acquired as the parallax values of the pixels where the lines cross the parallax map 44. Thus the column numbers of the parallax map in Fig. 3 give the horizontal (width) position of an object in the monitoring space: the larger the column number, the further to the right the object lies. The row numbers give the vertical (height) position: the smaller the row number, the higher the object lies.
The parallax values of the pixels imaging a single object, for example object 42, are nearly equal, as in pixel set 32 of Fig. 3 (here 20 to 21).

As the parallax map of Fig. 3 shows, object 41 occupies rows 2 to 20 vertically, while object 42 lies in rows 3 to 6 and object 43 in rows 15 to 19. If Fig. 4 corresponds to the horizontal plane of row 3 of the parallax map of Fig. 3, objects 41 and 42 can be confirmed in the parallax map but the low-lying object 43 cannot. Conversely, if Fig. 4 corresponds to the horizontal plane of row 15, which contains object 43, objects 41 and 43 can be confirmed but the high-lying object 42 cannot. Taking Fig. 4 as the plane of row 3, for example, object 42 yields the parallax values of columns 3, 4, 5, and 6 of pixel set 32, and object 41 yields those of columns 7 to 15 of pixel set 31.

Since Fig. 4 is a top view, only the width direction can be confirmed in it; but the pixels imaging one object are also adjacent in the width direction of the parallax map, so they are two-dimensionally adjacent. A simple two-dimensional computation therefore suffices in place of a complex three-dimensional one.

Next, the method of examining the pixels above, below, left, and right is explained using Fig. 3. In step S201, to confirm adjacency between the adjacency-check target pixel and the pixel above it, the difference between their parallax values is computed. If the difference is at most the predetermined aggregation threshold, the target pixel and the pixel above it are judged adjacent.

If they are judged adjacent, the upper pixel is added to the set in step S202 and then becomes the adjacency-check target, and aggregation continues by steps S201 to S208; this is repeated recursively until the pixel above the current target is judged not adjacent, whereupon processing proceeds to step S203.

In steps S203 and S204, adjacency with the pixel to the right is confirmed.
That is, in step S203 the difference between the parallax values of the adjacency-check target pixel and the pixel to its right is computed; if the difference is at most the predetermined aggregation threshold, the two are judged adjacent.

If they are judged adjacent, the right pixel is added to the set in step S204 and becomes the next adjacency-check target, and aggregation continues by steps S201 to S208; this is repeated recursively until the pixel to the right of the current target is judged not adjacent, whereupon processing proceeds to step S205.

In steps S205 and S206, adjacency with the pixel below is confirmed.
That is, in step S205 the difference between the parallax values of the adjacency-check target pixel and the pixel below it is computed; if the difference is at most the predetermined aggregation threshold, the two are judged adjacent.

If they are judged adjacent, the lower pixel is added to the set in step S206 and becomes the next adjacency-check target, and aggregation continues by steps S201 to S208; this is repeated recursively until the pixel below the current target is judged not adjacent, whereupon processing proceeds to step S207.

In steps S207 and S208, adjacency with the pixel to the left is confirmed.
That is, in step S207 the difference between the parallax values of the adjacency-check target pixel and the pixel to its left is computed; if the difference is at most the predetermined aggregation threshold, the two are judged adjacent.

If they are judged adjacent, the left pixel is added to the set in step S208 and becomes the next adjacency-check target, and aggregation continues by steps S201 to S208 described above; this is repeated recursively until the pixel to the left of the current target is judged not adjacent, whereupon processing returns to step S201.

When the pixel at column 14, row 15 in Fig. 3 is the adjacency-check target, the difference in parallax value from the pixel above it (column 14, row 14) is |30 − 0| = 30. If the aggregation threshold for the parallax difference is, say, 1, this pair is judged not adjacent and the upper pixel is not added to the set.
Examining the pixel to the right (column 15, row 15), the difference is |30 − 30| = 0, so they are judged adjacent; the right pixel is added to the set and its position and parallax value are recorded. When a pixel is added to the set in this way, the rule is to confirm the neighbours of the newly added right pixel (column 15, row 15) before checking the lower and left neighbours of the pixel at column 14, row 15. The pixel at column 15, row 15 therefore becomes the next adjacency-check target: once it is included in the set, the aggregation process (steps S201 to S208) is performed recursively with it as the target.

FIG. 5, an explanatory diagram of the aggregation process by adjacency confirmation, shows how the pixel set 33 in FIG. 3 is built up. Each table represents one adjacency confirmation target pixel: the top row gives its aggregation order (A to U) and its column and row, the rows below list the pixels checked above, to the right, below, and to the left, and beside each is shown whether it can be aggregated. Here, "diff" means the parallax difference is too large to aggregate; "(A)" means the pixel was already aggregated at step (A); and an arrow means the pixel is aggregated and becomes the next adjacency confirmation target pixel.
In FIG. 5, following the arrows shown for pixel set 33 in FIG. 3, the pixels from (A) column 14, row 15 through (Q) column 16, row 19 are aggregated first, then (R) column 14, row 19, and finally (S) column 13, row 18 through (U) column 13, row 16.

FIG. 1 Flow chart of pixel discrimination processing for space monitoring
FIG. 2 Flow chart of pixel aggregation processing for space monitoring
FIG. 3 Explanatory diagram of aggregation by adjacency confirmation
FIG. 4 Diagram showing the positional relationship among the monitoring apparatus, an object, and the parallax map
FIG. 5 Explanatory diagram of the aggregation process by adjacency confirmation
FIG. 6 Functional configuration diagram of the space monitoring apparatus
FIG. 7 Structural diagram of the space monitoring apparatus
FIG. 8 Flow chart of the overall space monitoring processing

S101 Initialization step for the nucleus candidate pixel
S102 Step of confirming whether the nucleus candidate pixel is already aggregated
S103 Step of generating a new pixel set and setting the adjacency confirmation target pixel
S104 Pixel aggregation step
S105 Step of calculating the detected area of the pixel aggregation
S106 Step of invalidating the detected object for the pixel set
S107 Step of validating the detected object for the pixel set
S108 Step of confirming the next nucleus candidate pixel
S109 Step of moving to the next nucleus candidate pixel
31 Pixel set generated with column 4, row 2 as the nucleus
32 Pixel set generated with column 3, row 3 as the nucleus
33 Pixel set generated with column 14, row 15 as the nucleus

Claims (2)

A method for detecting the size of an object in a space monitoring apparatus that obtains the parallax of every pixel by a stereo matching method from digital image data captured by two digital cameras arranged to produce parallax over an arbitrary monitoring space area, and detects objects appearing in the pixels, the method comprising: obtaining the parallax value and spatial coordinate position of each pixel; taking a pixel having a specific parallax value as a nucleus and computing the difference in parallax value between the nucleus pixel and the pixels at the coordinate positions above, to the right, below, and to the left of it; aggregating the first of these pixels, in the order above, right, below, left, whose parallax difference is at most a predetermined value; then taking each aggregated pixel in turn as the adjacency confirmation target pixel and likewise computing the parallax differences with its upper, right, lower, and left neighbors, aggregating the first pixel whose parallax difference is at most the predetermined value; and, when no further pixels can be aggregated, detecting the size of the object from the number of pixels aggregated so far.
The method for detecting the size of an object according to claim 1, wherein the average parallax value of all pixels aggregated under the predetermined parallax-difference threshold is obtained, the area of the monitored surface of the object is calculated from that average parallax value and the total number of aggregated pixels, and it is determined that no object is detected if the calculated area is below an area threshold for judging a valid object, and that an object is detected if the calculated area is at or above that threshold.
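Claim 2 does not state the area formula. Under the usual pinhole stereo model (an assumption, not given in the patent), the distance to the object is Z = f·B/d̄ for focal length f in pixels, baseline B, and mean parallax d̄, so one pixel projects onto roughly (Z/f)² = (B/d̄)² of the object surface and the focal length cancels. A sketch of the claim 2 test under that assumption — the function name `detect_object`, the baseline, and the area threshold are all illustrative:

```python
def detect_object(parallaxes, baseline_m=0.1, area_threshold_m2=1e-4):
    """Return (area_m2, detected) for the parallax values of one pixel set."""
    n = len(parallaxes)
    if n == 0:
        return 0.0, False
    mean_d = sum(parallaxes) / n              # average parallax of the set
    pixel_area = (baseline_m / mean_d) ** 2   # assumed footprint of one pixel
    area = n * pixel_area
    # Claim 2's threshold test: valid object only at or above the threshold.
    return area, area >= area_threshold_m2

# Example: 21 pixels (the set A-U) with parallax 30 and a 0.1 m baseline.
area, detected = detect_object([30.0] * 21)
print(round(area, 6), detected)  # -> 0.000233 True
```

The point of the threshold is that the same pixel count means a larger physical surface at smaller parallax (greater distance), so small noise blobs near the cameras and genuine objects far away are judged on physical area, not raw pixel count.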
JP2012091403A 2012-03-27 2012-03-27 Method for detecting size of object in space monitoring apparatus Pending JP2013205412A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012091403A JP2013205412A (en) 2012-03-27 2012-03-27 Method for detecting size of object in space monitoring apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2012091403A JP2013205412A (en) 2012-03-27 2012-03-27 Method for detecting size of object in space monitoring apparatus

Publications (1)

Publication Number Publication Date
JP2013205412A true JP2013205412A (en) 2013-10-07

Family

ID=49524608

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2012091403A Pending JP2013205412A (en) 2012-03-27 2012-03-27 Method for detecting size of object in space monitoring apparatus

Country Status (1)

Country Link
JP (1) JP2013205412A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111586369A (en) * 2020-06-05 2020-08-25 上海商汤智能科技有限公司 Aggregation detection method and device, electronic equipment and readable storage medium

