JP2004295416A - Image processing apparatus

Info

Publication number
JP2004295416A
Authority
JP
Japan
Prior art keywords
image
contour
similarity
detection target
difference
Prior art date
Legal status
Granted
Application number
JP2003086084A
Other languages
Japanese (ja)
Other versions
JP4042602B2 (en)
Inventor
Kenichi Hagio (萩尾 健一)
Eiji Nakamoto (中元 栄次)
Current Assignee
Panasonic Electric Works Co Ltd
Original Assignee
Matsushita Electric Works Ltd
Priority date
Filing date
Publication date
Application filed by Matsushita Electric Works Ltd
Priority to JP2003086084A
Publication of JP2004295416A
Application granted
Publication of JP4042602B2
Anticipated expiration
Status: Expired - Fee Related


Abstract

PROBLEM TO BE SOLVED: To provide an image processing apparatus that can accurately detect a detection target whatever its motion, whether the target moves fast, moves slowly, or remains still.

SOLUTION: A first moving-contour extracting means 51 extracts the contour portions that changed within a given period from the contour images stored in a contour image storage means 4, and a second moving-contour extracting means 52 extracts the contour portions that changed within a period longer than the given period, each thereby creating a moving-contour image for the detection time. First to third search range setting means 61 to 63 set first to third search ranges on the images created by the first and second moving-contour extracting means 51 and 52 and on the contour image created by a contour extracting means 3, respectively. First to third similarity calculating means 71 to 73 calculate the similarities to the detection target within the first to third search ranges, and a detection judging means 8 judges from these whether the detection target is present.

COPYRIGHT: (C)2005, JPO&NCIPI

Description

[0001]
[Technical Field of the Invention]
The present invention relates to an image processing apparatus that detects, from images of a predetermined detection area, the presence and movement of a detection target such as a person or a vehicle within the area.
[0002]
[Prior Art]
In conventional image processing apparatuses, techniques such as the background subtraction method and the inter-frame difference method have been used to detect the presence and movement of a detection target such as a person or a vehicle from images of a predetermined detection area.
[0003]
With the background subtraction method, an image containing no detection target is captured in advance and stored as a background image. Images of the detection area are then captured one after another, the difference from the background image is extracted (this processing is called background subtraction), and the presence of the detection target is determined from the features of the extracted region. For example, the region extracted by background subtraction is binarized, and if the deviation of its size and shape from a template image of the detection target (for example, a person) is within a predetermined range, it is judged to be a person; if the deviation falls outside that range, it is judged not to be a person.
[0004]
With the inter-frame difference method, the difference between an image captured at time (t-1) and an image captured at time t is extracted, and the presence of the detection target is determined from the features of the extracted region. The inter-frame difference method also performs processing similar to the background subtraction method to judge whether the extracted region is the detection target (see, for example, Patent Document 1).
[0005]
[Patent Document 1]
JP-A-6-201715 (page 3 and FIG. 4)
[0006]
[Problems to Be Solved by the Invention]
Among the image processing apparatuses described above, those using the background subtraction method can detect the region of the detection target accurately as long as the background image is appropriate; however, an image containing no detection target must be captured and stored as the background image, and capturing such an image is not easy. For example, capturing the background image manually takes labor, because the image must be taken at a moment when no detection target is present. Moreover, when the brightness of the detection area changes under the influence of sunlight or the like, the change remains in the image obtained by background subtraction, so in such an environment the background image must be updated every time the brightness changes, which cannot be handled manually. Methods for capturing the background image automatically have therefore been proposed, such as using the average of images captured over a long time as the background image, or updating the background image pixel by pixel, slowly for pixels whose luminance changes greatly and quickly for pixels whose luminance changes little. With either method, however, if a person stays still in the detection area for a long time, the person becomes buried in the background image and can no longer be detected.
[0007]
Those using the inter-frame difference method obtain the difference between two images captured at different times, so the detection target can be extracted accurately if it has moved substantially in the meantime; but when the target moves only slightly or stops at the same place, only part of it, or none of it, can be extracted. To extract a slowly moving target accurately, one could lengthen the interval at which images of the detection area are captured, thereby increasing the amount the target moves between frames; in that case, however, the position of a fast-moving target changes very greatly between frames, which makes searching for the target difficult.
[0008]
The present invention has been made in view of the above problems, and its object is to provide an image processing apparatus that can detect a detection target accurately whatever its movement, whether the target moves fast, moves slowly, or stands still.
[0009]
[Means for Solving the Problems]
To achieve the above object, the invention of claim 1 is an image processing apparatus that detects the presence and movement of a detection target such as a person or a vehicle from images of a predetermined detection area, comprising: imaging means for capturing images of the detection area at predetermined time intervals; contour extracting means for creating a contour image by extracting contours from an image captured by the imaging means; contour image storage means for storing the contour images created by the contour extracting means; first moving-contour extracting means for extracting, from one or more contour images stored in the contour image storage means, the contour portions that changed within a predetermined period and creating a moving-contour image for the detection time; second moving-contour extracting means for extracting, from a plurality of contour images stored in the contour image storage means, the contour portions that changed within a period longer than the predetermined period and creating a moving-contour image for the detection time; first search range setting means for setting, on the image created by the first moving-contour extracting means, the range the detection target could have reached from its previously detected position as a first search range; first similarity calculating means for calculating the similarity between the contours extracted by the first moving-contour extracting means within the first search range and the detection target; second search range setting means for setting, on the image created by the second moving-contour extracting means, a range near the previously detected position and narrower than the first search range as a second search range; second similarity calculating means for calculating the similarity between the contours extracted by the second moving-contour extracting means within the second search range and the detection target; third search range setting means for setting, on the contour image created by the contour extracting means for the detection time, a range near the previously detected position and narrower than the second search range as a third search range; third similarity calculating means for calculating the similarity between the contours extracted this time by the contour extracting means within the third search range and the detection target; and detection judging means for judging the presence of the detection target from the similarities calculated by the first, second, and third similarity calculating means.
[0010]
In the invention of claim 2, based on claim 1, the first moving-contour extracting means comprises: first contour-image difference means for creating a first difference image by extracting the difference between the contour image of the imaging two frames earlier and the contour image of the previous imaging, both stored in the contour image storage means; second contour-image difference means for creating a second difference image by extracting the difference between the contour image of the previous imaging stored in the contour image storage means and the contour image created this time by the contour extracting means; and common-contour extracting means for creating an image by extracting the portions common to the first difference image and the second difference image.
[0011]
In the invention of claim 3, based on claim 1, the second moving-contour extracting means comprises: first contour-image difference means for creating a first difference image by extracting the difference between the contour image of the imaging two frames earlier stored in the contour image storage means and the contour image created this time by the contour extracting means; second contour-image difference means for creating a second difference image by extracting the difference between the contour image of the previous imaging stored in the contour image storage means and the contour image created this time by the contour extracting means; and contour combining means for creating an image by combining the first difference image and the second difference image.
[0012]
In the invention of claim 4, based on claim 1, the first similarity calculating means comprises: storage means for storing a template image representing the contour of the detection target; counting means for scanning the first search range using the image created by the first moving-contour extracting means as the search image and counting, at each scanning point, the number of pixels at which both a pixel of the template image and the corresponding pixel of the search image contain a contour; and means for calculating the similarity by normalizing the count of the counting means by the total number of pixels of the template image.
[0013]
In the invention of claim 5, based on claim 1, the second similarity calculating means comprises: storage means for storing a template image representing the contour of the detection target; first counting means for scanning the second search range using the image created by the second moving-contour extracting means as the search image and counting, at each scanning point, the number of pixels at which both a pixel of the template image and the corresponding pixel of the search image contain a contour; second counting means for counting, at each scanning point, the number of pixels at which neither a pixel of the template image nor the corresponding pixel of the search image contains a contour; and means for calculating the similarity by adding the count of the first counting means normalized by the number of contour pixels of the template image to the count of the second counting means normalized by the number of non-contour pixels of the template image.
[0014]
In the invention of claim 6, based on claim 1, the third similarity calculating means comprises: storage means for storing a template image representing the contour of the detection target; first counting means for scanning the third search range using the image created this time by the contour extracting means as the search image and counting, at each scanning point, the number of pixels at which both a pixel of the template image and the corresponding pixel of the search image contain a contour; second counting means for counting, at each scanning point, the number of pixels at which neither a pixel of the template image nor the corresponding pixel of the search image contains a contour; and means for calculating the similarity by adding the count of the first counting means normalized by the number of contour pixels of the template image to the count of the second counting means normalized by the number of non-contour pixels of the template image.
[0015]
In the invention of claim 7, based on claim 1, the detection judging means judges that the detection target is stationary if the third similarity calculated by the third similarity calculating means is equal to or greater than a first threshold; judges that the detection target is making a slight movement if the third similarity is below the first threshold and the second similarity calculated by the second similarity calculating means is equal to or greater than a second threshold; and judges that the detection target is making a larger movement if the third similarity is below the first threshold, the second similarity is below the second threshold, and the first similarity calculated by the first similarity calculating means is equal to or greater than a third threshold.
[0016]
[Embodiments of the Invention]
(Embodiment 1)
Embodiment 1 of the present invention will be described with reference to FIGS. 1 to 7. The image processing apparatus of this embodiment detects the presence and movement of a detection target such as a person or a vehicle in a predetermined detection area. FIG. 1 is a block diagram showing the schematic configuration of the apparatus, which comprises: imaging means 1 such as a TV camera installed, for example, on a ceiling and photographing the room from the ceiling at predetermined time intervals; image input means 2 that takes in the image signal from the imaging means 1; contour extracting means 3 that extracts object contours (edges) from the image signal supplied through the image input means and creates a contour image; contour image storage means 4 that stores the contour images extracted by the contour extracting means 3; first moving-contour extracting means 51 that extracts, from one or more contour images stored in the contour image storage means 4, the contour portions that changed within a predetermined period and creates a moving-contour image for the detection time; first search range setting means 61 that sets, on the image (moving-contour image) created by the first moving-contour extracting means 51, the range the detection target could have reached as a first search range; first similarity calculating means 71 that calculates the similarity between the contours within the first search range and the detection target (for example, a human body); second moving-contour extracting means 52 that extracts, from a plurality of contour images stored in the contour image storage means 4, the contour portions that changed within a period longer than the predetermined period and creates a moving-contour image for the detection time; second search range setting means 62 that sets, on the image (moving-contour image) created by the second moving-contour extracting means 52, a range near the previously detected position of the detection target and narrower than the first search range as a second search range; second similarity calculating means 72 that calculates the similarity between the contours within the second search range and the detection target (human body); third search range setting means 63 that sets, on the contour image created by the contour extracting means 3 for the detection time, a range near the previously detected position and narrower than the second search range as a third search range; third similarity calculating means 73 that calculates the similarity between the contours within the third search range and the detection target (human body); and detection judging means 8 that judges, from the similarities calculated by the first to third similarity calculating means 71 to 73, whether the detection target is present in the detection area and, if so, where it is. The contour extracting means 3, the first and second moving-contour extracting means 51 and 52, the first to third search range setting means 61 to 63, the first to third similarity calculating means 71 to 73, and the detection judging means 8 are implemented, for example, by the computing functions of a personal computer.
[0017]
The contour extracting means 3 extracts contours using, for example, the SOBEL operator. The SOBEL operator is a 3 x 3 filter, and by using a line buffer holding three lines, contours can be extracted while the image is being input. Alternatively, a frame memory storing one screen of image signal may be provided: after one screen of image signal from the image input means 2 has been stored in the frame memory, the contour extracting means 3 reads the necessary image signal from the frame memory, applies the SOBEL operator to it, and writes the extracted contours into the contour image storage means 4.
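By way of illustration, the following is a minimal NumPy sketch of the contour extraction described above: a 3 x 3 SOBEL gradient followed by binarization. The threshold value and the 8-bit grayscale input format are assumptions for the sketch, not taken from the patent.

```python
import numpy as np

# 3x3 SOBEL kernels for the horizontal and vertical gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
KY = KX.T

def sobel_contours(gray, thresh=80.0):
    """Binary contour image (1 = contour pixel) from a grayscale frame."""
    g = gray.astype(np.float32)
    h, w = g.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(1, h - 1):           # border pixels are left at 0
        for x in range(1, w - 1):
            patch = g[y - 1:y + 2, x - 1:x + 2]
            gx = float((patch * KX).sum())
            gy = float((patch * KY).sum())
            # Binarize the gradient magnitude into contour / non-contour.
            out[y, x] = 1 if (gx * gx + gy * gy) ** 0.5 >= thresh else 0
    return out
```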
[0018]
The first moving-contour extracting means 51 performs processing that extracts the contour of the detection target accurately when the target has moved substantially or its motion is large. For example, as shown in FIG. 2, the first moving-contour extracting means 51 performs difference processing and binarization between the contour image B2 of the previous imaging (time (t-1)) stored in the contour image storage means 4 and the contour image B3 of the current imaging (time t), thereby extracting the contour portions that changed within the predetermined period (between the previous and current imagings) and outputting a moving-contour image C1 for the detection time. Incidentally, if the detection target is completely stationary, there is no difference between the contour image B2 of the previous imaging and the current contour image B3, so no contour is extracted into the moving-contour image C1.
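A sketch of this difference-and-binarize step, assuming the contour images are already binary NumPy arrays; the same function applied to the frame from time (t-2) gives the longer-interval extraction of paragraph 0021.

```python
import numpy as np

def moving_contours(contour_a, contour_b):
    """Difference + binarization of two binary contour images.

    For the first extractor, the inputs are the frames at t-1 and t;
    feeding the frame at t-2 instead doubles the sampling gap, as the
    second extractor does.
    """
    # A pixel is a moving contour where the two binary images disagree.
    return (contour_a != contour_b).astype(np.uint8)
```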
[0019]
Before the detection target has been detected, the first search range setting means 61 does not know where on the screen the target will appear, so it searches the whole image W0, as shown in FIG. 3; that is, in the initial operation the target is sought among all moving contours in the image W0. Once the target has been detected, its detected position at the previous imaging is stored, and the range the target could have reached around that position is set as the first search range W1; restricting the search to W1 makes it less susceptible to disturbance.
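The search ranges W1 to W3 can be realized as windows around the previous detection, clamped to the image; a small sketch, under the assumption (not fixed by the patent) that the ranges are axis-aligned squares:

```python
def search_window(prev_x, prev_y, reach, width, height):
    """Clamp a (2*reach+1)-pixel square around the previous detection.

    reach is largest for W1 (how far the target can move between
    frames), smaller for W2, and smallest for W3; before any detection
    the whole image is scanned instead.
    """
    x0, x1 = max(0, prev_x - reach), min(width, prev_x + reach + 1)
    y0, y1 = max(0, prev_y - reach), min(height, prev_y + reach + 1)
    return x0, y0, x1, y1
```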
[0020]
The first similarity calculating means 71 calculates the similarity between the contour features of the detection target (a template image) stored beforehand in a memory (storage means, not shown) and the features of the contour X appearing in the moving-contour image C1. The first similarity calculating means 71 is designed to detect the target accurately when its displacement or motion is large; in that case almost the whole contour of the target is extracted, but its size and shape tend to change considerably. The first similarity calculating means 71 therefore evaluates the features rather roughly instead of strictly. For example, it stores in advance the number of contour pixels of the template image of the detection target as a reference feature, scans the first search range in the moving-contour image C1, sets at each scanning point a frame of the same size as the template image, counts the contour pixels inside the frame, and takes the ratio of that count to the reference feature as the similarity. If, within the first search range, this ratio (similarity) is the closest to 1 and closer to 1 than a predetermined threshold, the means judges that the detection target is at that scanning point and outputs that position as the target's location.
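A sketch of this rough similarity, assuming a binary window the size of the template has been cut out at the scanning point; the stored reference feature is the template's contour-pixel count:

```python
import numpy as np

def similarity1(window, template_contour_pixels):
    """Ratio of contour pixels under the frame to the template's
    contour-pixel count; a value near 1 means the amount of contour
    matches.  The scan keeps the point whose ratio is nearest 1 and
    accepts it only if it is nearer 1 than a preset threshold."""
    return float(np.count_nonzero(window)) / float(template_contour_pixels)
```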
[0021]
The second moving-contour extracting means 52 performs processing that extracts the contour accurately when the detection target has moved only slightly or its motion is small. For example, as shown in FIG. 4, the second moving-contour extracting means 52 performs difference processing and binarization between the contour image B1 of the imaging two frames earlier (time (t-2)) stored in the contour image storage means 4 and the contour image B3 created this time by the contour extracting means 3, thereby extracting the contour portions that changed within a period longer than the predetermined period (between the imaging two frames earlier and the current imaging) and outputting a moving-contour image C2 for the detection time. Since the sampling interval between the two contour images B1 and B3 used for the difference is twice that of the first moving-contour extracting means 51 described above, this is suited to extracting the contour Y of a slowly moving target. Incidentally, if the target is stationary, there is no difference between the contour image B1 of the imaging two frames earlier and the current contour image B3, so no contour is extracted into the moving-contour image C2.
[0022]
When the displacement or motion of the detection target is small, the target can be assumed to be near the position detected at the previous imaging. As shown in FIG. 5, the second search range setting means 62 therefore sets a range near the previously detected position of the target and narrower than the first search range W1 as the second search range W2, and restricting the search to W2 makes it less susceptible to disturbance.
[0023]
The second similarity calculating means 72 calculates the similarity between the contour features of the detection target (template image) stored beforehand in the memory and the features of the contour X appearing in the moving-contour image C2. The second similarity calculating means 72 is designed to detect the target accurately when its displacement or motion is small; in that case the size and shape of the target tend to change little. The second similarity calculating means 72 therefore evaluates the contour features relatively strictly: for example, it scans the search range W2 in the moving-contour image C2, sets at each scanning point a frame of the same size as the template image, and compares the contours inside the frame with the template image. It counts up when corresponding pixels both contain a contour or both contain none, and if the count is the highest within the search range W2 and higher than a predetermined threshold, it judges that the detection target is at that scanning point and outputs that position as the target's location.
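A sketch of this stricter matching, written in the normalized two-term form that claim 5 and Embodiment 5 spell out (this paragraph itself describes a single combined count); both inputs are binary arrays of template size:

```python
import numpy as np

def similarity2(window, template):
    """Contour matches and non-contour matches are normalized separately
    by the template's contour and non-contour pixel counts and added,
    so contour and background must both agree (maximum score 2.0)."""
    t = template.astype(bool)
    w = window.astype(bool)
    both_contour = np.count_nonzero(t & w)
    both_empty = np.count_nonzero(~t & ~w)
    return (both_contour / np.count_nonzero(t)
            + both_empty / np.count_nonzero(~t))
```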
[0024]
The contour extracting means 3 also performs processing that extracts the contour accurately when the detection target is stationary. For example, as shown in FIG. 6, the contour extracting means 3 applies contour extraction and binarization to the image A input from the image input means 2 at time t and outputs a contour image B3. In this contour image B3, all contours, including the background, are extracted regardless of whether the detection target is moving or stationary.
[0025]
When the detection target is stationary, it can be assumed to be in the immediate vicinity of the position detected at the previous imaging. As shown in FIG. 7, the third search range setting means 63 therefore sets as the third search range a range W3 or W4 that is at or near the previously detected position and narrower than the second search range, and restricting the search to W3 or W4 makes it less susceptible to disturbance. Since the contour image B3 extracted by the contour extracting means 3 contains many contours of objects other than the detection target (the background and so on), it is desirable to narrow the third search range as far as possible. In FIG. 7, W3 is the search range when it is limited to the previously detected position of the target, and W4 is the search range when it is limited to the vicinity of that position.
[0026]
The third similarity calculating means 73 calculates the similarity between the contour features of the detection target (template image) stored beforehand in the memory and the contour features of the contour image B3 created this time by the contour extracting means 3. It is designed to detect the target accurately when the target is stationary; in that case the size and shape of the target can be assumed to change hardly at all. The third similarity calculating means 73 therefore evaluates the contour features strictly: for example, it scans the search range in the contour image B3, sets at each scanning point a frame of the same size as the template image, and compares the contours inside the frame with the template image. It counts up when corresponding pixels both contain a contour or both contain none, and if the count is the highest within the search range W3 and higher than a predetermined threshold, it judges that the detection target is at that scanning point and outputs that position as the target's location.
[0027]
The detection judging means 8 judges the presence of the detection target from the results of the first to third similarity calculating means 71 to 73: for example, it weights the result of the first similarity calculating means 71 highest, followed by the results of the second similarity calculating means 72 and the third similarity calculating means 73 in that order, and judges that the detection target (for example, a person) is present if any of the similarity calculating means 71 to 73 outputs a target position, and that no person is present if none does.
[0028]
As described above, this embodiment provides the first moving-contour extracting means 51, which detects the target accurately when its displacement or motion is large, the second moving-contour extracting means 52, which detects it accurately when the displacement or motion is comparatively small, and the contour extracting means 3, which detects it accurately when it is stationary. Furthermore, the search range is set wide when the displacement or motion of the target is large; the second search range is set near the position detected at the previous imaging when the displacement or motion is comparatively small; and the third search range, narrower than the second, is set near the previously detected position when the target is stationary. The detection target can therefore be detected accurately and reliably whatever its behavior.
[0029]
(Embodiment 2)
Embodiment 2 of the present invention will be described with reference to FIGS. 8 and 9. In this embodiment, the first moving-contour extracting means 51 of Embodiment 1 comprises: first contour-image difference means 51a, which creates a first difference image D1 extracting the contours of the portions that changed within the predetermined period (between the imaging two frames earlier and the previous imaging) by taking the difference between the contour image B1 of the imaging two frames earlier (time (t-2)) and the contour image B2 of the previous imaging (time (t-1)), both stored in the contour image storage means 4; second contour-image difference means 51b, which creates a second difference image D2 extracting the contours of the portions that changed within the predetermined period (between the previous and current imagings) by taking the difference between the contour image B2 of the previous imaging stored in the contour image storage means 4 and the contour image B3 of the current imaging (time t); and common-contour extracting means 51c, which outputs an image for the detection time (the moving-contour image) by extracting the portions common to the first difference image D1 and the second difference image D2. The configuration other than the first moving-contour extracting means 51 is the same as in Embodiment 1, so common components carry the same reference numerals and their description is omitted.
[0030]
The operation of the first moving-contour extracting means 51 is described below with reference to FIG. 9. The imaging means 1 captures images of the detection area at predetermined time intervals, and the contour extracting means 3 creates contour images extracted from the image signal of the imaging means 1 supplied through the image input means 2, storing the created images one after another in the contour image storage means 4. In the first moving-contour extracting means 51, the first contour-image difference means 51a performs difference processing between the contour image B1 of the imaging two frames earlier and the contour image B2 of the previous imaging, extracting the contours that changed between those two imagings to create the difference image D1. The second contour-image difference means 51b performs difference processing between the contour image B2 of the previous imaging and the current contour image B3, extracting the contours that changed between the previous and current imagings to create the difference image D2. The common-contour extracting means 51c then creates an image by extracting the portions common to the difference images D1 and D2 output by the first and second contour-image difference means 51a and 51b.
[0031]
For example, when the detection target moves substantially between the imaging two frames earlier and the current imaging, the contour X of the target is extracted into the difference image D1 by the first contour-image difference means 51a taking the difference of the contour images B1 and B2; however, contours of background portions that were hidden behind the target two frames earlier appear in the contour image B2 of the previous imaging, so those portions are also extracted into D1. Similarly, the contour X of the target is extracted into the difference image D2 by the second contour-image difference means 51b taking the difference of the contour images B2 and B3, but contours of background portions hidden behind the target at the previous imaging appear in the current contour image B3 and are likewise extracted into D2. The common-contour extracting means 51c extracts the contours common to the difference images D1 and D2 (MIN processing) and binarizes them to create the moving-contour image C1; since each background contour appearing in D1 or D2 exists in only one of the two images, it is removed when the common portions are extracted, and only the contour X of the detection target at the detection time is extracted accurately.
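The whole of this extractor reduces to two differences and a logical AND; a minimal sketch, assuming binary contour images at t-2, t-1, and t:

```python
import numpy as np

def moving_contours_common(b1, b2, b3):
    """Embodiment 2: keep only contours present in BOTH differences.

    Background edges uncovered by the moving target appear in only one
    of the two difference images, so the MIN (logical AND) removes them.
    """
    d1 = b1 != b2          # changed between t-2 and t-1
    d2 = b2 != b3          # changed between t-1 and t
    return (d1 & d2).astype(np.uint8)
```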
[0032]
Since the first similarity calculating means 71 searches for the detection target in the moving-contour image C1 obtained by the above processing, the target can be detected accurately, unaffected by the background, even when its displacement or motion is large.
[0033]
(Embodiment 3)
Embodiment 3 of the present invention will be described with reference to FIGS. 10 and 11. In this embodiment, the second moving-contour extracting means 52 of Embodiment 1 comprises: first contour-image difference means 52a, which takes the difference between the contour image B1 of the imaging two frames earlier (time (t-2)) stored in the contour image storage means 4 and the contour image B3 of the current imaging (time t) and creates a first difference image D3 extracting the contours of the portions that changed in that interval; second contour-image difference means 52b, which takes the difference between the contour image B2 of the previous imaging (time (t-1)) stored in the contour image storage means 4 and the current contour image B3 and creates a second difference image D4 extracting the contours of the portions that changed in that interval; and contour combining means 52c, which outputs an image C2 combining the first difference image D3 and the second difference image D4. Taking the difference between the contour images B1 and B3 of the imaging two frames earlier and the current imaging extracts the contours that changed in that interval, taking the difference between the contour images B2 and B3 of the previous and current imagings extracts the contours that changed in that interval, and combining them therefore extracts the contours of portions that changed over a period longer than the predetermined period. The configuration other than the second moving-contour extracting means 52 is the same as in Embodiment 1, so common components carry the same reference numerals and their description is omitted.
[0034]
The image processing procedure of the second moving-contour extracting means 52 is described below with reference to FIG. 11. The imaging means 1 captures images of the detection area at predetermined time intervals, and the contour extracting means 3 creates contour images extracted from the image signal of the imaging means 1 supplied through the image input means 2, storing the created images one after another in the contour image storage means 4. In the second moving-contour extracting means 52, the first contour-image difference means 52a performs difference processing between the contour image B1 of the imaging two frames earlier and the current contour image B3, extracting the contours that changed between those imagings to create the difference image D3. The second contour-image difference means 52b performs difference processing between the contour image B2 of the previous imaging and the contour image B3 of the current imaging, extracting the contour portions that changed between the previous and current imagings to create the difference image D4. The contour combining means 52c then combines the first and second difference images D3 and D4 and binarizes the result to create the moving-contour image C2.
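This extractor is the complementary combination of Embodiment 2's, a union rather than an intersection; a minimal sketch under the same assumptions:

```python
import numpy as np

def moving_contours_long(b1, b2, b3):
    """Embodiment 3: union of the (t-2, t) and (t-1, t) differences.

    Motion missed by one pair (e.g. a hand that moved and moved back)
    is caught by the other, so slow or stop-and-go movement survives.
    """
    d3 = b1 != b3          # changed between t-2 and t
    d4 = b2 != b3          # changed between t-1 and t
    return (d3 | d4).astype(np.uint8)
```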
[0035]
In this way the second moving-contour extracting means 52 extracts both the motion between the imaging two frames earlier and the current imaging and the motion between the previous and current imagings, and combines the results. Consider, for example, a person who moves a hand to some position between the imaging two frames earlier and the previous imaging and then returns the hand to its original position by the current imaging: there is no movement between the imaging two frames earlier and the current imaging, so the difference of the contour images B1 and B3 cannot extract a moving contour, but there is movement between the previous and current imagings, so the difference of the contour images B2 and B3 can. Conversely, if the person moves a hand to some position between the imaging two frames earlier and the previous imaging, and at the current imaging the hand is in the same position as at the previous one (that is, the hand stopped between the previous and current imagings), there is no movement between the previous and current imagings, so the difference of the contour images B2 and B3 cannot extract a moving contour, but there is movement between the imaging two frames earlier and the current imaging, so the difference of the contour images B1 and B3 can.
[0036]
When the person being detected moves as described above, a moving contour can be extracted by one or the other of the first and second contour-image difference means 52a and 52b, and since the contour combining means 52c combines their outputs, a moving contour is obtained in either case. Because the detection target is searched for in the moving-contour image C2 obtained in this way, it can be detected accurately even when the target, or a part of it, moves only slightly or alternates between moving and stopping.
[0037]
(Embodiment 4)
Embodiment 4 of the present invention will be described with reference to FIGS. 12 and 13. This embodiment is the same as Embodiment 1 except for the similarity calculation of the first similarity calculating means 71, so common components carry the same reference numerals, their description is omitted, and only the characteristic part of this embodiment is described.
[0038]
As shown in FIG. 12, the first similarity calculating means 71 scans the first search range W1 set by the first search range setting means 61 within the image W0, sets at each scanning point a frame of the same size as the prestored template image T of the detection target, and performs matching between the image E inside the frame and the template image T. As explained in Embodiment 1, the first similarity calculating means 71 performs a less strict similarity computation than the second and third similarity calculating means 72 and 73: it counts the pixels at which both a pixel of the template image T and the corresponding pixel of the image E contain a contour, and normalizes the count by the size (total pixel count) of the template image T.
[0039]
The similarity computation of the first similarity calculating means 71 is illustrated with the images of FIGS. 13(a) to 13(d). FIG. 13(a) shows an example of the template image T, a binary image of 7 x 8 pixels (56 pixels in total). Each cell represents a pixel; white pixels contain a contour and hatched pixels do not. FIGS. 13(b) to 13(d) show the images E1, E2, and E3 inside the frames set at three scanning points within the first search range W1. Image E1 has comparatively many contours: the number of pixels at which both E1 and the template image T contain a contour is 17, and the number at which neither does is 21. Image E2 has somewhat fewer contours: 11 pixels where both contain a contour and 35 where neither does. Image E3 has the fewest contours of the three: 3 pixels where both contain a contour and 35 where neither does. The first similarity calculating means 71 counts the pixels at which both the template image T and the search image E1 to E3 contain a contour and normalizes the result by the size (total pixel count) of T to obtain the similarity (degree of match): for E1 the count is 17, giving a similarity of 17/56 = 0.304; for E2 the count is 11, giving 11/56 = 0.196; and for E3 the count is 3, giving 3/56 = 0.0536. Among the three images, E1 thus has the highest similarity.
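These figures can be checked directly; a small sketch of this embodiment's count-and-normalize computation (note it counts co-occurring contour pixels, unlike the pixel-count ratio of paragraph 0020):

```python
import numpy as np

def similarity1_match(window, template):
    """Embodiment 4: co-occurring contour pixels over total template pixels."""
    both = np.count_nonzero(window.astype(bool) & template.astype(bool))
    return both / template.size

# Paragraph 0039 figures for the 7x8 template (56 pixels): 17, 11 and 3
# co-occurring contour pixels give 17/56 = 0.304, 11/56 = 0.196 and
# 3/56 = 0.0536, so image E1 scores highest, as stated above.
```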
[0040]
In this way the first similarity calculating means 71 sets a frame of the same size as the template image T at every scanning point within the first search range W1 and computes the similarity between the image in the frame and T; if the maximum similarity exceeds a predetermined threshold, it judges that the detection target is in the image with the maximum similarity and outputs that scanning point as the target's location. The characteristic of this computation is that the similarity is high at positions with comparatively many contours, because a large displacement or motion of the target is assumed. Also, since the similarity is normalized by the size (total pixel count) of the template image T, it can be used on the same scale regardless of the size of T.
[0041]
(Embodiment 5)
Embodiment 5 of the present invention will be described with reference to FIGS. 14 and 15. This embodiment is the same as Embodiment 1 except for the similarity calculation of the second similarity calculating means 72, so common components carry the same reference numerals, their description is omitted, and only the characteristic part of this embodiment is described.
[0042]
As shown in FIG. 14, the second similarity calculating means 72 scans the second search range W2 set by the second search range setting means 62 within the image W0, sets at each scanning point a frame of the same size as the prestored template image T of the detection target, and performs matching between the image F inside the frame and the template image T. As explained in Embodiment 1, the second similarity calculating means 72 performs a comparatively strict similarity computation compared with the first similarity calculating means 71: it separately counts the pixels at which both a pixel of the template image T and the corresponding pixel of the image F contain a contour and the pixels at which neither does, and normalizes the counts by the numbers of contour and non-contour pixels respectively to obtain the similarity. The similarity is therefore high for an image that matches the template in a balanced way in both its contour and non-contour portions, and low otherwise.
[0043]
The similarity computation of the second similarity calculating means 72 is illustrated with the images of FIGS. 15(a) to 15(d). FIG. 15(a) shows an example of the template image T, a binary image of 7 x 8 pixels (56 pixels in total). Each cell represents a pixel; there are 21 contour pixels (white) and 35 non-contour pixels (hatched). FIGS. 15(b) to 15(d) show the images F1, F2, and F3 inside the frames set at three scanning points within the second search range W2. Image F1 has comparatively many contours: the number of pixels at which both F1 and the template image T contain a contour is 17, and the number at which neither does is 21. Image F2 has somewhat fewer contours: 11 pixels where both contain a contour and 35 where neither does. Image F3 has the fewest contours of the three: 3 pixels where both contain a contour and 35 where neither does.
[0044]
The second similarity calculating means 72 separately counts the pixels at which both the template image T and the search image F1 to F3 contain a contour and the pixels at which neither does, normalizes the respective counts by the numbers of contour and non-contour pixels of T to obtain partial similarities (degrees of match), and adds the contour and non-contour similarities to obtain the total similarity. For F1, the both-contour count is 17 and the neither-contour count is 21, so the contour similarity is 17/21 = 0.810 and the non-contour similarity is 21/35 = 0.60, for a total similarity of 1.410. For F2, the counts are 11 and 35, so the contour similarity is 11/21 = 0.524, the non-contour similarity 35/35 = 1.0, and the total 1.524. For F3, the counts are 3 and 35, giving 3/21 = 0.143 and 35/35 = 1.0, for a total of 1.143. Among the three images, F2 thus has the highest similarity.
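The totals above can be reproduced with the two-term computation sketched after paragraph 0023 (similarity2); using the stated counts directly:

```python
# Template of FIG. 15(a): 21 contour pixels, 35 non-contour pixels.
# (both-contour, both-non-contour) counts for images F1, F2, F3:
for both_c, both_e in ((17, 21), (11, 35), (3, 35)):
    print(f"{both_c / 21 + both_e / 35:.3f}")
# -> 1.410, 1.524, 1.143 : F2 scores highest, as stated above.
```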
[0045]
In this way the second similarity calculating means 72 sets a frame of the same size as the template image T at every scanning point within the second search range W2 and computes the similarity between each framed image and T; if the maximum similarity exceeds a predetermined threshold, it judges that the detection target is in the image with the maximum similarity and outputs that scanning point as the target's location. The characteristic of this computation is that the similarity is high when both the contour and non-contour portions match the template image in a balanced way, because a small displacement or motion of the target is assumed.
[0046]
(Embodiment 6)
Embodiment 6 of the present invention will be described with reference to FIG. 16. This embodiment is the same as Embodiment 1 except for the similarity calculation of the third similarity calculating means 73, so common components carry the same reference numerals, their description is omitted, and only the characteristic part of this embodiment is described.
[0047]
The third similarity calculating means 73 scans the third search range W3 or W4 set by the third search range setting means 63 within the image W0; in this embodiment it sets a frame of the same size as the prestored template image T of the detection target only at the position detected at the previous imaging, and performs matching between the image G inside the frame and the template image T. The third similarity calculating means 73 computes the similarity in the same way as the second similarity calculating means 72 described in Embodiment 5, and if the computed similarity exceeds a predetermined threshold, it judges that the detection target is in the image G and outputs that scanning point as the target's location. The characteristic of this computation is that the similarity is high when both the contour and non-contour portions match the template image T in a balanced way, because the target is assumed to be stationary. To demand a stricter degree of stillness, the similarity threshold can simply be raised.
[0048]
(Embodiment 7)
Embodiment 7 of the present invention will be described with reference to FIG. 17. This embodiment is the same as Embodiment 1 except for the judgment processing of the detection judging means 8, so common components carry the same reference numerals, their description is omitted, and only the characteristic part of this embodiment is described.
[0049]
The detection judging means 8 first judges whether the maximum of the similarity computed by the third similarity calculating means 73 (the third similarity) is equal to or greater than a first threshold (S1), and if so, judges that the detection target is stationary (S2). If the judgment at S1 finds the maximum of the third similarity below the first threshold, the detection judging means 8 judges whether the maximum of the similarity computed by the second similarity calculating means 72 (the second similarity) is equal to or greater than a second threshold (S3), and if so, judges that the detection target is making a slight movement (S4). If the judgment at S3 finds the maximum of the second similarity below the second threshold, the detection judging means 8 judges whether the maximum of the similarity computed by the first similarity calculating means 71 (the first similarity) is equal to or greater than a third threshold (S5), and if so, judges that the detection target is making a large movement (S6). If the judgment at S5 finds the maximum of the first similarity below the third threshold, the detection judging means 8 judges that no detection target is present.
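The S1 to S6 flow is a three-stage threshold cascade; a minimal sketch, with the threshold values left as parameters since the patent does not fix them:

```python
def judge(sim3_max, sim2_max, sim1_max, th1, th2, th3):
    """Embodiment 7 decision order: stillness first (narrowest range),
    then slight motion, then large motion, otherwise no target."""
    if sim3_max >= th1:
        return "stationary"      # S1 -> S2
    if sim2_max >= th2:
        return "slight motion"   # S3 -> S4
    if sim1_max >= th3:
        return "large motion"    # S5 -> S6
    return "absent"
```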
[0050]
Since the detection judging means 8 thus first judges whether the detection target is stationary, then whether it is making a slight movement, and finally whether it is making a large movement, it is less susceptible to disturbance and can detect the target accurately. Moreover, among the first to third search ranges over which the first to third similarity calculating means 71 to 73 compute the similarities, the second search range is narrower than the first and the third narrower than the second, so even when there are several detection targets the search starts from the vicinity of each target; the targets can therefore be told apart, with the advantage that each can easily be tracked.
[0051]
[Effects of the Invention]
As described above, the invention of claim 1 is an image processing apparatus that detects the presence and movement of a detection target such as a person or a vehicle from images of a predetermined detection area, comprising: imaging means for capturing images of the detection area at predetermined time intervals; contour extracting means for creating a contour image by extracting contours from an image captured by the imaging means; contour image storage means for storing the contour images created by the contour extracting means; first moving-contour extracting means for extracting, from one or more contour images stored in the contour image storage means, the contour portions that changed within a predetermined period and creating a moving-contour image for the detection time; second moving-contour extracting means for extracting, from a plurality of contour images stored in the contour image storage means, the contour portions that changed within a period longer than the predetermined period and creating a moving-contour image for the detection time; first search range setting means for setting, on the image created by the first moving-contour extracting means, the range the detection target could have reached from its previously detected position as a first search range; first similarity calculating means for calculating the similarity between the contours extracted by the first moving-contour extracting means within the first search range and the detection target; second search range setting means for setting, on the image created by the second moving-contour extracting means, a range near the previously detected position and narrower than the first search range as a second search range; second similarity calculating means for calculating the similarity between the contours extracted by the second moving-contour extracting means within the second search range and the detection target; third search range setting means for setting, on the contour image created by the contour extracting means for the detection time, a range near the previously detected position and narrower than the second search range as a third search range; third similarity calculating means for calculating the similarity between the contours extracted this time by the contour extracting means within the third search range and the detection target; and detection judging means for judging the presence of the detection target from the similarities calculated by the first, second, and third similarity calculating means.
[0052]
The second moving-contour extracting means creates an image extracting the contour portions that changed within a period longer than the sampling interval (the predetermined period) of the first moving-contour extracting means, so by lengthening the sampling interval the contour of the detection target can be extracted reliably even when its displacement or motion is small. Furthermore, when the displacement or motion is small the target can be assumed to be near the position detected at the previous imaging, so the second search range setting means sets a range near the previous position and narrower than the first search range as the second search range, and the second similarity calculating means computes the similarity to the target within it; the search is therefore little affected by contours other than the target, and a target with small displacement or motion can be detected accurately. The first moving-contour extracting means creates an image extracting the contour portions that changed within the predetermined period, shorter than the sampling interval of the second moving-contour extracting means, so the contour of the target can be extracted reliably even when its displacement or motion is comparatively large; in addition, the first search range setting means sets the range the target could have reached as the first search range and the first similarity calculating means computes the similarity within it, so the target can be detected accurately even when its displacement or motion is comparatively large. Further, the third search range setting means sets, on the contour image extracted this time by the contour extracting means, a range near the previously detected position and narrower than the second search range as the third search range, within which the third similarity calculating means computes the similarity to the target; since all contours, including the background, appear in the contour image extracted this time, the contour of the target can be extracted correctly even when the target is stationary, and since a stationary target can be assumed to be near the previous position, making the third search range narrower than the second makes the search little affected by contours other than the target and allows a stationary target to be detected accurately. The detection judging means judges the presence and movement of the target from the results of the first, second, and third similarity calculating means: a target with comparatively large displacement or motion is detected by the first similarity calculating means, one with small displacement or motion by the second, and a stationary one by the third, so judging the presence and movement of the target from these results has the effect that the target can be detected accurately and reliably whatever its behavior.
[0053]
The invention of claim 2 is characterized in that, in the invention of claim 1, the first moving-contour extracting means comprises first contour-image difference means for creating a first difference image extracting the difference between the contour image of the imaging two frames earlier and the contour image of the previous imaging, both stored in the contour image storage means, second contour-image difference means for creating a second difference image extracting the difference between the contour image of the previous imaging stored in the contour image storage means and the contour image created this time by the contour extracting means, and common-contour extracting means for creating an image extracting the portions common to the first and second difference images. When the time intervals between the imaging two frames earlier, the previous imaging, and the current one are comparatively long (that is, when the displacement or motion of the target is large), the first difference image contains background contours that were hidden behind the target two frames earlier and appeared at the previous imaging; but if those background contours appear in both the previous and current contour images they do not appear in the second difference image, so the common-contour extracting means removes them by extracting the portions common to the first and second difference images, with the effect that the contour of the target can be extracted accurately without being affected by the background.
[0054]
The invention of claim 3 is characterized in that, in the invention of claim 1, the second moving-contour extracting means comprises first contour-image difference means for creating a first difference image extracting the difference between the contour image of the imaging two frames earlier stored in the contour image storage means and the contour image created this time by the contour extracting means, second contour-image difference means for creating a second difference image extracting the difference between the contour image of the previous imaging stored in the contour image storage means and the contour image created this time by the contour extracting means, and contour combining means for creating an image combining the first and second difference images. The first contour-image difference means takes the difference between the contour images of the imaging two frames earlier and the current imaging to create the first difference image, the second contour-image difference means takes the difference between the contour images of the previous and current imagings to create the second difference image, and the contour combining means combines them, so portions that changed between the previous imaging and now are extracted in addition to portions that changed between the imaging two frames earlier and now, with the effect that the contour of the target can be extracted accurately even when its displacement or motion is small.
[0055]
The invention of claim 4 is characterized in that, in the invention of claim 1, the first similarity calculating means comprises storage means for storing a template image representing the contour of the detection target, counting means for scanning the first search range using the image created by the first moving-contour extracting means as the search image and counting, at each scanning point, the pixels at which both a pixel of the template image and the corresponding pixel of the search image contain a contour, and means for calculating the similarity by normalizing the count of the counting means by the total number of pixels of the template image. Since the pixels at which both the template image and the search image contain a contour are counted at each scanning point, the method is little affected by changes in the shape or size of the target; and since the count is normalized by the total number of pixels of the template image, the similarity can be evaluated on the same scale regardless of the size of the target.
[0056]
The invention of claim 5 is characterized in that, in the invention of claim 1, the second similarity calculating means comprises storage means for storing a template image representing the contour of the detection target, first counting means for scanning the second search range using the image created by the second moving-contour extracting means as the search image and counting, at each scanning point, the pixels at which both a pixel of the template image and the corresponding pixel of the search image contain a contour, second counting means for counting, at each scanning point, the pixels at which neither contains a contour, and means for calculating the similarity by adding the count of the first counting means normalized by the number of contour pixels of the template image to the count of the second counting means normalized by the number of non-contour pixels of the template image. Since the first and second counting means separately count the pixels at which both the template image and the search image contain a contour and the pixels at which neither does, a target with small displacement or motion can be detected accurately; and since the counts are normalized by the numbers of contour and non-contour pixels of the template image, the similarity can be evaluated on the same scale regardless of the degree of contour extraction or the size of the target.
[0057]
The invention of claim 6 is characterized in that, in the invention of claim 1, the third similarity calculating means comprises storage means for storing a template image representing the contour of the detection target, first counting means for scanning the third search range using the image created this time by the contour extracting means as the search image and counting, at each scanning point, the pixels at which both a pixel of the template image and the corresponding pixel of the search image contain a contour, second counting means for counting, at each scanning point, the pixels at which neither contains a contour, and means for calculating the similarity by adding the count of the first counting means normalized by the number of contour pixels of the template image to the count of the second counting means normalized by the number of non-contour pixels of the template image. Since the first and second counting means separately count the pixels at which both the template image and the search image contain a contour and the pixels at which neither does, a target with small displacement or motion can be detected accurately; and since the counts are normalized by the numbers of contour and non-contour pixels of the template image, the similarity can be evaluated on the same scale regardless of the degree of contour extraction or the size of the target.
[0058]
The invention of claim 7 is characterized in that, in the invention of claim 1, the detection judging means judges that the detection target is stationary if the third similarity calculated by the third similarity calculating means is equal to or greater than a first threshold, judges that the target is making a slight movement if the third similarity is below the first threshold and the second similarity calculated by the second similarity calculating means is equal to or greater than a second threshold, and judges that the target is making a larger movement if the third similarity is below the first threshold, the second similarity is below the second threshold, and the first similarity calculated by the first similarity calculating means is equal to or greater than a third threshold. It first judges whether the target is stationary by comparing the third similarity, computed for the narrowest third search range, with the first threshold; next judges whether the target is making a comparatively small movement by comparing the second similarity, computed for the second-narrowest second search range, with the second threshold; and then judges whether the target is making a comparatively large movement by comparing the first similarity, computed for the widest first search range, with the third threshold. The target can therefore be detected accurately with little susceptibility to disturbance. Moreover, since the second search range is narrower than the first and the third narrower than the second, the search starts from the vicinity of each target's previously detected position even when there are several targets, with the effect that the targets can be told apart and each can easily be tracked.
[Brief Description of the Drawings]
[FIG. 1] A block diagram showing the schematic configuration of the image processing apparatus of Embodiment 1.
[FIG. 2] An explanatory diagram of an image processing procedure of the above.
[FIG. 3] An explanatory diagram of the first search range of the above.
[FIG. 4] An explanatory diagram of another image processing procedure of the above.
[FIG. 5] An explanatory diagram of the second search range of the above.
[FIG. 6] An explanatory diagram of yet another image processing procedure of the above.
[FIG. 7] An explanatory diagram of the third search range of the above.
[FIG. 8] A block diagram of the main part of the image processing apparatus of Embodiment 2.
[FIG. 9] An explanatory diagram of the image processing procedure of the above.
[FIG. 10] A block diagram of the main part of the image processing apparatus of Embodiment 3.
[FIG. 11] An explanatory diagram of the image processing procedure of the above.
[FIG. 12] An explanatory diagram of image processing by the image processing apparatus of Embodiment 4.
[FIG. 13] (a) is an example of the template image; (b) to (d) are examples of search images.
[FIG. 14] An explanatory diagram of image processing by the image processing apparatus of Embodiment 5.
[FIG. 15] (a) is an example of the template image; (b) to (d) are examples of search images.
[FIG. 16] An explanatory diagram of image processing by the image processing apparatus of Embodiment 6.
[FIG. 17] A flow chart of image processing by the image processing apparatus of Embodiment 7.
[Description of Reference Numerals]
3 Contour extracting means
4 Contour image storage means
8 Detection judging means
51, 52 First and second moving-contour extracting means
61-63 First to third search range setting means
71-73 First to third similarity calculating means
[0001]
TECHNICAL FIELD OF THE INVENTION
The present invention relates to an image processing apparatus that detects the presence or absence and movement of a detection target such as a person or a vehicle in a detection area from an image of a predetermined detection area.
[0002]
[Prior art]
In a conventional image processing apparatus, a method such as a background subtraction method or an inter-frame difference method has been used to detect the presence or absence and movement of a detection target such as a person or a vehicle from an image in a predetermined detection area.
[0003]
In the method using the background subtraction method, an image in which no detection target exists is captured in advance, and this image is stored as a background image. After that, the image of the detection area is sequentially captured, a difference from the background image is extracted (this processing is called a background difference), and the presence or absence of the detection target is detected based on the feature of the region of the extracted part. Was. For example, the image of the region extracted by the background difference is binarized, and if the size and shape of the image and the template image of the detection target (for example, a person) are within a predetermined range, it is determined that the person is a person, If it is out of the range, it is determined that the person is not a person.
[0004]
In the method using the inter-frame difference method, a difference between an image captured at time (t-1) and an image captured at time t is extracted, and the presence or absence of a detection target is determined based on the feature of the extracted region. Had been detected. In addition, in the inter-frame difference method, the same process as the background difference method is performed to determine whether or not a frame is a detection target (for example, see Patent Document 1).
[0005]
[Patent Document 1]
JP-A-6-201715 (page 3 and FIG. 4)
[0006]
[Problems to be solved by the invention]
Among the image processing apparatuses using the background subtraction method, among the above-described image processing apparatuses, if the background image is an appropriate image, the area to be detected can be accurately detected, but an image without the detection object is captured and stored as the background image. And it is not easy to capture the background image. For example, when a background image is manually picked up, there is a problem that it is necessary to take an image at a timing when the detection target does not exist. In addition, when the brightness of the detection area changes due to an influence of a change in sunshine or the like, the change remains in the image obtained by the background difference. Therefore, in such an environment, every time the brightness of the detection area changes. It was necessary to update the background image, which could not be handled manually. Therefore, a method of automatically capturing a background image has also been proposed.For example, a method in which an average value of images captured for a long time is used as a background image, or updating is slow when a luminance change is large for each pixel in the image, When the size is small, there is a method of updating the background image sequentially for each pixel by updating it quickly.However, in any case, when a person is stationary for a long time in the detection area, the person is buried in the background image. As a result, there is a problem that a person cannot be detected.
[0007]
Further, in the method using the inter-frame difference method, the difference between two images captured at different times is obtained. Therefore, if the detection target moves greatly during that time, the detection target can be accurately extracted. When the movement of the target is small or when the target stops at the same place, there is a problem that only a part of the detection target can be extracted or the detection target cannot be extracted at all. Further, in order to accurately extract the detection target even when the detection target moves slowly, it is conceivable to increase the amount of movement of the detection target between frames by increasing the interval at which an image of the detection area is captured. However, in this case, there is a problem in that a search for a detection target becomes difficult because the position of the detection target changes greatly between frames in a detection target having a large motion.
[0008]
The present invention has been made in view of the above problems, and an object of the present invention is to detect a motion of a detection target regardless of whether the motion is fast, slow, or stationary. An object of the present invention is to provide an image processing device capable of detecting an object with high accuracy.
[0009]
[Means for Solving the Problems]
In order to achieve the above object, according to the first aspect of the present invention, in an image processing apparatus that detects the presence or absence and movement of a detection target such as a person or a vehicle from an image obtained by capturing a predetermined detection area, Image capturing means for capturing images at time intervals, contour extracting means for creating a contour image by extracting a contour from an image captured by the image capturing means, contour image storing means for storing the contour image created by the contour extracting means, and contour image storing First moving contour extracting means for extracting a part of the contour that has changed within a predetermined period based on one or more contour images stored in the means and creating a moving contour image at a detection target time; A second shift for extracting a portion of a contour that has changed within a period longer than the predetermined period based on a plurality of contour images stored in the image storage unit and creating a moving contour image at a detection target time. A first search range setting for setting, as a first search range, a range in which the detection target can be moved from a previous detection position of the detection target with respect to the image created by the contour extraction means and the first moving contour extraction means. Means, first similarity calculating means for calculating the similarity between the contour extracted by the first moving contour extracting means and the detection object in the first search range, and an image created by the second moving contour extracting means On the other hand, a second search range setting means for setting a range smaller than the first search range near the detection position of the previous detection target as the second search range, and a second moving contour in the second search range. A second similarity calculating means for calculating a similarity between the contour extracted by the extracting means and the detection target; Narrower than search range 2 Third search range setting means for setting the range as a third search range, and third similarity calculation means for calculating the similarity between the contour extracted by the current contour extraction means and the detection target in the third search range And detection determination means for determining the presence or absence of a detection target based on the similarities calculated by the first, second, and third similarity calculation means.
[0010]
According to a second aspect of the present invention, in the first aspect of the present invention, the first moving contour extracting means extracts a difference between the contour image at the time of the last two-time imaging stored in the contour image storage means and the contour image at the time of the previous imaging. A first contour image difference means for creating a first difference image, and a second contour image difference extracting means for extracting a difference between the contour image at the time of the previous imaging stored in the contour image storage means and the contour image created by the current contour extraction means. It is characterized by comprising a second contour image difference means for creating a difference image and a common contour extraction means for creating an image obtained by extracting a common part between the first difference image and the second difference image.
[0011]
According to a third aspect of the present invention, in the first aspect of the present invention, the second moving contour extracting means includes a difference between a contour image stored in the contour image storing means at the time of the last two-time imaging and a contour image created by the present contour extracting means. A first contour image difference unit for creating a first difference image from which the contour image has been extracted, and a difference between the contour image at the time of previous imaging stored in the contour image storage unit and the contour image created by the current contour extraction unit. It is characterized by comprising a second contour image differencing means for creating a second difference image, and a contour synthesizing means for creating an image obtained by combining the first difference image and the second difference image.
[0012]
According to a fourth aspect of the present invention, in the first aspect of the present invention, the first similarity calculating means includes a storing means for storing a template image representing a contour of a detection target, and an image created by the first moving contour extracting means. Counting means for scanning within the first search range as an image to be searched, and counting the number of pixels having an outline together with each pixel of the template image and a corresponding pixel of the image to be searched at each scanning point; Means for calculating the similarity by normalizing the count value of the counting means with the total number of pixels of the template image.
[0013]
According to a fifth aspect of the present invention, in the first aspect of the present invention, the second similarity calculating means includes a storing means for storing a template image representing a contour of a detection target, and an image created by the second moving contour extracting means. A first counting unit that scans the second search range as an image to be searched and counts the number of pixels having outlines at each scanning point between each pixel of the template image and a corresponding pixel of the image to be searched; And second counting means for counting the number of pixels having no contour between each pixel of the template image and the corresponding pixel of the image to be searched at each scanning point; and counting the counting result of the first counting means with the template image. Means for calculating the degree of similarity by adding the value normalized by the number of pixels having an outline and the value normalized by the number of pixels having no outline in the template image to the count result of the second counting means. This The features.
[0014]
According to a sixth aspect of the present invention, in the first aspect of the present invention, the third similarity calculating means includes a storage means for storing a template image representing an outline of the detection target, and an image created by the current contour extraction means for searching. A first counting unit that scans the third search range as an image and counts the number of pixels having an outline together with each pixel of the template image and a corresponding pixel of the image to be searched at each scanning point; The second counting means for counting the number of pixels having no contour between each pixel of the template image and the corresponding pixel of the image to be searched at the scanning point; A means for calculating the similarity by adding a value normalized by a certain number of pixels and a value normalized by the number of pixels having no outline of the template image to the count result of the second counting means. To.
[0015]
According to a seventh aspect of the present invention, in the first aspect of the present invention, if the third similarity calculated by the third similarity calculating means is equal to or larger than the first threshold, the detection determination unit stops. And when the third similarity is lower than the first threshold, and the second similarity calculated by the second similarity calculating means is equal to or greater than the second threshold. When it is determined that the detection target is moving slightly, the first similarity is lower than the first threshold and the second similarity is lower than the second threshold, the first similarity is determined. If the first similarity calculated by the similarity calculating means is equal to or more than the third threshold, it is determined that the detection target is moving more.
[0016]
BEST MODE FOR CARRYING OUT THE INVENTION
(Embodiment 1)
First Embodiment A first embodiment of the present invention will be described with reference to FIGS. The image processing apparatus according to the present embodiment is for detecting the presence or absence and movement of a detection target such as a person or a vehicle in a predetermined detection area. FIG. 1 is a block diagram showing a schematic configuration of an image processing apparatus. For example, an imaging unit 1 such as a TV camera installed on a ceiling surface and taking images of a room from the ceiling surface at predetermined time intervals, and an input from the imaging unit 1 Image input means 2 for taking in an image signal to be input, contour extraction means 3 for extracting a contour (edge) of an object based on the image signal input via the image input means to create a contour image, and contour extraction. A contour image storing means for storing an image of the contour extracted by the means; and a contour part which has changed within a predetermined period is extracted based on one or a plurality of contour images stored in the contour image storing means. Then, a first moving contour extracting means 51 for creating a moving contour image at the detection target time, and a range in which the detection target can be moved with respect to the image (moving contour image) created by the first moving contour extracting means 51, 1 search range The first search range setting means 61 to be set, the first similarity calculation means 71 for calculating the similarity between the contour within the first search range and the detection target (for example, a human body), and the contour image storage means 4 A second moving contour extracting means 52 for extracting a portion of the contour that has changed within a period longer than the predetermined period based on the stored plurality of contour images and creating a moving contour image at a detection target time; In the image (moving contour image) created by the second moving contour extracting means 52, a second search range that is smaller than the first search range in the vicinity of the previous detection target detection position is set. Search range setting means 62, second similarity calculation means 72 for calculating the similarity between the contour within the second search range and the detection target (human body), and the detection target time generated by the contour extraction means 3. For the contour image, the previous detection target Third search range setting means 63 for setting a range smaller than the second search range near the outgoing position as a third search range, and similarity between the contour in the third search range and the detection target (human body) Is calculated based on the similarity calculated by the first to third similarity calculators 71 to 73, and whether or not the detection target is present in the detection area is determined. In this case, it is constituted by the detection judging means 8 for detecting the existence position of the detection target. Note that the contour extracting means 3, the first and second moving contour extracting means 51 and 52, the first to third search range setting means 61 to 63, the first to third similarity calculating means 71 to 73, and The detection judging means 8 is constituted by, for example, an arithmetic function of a personal computer.
[0017]
The contour extracting means 3 extracts a contour using, for example, a SOBEL operator. The SOBEL operator is a 3 × 3 filter, and by using a line buffer for three lines, it is possible to extract an outline while inputting an image. A frame memory for storing image signals for one screen is provided. After the image signals for one screen input from the image input means 2 are stored in the frame memory, the contour extracting means 3 reads the necessary image from the frame memory. The signal may be read, the read image signal may be subjected to arithmetic processing using a SOBEL operator, and the extracted contour may be written in the contour image storage unit 4.
[0018]
The first moving contour extracting means 51 performs processing for extracting the contour with high accuracy when the detection target moves largely or the movement is large. For example, as shown in FIG. 2, the first moving contour extraction unit 51 compares the contour image B2 stored in the contour image storage unit 4 at the time of the previous imaging (time (t−1)) with the time of the current imaging (time t). By performing a difference process and a binarization process with the contour image B3, a portion of the contour that has changed within a predetermined period (between the previous imaging and the current imaging) is extracted, and the detection is performed in the detection target time. The moving contour image C1 is output. Incidentally, when the detection target is completely still, there is no difference between the contour image B1 at the time of the previous imaging and the contour image B2 of the present time, so that no contour is extracted from the moving contour image C1.
[0019]
Since the first search range setting means 61 does not know where on the screen the detection target appears before the detection target is detected as shown in FIG. Explore. That is, at the time of the initial operation, the detection target is searched for from all the contours moving in the image W0. Then, once the detection target is detected, the detection position of the detection target at the time of the previous imaging is stored, and the range in which the detection target can move around the detection position at the time of the previous imaging is set as the first search range W1. By limiting the search to the first search range W1, the influence of disturbance is reduced.
[0020]
The first similarity calculating means 71 calculates the similarity between the feature of the contour to be detected (template image) stored in advance in a memory (storage means) (not shown) and the feature of the contour X appearing in the moving contour image C1. Calculate the degree. The first similarity calculating means 71 performs a process for accurately detecting when the movement amount or the movement of the detection target is large. In this case, almost the entire contour of the detection target is extracted. Shape tends to change relatively large. Therefore, the first similarity calculating means 71 does not strictly evaluate the feature but makes a relatively rough evaluation. For example, the first similarity calculating means 71 previously stores the number of pixels of the contour portion of the template image to be detected as a reference feature amount, scans the first search range in the moving contour image C1, and performs each scan. At the point, a frame having the same size as the template image is set, the number of pixels of the contour existing in the frame is calculated, and the ratio between the calculated number of pixels and the reference feature amount is calculated as the similarity. If the ratio (similarity) of the number of pixels is closest to 1 in the first search range and is closer to 1 than a predetermined threshold value, it is determined that a detection target exists at the scanning point, and the position is detected. Output as the target location.
[0021]
The second moving contour extraction means 52 performs processing for extracting the contour with high accuracy when the detection target moves slightly or the movement is small. For example, as shown in FIG. 4, the second moving contour extracting unit 52 creates the contour image B1 stored in the contour image storing unit 4 at the time of the last two imagings (time (t−2)) and the current contour extracting unit 3 By performing a difference process and a binarization process with the contour image B3, a contour portion that has changed within a period longer than the above-described predetermined period (between the last two-time imaging and the current imaging) is extracted. Then, the moving contour image C2 at the detection target time is output. Since the sampling interval between the two contour images B1 and B2 for which the difference processing is performed is doubled as compared with the above-described first moving contour extracting means 51, it is suitable for extracting the contour Y of a slow-moving detection target. . Incidentally, when the detection target is stationary, there is no difference between the contour image B1 at the time of the last two-time imaging and the current contour image B3, so that no contour is extracted in the moving contour image C2.
[0022]
If the movement amount or the movement of the detection target is small, it is assumed that the detection target is present near the detection position at the time of the previous imaging, so the second search range setting unit 62 sets the position as shown in FIG. A range smaller than the first search range W1 near the detection position of the previous detection target is defined as a second search range W2, and the influence of disturbance is reduced by searching the detection target only in the second search range W2. It is hard to receive.
[0023]
The second similarity calculating means 72 calculates the similarity between the feature (template image) of the contour of the detection target stored in the memory in advance and the feature of the contour X appearing in the moving contour image C2. The second similarity calculating means 72 performs a process for accurately detecting when the movement amount or the movement of the detection target is small. In this case, the size and shape of the detection target tend not to change much. Therefore, the second similarity calculating means 72 relatively strictly evaluates the features of the outline, for example, scans the search range W2 in the moving outline image C2 and, at each scanning point, forms a frame of the same size as the template image. After setting, the outline in the frame is compared with the template image. If the corresponding pixel has both contours or no contour exists, counting is performed. If the count value is the highest in the search range W2 and higher than a predetermined threshold, the scanning point is determined. Is determined to exist, and the position is output as the position where the detection target exists.
[0024]
Further, the contour extracting means 3 performs processing for extracting the contour with high accuracy when the detection target is stationary. For example, as shown in FIG. 6, the contour extraction means 3 performs contour extraction and binarization processing on the image A input from the image input means 2 at time t, and outputs a contour image B3. All the contours of the detection target including the background are extracted from the contour image B3 regardless of whether the detection target is moving or stationary.
[0025]
When the detection target is stationary, it is assumed that the detection target exists very close to the detection position at the time of the previous imaging, so the third search range setting means 63 sets the last detection target as shown in FIG. The range W3 or W4 that is at or near the detection position of the detection target and is smaller than the second search range is set as the third search range, and the detection target is searched for only in the third search range W3 or W4. This makes it less susceptible to disturbances. Since the contour image B3 extracted by the contour extracting means 3 contains many contours of an object (such as a background) other than the detection target, it is desirable to narrow the third search range as much as possible. W3 in FIG. 7 is a search range when the range is limited to the previous detection position of the detection target, and W4 is a search range when the range is limited to the vicinity of the previous detection position.
[0026]
The third similarity calculating means 73 calculates the similarity between the contour feature (template image) of the detection target previously stored in the memory and the contour feature of the contour image B3 created by the contour extracting means 3 this time. calculate. Here, consideration is given so that detection can be performed with high accuracy when the detection target is stationary. In this case, it is assumed that the size and shape of the detection target hardly change. Therefore, the third similarity calculating means 73 strictly evaluates the features of the outline, for example, scans a search range in the outline image B3 and sets a frame of the same size as the template image at each scanning point, The outline in this frame is compared with the template image. If the corresponding pixel has a contour or both have no contour, the count is incremented. If the count value is the highest in the search range W3 and is higher than a predetermined threshold, the detection is performed at the scanning point. It is determined that the target exists, and the position is output as the detection target existing position.
[0027]
The detection determining means 8 determines the presence or absence of a detection target based on the calculation results of the first to third similarity calculating means 71 to 73. The result is given the highest priority, and the calculation result of the second similarity calculation means 72 and the calculation result of the third similarity calculation means 73 are weighted in order, and the position of the detection target is output from any of the similarity calculation means 71 to 73. Is output, it is determined that there is a detection target (for example, a person), and if there is no output, there is no human.
[0028]
As described above, in the present embodiment, the first moving contour extraction means 51 that can accurately detect the detection target when the movement amount or the movement of the detection target is large, and the detection target when the movement amount or the movement is relatively small. Is provided with a second moving contour extracting means 52 capable of detecting the object accurately, and a contour extracting means 3 capable of accurately detecting the object to be detected when the object to be detected is stationary. Sets the search range wide, sets the second search range near the detection position at the time of previous imaging when the movement amount or movement of the detection object is relatively small, and sets the previous imaging range when the detection object is stationary. Since a range smaller than the second search range near the detection position at the time is set as the third search range, the detection target can be accurately and reliably detected regardless of the behavior of the detection target. be able to.
[0029]
(Embodiment 2)
Embodiment 2 of the present invention will be described with reference to FIGS. In the present embodiment, in the first embodiment described above, the first moving contour extracting means 51 is used to determine whether the first moving contour extracting means 51 and the contour image B1 stored in the contour image storing means 4 (time (t-2)) before and after the previous capturing. By taking a difference from the contour image B2 at (time (t-1)), a first difference image D1 in which the contour of a portion that has changed within a predetermined period (between the last two-time imaging and the previous imaging) is extracted. The first contour image difference means 51a for creating the contour image and the difference between the contour image B2 stored in the contour image storage means 4 and the contour image B3 taken at the current time (time t) are calculated. A second contour image difference unit 51b for creating a second difference image D2 that extracts a contour of a portion that has changed during the period (between the previous imaging and the current imaging); Extraction of the common part with the second difference image D2 The constitutes in common contour extracting means 51c for outputting an image (moving outline image) in the detection target time. Since the configuration other than the first moving contour extraction means 51 is the same as that of the first embodiment, the same reference numerals are given to the same components, and the description thereof will be omitted.
[0030]
The operation of the first moving contour extracting means 51 will be described below with reference to FIG. The imaging unit 1 captures an image of the detection area at predetermined time intervals, and the contour extraction unit 3 extracts a contour based on an image signal of the imaging unit 1 input via the image input unit 2. Are created, and the created images are sequentially stored in the contour image storage means 4. Then, in the first moving contour extracting means 51, the first contour image differentiating means 51a performs a difference process between the contour image B1 obtained at the time of the previous image capturing and the contour image B2 obtained at the previous image capturing, and performs the image processing at the time of the image capturing before the second time and the image capturing at the previous The contour changed between the times is extracted to create a difference image D1. Further, the second contour image difference means 51b performs a difference process between the contour image B2 at the time of the previous imaging and the current contour image B3, and extracts the outline changed between the previous imaging and the current imaging. A difference image D2 is created. Then, the common contour extraction unit 51c creates an image D1 obtained by extracting a common part of the difference images D1 and D2 output from the first and second contour image difference units 51a and 51b.
[0031]
For example, when the detection target moves greatly from the last two imagings to the current imaging, the first outline image difference means 51a calculates the difference between the outline images B1 and B2, and thereby the outline X of the detection object is extracted in the difference image D1. However, since the outline of the background portion hidden by the shadow of the detection target at the time of the imaging two times before appears in the outline image B2 of the previous imaging, this portion is also extracted as the difference image D1. Similarly, the contour X to be detected is extracted in the difference image D2 by the second contour image difference means 51b calculating the difference between the contour images B2 and B3. Since the outline of the part appears in the current outline image B3, this part is also extracted in the difference image D2. Then, the common contour extracting means 51c extracts a contour common to the difference images D1 and D2 (MIN processing), performs a binarization process to create a moving contour image C1, and generates a moving contour image C1 for each of the difference images D1 and D2. Since the outline of the appearing background part exists only in one image, the outline of the background part can be removed when extracting the common part, and only the outline X of the detection target at the detection target time can be accurately extracted. it can.
[0032]
Since the first similarity calculation means 71 searches for the presence or absence of the detection target in the moving contour image C1 obtained by the above processing, the detection target can be detected with high accuracy, without being strongly affected by the background, even when the movement amount or movement of the detection target is large.
[0033]
(Embodiment 3)
Embodiment 3 of the present invention will be described with reference to FIGS. 10 and 11. In the present embodiment, the second moving contour extraction means 52 of Embodiment 1 is composed of: first contour image difference means 52a, which takes the difference between the contour image B1 captured two imagings before (time (t-2)), stored in the contour image storage means 4, and the contour image B3 at the current imaging (time t) to create a first difference image D3 in which the contours of the portions that changed during this period are extracted; second contour image difference means 52b, which takes the difference between the contour image B2 at the previous imaging (time (t-1)), stored in the contour image storage means 4, and the current contour image B3 to create a second difference image D4 in which the contours of the portions that changed during this period are extracted; and contour synthesis means 52c, which outputs an image C2 obtained by combining the first difference image D3 and the second difference image D4. In this way, the contours that changed between the imaging two times before and the current imaging and the contours that changed between the previous imaging and the current imaging are both extracted and combined, so that the contours of the portions that changed within a period longer than the predetermined period are extracted. Since the configuration other than the second moving contour extraction means 52 is the same as that of Embodiment 1, the same components are denoted by the same reference numerals and their description is omitted.
[0034]
The procedure of the image processing by the second moving contour extraction means 52 will be described below with reference to FIG. 11. The imaging means 1 captures images of the detection area at predetermined time intervals, the contour extraction means 3 creates contour images by extracting contours from the image signals of the imaging means 1 input via the image input means 2, and the created contour images are sequentially stored in the contour image storage means 4. In the second moving contour extraction means 52, the first contour image difference means 52a performs difference processing between the contour image B1 obtained two imagings before and the current contour image B3, and extracts the contours that changed between the imaging two times before and the current imaging to create the difference image D3. The second contour image difference means 52b performs difference processing between the contour image B2 at the previous imaging and the contour image B3 at the current imaging, and extracts the contours that changed between the previous imaging and the current imaging to create the difference image D4. The contour synthesis means 52c then combines the first and second difference images D3 and D4 and performs binarization to create the moving contour image C2.
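The synthesis step is the pixel-wise union of the two difference images. A minimal sketch under the same binary-image assumptions as the Embodiment 2 example (names illustrative):

```python
import numpy as np

def second_moving_contour(b1, b2, b3):
    # Sketch of the second moving contour extraction (Embodiment 3).
    # b1, b2, b3: binary (0/1) contour images at times t-2, t-1 and t.
    d3 = np.logical_xor(b1, b3)  # D3: contours that changed between t-2 and t
    d4 = np.logical_xor(b2, b3)  # D4: contours that changed between t-1 and t
    c2 = np.logical_or(d3, d4)   # synthesis: union of the two differences
    return c2.astype(np.uint8)   # binarized moving contour image C2
```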
[0035]
As described above, the second moving contour extraction means 52 extracts both the movement between the imaging two times before and the current imaging and the movement between the previous imaging and the current imaging, and combines the results. Consider, for example, a case in which the person to be detected moves a hand to a certain position between the imaging two times before and the previous imaging, and by the current imaging has returned the hand to the position it occupied two imagings before. Since there is then no motion between the imaging two times before and the current imaging, the difference between the contour images B1 and B3 cannot be extracted as a moving contour; however, since there is motion between the previous imaging and the current imaging, taking the difference between the contour images B2 and B3 extracts it as a moving contour. Conversely, consider a case in which the person moves a hand to a certain position between the imaging two times before and the previous imaging and then keeps the hand at that position through the current imaging (that is, the hand stops between the previous imaging and the current imaging). Since there is then no motion between the previous imaging and the current imaging, the difference between the contour images B2 and B3 cannot be extracted as a moving contour; however, since there is motion between the imaging two times before and the current imaging, taking the difference between the contour images B1 and B3 extracts it as a moving contour.
[0036]
When the person to be detected moves in either of the ways described above, the moving contour can thus be extracted by at least one of the first and second contour image difference means 52a and 52b, and since the outputs of the first and second contour image difference means 52a and 52b are combined, the moving contour is obtained in either case. Because the presence or absence of the detection target is searched for in the moving contour image C2 obtained in this way, the detection target can be detected with high accuracy even when the movement amount or movement of the detection target or a part of it is small, and even when the detection target alternately moves and stops.
[0037]
(Embodiment 4)
Embodiment 4 of the present invention will be described with reference to FIGS. 12 and 13. This embodiment is the same as Embodiment 1 except for the similarity calculation processing by the first similarity calculation means 71, so the same components are denoted by the same reference numerals and their description is omitted; only the features of this embodiment will be described.
[0038]
As shown in FIG. 12, the first similarity calculation means 71 scans the first search range W1 set by the first search range setting means 61 in the image W0 to be searched, sets at each scanning point a frame of the same size as the previously stored template image T representing the detection target, and performs matching between the image E in this frame and the template image T. As described in Embodiment 1, the first similarity calculation means 71 performs a less strict similarity calculation than the second and third similarity calculation means 72 and 73: it counts the number of pixels at which both the pixel of the template image T and the corresponding pixel of the image E have a contour, and normalizes the count by the size (total number of pixels) of the template image T.
[0039]
Here, the procedure for calculating the similarity in the first similarity calculation means 71 will be described with reference to the images of FIGS. 13A to 13D. FIG. 13A shows an example of the template image T, a 7 × 8 pixel binary image (56 pixels in total); each square indicates a pixel, white pixels are portions having a contour, and hatched pixels are portions without a contour. FIGS. 13B to 13D respectively show the images E1, E2, and E3 in frames set at three scanning points in the first search range W1. The image E1 has relatively many contours: the number of pixels at which both the pixel of the image E1 and the corresponding pixel of the template image T have a contour is 17, and the number of pixels at which neither has a contour is 21. The image E2 has fewer contours: among the pixels of the image E2 and the corresponding pixels of the template image T, 11 pixels have a contour in both and 35 pixels have a contour in neither. The image E3 has the fewest contours of the three images E1 to E3: 3 pixels have a contour in both and 35 pixels have a contour in neither. The first similarity calculation means 71 counts the number of pixels at which both the pixel of the template image T and the corresponding pixel of the search target image E1 to E3 have a contour, and obtains the similarity (degree of coincidence) by normalizing the count by the size (total number of pixels) of the template image T. For the image E1 the count is 17, so the similarity is 17/56 = 0.304; for the image E2 the count is 11, so the similarity is 11/56 = 0.196; and for the image E3 the count is 3, so the similarity is 3/56 = 0.0536. The image E1 therefore has the highest similarity of the three images E1 to E3.
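In code, this loose similarity reduces to counting the positions at which both binary images contain a contour and dividing by the template size. A minimal sketch (the numpy representation and function name are assumptions):

```python
import numpy as np

def first_similarity(template, window):
    # Loose similarity of Embodiment 4: fraction of template positions at
    # which both the template T and the frame image E have a contour.
    # template, window: binary (0/1) arrays of the same shape.
    both_contour = np.logical_and(template == 1, window == 1).sum()
    return both_contour / template.size   # e.g. 17/56 = 0.304 for image E1
```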
[0040]
As described above, the first similarity calculation means 71 sets a frame of the same size as the template image T at every scanning point in the first search range W1 and obtains the similarity between the image in the frame and the template image T; if the maximum similarity is higher than a predetermined threshold, it determines that the detection target exists in the image having the highest similarity and outputs that scanning point as the position where the detection target exists. The characteristic of the calculation of this embodiment is that the similarity becomes high at positions having relatively many contours, because a large movement amount or movement of the detection target is assumed. Furthermore, since the similarity is obtained by normalizing by the size (total number of pixels) of the template image T, it can be evaluated on the same scale regardless of the size of the template image T.
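The scan over a search range is the same for Embodiments 4 to 6; only the similarity function and threshold change. A sketch of the loop follows (the search-point list, threshold handling, and return convention are illustrative assumptions):

```python
def scan_search_range(search_image, template, points, sim_fn, threshold):
    # At each scanning point, set a frame of the template's size, compute
    # the similarity, and report the best position if it clears the threshold.
    h, w = template.shape
    best_pos, best_sim = None, -1.0
    for (y, x) in points:
        window = search_image[y:y + h, x:x + w]
        if window.shape != template.shape:   # frame would fall off the image
            continue
        s = sim_fn(template, window)
        if s > best_sim:
            best_pos, best_sim = (y, x), s
    return best_pos if best_sim >= threshold else None   # None: no detection
```

Under these assumptions, `scan_search_range(C1, T, points_in_W1, first_similarity, th)` would realize the search of this embodiment.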
[0041]
(Embodiment 5)
Embodiment 5 of the present invention will be described with reference to FIGS. 14 and 15. This embodiment is the same as Embodiment 1 except for the similarity calculation processing by the second similarity calculation means 72, so the same components are denoted by the same reference numerals and their description is omitted; only the features of this embodiment will be described.
[0042]
As shown in FIG. 14, the second similarity calculation means 72 scans the second search range W2 set by the second search range setting means 62 in the image W0 to be searched, sets at each scanning point a frame of the same size as the previously stored template image T representing the detection target, and performs matching between the image F in this frame and the template image T. As described in Embodiment 1, the second similarity calculation means 72 performs a stricter similarity calculation than the first similarity calculation means 71: it separately counts the number of pixels at which both the pixel of the template image T and the corresponding pixel of the image F have a contour and the number of pixels at which neither has a contour, and obtains the similarity by normalizing each count. As a result, the similarity becomes high for an image that matches the template image in a well-balanced manner in both the portions having contours and the portions having no contours, and low otherwise.
[0043]
Here, the procedure for calculating the similarity in the second similarity calculation means 72 will be described with reference to the images of FIGS. 15A to 15D. FIG. 15A shows an example of the template image T, a 7 × 8 pixel binary image (56 pixels in total); each square indicates a pixel, the number of pixels with a contour (white pixels) is 21, and the number of pixels without a contour (hatched pixels) is 35. FIGS. 15B to 15D respectively show the images F1, F2, and F3 in frames set at three scanning points in the second search range W2. The image F1 has relatively many contours: the number of pixels at which both the pixel of the image F1 and the corresponding pixel of the template image T have a contour is 17, and the number of pixels at which neither has a contour is 21. The image F2 has fewer contours: among the pixels of the image F2 and the corresponding pixels of the template image T, 11 pixels have a contour in both and 35 pixels have a contour in neither. The image F3 has the fewest contours of the three images F1 to F3: 3 pixels have a contour in both and 35 pixels have a contour in neither.
[0044]
The second similarity calculation means 72 separately counts the number of pixels at which both the pixel of the template image T and the corresponding pixel of the search target image F1 to F3 have a contour and the number of pixels at which neither has a contour, normalizes the former by the number of contour pixels of the template image T and the latter by the number of non-contour pixels, and obtains the total similarity (degree of coincidence) by adding the two normalized values. For the image F1, the number of pixels with a contour in both is 17 and the number with a contour in neither is 21, so the similarity of the contour portion is 17/21 = 0.810, the similarity of the non-contour portion is 21/35 = 0.600, and the total similarity is 1.410. For the image F2, the corresponding counts are 11 and 35, so the similarity of the contour portion is 11/21 = 0.524, the similarity of the non-contour portion is 35/35 = 1.000, and the total similarity is 1.524. For the image F3, the counts are 3 and 35, so the similarity of the contour portion is 3/21 = 0.143, the similarity of the non-contour portion is 35/35 = 1.000, and the total similarity is 1.143. The image F2 therefore has the highest similarity of the three images F1 to F3.
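The balanced similarity can be sketched in the same style; the two counts are normalized separately against the template's own contour and non-contour pixel counts and then summed (names and representation remain assumptions):

```python
import numpy as np

def second_similarity(template, window):
    # Balanced similarity of Embodiment 5: contour matches and non-contour
    # matches are normalized separately and added together.
    on  = np.logical_and(template == 1, window == 1).sum()   # contour in both
    off = np.logical_and(template == 0, window == 0).sum()   # contour in neither
    n_on  = (template == 1).sum()    # contour pixels in T (21 in FIG. 15A)
    n_off = (template == 0).sum()    # non-contour pixels in T (35 in FIG. 15A)
    return on / n_on + off / n_off   # e.g. 17/21 + 21/35 = 1.410 for image F1
```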
[0045]
As described above, the second similarity calculation means 72 sets a frame of the same size as the template image T at every scanning point in the second search range W2 and obtains the similarity between the image in the frame and the template image T; if the maximum similarity is higher than a predetermined threshold, it determines that the detection target exists in the image having the maximum similarity and outputs that scanning point as the position where the detection target exists. The characteristic of the calculation of this embodiment is that the similarity becomes high when both the portions having contours and the portions having no contours match the template image in a well-balanced manner, because a small movement amount or movement of the detection target is assumed.
[0046]
(Embodiment 6)
Embodiment 6 of the present invention will be described with reference to FIG. 16. This embodiment is the same as Embodiment 1 except for the similarity calculation processing by the third similarity calculation means 73, so the same components are denoted by the same reference numerals and their description is omitted; only the features of this embodiment will be described.
[0047]
As shown in FIG. 16, the third similarity calculation means 73 scans the third search range W3 or W4 set by the third search range setting means 63 in the image W0 to be searched, sets at each scanning point a frame of the same size as the previously stored template image T representing the detection target, and performs matching between the image G in this frame and the template image T. Here, the third similarity calculation means 73 obtains the similarity by the same calculation processing as that of the second similarity calculation means 72 described in Embodiment 5; if the similarity is higher than a predetermined threshold, it determines that the detection target exists in the image G and outputs that scanning point as the position where the detection target exists. The characteristic of the similarity calculation of this embodiment is that the similarity becomes high when both the portions having contours and the portions having no contours match the template image T in a well-balanced manner, because the detection target is assumed to be stationary. When stricter stillness is required, the similarity threshold may be raised.
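Since the calculation itself is that of Embodiment 5, the stationary check amounts to applying it over the narrowest search range with a raised threshold. A one-line sketch reusing the `second_similarity` function above (the default threshold value here is purely an assumption):

```python
def third_similarity_check(template, window, threshold=1.6):
    # Embodiment 6 reuses the balanced calculation of Embodiment 5 over the
    # third (narrowest) search range; raising `threshold` demands stricter
    # stillness, as noted in the text.
    return second_similarity(template, window) >= threshold
```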
[0048]
(Embodiment 7)
Embodiment 7 of the present invention will be described with reference to FIG. 17. This embodiment is the same as Embodiment 1 except for the judgment processing by the detection judgment means 8, so the same components are denoted by the same reference numerals and their description is omitted; only the features of this embodiment will be described.
[0049]
The detection judgment means 8 first determines whether the maximum value of the similarity (third similarity) obtained by the third similarity calculation means 73 is equal to or greater than a first threshold (S1); if so, it determines that the detection target is stationary (S2). If, as a result of the determination in S1, the maximum value of the third similarity is lower than the first threshold, the detection judgment means 8 determines whether the maximum value of the similarity (second similarity) obtained by the second similarity calculation means 72 is equal to or greater than a second threshold (S3); if so, it determines that the detection target is making a slight movement (S4). If, as a result of the determination in S3, the maximum value of the second similarity is lower than the second threshold, the detection judgment means 8 determines whether the maximum value of the similarity (first similarity) obtained by the first similarity calculation means 71 is equal to or greater than a third threshold (S5); if so, it determines that the detection target is moving largely (S6). If, as a result of the determination in S5, the maximum value of the first similarity is lower than the third threshold, the detection judgment means 8 determines that there is no detection target.
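The flow of FIG. 17 is a plain threshold cascade checked from stillness toward large movement. A sketch of the judgment (the threshold parameters and returned labels are illustrative):

```python
def judge_detection(sim1_max, sim2_max, sim3_max, th1, th2, th3):
    # Decision cascade of the detection judgment means 8 (steps S1-S6):
    # stillness first, then slight movement, then large movement.
    if sim3_max >= th1:
        return "stationary"        # S1 -> S2
    if sim2_max >= th2:
        return "slight movement"   # S3 -> S4
    if sim1_max >= th3:
        return "large movement"    # S5 -> S6
    return "no detection target"
```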
[0050]
As described above, the detection judgment means 8 determines first whether the detection target is stationary, then whether it is moving slightly, and then whether it is moving largely, so the detection is less likely to be affected by disturbance and the detection target can be detected accurately. Moreover, among the first to third search ranges in which the first to third similarity calculation means 71 to 73 obtain the similarities, the second search range is narrower than the first search range and the third search range is narrower than the second search range; therefore, even when there are a plurality of detection targets, each search starts from the vicinity of the respective detection target, the plurality of detection targets can be distinguished from one another, and each detection target can easily be tracked.
[0051]
【The invention's effect】
As described above, the invention according to claim 1 is an image processing apparatus that detects the presence or absence and movement of a detection target such as a person or a vehicle from images of a predetermined detection area, comprising: imaging means for capturing images of the detection area at predetermined time intervals; contour extraction means for creating a contour image by extracting contours from an image captured by the imaging means; contour image storage means for storing the contour images created by the contour extraction means; first moving contour extraction means for extracting, based on one or more contour images stored in the contour image storage means, the portions of the contours that changed within a predetermined period and creating a moving contour image at the detection target time; second moving contour extraction means for extracting, based on a plurality of contour images stored in the contour image storage means, the portions of the contours that changed within a period longer than the predetermined period and creating a moving contour image at the detection target time; first search range setting means for setting, on the image created by the first moving contour extraction means, the range within which the detection target can move from its previous detection position as a first search range; first similarity calculation means for calculating the similarity between the contours extracted by the first moving contour extraction means within the first search range and the detection target; second search range setting means for setting, on the image created by the second moving contour extraction means, a range narrower than the first search range near the previous detection position of the detection target as a second search range; second similarity calculation means for calculating the similarity between the contours extracted by the second moving contour extraction means within the second search range and the detection target; third search range setting means for setting, on the contour image at the detection target time created by the contour extraction means, a range narrower than the second search range near the previous detection position of the detection target as a third search range; third similarity calculation means for calculating the similarity between the contours extracted by the current contour extraction means within the third search range and the detection target; and detection judgment means for judging the presence or absence of the detection target based on the similarities calculated by the first, second, and third similarity calculation means.
[0052]
Thus, the second moving contour extraction means creates an image in which the contour portions that changed within a period longer than the sampling interval (the predetermined period) of the first moving contour extraction means are extracted. By making its sampling time interval longer than that of the first moving contour extraction means, the contour of the detection target can be reliably extracted even when the movement amount or movement of the detection target is small; and because a detection target with a small movement amount or movement can be assumed to exist near its detection position at the previous imaging, the second search range setting means sets a range narrower than the first search range near the previous detection position, and the second similarity calculation means obtains the similarity with the detection target within this second search range, so the calculation is less likely to be affected by contours other than the detection target and a detection target with small movement can be detected accurately. The first moving contour extraction means creates an image in which the contour portions that changed within a predetermined period shorter than the sampling interval of the second moving contour extraction means are extracted, so the contour of the detection target can be reliably extracted even when the movement amount or movement of the detection target is relatively large; and since the first search range setting means sets the range within which the detection target can move as the first search range and the first similarity calculation means calculates the similarity with the detection target within it, the detection target can be detected accurately in that case as well. Furthermore, the third search range setting means sets, on the contour image extracted by the current contour extraction means, a range narrower than the second search range near the detection position of the detection target at the previous imaging as the third search range, and the third similarity calculation means obtains the similarity with the detection target within this third search range. Because all contours including the background appear in the contour image extracted by the current contour extraction means, the contour of the detection target can be accurately extracted even when it is stationary; and because a stationary detection target can be assumed to be near its previous detection position, setting a range narrower than the second search range as the third search range makes the calculation less likely to be affected by contours other than the detection target, so a stationary detection target can be detected accurately. The detection judgment means judges the presence or absence and movement of the detection target based on the calculation results of the first, second, and third similarity calculation means: a detection target with a large movement amount or movement is detected by the first similarity calculation means, one with a small movement amount or movement by the second similarity calculation means, and a stationary one by the third similarity calculation means. By judging the presence or absence and movement of the detection target from these calculation results, the detection target can be detected accurately and reliably regardless of its behavior.
[0053]
According to the invention of claim 2, in the invention of claim 1, the first moving contour extraction means comprises: first contour image difference means for creating a first difference image by extracting the difference between the contour image from two imagings before stored in the contour image storage means and the contour image from the previous imaging; second contour image difference means for creating a second difference image by extracting the difference between the contour image from the previous imaging stored in the contour image storage means and the contour image created by the current contour extraction means; and common contour extraction means for creating an image by extracting the part common to the first difference image and the second difference image. If the time intervals between the imaging two times before, the previous imaging, and the current imaging are relatively long (that is, the movement amount or movement of the detection target is large), the contour of the background hidden behind the detection target at the imaging two times before appears in the first difference image; however, a background contour that appears in both the contour image of the previous imaging and the current contour image does not appear in the second difference image, so extracting the part common to the first and second difference images removes the background contours, and the contour of the detection target can be accurately extracted without being affected by the background.
[0054]
According to the invention of claim 3, in the invention of claim 1, the second moving contour extraction means comprises: first contour image difference means for creating a first difference image by extracting the difference between the contour image from two imagings before stored in the contour image storage means and the contour image created by the current contour extraction means; second contour image difference means for creating a second difference image by extracting the difference between the contour image from the previous imaging stored in the contour image storage means and the contour image created by the current contour extraction means; and contour synthesis means for creating an image by combining the first difference image and the second difference image. Since the first contour image difference means creates the first difference image from the difference between the contour image two imagings before and the current contour image, the second contour image difference means creates the second difference image from the difference between the previous contour image and the current contour image, and the contour synthesis means combines the two, the portions that changed between the previous imaging and the current time can be extracted in addition to the portions that changed between the imaging two times before and the current time, so the contour of the detection target can be accurately extracted even when its movement amount or movement is small.
[0055]
According to the invention of claim 4, in the invention of claim 1, the first similarity calculation means comprises: storage means for storing a template image representing the contour of the detection target; counting means for scanning the first search range using the image created by the first moving contour extraction means as the search target image and counting, at each scanning point, the number of pixels at which both the pixel of the template image and the corresponding pixel of the search target image have a contour; and means for calculating the similarity by normalizing the count value of the counting means by the total number of pixels of the template image. Since the number of pixels at which both the template image and the search target image have a contour is counted at each scanning point, the calculation is less susceptible to changes in the shape and size of the detection target; and since the count is normalized by the total number of pixels of the template image, the similarity can be evaluated on the same scale regardless of the size of the detection target.
[0056]
According to the invention of claim 5, in the invention of claim 1, the second similarity calculation means comprises: storage means for storing a template image representing the contour of the detection target; first counting means for scanning the second search range using the image created by the second moving contour extraction means as the search target image and counting, at each scanning point, the number of pixels at which both the pixel of the template image and the corresponding pixel of the search target image have a contour; second counting means for counting, at each scanning point, the number of pixels at which neither the pixel of the template image nor the corresponding pixel of the search target image has a contour; and means for calculating the similarity by adding the count result of the first counting means normalized by the number of contour pixels of the template image and the count result of the second counting means normalized by the number of non-contour pixels of the template image. Since the first and second counting means count, for each pixel of the template image and the corresponding pixel of the search target image, the number of pixels that both have a contour and the number of pixels that both lack one, a detection target with a small movement amount or movement can be detected accurately; and since the count results are normalized by the numbers of contour and non-contour pixels of the template image, the similarity can be evaluated on the same scale regardless of the degree of contour extraction and the size of the detection target.
[0057]
According to the invention of claim 6, in the invention of claim 1, the third similarity calculation means comprises: storage means for storing a template image representing the contour of the detection target; first counting means for scanning the third search range using the image created by the current contour extraction means as the search target image and counting, at each scanning point, the number of pixels at which both the pixel of the template image and the corresponding pixel of the search target image have a contour; second counting means for counting, at each scanning point, the number of pixels at which neither the pixel of the template image nor the corresponding pixel of the search target image has a contour; and means for calculating the similarity by adding the count result of the first counting means normalized by the number of contour pixels of the template image and the count result of the second counting means normalized by the number of non-contour pixels of the template image. Since the first and second counting means count, for each pixel of the template image and the corresponding pixel of the search target image, the number of pixels that both have a contour and the number of pixels that both lack one, a detection target with a small movement amount or movement can be detected accurately; and since the count results are normalized by the numbers of contour and non-contour pixels of the template image, the similarity can be evaluated on the same scale regardless of the degree of contour extraction and the size of the detection target.
[0058]
According to the invention of claim 7, in the invention of claim 1, the detection judgment means determines that the detection target is stationary if the third similarity calculated by the third similarity calculation means is equal to or greater than a first threshold; determines that the detection target is making a slight movement if the third similarity is lower than the first threshold and the second similarity calculated by the second similarity calculation means is equal to or greater than a second threshold; and determines that the detection target is moving more largely if the third similarity is lower than the first threshold, the second similarity is lower than the second threshold, and the first similarity calculated by the first similarity calculation means is equal to or greater than a third threshold. By first comparing the third similarity, calculated over the narrowest search range, with the first threshold to determine whether the detection target is stationary, then comparing the second similarity, calculated over the second-narrowest second search range, with the second threshold to determine whether the detection target is moving relatively little, and then comparing the first similarity, calculated over the widest first search range, with the third threshold to determine whether the detection target is moving relatively largely, the detection is less likely to be affected by disturbance and the detection target can be detected accurately. Furthermore, since the second search range is narrower than the first search range and the third search range is narrower than the second, even when there are a plurality of detection targets each search starts from the vicinity of the detection position of the respective target at the previous imaging, so the plural detection targets can be distinguished from one another and each can easily be tracked.
[Brief description of the drawings]
FIG. 1 is a block diagram of a schematic configuration of an image processing apparatus according to a first embodiment.
FIG. 2 is an explanatory diagram illustrating a procedure of the image processing according to the first embodiment.
FIG. 3 is an explanatory diagram of a first search range according to the first embodiment.
FIG. 4 is an explanatory diagram illustrating another image processing procedure according to the first embodiment.
FIG. 5 is an explanatory diagram of a second search range according to the embodiment.
FIG. 6 is an explanatory diagram illustrating another image processing procedure according to the first embodiment.
FIG. 7 is an explanatory diagram of a third search range according to the embodiment.
FIG. 8 is a block diagram of a main part of an image processing apparatus according to a second embodiment.
FIG. 9 is an explanatory diagram illustrating a procedure of the image processing according to the embodiment.
FIG. 10 is a block diagram of a main part of an image processing apparatus according to a third embodiment.
FIG. 11 is an explanatory diagram illustrating a procedure of the image processing according to the third embodiment.
FIG. 12 is a diagram illustrating image processing performed by the image processing apparatus according to the fourth embodiment.
FIG. 13A is an example of a template image, and FIGS. 13B to 13D are examples of a search target image.
FIG. 14 is an explanatory diagram of image processing by the image processing apparatus according to the fifth embodiment.
FIG. 15A is an example of a template image, and FIGS. 15B to 15D are examples of images to be searched.
FIG. 16 is an explanatory diagram of image processing performed by the image processing apparatus according to the sixth embodiment.
FIG. 17 is a flowchart of image processing performed by the image processing apparatus according to the seventh embodiment.
[Explanation of symbols]
3 Contour extraction means
4 Contour image storage means
8 Detection judgment means
51, 52 First and second moving contour extraction means
61 to 63 First to third search range setting means
71 to 73 First to third similarity calculation means

Claims (7)

1. An image processing apparatus for detecting the presence or absence and movement of a detection target such as a person or a vehicle from images of a predetermined detection area, comprising: imaging means for capturing images of the detection area at predetermined time intervals; contour extraction means for creating a contour image by extracting contours from an image captured by the imaging means; contour image storage means for storing the contour images created by the contour extraction means; first moving contour extraction means for extracting, based on one or more contour images stored in the contour image storage means, portions of the contours that changed within a predetermined period and creating a moving contour image at a detection target time; second moving contour extraction means for extracting, based on a plurality of contour images stored in the contour image storage means, portions of the contours that changed within a period longer than the predetermined period and creating a moving contour image at the detection target time; first search range setting means for setting, on the image created by the first moving contour extraction means, the range within which the detection target can move from its previous detection position as a first search range; first similarity calculation means for calculating the similarity between the contours extracted by the first moving contour extraction means within the first search range and the detection target; second search range setting means for setting, on the image created by the second moving contour extraction means, a range narrower than the first search range near the previous detection position of the detection target as a second search range; second similarity calculation means for calculating the similarity between the contours extracted by the second moving contour extraction means within the second search range and the detection target; third search range setting means for setting, on the contour image at the detection target time created by the contour extraction means, a range narrower than the second search range near the previous detection position of the detection target as a third search range; third similarity calculation means for calculating the similarity between the contours extracted by the current contour extraction means within the third search range and the detection target; and detection judgment means for judging the presence or absence of the detection target based on the similarities calculated by the first, second, and third similarity calculation means.
2. The image processing apparatus according to claim 1, wherein the first moving contour extraction means comprises: first contour image difference means for creating a first difference image by extracting the difference between the contour image from two imagings before and the contour image from the previous imaging, both stored in the contour image storage means; second contour image difference means for creating a second difference image by extracting the difference between the contour image from the previous imaging stored in the contour image storage means and the contour image created by the current contour extraction means; and common contour extraction means for creating an image by extracting the part common to the first difference image and the second difference image.
3. The image processing apparatus according to claim 1, wherein the second moving contour extraction means comprises: first contour image difference means for creating a first difference image by extracting the difference between the contour image from two imagings before stored in the contour image storage means and the contour image created by the current contour extraction means; second contour image difference means for creating a second difference image by extracting the difference between the contour image from the previous imaging stored in the contour image storage means and the contour image created by the current contour extraction means; and contour synthesis means for creating an image by combining the first difference image and the second difference image.
4. The image processing apparatus according to claim 1, wherein the first similarity calculation means comprises: storage means for storing a template image representing the contour of the detection target; counting means for scanning the first search range using the image created by the first moving contour extraction means as the search target image and counting, at each scanning point, the number of pixels at which both the pixel of the template image and the corresponding pixel of the search target image have a contour; and means for calculating the similarity by normalizing the count value of the counting means by the total number of pixels of the template image.
5. The image processing apparatus according to claim 1, wherein the second similarity calculation means comprises: storage means for storing a template image representing the contour of the detection target; first counting means for scanning the second search range using the image created by the second moving contour extraction means as the search target image and counting, at each scanning point, the number of pixels at which both the pixel of the template image and the corresponding pixel of the search target image have a contour; second counting means for counting, at each scanning point, the number of pixels at which neither the pixel of the template image nor the corresponding pixel of the search target image has a contour; and means for calculating the similarity by adding the count result of the first counting means normalized by the number of contour pixels of the template image and the count result of the second counting means normalized by the number of non-contour pixels of the template image.
6. The image processing apparatus according to claim 1, wherein the third similarity calculation means comprises: storage means for storing a template image representing the contour of the detection target; first counting means for scanning the third search range using the image created by the current contour extraction means as the search target image and counting, at each scanning point, the number of pixels at which both the pixel of the template image and the corresponding pixel of the search target image have a contour; second counting means for counting, at each scanning point, the number of pixels at which neither the pixel of the template image nor the corresponding pixel of the search target image has a contour; and means for calculating the similarity by adding the count result of the first counting means normalized by the number of contour pixels of the template image and the count result of the second counting means normalized by the number of non-contour pixels of the template image.
7. The image processing apparatus according to claim 1, wherein the detection judgment means determines that the detection target is stationary if the third similarity calculated by the third similarity calculation means is equal to or greater than a first threshold; determines that the detection target is making a slight movement if the third similarity is lower than the first threshold and the second similarity calculated by the second similarity calculation means is equal to or greater than a second threshold; and determines that the detection target is moving more largely if the third similarity is lower than the first threshold, the second similarity is lower than the second threshold, and the first similarity calculated by the first similarity calculation means is equal to or greater than a third threshold.
JP2003086084A 2003-03-26 2003-03-26 Image processing device Expired - Fee Related JP4042602B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003086084A JP4042602B2 (en) 2003-03-26 2003-03-26 Image processing device

Publications (2)

Publication Number Publication Date
JP2004295416A true JP2004295416A (en) 2004-10-21
JP4042602B2 JP4042602B2 (en) 2008-02-06

Family

ID=33400838

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003086084A Expired - Fee Related JP4042602B2 (en) 2003-03-26 2003-03-26 Image processing device

Country Status (1)

Country Link
JP (1) JP4042602B2 (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0546769A (en) * 1991-08-12 1993-02-26 Nippon Telegr & Teleph Corp <Ntt> Method for calculating movement vector
JPH07263482A (en) * 1994-03-18 1995-10-13 Fujitsu Ltd Pattern matching method and manufacture of semiconductor device
JPH08185521A (en) * 1994-12-28 1996-07-16 Clarion Co Ltd Mobile object counter
JPH0973543A (en) * 1995-09-06 1997-03-18 Toshiba Corp Moving object recognition method/device
JP2001167282A (en) * 1999-12-10 2001-06-22 Toshiba Corp Device and method for extracting moving object
JP2002183666A (en) * 2000-12-12 2002-06-28 Fujitsu Kiden Ltd Pattern-recognizing device, image reader, pattern- recognizing method and recording medium
JP2002352239A (en) * 2001-05-29 2002-12-06 Matsushita Electric Works Ltd Image processing method and image processor using the same

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006148425A (en) * 2004-11-18 2006-06-08 Keio Gijuku Method and apparatus for image processing, and content generation system
JP4649640B2 (en) * 2004-11-18 2011-03-16 学校法人慶應義塾 Image processing method, image processing apparatus, and content creation system
JP2007249270A (en) * 2006-03-13 2007-09-27 Secom Co Ltd Image sensor
JP2011053951A (en) * 2009-09-02 2011-03-17 Canon Inc Image processing apparatus
JP2012174117A (en) * 2011-02-23 2012-09-10 Denso Corp Mobile object detection device
JP2013020311A (en) * 2011-07-07 2013-01-31 Fujitsu Ltd Image processing system, image processing method and image processing program
WO2016027410A1 (en) * 2014-08-21 2016-02-25 パナソニックIpマネジメント株式会社 Detection device and detection system
JP2016053938A (en) * 2014-08-21 2016-04-14 パナソニックIpマネジメント株式会社 Detection device, detection system and program
JP2016053939A (en) * 2014-08-21 2016-04-14 パナソニックIpマネジメント株式会社 Detection device, detection system, and program
JP2018129090A (en) * 2014-08-21 2018-08-16 パナソニックIpマネジメント株式会社 Detection apparatus, detection system, and program
JP2019215633A (en) * 2018-06-11 2019-12-19 オムロン株式会社 Control system, control unit, image processing apparatus, and program
JP7078894B2 (en) 2018-06-11 2022-06-01 オムロン株式会社 Control systems, controls, image processing devices and programs

Also Published As

Publication number Publication date
JP4042602B2 (en) 2008-02-06

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20050915

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20071018

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20071023

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20071105

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20101122

Year of fee payment: 3

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20111122

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121122

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131122

Year of fee payment: 6

LAPS Cancellation because of no payment of annual fees