JP2015158795A - Object detection device - Google Patents

Object detection device

Info

Publication number
JP2015158795A
JP2015158795A (application JP2014033154A)
Authority
JP
Japan
Prior art keywords
pixel
luminance
pixels
reference frame
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2014033154A
Other languages
Japanese (ja)
Inventor
Kenichi Ogawa (小川 憲一)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aiphone Co Ltd
Original Assignee
Aiphone Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aiphone Co Ltd filed Critical Aiphone Co Ltd
Priority to JP2014033154A priority Critical patent/JP2015158795A/en
Publication of JP2015158795A publication Critical patent/JP2015158795A/en
Pending legal-status Critical Current

Abstract

PROBLEM TO BE SOLVED: To provide an object detection device that reduces the amount of computation by focusing on the ratio of moving-state to static-state pixels within a cell and eliminating vector operations.
SOLUTION: The object detection device comprises: an image input unit 1 that generates a group of image frames; a temporal change extraction unit 2 that calculates the variance of luminance from the luminance change at the same position, pixel by pixel, between a reference frame and the image frames a predetermined number before or after it, and judges pixels whose luminance variance exceeds a prescribed value to be in the moving state and all other pixels to be in the static state; an inter-frame feature extraction unit 3 that forms cells of multiple adjacent pixels in the reference frame, computes for each cell the ratio of moving-state to static-state pixels using only addition and division, and normalizes it to obtain a motion feature quantity; and a detection unit 4 that detects a moving object with a strong classifier built from weak classifiers generated from the computed feature quantities and learning samples.

Description

The present invention relates to an object detection device that detects objects in video captured by a camera.

In recent years, with the rise and growing sophistication of crime, demand for security cameras with strong crime-prevention capabilities has been increasing even in ordinary households. A conventional security camera of this kind is configured so that the camera starts up and begins imaging only when a human presence sensor is triggered; a suspicious person within the camera's imaging range but far from the camera may therefore never be imaged at all, which is insufficient for crime prevention.
For this reason, the present inventors proposed a technique for detecting moving objects by video processing in Patent Document 1.

JP 2013-190943 A

The object detection device of Patent Document 1 reduces the amount of computation enough to detect objects such as people and vehicles with an inexpensive CPU, making it applicable to security cameras installed in ordinary homes.
However, although the feature quantity per cell is only two-dimensional, so the computation is lighter than one using HOG features, time-consuming processing still arose in the vector operation performed when extracting features (a division whose denominator is the square root of a sum of squares) and in the AdaBoost cascade classifier, and the computational load had not been reduced enough to run on an inexpensive CPU.

In view of these problems, it is an object of the present invention to provide an object detection device that focuses on the ratio of moving-state to static-state pixels within a cell rather than on vector magnitude, and that reduces the amount of computation through a simple co-occurrence representation.

To solve the above problems, the invention of claim 1 comprises: image frame group generation means for generating a group of image frames that are consecutive on the time axis from video captured by a camera; luminance change calculation means for taking an arbitrary one of the generated image frames as a reference frame and calculating, pixel by pixel, the maximum change in luminance at the same position across the image frames up to a predetermined number before or after the reference frame on the time axis; variance calculation means for calculating a luminance variance value, pixel by pixel, from the luminance change information at the same position between the reference frame and the image frames up to a predetermined number before or after it on the time axis; pixel state judgment means for judging a pixel to be in the moving state when its maximum luminance change is larger than a predetermined value and/or its luminance variance is larger than a predetermined value, and judging all other pixels to be in the static state; feature calculation means for forming cells of multiple adjacent pixels in the reference frame, computing and normalizing the ratio of moving-state to static-state pixels for each cell using addition and division, and thereby calculating a motion feature quantity; weak classifier generation means for generating weak classifiers expressing co-occurrence from the feature quantities and learning samples; and object detection means for detecting a moving object with a strong classifier generated from the weak classifiers through learning.

According to the present invention, the feature quantity is computed from the ratio of moving-state to static-state pixels obtained by addition and division, so the cumbersome computation of vector magnitude is no longer needed; and because a simple co-occurrence representation is used, the amount of computation is smaller than that of a conventional AdaBoost cascade classifier. This lightens the load on the computing means and makes it possible to detect objects such as people and vehicles quickly and inexpensively.

FIG. 1 is a functional block diagram showing an example of an object detection device according to the present invention. FIG. 2 is an explanatory diagram of a group of image frames showing the relationship between image frames and luminance values. FIG. 3 is an image-based explanatory diagram showing the flow of the computation that extracts a moving object.

Embodiments of the present invention are described in detail below with reference to the drawings. FIG. 1 is a functional block diagram showing an example of an object detection device according to the present invention. It comprises: an image input unit 1 that receives a video signal captured by a camera (not shown) and generates and outputs image frames consisting of images consecutive on the time axis at a predetermined interval; a temporal change extraction unit 2 that takes an arbitrary image frame as a reference frame and extracts the temporal change in luminance for each pixel; an inter-frame feature extraction unit 3 that calculates, for each pixel, a feature quantity representing the inter-frame difference; a detection unit 4 that detects objects in an image frame with a strong classifier built from learning samples and the calculated feature quantities; and a result output unit 5 that outputs the detection results.
Of these components, the image input unit 1, the temporal change extraction unit 2, the inter-frame feature extraction unit 3, and the detection unit 4 constitute the object detection device and are realized by a CPU or DSP executing a predetermined program. The whole system, including the result output unit 5, can be realized on, for example, a personal computer.

The operation of each unit is described in turn below. The image input unit 1 receives the video signal of, for example, a night-vision camera equipped with infrared illumination for monitoring at night, and generates and outputs an image frame (still image) every 0.03 seconds, for example. The image input unit 1 thus outputs image frames of digital data that are consecutive on the time axis.
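As a minimal sketch of how this image input stage could be realized (not part of the original specification), the following assumes an OpenCV-readable camera; the grayscale conversion, the function names, and the buffer size are illustrative assumptions, while the sliding window of frames reflects the past/future frame buffer described below.

```python
# Hedged sketch of the image input unit (画像入力部1): read frames from a
# camera, convert to grayscale luminance, and keep a sliding buffer of
# 2N+1 frames (K-N .. K .. K+N) for the later variance computation.
import cv2
import numpy as np
from collections import deque

def frame_stream(source=0, max_buffer=21):
    """Yield sliding, time-ordered buffers of grayscale frames."""
    cap = cv2.VideoCapture(source)
    buffer = deque(maxlen=max_buffer)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        buffer.append(gray.astype(np.float32))
        if len(buffer) == max_buffer:
            yield list(buffer)  # reference frame K sits at the middle index
    cap.release()
```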

The temporal change extraction unit 2 calculates a luminance variance value S for an arbitrary pixel of the image frames output by the image input unit 1. This computation is ultimately performed for every pixel in the frame.
FIG. 2 is an explanatory diagram of a group of image frames showing the relationship between image frames and luminance values. With K as the reference frame, it shows the relationship between the past frames, i.e. the image frames output immediately before the reference frame K, and the future frames output immediately after it. The luminance value of a pixel at an arbitrary coordinate in the reference frame K is denoted Ik, and the luminance at the same coordinate in the other image frames is denoted Ik(±n). The description below refers to FIG. 2.

For each pixel, two variance values S are computed: the variance (first variance value) S1 from the frame K−N, N frames in the past, up to the reference frame K, and the variance (second variance value) S2 from the reference frame K up to the frame K+N, N frames in the future.
The variance values S1 and S2 are calculated by, for example, Equation 1, where j is an integer from 1 to n.

[Equation 1 (数1) appears as an image in the original and is not reproduced here. A standard per-pixel sample variance consistent with the surrounding description would be $S_1 = \frac{1}{N}\sum_{j=1}^{N}\bigl(I_{k-j} - \bar{I}\bigr)^2$ and $S_2 = \frac{1}{N}\sum_{j=1}^{N}\bigl(I_{k+j} - \bar{I}\bigr)^2$, where $\bar{I}$ is the mean luminance over the respective frames (or the reference luminance $I_k$); the exact form is an assumption.]

Threshold processing is then applied to the variance values obtained in this way, and the static/moving state is determined. Specifically, a pixel is judged to be in the moving state if both the first variance value S1 and the second variance value S2 are at or above a threshold, and in the static state otherwise; luminance information binarized (two gradations) per pixel is thus output.
The threshold may be a preset fixed value, but with a fixed value the judgment varies greatly with the camera's installation environment and brightness, so it is preferable to determine a suitable value from the pixel variance values of past image frames.
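A hedged sketch of this variance-and-threshold step follows (the exact variance form is in the unreproduced Equation 1; a standard population variance is assumed, and the function name and array layout are illustrative):

```python
# Sketch of the temporal change extraction unit (時間変化抽出部2): per-pixel
# variance over the N past frames (S1) and the N future frames (S2),
# binarized against a threshold. `frames` is a list of 2N+1 grayscale
# arrays with the reference frame K at the middle index.
import numpy as np

def moving_mask(frames, threshold):
    n = len(frames) // 2
    stack = np.stack(frames)   # shape: (2n+1, H, W)
    past = stack[: n + 1]      # frames K-N .. K
    future = stack[n:]         # frames K .. K+N
    s1 = past.var(axis=0)      # first variance value S1, per pixel
    s2 = future.var(axis=0)    # second variance value S2, per pixel
    # A pixel is "moving" when both variances reach the threshold; the
    # text also allows variants using the maximum luminance change.
    return (s1 >= threshold) & (s2 >= threshold)
```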

The inter-frame feature extraction unit 3 performs a predetermined computation on the moving-state and static-state pixels obtained over the whole image, normalizes the result, and calculates feature quantities.
The normalization is computed from the per-pixel static/moving judgments produced by the temporal change extraction unit 2, and the feature quantities are calculated per block, where a cell is formed from multiple adjacent pixels and a block is formed from multiple cells.
For example, one cell consists of 5×5 pixels, and one block consists of 3×3 cells.

FIG. 3 is an explanatory diagram of the feature computation. B1 and B2 denote blocks within the reference frame K, and C denotes a cell. Block B1 shows moving-state pixels in white and static-state pixels in black; block B2 shows static-state pixels in white and moving-state pixels in black.
Taking the moving-state pixels (white) in B1 as "1" and the static-state pixels (black) as "0" gives the normalized feature quantity Va; taking the static-state pixels (white) in B2 as "1" and the moving-state pixels (black) as "0" gives the normalized feature quantity Vr. Both are calculated by Equation 2 below.

[Equation 2 (数2) appears as an image in the original and is not reproduced here. Consistent with the description, the feature of the cell of interest is its moving-state (or static-state) count divided by the corresponding block total, e.g. $V_a = c_{\text{cell}}^{\text{moving}} \big/ \sum_{\text{block}} c^{\text{moving}}$ and $V_r = c_{\text{cell}}^{\text{static}} \big/ \sum_{\text{block}} c^{\text{static}}$; the exact form is an assumption.]

Here a, b, …, r are the static-state/moving-state counts of the individual cells. As Equation 2 shows, the moving-state and static-state feature quantities of the cell of interest are simply the cell's share of the block's moving-state count and of the block's static-state count, obtained with nothing more than addition and division.
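The following sketch illustrates this addition-and-division normalization under the stated assumptions (5×5-pixel cells, 3×3-cell blocks, centre cell as the cell of interest); since Equation 2 itself is not reproduced, the exact normalization is an assumption consistent with the text:

```python
# Sketch of the inter-frame feature extraction unit (フレーム間特徴抽出部3):
# count moving pixels per cell, then express the cell of interest's moving
# and static counts as shares of the block totals.
import numpy as np

def cell_counts(mask, cell=5):
    """Moving-pixel count per cell from a boolean moving/static mask."""
    h, w = mask.shape
    h, w = h - h % cell, w - w % cell                    # crop to whole cells
    cells = mask[:h, :w].reshape(h // cell, cell, w // cell, cell)
    return cells.sum(axis=(1, 3))

def block_features(counts, cell=5, block=3):
    """Return (Va, Vr) for the centre cell of every 3x3-cell block."""
    per_cell = float(cell * cell)                        # pixels per cell
    feats = []
    for i in range(counts.shape[0] - block + 1):
        for j in range(counts.shape[1] - block + 1):
            blk = counts[i:i + block, j:j + block]
            moving = blk.sum()                           # block moving count
            static = block * block * per_cell - moving   # block static count
            c = blk[block // 2, block // 2]              # cell of interest
            va = c / moving if moving else 0.0
            vr = (per_cell - c) / static if static else 0.0
            feats.append((va, vr))
    return feats
```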

The detection unit 4 extracts weak classifiers that express co-occurrence in a simple way from the features calculated by the inter-frame feature extraction unit 3, selects weak classifiers through learning to generate a strong classifier, and detects the object (the case of detecting a person is described here). The simple co-occurrence is calculated by, for example, Equation 3.
In Equation 3, h_{a·r} and h_{a+r} are the weak classifier outputs; W^j_{a+} is the sum of weights according to the value of the feature quantity Va over all positive samples; W^j_{r+} is the corresponding sum for Vr over all positive samples; W^j_{a−} is the sum of weights according to the value of Va over all negative samples; and W^j_{r−} is the sum of weights according to the value of Vr over all negative samples.

[Equation 3 (数3) appears as an image in the original and is not reproduced here. Given the weight sums defined above, a standard Real AdaBoost response of the form $h^j = \frac{1}{2}\ln\frac{W^j_{+}+\epsilon}{W^j_{-}+\epsilon}$ per feature, combined over Va and Vr to give $h_{a\cdot r}$ and $h_{a+r}$, would be consistent with the description; the exact combination is an assumption.]

The strong classifier is constructed by Real AdaBoost: it is trained on learning samples consisting of images of the person to be detected and other images, and a more optimal combination of cells is selected to obtain the final strong classifier. When the strong classifier obtained in this way detects a person, an object detection signal including the person's coordinate information, region information, and so on is output.
The learning samples used for this Real AdaBoost are here class-labeled into person and background in order to detect people; to detect vehicles, learning samples class-labeled into vehicles such as bicycles and automobiles versus background are used instead.
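A hedged sketch of a Real AdaBoost weak classifier over one binned feature follows. The patent's co-occurrence outputs h_{a·r} and h_{a+r} combine the Va and Vr responses via the unreproduced Equation 3; only the standard single-feature Real AdaBoost response is shown here, with the bin count and function names as illustrative assumptions.

```python
# Sketch of a Real AdaBoost weak classifier: per-bin weight sums of the
# positive and negative learning samples determine a real-valued response
# h = 0.5 * ln((W+ + eps) / (W- + eps)) for each feature bin.
import numpy as np

def weak_classifier(values, labels, weights, n_bins=16, eps=1e-6):
    """Return a function mapping a feature value in [0,1] to a response."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)      # Va/Vr are normalized
    idx = np.clip(np.digitize(values, bins) - 1, 0, n_bins - 1)
    w_pos = np.bincount(idx[labels == 1],
                        weights=weights[labels == 1], minlength=n_bins)
    w_neg = np.bincount(idx[labels == 0],
                        weights=weights[labels == 0], minlength=n_bins)
    h = 0.5 * np.log((w_pos + eps) / (w_neg + eps))
    def respond(v):
        b = min(max(int(np.digitize(v, bins) - 1), 0), n_bins - 1)
        return h[b]
    return respond
```

Boosting then reweights the samples and repeats this construction, selecting the cell combinations whose weak classifiers best separate the classes; the sum of the selected responses forms the strong classifier.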

The result output unit 5 comprises a sound unit that sounds an alarm when the detection unit 4 detects a person, a video display unit such as an LCD monitor that displays the detected video, a reporting unit that notifies an external party, and the like. Receiving the coordinate information and other data output by the detection unit 4, the video display unit displays the input video and additionally shows the detected person's area in a window.

In this way, by computing the feature quantity from the ratio of moving-state to static-state pixels obtained by addition and division, the cumbersome computation of vector magnitude becomes unnecessary, and by using a simple co-occurrence representation the amount of computation becomes smaller than that of a conventional AdaBoost cascade classifier. This lightens the load on the CPU that performs the feature computation and makes it possible to detect objects such as people and vehicles quickly and inexpensively.

In the above embodiment, a pixel is judged to be in the moving state when the first variance value S1 and the second variance value S2 are at or above the threshold, but it may instead be judged moving when the maximum luminance change exceeds a predetermined luminance value, or when both the variance value and the maximum luminance change are at or above their respective predetermined values.

1: image input unit (image frame group generation means); 2: temporal change extraction unit (luminance change calculation means, variance calculation means, pixel state judgment means); 3: inter-frame feature extraction unit (feature calculation means); 4: detection unit (weak classifier generation means, object detection means); 5: result output unit.

Claims (1)

An object detection device comprising:
image frame group generation means for generating a group of image frames that are consecutive on the time axis from video captured by a camera;
luminance change calculation means for taking an arbitrary one of the generated image frames as a reference frame and calculating, pixel by pixel, the maximum change in luminance at the same position across the image frames up to a predetermined number before or a predetermined number after the reference frame on the time axis;
variance calculation means for calculating a luminance variance value, pixel by pixel, from the luminance change information at the same position between the reference frame and the image frames up to a predetermined number before or a predetermined number after it on the time axis;
pixel state judgment means for judging a pixel to be in the moving state when the maximum luminance change is larger than a predetermined value and/or the luminance variance value is larger than a predetermined value, and judging pixels in other states to be in the static state;
feature calculation means for forming cells of a plurality of adjacent pixels in the reference frame, computing and normalizing the ratio of the moving-state and static-state pixels for each cell by addition and division, and calculating a motion feature quantity;
weak classifier generation means for generating weak classifiers expressing co-occurrence from the feature quantities and learning samples; and
object detection means for detecting a moving object with a strong classifier generated from the weak classifiers through learning.
JP2014033154A 2014-02-24 2014-02-24 Object detection device Pending JP2015158795A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014033154A JP2015158795A (en) 2014-02-24 2014-02-24 Object detection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2014033154A JP2015158795A (en) 2014-02-24 2014-02-24 Object detection device

Publications (1)

Publication Number Publication Date
JP2015158795A true JP2015158795A (en) 2015-09-03

Family

ID=54182741

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2014033154A Pending JP2015158795A (en) 2014-02-24 2014-02-24 Object detection device

Country Status (1)

Country Link
JP (1) JP2015158795A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101837256B1 (en) * 2015-12-21 2018-03-12 동국대학교 산학협력단 Method and system for adaptive traffic signal control

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013190943A (en) * 2012-03-13 2013-09-26 Aiphone Co Ltd Object detector and intercom system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013190943A (en) * 2012-03-13 2013-09-26 Aiphone Co Ltd Object detector and intercom system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hironobu Fujiyoshi, "Object Detection by Joint Features Based on Relations of Local Features" (局所特徴量の関連性に着目したJoint特徴による物体検出), IPSJ SIG Technical Reports, Vol. 2009, No. 29, pp. 43-54, March 6, 2009 *
Takuichi Nishimura, "A Gesture Spotting Recognition System Using Shape Features from Monochrome Video" (白黒動画像からの形状特徴を用いたジェスチャのスポッティング認識システム), IEICE Technical Report, Vol. 97, No. 595, pp. 89-96, March 12, 1998 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101837256B1 (en) * 2015-12-21 2018-03-12 동국대학교 산학협력단 Method and system for adaptive traffic signal control

Similar Documents

Publication Publication Date Title
Cheng et al. Illumination-sensitive background modeling approach for accurate moving object detection
US10810438B2 (en) Setting apparatus, output method, and non-transitory computer-readable storage medium
US10713798B2 (en) Low-complexity motion detection based on image edges
US8630453B2 (en) Image processing device, image processing method and program
US9569688B2 (en) Apparatus and method of detecting motion mask
JP5274216B2 (en) Monitoring system and monitoring method
US9818277B1 (en) Systems and methods for smoke detection
JP2016012752A (en) Video monitoring device, video monitoring system, and video monitoring method
JP6731645B2 (en) Image monitoring device, image monitoring method, and image monitoring program
JP6214426B2 (en) Object detection device
JP2020057111A (en) Facial expression determination system, program and facial expression determination method
Doulamis Iterative motion estimation constrained by time and shape for detecting persons' falls
TW201207778A (en) Monitoring system and related recording methods for recording motioned image, and machine readable medium thereof
CN104202533B (en) Motion detection device and movement detection method
Cocorullo et al. Embedded surveillance system using background subtraction and Raspberry Pi
JP2013190943A (en) Object detector and intercom system
Rezaee et al. Intelligent detection of the falls in the elderly using fuzzy inference system and video-based motion estimation method
JP2015158795A (en) Object detection device
JP5864230B2 (en) Object detection device
JP4619082B2 (en) Image determination device
JP6124739B2 (en) Image sensor
JP3994416B1 (en) Abnormality detection device, program, and abnormality detection method
JP6244221B2 (en) Human detection device
Zizi et al. Aggressive movement detection using optical flow features base on digital & thermal camera
WO2019082474A1 (en) Three-dimensional intrusion detection system and three-dimensional intrusion detection method

Legal Events

Code  Effective date  Title / Description
A621  2016-09-27  Written request for application examination (JAPANESE INTERMEDIATE CODE: A621)
A977  2017-08-14  Report on retrieval (JAPANESE INTERMEDIATE CODE: A971007)
A131  2017-08-29  Notification of reasons for refusal (JAPANESE INTERMEDIATE CODE: A131)
A521  2017-10-19  Request for written amendment filed (JAPANESE INTERMEDIATE CODE: A523)
A02   2018-02-20  Decision of refusal (JAPANESE INTERMEDIATE CODE: A02)