JP2013114583A - Moving direction identification device - Google Patents

Moving direction identification device

Info

Publication number
JP2013114583A
Authority
JP
Japan
Prior art keywords
person
moving direction
calculated
calculating
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2011262380A
Other languages
Japanese (ja)
Other versions
JP5864231B2 (en)
Inventor
Madoka Inoue
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aiphone Co Ltd
Original Assignee
Aiphone Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aiphone Co Ltd filed Critical Aiphone Co Ltd
Priority to JP2011262380A priority Critical patent/JP5864231B2/en
Publication of JP2013114583A publication Critical patent/JP2013114583A/en
Application granted granted Critical
Publication of JP5864231B2 publication Critical patent/JP5864231B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

PROBLEM TO BE SOLVED: To improve monitoring efficiency by estimating the moving direction of a detected person and rejecting detections of persons outside the monitoring target based on that direction.

SOLUTION: A moving direction identification device comprises: a time change extraction unit 2 that divides each of a series of images consecutive on the time axis into multiple regions and calculates the luminance change within each divided region as a state change amount; a motion vector extraction unit 3 that calculates, for each image frame group composed of multiple consecutive images, the difference in the calculated state change amounts, obtains the centroid of that difference, compares the centroid positions between consecutive image frame groups, and calculates the movement of the centroid as a feature quantity; a person detection unit 4 that determines the presence of a person in the video; and a moving direction determination unit 5 that determines the moving direction of the person. The moving direction determination unit 5 is configured as a cascade of strong classifiers trained by AdaBoost, and determines the moving direction of the person by comparing the feature quantities stored in a sample storage unit 5a with the feature quantities calculated for the individual regions.

Description

The present invention relates to a moving direction identification device that identifies the moving direction of a person from video captured by a camera.

In recent years, techniques for detecting a person from video have advanced. Among methods that perform identification based on feature quantities obtained by histogramming luminance changes of pixels within a local region, proposed approaches include the method disclosed in Non-Patent Document 1, which uses HOG feature vectors formed by histogramming luminance gradient directions, and the method disclosed in Patent Document 1, which performs pixel state analysis (PSA) based on luminance changes along the time axis across multiple image frames and uses the result as a feature quantity.

JP 2009-301104 A

N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection", IEEE Computer Vision and Pattern Recognition (CVPR), pp. 886-893, 2005

However, even when one wishes to monitor only persons approaching the camera, or persons moving in a specific direction, conventional systems in practice detect anyone who crosses in front of the camera regardless of their direction of movement.

In view of this problem, an object of the present invention is to provide a moving direction identification device that improves monitoring efficiency by estimating the moving direction of a detected person and rejecting detections of persons outside the monitoring target based on that direction.

To solve the above problem, the moving direction identification device according to claim 1 comprises: image frame group generation means for generating image frame groups, each composed of a plurality of images consecutive on the time axis, based on video captured by a camera; state change amount calculation means for dividing each image into a plurality of regions and calculating the luminance change within each divided region as a state change amount; moving region calculation means for calculating, within each generated image frame group, the difference in the calculated state change amount between consecutive images as a moving region; centroid calculation means for calculating the centroid of the calculated moving region; feature quantity calculation means for comparing the centroid positions between consecutive image frame groups and calculating the movement of the centroid as a feature quantity; a person detection unit that determines the presence of a person in the video; and a moving direction determination unit that determines the moving direction of the person. The moving direction determination unit comprises a sample storage unit that stores the feature quantities calculated from training samples classified by direction and is configured as a cascade of strong classifiers trained by AdaBoost; when the person detection unit determines that a person is present, the moving direction determination unit compares the feature quantities stored in the sample storage unit with the feature quantities calculated for the individual regions to determine the moving direction of the detected person.
With this configuration, the presence of a person is determined from regions whose luminance changes between consecutive images on the time axis, and the moving direction is identified. Consequently, persons whose movement is outside the monitoring target can be ignored even when they appear in the captured video, improving monitoring efficiency.

According to the invention of claim 2, in the configuration of claim 1, the person detection unit comprises a sample storage unit that stores feature quantities calculated from training samples classified by the presence or absence of a person, and is configured as a cascade of strong classifiers trained by AdaBoost; it determines the presence of a person by comparing the feature quantities stored in the sample storage unit with the feature quantities calculated for the individual regions.
With this configuration, pillars, plantings, and the like are not mistaken for a person, and sudden changes in brightness are also prevented from being judged as a person, improving person detection accuracy.

According to the present invention, persons whose movement is outside the monitoring target can be ignored even when they appear in the video captured by the camera, improving monitoring efficiency.

FIG. 1 is a block diagram showing an example of a moving direction identification device according to the present invention.
FIG. 2 is an explanatory diagram showing the time-series relationship between images and image frame groups.
FIG. 3 is a conceptual diagram showing the flow of extracting an object.

Embodiments of the present invention will be described below in detail with reference to the drawings. FIG. 1 shows the configuration of a moving direction identification device according to the present invention, which comprises an image input unit 1, a time change extraction unit 2, a motion vector extraction unit 3, a person detection unit 4, a moving direction determination unit 5, and a result output unit 6. The time change extraction unit 2, motion vector extraction unit 3, person detection unit 4, and moving direction determination unit 5 are implemented together on a CPU or DSP running a predetermined program.

The image input unit 1 includes a camera for capturing images of persons to be detected, and outputs a digital video signal, for example a color video signal with 256 gradations.

The time change extraction unit 2 sequentially extracts images (image frames) at fixed time intervals from the video signal output by the image input unit 1, and groups every predetermined number of images to generate image frame groups. FIG. 2 shows the time-series relationship between the extracted images and the image frame groups, for the case where one image frame group consists of three frames.
The time change extraction unit 2 also divides each image into a plurality of regions and, for each divided region, calculates the luminance change within the region as a state change amount from the luminance information of its pixels.
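As a rough illustration of this step, the per-region state change amount could be computed as below. This is a minimal sketch, not the patent's concrete implementation: the grid size, the list-of-lists frame layout, and the use of the change in mean luminance as the measure are all illustrative assumptions.

```python
# Sketch: divide a grayscale frame into a rows x cols grid and compute,
# per region, the change in mean luminance versus the previous frame as
# the "state change amount". All parameters here are illustrative.

def region_means(frame, rows, cols):
    """Mean luminance of each cell in a rows x cols grid (row-major)."""
    h, w = len(frame), len(frame[0])
    rh, cw = h // rows, w // cols
    means = []
    for r in range(rows):
        for c in range(cols):
            pixels = [frame[y][x]
                      for y in range(r * rh, (r + 1) * rh)
                      for x in range(c * cw, (c + 1) * cw)]
            means.append(sum(pixels) / len(pixels))
    return means

def state_change(prev_frame, cur_frame, rows=3, cols=4):
    """Per-region luminance change between two consecutive frames."""
    prev = region_means(prev_frame, rows, cols)
    cur = region_means(cur_frame, rows, cols)
    return [abs(a - b) for a, b in zip(prev, cur)]

# Example: a 6x8 frame, uniform luminance 10, then a bright 2x2 patch
# appears in the top-left region of the next frame.
f0 = [[10] * 8 for _ in range(6)]
f1 = [row[:] for row in f0]
for y in range(2):
    for x in range(2):
        f1[y][x] = 110
deltas = state_change(f0, f1)  # 12 regions; only the first one changes
```

Only the region containing the new bright patch reports a nonzero change; static background regions yield zero, which is what lets later stages ignore them.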

The motion vector extraction unit 3 compares the state change amounts of corresponding regions in images adjacent on the time axis and computes their difference. It then obtains the centroid position of the difference for each image frame group, and calculates the movement of the centroid between adjacent image frame groups as a feature quantity.

The person detection unit 4 determines the presence of a person using a well-known AdaBoost cascade of strong classifiers, based on training samples classified into person and background classes and on the feature quantities extracted by the motion vector extraction unit 3. Reference numeral 4a denotes a sample storage unit that stores the feature quantities of the training samples.
Alternatively, the presence of a person may be determined by detecting a person in the camera video using the technique of Non-Patent Document 1.

The moving direction determination unit 5 determines the moving direction of a person using a well-known AdaBoost cascade of strong classifiers, based on training samples classified by moving direction and on the feature quantities extracted by the motion vector extraction unit 3. Reference numeral 5a denotes a sample storage unit that stores the feature quantities of the training samples.

The result output unit 6 includes an audio output section, such as a speaker or buzzer, for sounding alerts, and a display section for showing video. When the moving direction determination unit 5 detects a person moving in a preset specific direction, the result output unit 6 sounds a notification and displays the captured video.

The operation of the moving direction identification device configured as described above will now be described in detail with reference to FIG. 3, a conceptual diagram showing the flow of person detection and moving direction determination. The person detection by the person detection unit 4 and the moving direction determination by the moving direction determination unit 5 follow the same flow, differing only in the training samples used, as described later.

First, the time change extraction unit 2 extracts consecutive images (image frames) P1, P2, ... Pn from the camera video of the image input unit 1, for example at 100 ms intervals, and groups them by a predetermined number as shown in FIG. 2 to generate image frame groups. Adjacent image frame groups may share common images.
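The grouping above, including the allowance for shared frames between adjacent groups, can be sketched as follows. The group size and stride are assumed parameters chosen for illustration, not values fixed by the patent.

```python
# Sketch: split a sampled frame sequence into image frame groups.
# A stride smaller than the group size makes adjacent groups share
# frames, as the text permits. Parameters are illustrative.

def frame_groups(frames, group_size=3, stride=2):
    """Return consecutive groups of `group_size` frames, advancing by
    `stride` frames each time (overlapping when stride < group_size)."""
    groups = []
    for start in range(0, len(frames) - group_size + 1, stride):
        groups.append(frames[start:start + group_size])
    return groups

frames = ["P1", "P2", "P3", "P4", "P5"]
groups = frame_groups(frames)
# groups[0] and groups[1] share frame "P3"
```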

In the time change extraction unit 2, each image is further divided into a plurality of preset regions, and for each divided region the state change amount within the region is calculated from the luminance information of its pixels. FIG. 3(a) shows an image divided into 12 regions. FIG. 3(b) illustrates the state change amount, taking as an example the divided region labeled E in FIG. 3(a); in practice the state change amount is calculated for every region.
For convenience of explanation, each image frame group here consists of two frames. F in FIG. 3(a) denotes a person who has entered the imaging range of the camera of the image input unit 1.

Next, the motion vector extraction unit 3 calculates the difference in state change amount between corresponding regions of images adjacent on the time axis as a moving region, and obtains the centroid of the moving region of each divided region for each image frame group. FIG. 3(c) shows the centroid G of region E obtained in this way for each image frame group.
The centroid positions of adjacent image frame groups are then compared, and the displacement is calculated as a feature quantity. FIG. 3(d) shows the calculated feature quantity, which is obtained as a vector V.
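A minimal sketch of this centroid-displacement step is given below. Binary masks stand in for the moving regions obtained from the state-change differences, which is an illustrative simplification; the patent text does not specify the thresholding.

```python
# Sketch: the centroid of a moving region is taken per frame group, and
# the displacement of the centroid between adjacent groups becomes the
# feature vector V. Binary masks stand in for the moving regions.

def centroid(mask):
    """Centroid (x, y) of the nonzero pixels of a binary mask."""
    points = [(x, y)
              for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    n = len(points)
    return (sum(p[0] for p in points) / n,
            sum(p[1] for p in points) / n)

def feature_vector(mask_a, mask_b):
    """Centroid displacement between two frame groups' moving regions."""
    (xa, ya), (xb, yb) = centroid(mask_a), centroid(mask_b)
    return (xb - xa, yb - ya)

# The moving region shifts two pixels to the right between groups.
g1 = [[1, 1, 0, 0, 0],
      [1, 1, 0, 0, 0]]
g2 = [[0, 0, 1, 1, 0],
      [0, 0, 1, 1, 0]]
v = feature_vector(g1, g2)  # rightward motion, no vertical component
```

The sign and magnitude of V encode the direction and speed of motion within that region, which is exactly what the later classifier stages consume.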

Once the feature quantities of the individual regions have been calculated, the person detection unit 4 first compares them with the feature quantities of the training samples stored in the sample storage unit 4a. Person detection here uses a well-known AdaBoost cascade of strong classifiers.
The training samples used at this stage are classified by the presence or absence of a person. Concretely, for one class, several hundred images of people in various postures and motions are captured; feature quantities representing human tendencies are calculated from these images, and their probability density function is generated and stored. For the other class, feature quantities representing non-human tendencies are likewise calculated from, for example, several hundred images containing no people, and their probability density function is generated and stored.
Using the generated probability density functions as training samples, person detection is performed by comparing them with the feature quantities calculated from the images input from the image input unit 1.
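To make the classifier stage concrete, the sketch below trains a strong classifier as an AdaBoost-weighted vote of threshold "stumps" over scalar features, and wraps it in a cascade that rejects a sample as soon as any stage votes "no person". The toy data, thresholds, and single-stage cascade are invented for illustration; the patent's classifier operates on the centroid-displacement features described above.

```python
# Sketch: AdaBoost over decision stumps (1-D features, labels +1/-1),
# assembled into a cascade with early rejection. All data illustrative.
import math

def train_adaboost(samples, labels, rounds=3):
    """Train `rounds` weighted threshold stumps on 1-D features."""
    n = len(samples)
    w = [1.0 / n] * n
    stumps = []  # each entry: (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for thr in sorted(set(samples)):
            for pol in (1, -1):
                preds = [pol * (1 if x >= thr else -1) for x in samples]
                err = sum(wi for wi, p, y in zip(w, preds, labels) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, preds)
        err, thr, pol, preds = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((thr, pol, alpha))
        # Re-weight: misclassified samples gain weight.
        w = [wi * math.exp(-alpha * y * p)
             for wi, y, p in zip(w, labels, preds)]
        total = sum(w)
        w = [wi / total for wi in w]
    return stumps

def strong_classify(stumps, x):
    score = sum(alpha * pol * (1 if x >= thr else -1)
                for thr, pol, alpha in stumps)
    return 1 if score >= 0 else -1

def cascade_classify(stages, x):
    """Reject as soon as any stage outputs -1 ("no person")."""
    for stumps in stages:
        if strong_classify(stumps, x) == -1:
            return -1
    return 1

# Toy data: centroid motion magnitude >= 3 means "person" (+1).
xs = [0.2, 0.5, 1.0, 3.5, 4.0, 5.0]
ys = [-1, -1, -1, 1, 1, 1]
cascade = [train_adaboost(xs, ys)]
pred = cascade_classify(cascade, 4.2)      # large motion -> person
pred_neg = cascade_classify(cascade, 0.3)  # small motion -> rejected
```

The cascade structure is what makes this cheap in a surveillance setting: the overwhelmingly common "no person" windows are discarded by the earliest stage and never reach the later, more expensive ones.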

When the person detection unit 4 detects a person, the moving direction determination unit 5 then determines the moving direction of that person. This determination also uses a well-known AdaBoost cascade of strong classifiers, comparing against the feature quantities of training samples.
The training samples used at this stage are classified by the person's moving direction. Concretely, to detect a person moving toward the camera, feature quantities are calculated as one class from, for example, several hundred images of approaching persons, and their probability density function is generated and stored. As a separate class, feature quantities representing directional tendencies other than approaching are calculated from, for example, several hundred images of persons moving sideways or away, and their probability density function is generated and stored.
Using the generated probability density functions as training samples, the moving direction is determined by comparing them with the feature quantities calculated from the images input from the image input unit 1.
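The two-class density comparison described above might look like the following sketch. Here each direction class is summarized as a 1-D Gaussian over the vertical centroid displacement; the Gaussian model, the choice of the vertical component, and all sample values are assumptions standing in for the patent's unspecified density estimate.

```python
# Sketch: fit a per-class probability density from training features and
# assign a new feature to the class with the higher density. A 1-D
# Gaussian over vertical centroid displacement is an illustrative model.
import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(samples):
    """Mean and (floored) variance of a class's training features."""
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    return m, max(v, 1e-6)

# Illustrative training features: approaching persons produce downward
# image motion (positive dy); other directions cluster near zero.
approaching = [2.0, 2.5, 3.0, 2.2, 2.8]
other = [-0.5, 0.1, -1.2, 0.3, -0.8]
cls_app, cls_other = fit(approaching), fit(other)

def direction(dy):
    p_app = gaussian_pdf(dy, *cls_app)
    p_other = gaussian_pdf(dy, *cls_other)
    return "approaching" if p_app > p_other else "other"

label = direction(2.4)     # falls in the approaching cluster
label2 = direction(-0.3)   # falls in the other cluster
```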

If this determination finds an approaching person, the result output unit 6 sounds an alarm and displays the video captured by the camera.

In this way, the moving direction is determined by comparing feature quantities calculated from multiple consecutive images on the time axis with training samples classified by direction. The device can therefore be set to ignore persons whose movement is outside the monitoring target even when they appear in the captured video, improving monitoring efficiency.
Likewise, when detecting persons, comparing feature quantities calculated from multiple consecutive images on the time axis with the training samples prevents pillars, plantings, and the like from being judged as persons, and also prevents sudden changes in brightness from being judged as persons, improving person detection accuracy.

In the above embodiment, the calculated luminance change amount (state change amount) may also encompass changes in color, texture, and edges.

1: image input unit; 2: time change extraction unit (image frame group generation means, state change amount calculation means); 3: motion vector extraction unit (moving region calculation means, centroid calculation means, feature quantity calculation means); 4: person detection unit; 4a: sample storage unit; 5: moving direction determination unit; 5a: sample storage unit; 6: result output unit.

Claims (2)

1. A moving direction identification device comprising:
image frame group generation means for generating image frame groups, each composed of a plurality of images consecutive on the time axis, based on video captured by a camera;
state change amount calculation means for dividing each image into a plurality of regions and calculating the luminance change within each divided region as a state change amount;
moving region calculation means for calculating, within each generated image frame group, the difference in the calculated state change amount between consecutive images as a moving region;
centroid calculation means for calculating the centroid of the calculated moving region;
feature quantity calculation means for comparing the centroid positions between consecutive image frame groups and calculating the movement of the centroid as a feature quantity;
a person detection unit that determines the presence of a person in the video; and
a moving direction determination unit that determines the moving direction of the person,
wherein the moving direction determination unit comprises a sample storage unit that stores the feature quantities calculated from training samples classified by direction and is configured as a cascade of strong classifiers trained by AdaBoost, and, when the person detection unit determines that a person is present, compares the feature quantities stored in the sample storage unit with the feature quantities calculated for the individual regions to determine the moving direction of the detected person.
2. The moving direction identification device according to claim 1, wherein the person detection unit comprises a sample storage unit that stores the feature quantities calculated from training samples classified by the presence or absence of a person, is configured as a cascade of strong classifiers trained by AdaBoost, and determines the presence of a person by comparing the feature quantities stored in the sample storage unit with the feature quantities calculated for the individual regions.
JP2011262380A 2011-11-30 2011-11-30 Moving direction identification device Active JP5864231B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011262380A JP5864231B2 (en) 2011-11-30 2011-11-30 Moving direction identification device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011262380A JP5864231B2 (en) 2011-11-30 2011-11-30 Moving direction identification device

Publications (2)

Publication Number Publication Date
JP2013114583A true JP2013114583A (en) 2013-06-10
JP5864231B2 JP5864231B2 (en) 2016-02-17

Family

ID=48710049

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011262380A Active JP5864231B2 (en) 2011-11-30 2011-11-30 Moving direction identification device

Country Status (1)

Country Link
JP (1) JP5864231B2 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017045161A (en) * 2015-08-25 2017-03-02 株式会社デンソーウェーブ Information code reading system, information code reading device, and information code forming medium
CN106650620A (en) * 2016-11-17 2017-05-10 华南理工大学 Target personnel identifying and tracking method applying unmanned aerial vehicle monitoring
CN106650620B (en) * 2016-11-17 2019-05-14 华南理工大学 A kind of target person identification method for tracing using unmanned plane monitoring

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011100175A (en) * 2009-11-04 2011-05-19 Nippon Hoso Kyokai &lt;NHK&gt; Device and program for deciding personal action
JP2011171795A (en) * 2010-02-16 2011-09-01 Victor Co Of Japan Ltd Noise reduction device
JP2011198006A (en) * 2010-03-19 2011-10-06 Panasonic Corp Object detecting apparatus, object detecting method, and object detecting program

Also Published As

Publication number Publication date
JP5864231B2 (en) 2016-02-17

Similar Documents

Publication Publication Date Title
US10810438B2 (en) Setting apparatus, output method, and non-transitory computer-readable storage medium
JP6525453B2 (en) Object position estimation system and program thereof
Hu et al. Moving object detection and tracking from video captured by moving camera
JP5102410B2 (en) Moving body detection apparatus and moving body detection method
JP5603403B2 (en) Object counting method, object counting apparatus, and object counting program
JP2022166067A (en) Information processing system, information processing method and program
RU2393544C2 (en) Method and device to detect flame
US9025875B2 (en) People counting device, people counting method and people counting program
JP2020078058A (en) Image processing apparatus, monitoring system, image processing method, and program
CN107657244B (en) Human body falling behavior detection system based on multiple cameras and detection method thereof
JP2018180619A (en) Information processing device, information processing and program
Lim et al. Crowd saliency detection via global similarity structure
JP2015062121A (en) System and method of alerting driver that visual perception of pedestrian may be difficult
KR101030257B1 (en) Method and System for Vision-Based People Counting in CCTV
JP5271227B2 (en) Crowd monitoring device, method and program
Poonsri et al. Improvement of fall detection using consecutive-frame voting
US20210319229A1 (en) System and method for determining object distance and/or count in a video stream
Alqaysi et al. Detection of abnormal behavior in dynamic crowded gatherings
TWI493510B (en) Falling down detection method
JP2020109644A (en) Fall detection method, fall detection apparatus, and electronic device
JP2016143335A (en) Group mapping device, group mapping method, and group mapping computer program
JP5864230B2 (en) Object detection device
US9478032B2 (en) Image monitoring apparatus for estimating size of singleton, and method therefor
JP5864231B2 (en) Moving direction identification device
JP2014229266A (en) Device for detecting specific operation

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20140728

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20150409

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20150414

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20150604

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20151201

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20151224

R150 Certificate of patent or registration of utility model

Ref document number: 5864231

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
