JP2014229266A - Device for detecting specific operation - Google Patents

Device for detecting specific operation

Info

Publication number
JP2014229266A
Authority
JP
Japan
Prior art keywords
luminance gradient
difference
feature
gradient
person
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2013111144A
Other languages
Japanese (ja)
Other versions
JP6046559B2 (en)
Inventor
井上 円 (Madoka Inoue)
梅崎 太造 (Taizo Umezaki)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aiphone Co Ltd
Original Assignee
Aiphone Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aiphone Co Ltd filed Critical Aiphone Co Ltd
Priority to JP2013111144A priority Critical patent/JP6046559B2/en
Publication of JP2014229266A publication Critical patent/JP2014229266A/en
Application granted granted Critical
Publication of JP6046559B2 publication Critical patent/JP6046559B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a specific motion detection device that can compute, with a small amount of calculation, a feature quantity combining the characteristics of both motion and shape, and detect a specific motion. SOLUTION: The specific motion detection device includes: a camera 1 that outputs captured video as consecutive image frames; a luminance gradient calculation unit 2 that obtains a luminance gradient for each small region of an image frame; a luminance gradient difference calculation unit 3 that extracts the difference between the luminance gradients obtained for consecutive image frames; a time-series feature calculation unit 4 that, in order to extract motion characteristics from the luminance gradient difference, extracts temporal change from the extracted gradient difference information and calculates an autocorrelation feature of gradient strength; a determination unit 5 that compares data in which a motion feature has been quantified in advance with the calculation result of the time-series feature calculation unit 4 and determines the motion; and a result output unit 6 that outputs the estimation result.

Description

The present invention relates to a specific motion detection device that quantifies specific motion and shape information from unknown video data and determines similarity by comparison with specific motion information of a specific object held as prior information.

To estimate a person's behavior from video, the person's features are extracted from an image captured at a given time, the change of that information over time is observed, and the time-series change of the image features is compared with previously accumulated image feature data of the specific action; the similarity is computed to estimate the behavior.
In recent years, the CHLAC feature (see, for example, Patent Document 1) has become known as a method of identifying such actions. The CHLAC feature is widely adopted because it is not only a simple feature description, but also permits the same description regardless of the position at which an object enters the scene, and can represent multiple simultaneous actions as a sum of feature quantities.

Meanwhile, in nursing homes and facilities for the elderly, there are increasing cases in which facility residents such as care recipients suffer serious injuries, such as bone fractures, from falls. The most serious injuries are said to occur when the person rises from the bed and stands up to go to the toilet. As a camera-based countermeasure against such injuries, Non-Patent Document 1, for example, discloses a technique in which a person is imaged by a camera, the head is tracked, and a fall is distinguished from changes in acceleration along three axes.

JP 2006-079272 A
JP 2008-269063 A

C. Rougier, J. Meunier, A. Saint-Arnaud, and J. Rousseau, “Monocular 3D head tracking to detect falls of elderly people,” in Proc. of 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 6384-6387, 2006.

However, although the CHLAC feature responds consistently to behavior other than steady motion, its model-free nature makes it difficult to identify the action itself. Patent Document 2, for example, applies CHLAC, but because it decomposes a person's motion using multiply divided images, the position invariance that characterizes CHLAC is lost, and the large amount of computation raises the implementation cost.
The technique of Non-Patent Document 1 is an approach for detecting a fall; it is not a technique that can prevent the fall itself.

In view of these problems, an object of the present invention is to provide a specific motion detection device that can detect a specific motion by computing, with a small amount of calculation, a feature quantity combining the characteristics of both motion and shape, and in particular a device that can detect, with a small amount of calculation, the rising motion a person makes before standing up from a bed.

To solve the above problem, the specific motion detection device according to claim 1 comprises: imaging means that outputs captured video as consecutive image frames; a luminance gradient calculation unit that obtains a luminance gradient for each small region of an image frame; a luminance gradient difference calculation unit that extracts the difference between the luminance gradients obtained for a plurality of time-series image frames; a time-series feature calculation unit that, in order to extract motion features from the luminance gradient difference, extracts temporal change from the extracted gradient difference information and calculates an autocorrelation feature of gradient strength; a determination unit that determines similarity by comparing data in which the behavioral features of a specific object have been quantified in advance with the calculation result of the time-series feature calculation unit; and a result output unit that outputs the determination result.
With this configuration, the contour of a moving object can be grasped by obtaining the luminance gradient difference between image frames, and behavior can be determined by extracting motion features from the autocorrelation computed from that difference. A feature quantity combining the characteristics of both motion and shape can therefore be computed with relatively little calculation, and a person's specific or characteristic motion can be discriminated with an inexpensive device.

In the invention of claim 2, in the configuration of claim 1, the imaging means is a camera that images a person on a bed, the data in which the behavioral features of a specific object have been quantified in advance is data of a person rising from the bed, and the determination unit determines the person's rising motion from the video captured by the camera.
With this configuration, the motion of a person, such as a patient lying on the bed, getting up is determined from the camera video, so the determination can be made with an inexpensive device. Caregivers and other concerned parties can then be notified before the monitored person's motion reaches a dangerous state, which helps prevent accidents.

According to the present invention, the contour of a moving object can be grasped by obtaining the luminance gradient difference between image frames, and behavior can be determined by extracting motion features from the autocorrelation computed from that difference. A feature quantity combining the characteristics of both motion and shape can therefore be computed with relatively little calculation, and a person's specific or characteristic motion can be discriminated with an inexpensive device.
Furthermore, by using data of a person rising from a bed as the data in which the behavioral features of a specific object have been quantified in advance, the rising motion of a person on a bed can be determined with an inexpensive device.

FIG. 1 is a block diagram showing an example of the specific motion detection device according to the present invention. FIG. 2 outlines feature quantity extraction in the time-series information calculation unit. FIG. 3 is an explanatory diagram showing the flow of determining a specific motion from images: (a) shows the time-series image frames of the camera video to be judged, (b) the luminance gradient histogram of each image frame, (c) the extracted inter-frame differences of the luminance gradients, and (d) a histogram of the autocorrelation features computed from the inter-frame gradient differences.

Embodiments of the present invention are described in detail below with reference to the drawings. FIG. 1 is a block diagram showing an example of the specific motion detection device according to the present invention, in which 1 is a camera for imaging the monitoring target, 2 is a luminance gradient calculation unit that computes luminance gradient data from the video data captured by the camera 1, 3 is a luminance gradient difference calculation unit that extracts inter-frame differences of the luminance gradients, 4 is a time-series information calculation unit that computes autocorrelation, 5 is a determination unit that identifies motions using a cascaded AdaBoost classifier, and 6 is a result output unit.
The luminance gradient calculation unit 2, the luminance gradient difference calculation unit 3, the time-series information calculation unit 4, and the determination unit 5 are implemented together on a CPU or DSP on which a predetermined program is installed. The specific motion detected here is, concretely, the motion of a person such as a patient lying in bed getting up, and a configuration that issues a notification when the monitored person's rising motion is detected is described.

The camera 1 is installed above the bed, at a position from which at least the head of a person on the bed can be imaged well, and generates and outputs an image frame (still image) every 0.03 seconds, for example.

The luminance gradient calculation unit 2 computes the gradient strength and gradient direction of the luminance. Specifically, each input image frame is first downsampled to a predefined patch size, the gradient strength m and gradient direction θ are computed from the luminance of each pixel by the following equations (Eq. 1 and Eq. 2), and an N-th order histogram (luminance gradient histogram) is generated over neighboring pixels.

m(x, y) = √( dx(x, y)² + dy(x, y)² )   …(Eq. 1)

θ(x, y) = tan⁻¹( dy(x, y) / dx(x, y) )   …(Eq. 2)

dx(x, y) and dy(x, y) are luminance differences, calculated by the following equation (Eq. 3).

dx(x, y) = I(x + 1, y) − I(x − 1, y),  dy(x, y) = I(x, y + 1) − I(x, y − 1)   …(Eq. 3)

Here, I(x, y) is the luminance at coordinate (x, y).
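
As a concrete illustration, Eqs. 1 to 3 can be sketched in a few lines of Python. The patent does not specify the boundary handling or the exact downsampling step, so border replication is an assumption here, and the function names are illustrative only:

```python
import numpy as np

def luminance_gradients(frame):
    """Per-pixel gradient strength m and direction theta (Eqs. 1-3).

    `frame` is a 2-D array of luminance values I(x, y).  dx and dy are
    the central differences of Eq. 3; borders are replicated before
    differencing (an assumption, since the patent leaves edges open).
    """
    I = np.pad(frame.astype(np.float64), 1, mode="edge")
    dx = I[1:-1, 2:] - I[1:-1, :-2]   # I(x+1, y) - I(x-1, y)
    dy = I[2:, 1:-1] - I[:-2, 1:-1]   # I(x, y+1) - I(x, y-1)
    m = np.sqrt(dx * dx + dy * dy)    # Eq. 1: gradient strength
    theta = np.arctan2(dy, dx)        # Eq. 2: gradient direction
    return m, theta
```

For a luminance ramp increasing by 1 per pixel along x, the interior gradient strength is 2 (the central difference spans two pixels) and the direction is 0.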

The luminance gradient difference calculation unit 3 computes, from the luminance gradient data produced by the luminance gradient calculation unit 2, the difference in luminance gradient between image frames adjacent in time series, and folds the luminance gradient direction θ into K bins. Specifically, the angle information of the neighboring pixels in an S × S pixel region is accumulated into a gradient histogram with K bins.
The luminance gradient histogram hk(x, y), the score of each accumulated bin, is expressed by Eq. 4.

hk(x, y) = Σ_(u, v) ∈ S×S  δ( k, θ′(u, v) )   …(Eq. 4)

θ′(x, y) is the value obtained by folding the angular direction θ(x, y) into the K bins, and δ is the Kronecker delta, which returns 1 if k equals θ′(x, y) and 0 otherwise.
Finally, the difference between the luminance gradient histograms hk(x, y) of adjacent frames, obtained by Eq. 4, is computed. The luminance gradient strength histogram difference hKsub(x, y) at time t is given by Eq. 5.

hKsub(x, y) = | hk(x, y, t) − hk(x, y, t − 1) |   …(Eq. 5)
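
The binning of Eq. 4 and the difference of Eq. 5 can be sketched as follows. Whether the histogram is weighted by gradient strength, and whether the frame difference is signed or absolute, are not spelled out in the text, so unweighted counts and an absolute difference are assumptions here:

```python
import numpy as np

def cell_histograms(theta, S=5, K=8):
    """Fold gradient directions into K bins and accumulate a K-bin
    histogram h_k for every S x S cell (Eq. 4).  Directions in
    [-pi, pi] are mapped onto bins 0..K-1; weighting by gradient
    strength is omitted for clarity (an assumption).
    """
    H, W = theta.shape
    bins = ((theta + np.pi) / (2 * np.pi) * K).astype(int) % K  # theta'
    hk = np.zeros((H // S, W // S, K))
    for cy in range(H // S):
        for cx in range(W // S):
            cell = bins[cy * S:(cy + 1) * S, cx * S:(cx + 1) * S]
            # Eq. 4: the sum of Kronecker deltas is a per-bin count
            hk[cy, cx] = np.bincount(cell.ravel(), minlength=K)
    return hk

def histogram_difference(hk_t, hk_prev):
    """Eq. 5: inter-frame difference of the cell histograms (absolute
    value assumed)."""
    return np.abs(hk_t - hk_prev)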

The time-series information calculation unit 4 computes autocorrelation from the temporal change of the luminance gradient, that is, from the luminance gradient difference feature. The autocorrelation of the gradient strength is computed over the inter-frame gradient difference information using a predetermined mask pattern composed of adjacent cells. Here, the autocorrelation of the four-dimensional information given by the combination of coordinate position, time, and luminance gradient direction is obtained.
FIG. 2 outlines the feature extraction: an image frame FL is placed on the XY plane (cell positions are defined on the XY plane), and the time-series information is defined along the t-axis.
CL denotes a cell, a block of pixels (for example, 5 × 5 pixels) obtained by dividing the image frame FL. The luminance gradient direction is defined by the number of histogram bins per cell, and the gradient strength becomes a four-dimensional vector composed of the coordinate position, time, and luminance gradient direction.

The mask pattern is now described. An N-dimensional mask pattern is obtained from the three-dimensional vectors (x, y, t) inside a mask block defined by N × N × N cells. For example, when the mask pattern dimension is 3, 81 (= 9 × 1 × 9) mask patterns are defined within a mask block. The cell position at the base time is fixed to one point, and the total number of extracted features equals the number of mask blocks obtained by scanning the mask block one cell at a time over the image divided into cell regions. For example, when one image contains H × W cells, (W − N + 1) × (H − N + 1) mask blocks are obtained per image. Combining the angular directions yields a total of (W − N + 1) × (H − N + 1) × K features.
With the cell position (x, y) written as a position vector r and the luminance gradient difference as f(r, k), the N-th order autocorrelation feature X computed by the time-series information calculation unit 4 is given by Eq. 6.

X(a1, …, aN; k, k1, …, kN) = (1/C) Σ_r f(r, k) Π_(n=1…N) G(αn) f(r + an, kn)   …(Eq. 6)

Here an is a displacement vector, k and kn are gradient directions, N is the order of the autocorrelation, α is the difference of the folded angle information, G(·) is a Gaussian kernel, and C is the number of mask blocks.
Thus the geometric feature of the present invention computes the autocorrelation of the gradient strength using mask patterns, but since no complicated mask patterns need to be defined in advance, the computation is simpler than conventional CHLAC and easy to implement.
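
A simplified sketch of the masked autocorrelation follows. It fixes the base cell in the middle of three time slices and pairs it with one of the 9 neighbouring cells in each adjacent slice, giving the 9 × 1 × 9 = 81 patterns described above. A single shared gradient bin k is used for all three factors, so the angle difference α is zero and the Gaussian kernel G reduces to a constant and is dropped; the full Eq. 6 form with differing bins kn, and the exact normalisation, are left out here as assumptions:

```python
import numpy as np

def autocorrelation_features(f):
    """Simplified N = 3 autocorrelation (Eq. 6) over a volume
    f[t, cy, cx, k] of cell-level gradient-difference scores (the
    Eq. 5 output).  The base cell sits in the middle time slice; each
    mask pattern pairs it with one of the 9 neighbouring cells in the
    previous slice and one in the next slice (9 x 1 x 9 = 81 patterns).
    The product of the three scores is averaged over the C scanned
    block positions, yielding K * 81 features.
    """
    T, H, W, K = f.shape
    assert T == 3, "one mask block spans three time slices"
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    feats = []
    for k in range(K):
        base = f[1, 1:H - 1, 1:W - 1, k]           # fixed base cells
        for (dy1, dx1) in offsets:                  # cell in slice t-1
            prev = f[0, 1 + dy1:H - 1 + dy1, 1 + dx1:W - 1 + dx1, k]
            for (dy2, dx2) in offsets:              # cell in slice t+1
                nxt = f[2, 1 + dy2:H - 1 + dy2, 1 + dx2:W - 1 + dx2, k]
                # one mask pattern, averaged over all block positions
                feats.append(float((base * prev * nxt).mean()))
    return np.array(feats)
```

On a volume of all ones every pattern averages a product of ones, so each of the K × 81 features is exactly 1.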

The determination unit 5 is built from learning samples classed into person and background together with a strong classifier trained on the feature quantities described above; it computes a statistical feature quantity using a well-known cascaded AdaBoost classifier and determines the similarity.
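
The determination unit is described only as a cascaded AdaBoost classifier; a single boosted stage over decision stumps can be sketched as follows. The cascade arrangement, training data, and feature selection are not detailed in the text, so this is an illustrative stand-in, not the device's implementation:

```python
import numpy as np

def train_adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost over depth-1 decision stumps, standing in for
    one stage of the cascaded classifier of determination unit 5.
    X holds feature vectors (e.g. the autocorrelation features);
    y is +1 for the target motion (rising) and -1 for background.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                 # sample weights
    stumps = []
    for _ in range(rounds):
        best = None
        for j in range(d):                  # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(X[:, j] >= thr, sign, -sign)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log of 0
        alpha = 0.5 * np.log((1 - err) / err)   # stump vote weight
        pred = np.where(X[:, j] >= thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)          # re-weight samples
        w /= w.sum()
        stumps.append((alpha, j, thr, sign))
    return stumps

def predict_stumps(stumps, X):
    """Sign of the weighted stump votes."""
    score = sum(a * np.where(X[:, j] >= t, s, -s) for a, j, t, s in stumps)
    return np.where(score >= 0, 1, -1)
```

On linearly separable one-dimensional data the very first stump already classifies perfectly, and later rounds only reinforce it.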

The result output unit 6 includes a sound unit that sounds an alarm or the like, a display unit that shows the detected video, and a reporting unit that notifies external parties; when the determination unit 5 judges the motion similar, it issues an audible notification and displays the video captured by the camera.

Next, the detection operation of the specific motion detection device described above, specifically the operation of detecting the rising motion of a person lying in bed, is described with reference to FIG. 3. FIG. 3(a) shows the time-series image frames output by the camera 1, (b) the luminance gradient histogram (HOG) extracted from each image frame, (c) the inter-frame difference information of the extracted luminance gradient histograms, and (d) the result of computing autocorrelation features from the inter-frame histogram differences.

As shown in FIG. 3(b), the luminance gradient calculation unit 2 computes the gradient strength m and gradient direction θ from the luminance of each pixel of, for example, four frames extracted every 2 seconds from the camera 1 output, and generates a luminance gradient histogram.
Next, the luminance gradient difference calculation unit 3 computes the difference of the luminance gradient histograms between image frames adjacent in time series, based on the gradient data output by the luminance gradient calculation unit 2. This calculation extracts the contour of the moving region, as shown in FIG. 3(c).
The time-series information calculation unit 4 then computes the autocorrelation from the computed gradient difference features. As a result, uncorrelated displacements, such as illumination changes and the motions of other objects moving in different directions, are rejected, and only the displacement of the object of interest is extracted.
In this way, the final statistical feature quantity is obtained through a two-stage feature extraction process: computing the inter-frame differences and then the autocorrelation.
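
The two-stage flow just described can be wired together end to end on synthetic frames. The mask-pattern stage is collapsed here to a zero-lag sum of squares of the difference volume purely to illustrate the data flow; this is a simplification, not the patent's method:

```python
import numpy as np

def motion_score(frames, S=5, K=8):
    """End-to-end sketch: per-frame gradient-orientation cell
    histograms, inter-frame histogram differences (stage 1), then a
    zero-lag autocorrelation (sum of squares) of the difference volume
    as a stand-in for the masked autocorrelation (stage 2).
    """
    hists = []
    for F in frames:
        I = np.pad(F.astype(np.float64), 1, mode="edge")
        dx = I[1:-1, 2:] - I[1:-1, :-2]
        dy = I[2:, 1:-1] - I[:-2, 1:-1]
        theta = np.arctan2(dy, dx)
        bins = ((theta + np.pi) / (2 * np.pi) * K).astype(int) % K
        H, W = F.shape
        hk = np.zeros((H // S, W // S, K))
        for cy in range(H // S):
            for cx in range(W // S):
                c = bins[cy * S:(cy + 1) * S, cx * S:(cx + 1) * S]
                hk[cy, cx] = np.bincount(c.ravel(), minlength=K)
        hists.append(hk)
    diffs = [np.abs(hists[i + 1] - hists[i]) for i in range(len(hists) - 1)]
    vol = np.stack(diffs)          # (t, cy, cx, K) difference volume
    return float((vol * vol).sum())  # zero-lag autocorrelation score
```

A static scene produces identical histograms and a score of exactly zero; a scene in which a bright stripe shifts between frames produces a positive score, mirroring how the device reacts only to moving contours.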

Finally, the determination unit 5 obtains a statistical feature quantity using AdaBoost, as shown in FIG. 3(d), from the autocorrelation features of three consecutive frames, and judges whether the motion is a rising motion. When a rising motion is judged, the result output unit 6 performs a notification operation such as sounding an alarm and, if a monitor is provided, displays the video captured by the camera 1.

In this way, the contour of a moving object can be grasped by obtaining the inter-frame difference of the luminance gradient, and behavior can be estimated by extracting the temporal change of that difference. A feature quantity combining the characteristics of both motion and shape can therefore be computed with relatively little calculation, and a person's specific or characteristic motion can be discriminated with an inexpensive device.
Moreover, since the specific motion of an object is detected from autocorrelation features obtained from the edge gradient strength and direction, the specific motion can be detected with a small amount of calculation.
Based on the video from the camera 1 placed above the bed, the motion of a person lying on the bed getting up can be detected, and the rising can be determined with an inexpensive device. Caregivers and other concerned parties can therefore be notified before the monitored person's motion reaches a dangerous state, which helps prevent accidents; and since no sound or light is directed at the monitored person, no burden is placed on that person.

Although the embodiment above describes detecting the motion of a person lying in bed getting up, the specific motion detection device of the present invention is not limited in the motion it detects. The specific motion can be varied widely according to the learning samples compared in the determination unit 5; as an extension of rising, for example, the device may be operated so as not to react to the rising motion itself but to detect and report an attempt to leave the bed.
When the device is incorporated into security equipment to detect suspicious persons, it can serve that purpose by accumulating learning samples of the motion of peering into a window.
Furthermore, the detection target need not be limited to persons; by changing the content of the learning samples, a wide range of moving objects can be discriminated.

1: camera (imaging means); 2: luminance gradient calculation unit; 3: luminance gradient difference calculation unit; 4: time-series information calculation unit; 5: determination unit; 6: result output unit.

Claims (2)

1. A specific motion detection device comprising:
imaging means for outputting captured video as consecutive image frames;
a luminance gradient calculation unit for obtaining a luminance gradient for each small region of the image frames;
a luminance gradient difference calculation unit for extracting the difference of the luminance gradients obtained for a plurality of time-series image frames;
a time-series feature calculation unit for extracting, in order to extract motion features from the luminance gradient difference, temporal change from the extracted gradient difference information and calculating an autocorrelation feature of gradient strength;
a determination unit for determining similarity by comparing data in which the behavioral features of a specific object have been quantified in advance with the calculation result of the time-series feature calculation unit; and
a result output unit for outputting the determination result.

2. The specific motion detection device according to claim 1, wherein the imaging means is a camera that images a person on a bed, the data in which the behavioral features of a specific object have been quantified in advance is data of a person rising from the bed, and the determination unit determines the person's rising motion from the video captured by the camera.
JP2013111144A 2013-05-27 2013-05-27 Specific motion detection device Active JP6046559B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013111144A JP6046559B2 (en) 2013-05-27 2013-05-27 Specific motion detection device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2013111144A JP6046559B2 (en) 2013-05-27 2013-05-27 Specific motion detection device

Publications (2)

Publication Number Publication Date
JP2014229266A true JP2014229266A (en) 2014-12-08
JP6046559B2 JP6046559B2 (en) 2016-12-14

Family

ID=52129009

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2013111144A Active JP6046559B2 (en) 2013-05-27 2013-05-27 Specific motion detection device

Country Status (1)

Country Link
JP (1) JP6046559B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016143071A (en) * 2015-01-29 2016-08-08 アイホン株式会社 Specific motion detection system
JP2017068442A (en) * 2015-09-29 2017-04-06 アイホン株式会社 Specific operation detection device and specific operation detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008046903A (en) * 2006-08-17 2008-02-28 National Institute Of Advanced Industrial & Technology Apparatus and method for detecting number of objects
JP2008097624A (en) * 2007-11-05 2008-04-24 National Institute Of Advanced Industrial & Technology Method and apparatus for extracting feature from three-dimensional data


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
小林 匠: "Moving-Image Recognition Technology for Safety and Security", IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM), JPN6016040645, 15 April 2013, JP, pages 1-6, ISSN: 0003424906 *
川合 諒 and two others: "Person Matching across Multiple Cameras Using STHOG Features", IPSJ SIG Technical Report, Computer Vision and Image Media (CVIM), JPN6016040644, 15 June 2011, JP, pages 1-8, ISSN: 0003424905 *
渡邉 章二, 松島 宏典: "High-Accuracy Pedestrian Recognition Based on GLAC Using an In-Vehicle Camera", ITE Technical Report, vol. 37, no. 8, JPN6016040643, 11 February 2013, JP, pages 257-262, ISSN: 0003424904 *


Also Published As

Publication number Publication date
JP6046559B2 (en) 2016-12-14

Similar Documents

Publication Publication Date Title
Zhang et al. A survey on vision-based fall detection
Liu et al. A fall detection system using k-nearest neighbor classifier
Tzeng et al. Design of fall detection system with floor pressure and infrared image
JP6764481B2 (en) Monitoring device
Sehairi et al. Elderly fall detection system based on multiple shape features and motion analysis
JP5832910B2 (en) Image monitoring device
JP6822328B2 (en) Watching support system and its control method
Debard et al. Camera based fall detection using multiple features validated with real life video
WO2011016782A1 (en) Condition detection methods and condition detection devices
Gunale et al. Fall detection using k-nearest neighbor classification for patient monitoring
JP2010191793A (en) Alarm display and alarm display method
Stone et al. Silhouette classification using pixel and voxel features for improved elder monitoring in dynamic environments
JP6214424B2 (en) Specific motion detection device
Nguyen et al. Extracting silhouette-based characteristics for human gait analysis using one camera
JP6046559B2 (en) Specific motion detection device
Merrouche et al. Fall detection using head tracking and centroid movement based on a depth camera
Alaoui et al. Video based human fall detection using von mises distribution of motion vectors
JP6214425B2 (en) Specific motion detection device
Bansal et al. Elderly people fall detection system using skeleton tracking and recognition
Khraief et al. Vision-based fall detection for elderly people using body parts movement and shape analysis
Thuc et al. An effective video-based model for fall monitoring of the elderly
Lee et al. Automated abnormal behavior detection for ubiquitous healthcare application in daytime and nighttime
JP6124739B2 (en) Image sensor
Hao et al. Prediction of a bed-exit motion: Multi-modal sensing approach and incorporation of biomechanical knowledge
Dorgham et al. Improved elderly fall detection by surveillance video using real-time human motion analysis

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20151118

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20161017

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20161025

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20161117

R150 Certificate of patent or registration of utility model

Ref document number: 6046559

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250
