WO2018008702A1 - Behavior detection system - Google Patents

Behavior detection system

Info

Publication number
WO2018008702A1
WO2018008702A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion
recognition
action
detection system
unit
Prior art date
Application number
PCT/JP2017/024714
Other languages
French (fr)
Japanese (ja)
Inventor
洋登 永吉
大介 勝又
孝史 野口
健太郎 大西
Original Assignee
株式会社日立システムズ
Application filed by 株式会社日立システムズ
Publication of WO2018008702A1 publication Critical patent/WO2018008702A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion


Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The purpose of the present invention is to identify a motion with high accuracy even when the number of motion types to be recognized increases. The behavior detection system 10 has a motion model storage unit 21, a video photographing device 11, a position recognition unit 15, and a motion recognition unit 16. The motion model storage unit 21 stores motion models serving as models of the element motions to be detected. The video photographing device 11 photographs a work motion of the recognition target. The position recognition unit 15 recognizes the position of the recognition target from photographing information captured by the video photographing device 11. The motion recognition unit 16 recognizes the element motion of the recognition target. In addition, upon acquiring a motion model similar to the recognized element motion of the recognition target by searching the motion models stored in the motion model storage unit 21, the motion recognition unit 16 acquires the motion meaning corresponding to the acquired motion model.

Description

Behavior detection system
The present invention relates to a behavior detection system, and more particularly to a technique effective for motion analysis that recognizes and analyzes human motion.
In recent years, the need for human behavior recognition has been growing, from the standpoints of preventing employee misconduct in stores and of ensuring product safety through hygiene management based on HACCP (Hazard Analysis and Critical Control Point).
As this type of behavior recognition technology, there are techniques that automatically recognize the motions and actions of a moving object such as a person, an animal, or a machine (see, for example, Patent Document 1). Patent Document 1 describes recognizing a person's action by comparing a feature quantity extracted from a sensor attached to the person with feature quantities stored in a feature quantity database.
Patent Document 1: JP-A-10-113343
However, the technique of Patent Document 1 merely compares the feature quantity acquired by the sensor attached to the person with the feature quantities stored in the database. Consequently, as the number of motion types to be recognized increases, the amount of processing grows, and motion recognition may take a long time.
Moreover, even an identical motion may mean a different action when its target, for example the place where the work is performed, differs. In such cases the motion is difficult to recognize, and as a result the accuracy of motion recognition is degraded.
An object of the present invention is to provide a technique capable of identifying motions with high accuracy even when the number of motion types to be recognized increases.
The above and other objects and novel features of the present invention will become apparent from the description in this specification and the accompanying drawings.
Of the inventions disclosed in this application, an outline of a representative one is briefly described as follows.
That is, a representative behavior detection system recognizes an element motion of a recognition target and analyzes a motion meaning indicating the meaning of the recognized element motion. This behavior detection system includes a motion model storage unit, a photographing unit, a position recognition unit, and a motion recognition unit.
The motion model storage unit stores motion models that represent the element motions to be recognized as numerical information. The photographing unit photographs the work motion of the recognition target. The position recognition unit recognizes the position of the recognition target from the photographing information acquired by the photographing unit.
The behavior detection system has a work position motion table that associates the position of the recognition target, the element motions performed at that position, and the motion meanings of those element motions. The motion recognition unit refers to the work position motion table, acquires the element motions associated with the position recognized by the position recognition unit, and further extracts the motion models corresponding to those element motions from the motion model storage unit. The motion recognition unit recognizes the element motion of the recognition target by comparing the extracted motion models with the motion of the recognition target. Furthermore, with reference to the work position motion table, the motion recognition unit extracts the motion meaning from the position of the recognition target and the recognized element motion.
The motion recognition unit also generates recognition information that associates the motion meaning corresponding to the acquired motion model, the position detected by the position recognition unit, and the time at which the element motion of the recognition target was recognized.
Of the inventions disclosed in the present application, the effects obtained by the representative one are briefly described as follows.
(1) The recognition accuracy of element motions can be improved.
(2) The recognition speed of element motions can be improved.
FIG. 1 is an explanatory diagram showing an example of the configuration of a behavior detection system according to an embodiment.
FIG. 2 is an explanatory diagram showing an example of the data structure of the place motion table held in the storage unit of FIG. 1.
FIG. 3 is an explanatory diagram showing another example of the data structure of the place motion table of FIG. 2.
FIG. 4 is an explanatory diagram showing an example of the data structure of the motion models stored in the motion model storage unit held in the storage unit of FIG. 1.
FIG. 5 is a flowchart showing an example of the operation of the behavior detection system of FIG. 1.
FIG. 6 is an explanatory diagram showing another configuration example of the behavior detection system of FIG. 1.
In the following embodiment, the description is divided into a plurality of sections or embodiments where necessary for convenience; however, unless otherwise specified, they are not unrelated to one another, and one is a modification, detail, or supplementary explanation of part or all of another.
Further, in the following embodiment, when the number of elements and the like (including counts, numerical values, amounts, and ranges) is mentioned, it is not limited to that specific number and may be greater or smaller, except where explicitly stated or where the number is clearly limited to a specific number in principle.
Furthermore, in the following embodiment, it goes without saying that the constituent elements (including element steps and the like) are not necessarily indispensable, except where explicitly stated or where they are clearly considered indispensable in principle.
Similarly, in the following embodiment, references to the shapes, positional relationships, and the like of the constituent elements include shapes and the like that are substantially approximate or similar, except where explicitly stated or where this is clearly not the case in principle. The same applies to the numerical values and ranges above.
In all the drawings for explaining the embodiment, the same members are in principle denoted by the same reference numerals, and repeated description of them is omitted.
The embodiment is described in detail below.
<Configuration example of the behavior detection system>
FIG. 1 is an explanatory diagram showing an example of the configuration of a behavior detection system 10 according to an embodiment.
The behavior detection system 10 recognizes the position and element motions of a worker, who is the recognition target, and stores the meaning of each recognized element motion in association with its time and the like. As shown in FIG. 1, the behavior detection system 10 includes a video photographing device 11, a recognition processing unit 12, and a display device 13.
The video photographing device 11, serving as the photographing unit, is, for example, a web camera or surveillance camera capable of capturing color images, or a depth sensor capable of measuring the distance to the photographing target. The recognition processing unit 12 extracts information related to a person's motion (hereinafter, motion-related information) from the photographing information captured by the video photographing device 11. The motion-related information is, for example, time-series information of a person's joint positions, or information obtained by abstracting it.
To obtain time-series information of human joint positions from color information, the method of Toshev et al. (A. Toshev and C. Szegedy, "Deeppose: Human pose estimation via deep neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1653-1660) can be used; when a depth sensor is used, the method of Shotton et al. (J. Shotton, T. Sharp, A. Kipman, A. Fitzgibbon, M. Finocchio, A. Blake, M. Cook, and R. Moore, "Real-time human pose recognition in parts from single depth images," Communications of the ACM, vol. 56, no. 1, pp. 116-124, 2013) can be used.
The abstracted information mentioned above is, for example, information expressing the speed of joint movement, the distance between joints, or the speed of joint movement expressed in the frequency domain, or a combination of these. All of it is expressed as numerical information.
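As a minimal sketch of how such motion-related information might be computed from a joint-position time series (the array layout, frame rate, and the choice of joint 0 as the reference joint are assumptions, not part of the publication):

```python
import numpy as np

def motion_related_info(joints, fps=30.0):
    """Abstract a joint-position time series into numerical information.

    joints: array of shape (T, J, 3), i.e. T frames of J joints in xyz.
    Returns a feature vector holding, per joint, the mean movement speed,
    the mean distance to joint 0, and the dominant movement frequency.
    """
    joints = np.asarray(joints, dtype=float)

    # Speed of joint movement: frame-to-frame displacement times frame rate.
    speed = np.linalg.norm(np.diff(joints, axis=0), axis=2) * fps   # (T-1, J)
    mean_speed = speed.mean(axis=0)

    # Distance between joints, here each joint to joint 0, averaged over time.
    dist = np.linalg.norm(joints - joints[:, :1, :], axis=2).mean(axis=0)

    # Speed of joint movement expressed in frequency: dominant FFT bin per joint.
    spectrum = np.abs(np.fft.rfft(speed, axis=0))                   # (F, J)
    freqs = np.fft.rfftfreq(speed.shape[0], d=1.0 / fps)
    dominant_freq = freqs[spectrum.argmax(axis=0)]

    return np.concatenate([mean_speed, dist, dominant_freq])
```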
The display device 13 is, for example, a display such as a liquid crystal monitor, and displays recognition information (described later) under the control of the recognition processing unit 12.
The recognition processing unit 12 includes a position recognition unit 15, a motion recognition unit 16, and a storage unit 17. The position recognition unit 15 recognizes the worker's position, that is, where in the workplace the worker is, from the photographing information captured by the video photographing device 11.
This position recognition becomes possible by, for example, giving in advance the angle of view of the video photographing device 11, its distance to the floor, and its angle to the floor. If the photographing device is an ordinary color camera, the position in the workplace can be recognized from the size and position of the subject in the image. With a depth camera, depth information is also available, so the position can be recognized more accurately. The motion recognition unit 16 recognizes the worker's element motions from the photographing information of the video photographing device 11.
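A minimal sketch of such position recognition, assuming a pinhole camera whose intrinsics, height above the floor, and downward tilt are given in advance (the publication states only that the angle of view, distance to the floor, and floor angle are provided):

```python
import numpy as np

def floor_position(u, v, fx, fy, cx, cy, cam_height, tilt_deg):
    """Map the image point where a worker meets the floor (pixel u, v) to
    floor coordinates (lateral X, forward Z) in metres.

    Assumes a pinhole camera with intrinsics fx, fy, cx, cy, mounted
    cam_height metres above the floor and pitched down by tilt_deg.
    Camera axes: x right, y down, z forward.
    """
    # Ray through the pixel in camera coordinates.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])

    # Undo the downward tilt so that y points straight down in the world.
    t = np.radians(tilt_deg)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(t), np.sin(t)],
                    [0.0, -np.sin(t), np.cos(t)]])
    ray = rot @ ray

    # Intersect the ray with the floor plane y = cam_height.
    s = cam_height / ray[1]
    return s * ray[0], s * ray[2]
```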
The storage unit 17 is a nonvolatile storage device such as a hard disk drive (HDD) or flash memory, and stores various kinds of information. The storage unit 17 holds a place motion table 20, a motion model storage unit 21, and a recognition information storage unit 22.
The place motion table 20, which serves as the work position motion table, associates each position with the element motions of a worker performed at that position. The motion model storage unit 21 stores the motion models of the element motions to be detected. A motion model is, for example, a series of motion-related information recorded when a worker performed a standard work motion.
<Configuration example of the place motion table>
FIG. 2 is an explanatory diagram showing an example of the data structure of the place motion table 20 held in the storage unit 17 of FIG. 1.
For each element motion of a worker to be recognized, the place motion table 20 stores the area in which that element motion can be performed and the meaning the element motion carries in that area. As shown in FIG. 2, the place motion table 20 consists of "place", "motion ID", and "motion meaning" data.
"Place" indicates a position or area of the workplace and is represented, for example, by coordinates. "Motion ID" identifies an element motion of the worker to be recognized at that "place"; in the example of FIG. 2 it is a number. "Motion meaning" indicates what the element motion of the worker to be recognized means at that place.
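For illustration, the place motion table 20 could be held in memory as a mapping like the following sketch; the (1.0, 2.0) area, the motion IDs 1 and 2, and the "turn the lever" meaning for ID 1 come from the worked example in the flowchart description below, while the meaning shown for ID 2 is invented:

```python
# Hypothetical in-memory form of the place motion table 20 of FIG. 2:
# each workplace area maps to the motion IDs observable there and the
# motion meaning each ID carries at that place.
PLACE_MOTION_TABLE = {
    (1.0, 2.0): {1: "turn the lever", 2: "press the button"},  # ID 2 meaning invented
}

def motions_at(place):
    """Return {motion ID: motion meaning} for the given place (step S102)."""
    return PLACE_MOTION_TABLE.get(place, {})
```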
<Another configuration example of the place motion table>
FIG. 3 is an explanatory diagram showing another example of the data structure of the place motion table 20 of FIG. 2.
In the place motion table 20 of FIG. 2, a "motion ID" and a "motion meaning" are associated with each "place"; in the place motion table 20 of FIG. 3, a "place" and a "motion meaning" are instead associated with each "motion ID".
<Configuration example of the motion models>
FIG. 4 is an explanatory diagram showing an example of the data structure of the motion models stored in the motion model storage unit 21 held in the storage unit 17 of FIG. 1.
As shown in FIG. 4, each motion model stored in the motion model storage unit 21 consists of a "motion ID" and a "motion model".
A "motion model" is, as described above, a numerical representation of an element motion of the worker to be recognized. It is a series of motion-related information recorded in advance while a worker performed the standard work. Motion-related information captured once may be used as the motion model as it is, or motion-related information captured several times may be used. In the latter case, the recordings may all be kept as the motion model as they are, or their average may be used.
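A minimal sketch of building such a motion model from one or more recordings (averaging is only one of the options the text allows; keeping all recordings is equally valid):

```python
import numpy as np

def build_motion_model(recordings):
    """Build a motion model from recordings of a standard work motion.

    recordings: list of motion-related-information vectors captured while
    a worker performed the motion. A single recording is used as-is; for
    several recordings, the text allows keeping them all or averaging
    them (averaging is shown here).
    """
    recordings = np.asarray(recordings, dtype=float)
    return recordings[0] if len(recordings) == 1 else recordings.mean(axis=0)
```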
The recognition information storage unit 22 stores the recognition information generated by the motion recognition unit 16. The recognition information stored in the recognition information storage unit 22 consists, for example, of "time", "place", "motion ID", and "motion meaning".
"Time" is, for example, the time at which the worker started the work; alternatively, it may cover the interval from the start of the work to its end. It is, for example, the time at which the motion recognition unit 16 recognized the worker's element motion.
"Place" indicates the position where the worker is working. "Motion ID" identifies the element motion of the worker to be recognized. "Motion meaning" indicates what the "motion ID" means at the corresponding "place".
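As a sketch, one recognition-information entry might be assembled as follows; the field names and encoding are assumptions, since the publication specifies only which items are associated:

```python
from datetime import datetime

def make_recognition_record(place, motion_id, meaning, started_at=None):
    """Assemble one entry for the recognition information storage unit 22
    (step S107): time, place, motion ID, and motion meaning."""
    return {
        "time": started_at or datetime.now(),  # work start time (or an interval)
        "place": place,
        "motion_id": motion_id,
        "meaning": meaning,
    }
```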
<Operation example of the behavior detection system>
Next, the operation of the behavior detection system 10 is described.
FIG. 5 is a flowchart showing an example of the operation of the behavior detection system 10 of FIG. 1; it illustrates the behavior recognition processing performed by the behavior detection system 10. The behavior recognition processing recognizes the worker's work position and element motion, generates recognition information, and stores that recognition information.
FIG. 5 is described for the case where the behavior recognition processing below is executed by hardware such as the position recognition unit 15 and the motion recognition unit 16; however, the processing may instead be executed based on program-format software stored in a program storage memory (not shown) provided in the recognition processing unit 12 of FIG. 1.
When executed based on software, the software is run by, for example, a CPU (Central Processing Unit, not shown) of the recognition processing unit 12.
First, the position recognition unit 15 recognizes the position of the worker who has started the work, for example by the method described above (step S101).
The position recognition unit 15 then outputs the recognized worker position to the motion recognition unit 16. Referring to the place motion table 20 of FIG. 2, the motion recognition unit 16 acquires the recognition targets corresponding to the position recognized by the position recognition unit 15, that is, the worker's element motions (step S102).
For example, in the place motion table 20 of FIG. 2, when the position recognized by the position recognition unit 15 is (1.0, 2.0), the "motion IDs" of the element motions to be recognized are '1' and '2'.
The motion recognition unit 16 then reads from the motion model storage unit 21 of the storage unit 17 all the motion models corresponding to the "motion IDs" acquired in step S102 (step S103).
Next, the motion recognition unit 16 calculates motion-related information from the photographing information of the worker captured by the video photographing device 11 (step S104). The motion recognition unit 16 then takes, as the motion recognition result, the "motion ID" of whichever of the motion models read in step S103 is most similar to the motion-related information calculated in step S104 (step S105).
In the motion model storage unit 21 of FIG. 4, for example, if the motion model most similar to the observed information is (0.53, 0.52, 0.33), the "motion ID" corresponding to that motion model is '1', so the motion recognition result is '1'. When several pieces of motion-related information are stored for one motion model, the motion-related information calculated in step S104 is compared with each piece in that model, and the most similar result is adopted as the comparison result for that model.
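Steps S103 to S105 can be sketched as a nearest-model search like the following; the publication does not fix a similarity measure, so negative Euclidean distance is assumed here:

```python
import numpy as np

def recognize_motion(observed, candidate_models):
    """Steps S103 to S105 as a sketch: among the motion models valid at the
    recognized place, return the "motion ID" of the model most similar to
    the observed motion-related information.

    candidate_models: {motion_id: [model vectors]}; a model may hold several
    recordings, and its best-matching recording stands for the whole model.
    """
    def similarity(a, b):
        # Negative Euclidean distance as the (assumed) similarity measure.
        return -np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))

    best_id, best_score = None, float("-inf")
    for motion_id, vectors in candidate_models.items():
        score = max(similarity(observed, v) for v in vectors)
        if score > best_score:
            best_id, best_score = motion_id, score
    return best_id

# With models {1: [(0.53, 0.52, 0.33)], 2: [(0.9, 0.1, 0.2)]} (the second
# vector is invented) and an observation near (0.53, 0.52, 0.33), the
# recognition result is 1, as in the worked example.
```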
Subsequently, the motion recognition unit 16 acquires from the place motion table 20 the motion meaning corresponding to the recognition result of step S105 (step S106), generates recognition information, and stores it in the recognition information storage unit 22 of the storage unit 17 (step S107).
For example, if the position recognized by the position recognition unit 15 in step S101 is (1.0, 2.0) and the motion recognition result of step S105 is '1', then, referring to the place motion table 20 of FIG. 2, the motion meaning is "turn the lever".
The recognition information generated by the motion recognition unit 16 associates, as described above, the time at which the work was started, the worker's position, the motion recognition result, and the motion meaning.
This completes the behavior recognition processing.
In this way, by performing motion recognition that takes into account not only the worker's element motion but also the worker's position, it can be recognized that the same element motion carries different motion meanings depending on the worker's work position.
This improves the recognition accuracy of work motions.
Furthermore, as described above, because different motion meanings are recognized for each worker position, the load of the recognition processing on the motion recognition unit 16 can be reduced. As a result, the recognition processing time can be shortened and the performance of the behavior detection system 10 improved. Increasing the number of motion models stored in the motion model storage unit 21 further improves the recognition accuracy of work motions.
<Display example of the recognition information>
The recognition information stored in the recognition information storage unit 22 may also be displayed on the display device 13. For this display, the motion recognition unit 16, for example, reads the recognition information from the recognition information storage unit 22 and has the display device 13 display it.
Alternatively, when a request to display the recognition information is entered through an input unit (not shown) of the behavior detection system 10, such as a mouse or keyboard, the motion recognition unit 16 may read the recognition information from the recognition information storage unit 22 and have the display device 13 display it.
A supervisor can then browse the recognition information shown on the display device 13 and efficiently check, for example, whether the workers are performing the predetermined work.
Some work is, by arrangement, to be performed during a predetermined time slot (hereinafter, prescribed work). The motion recognition unit 16 may determine whether the prescribed work has been performed and, when a worker has not performed it, output an alert to the display device 13 or the like.
In this case, the storage unit 17 further includes a prescribed work information storage unit, which stores prescribed work information indicating the position, time slot, and motion meaning of each piece of prescribed work.
The motion recognition unit 16 then searches the recognition information stored in the recognition information storage unit 22 and determines whether a motion corresponding to the prescribed motion meaning has been performed at the position defined in the prescribed work information, judging, for example, the order of the work and any excess or shortfall in it.
At this time, the motion recognition unit 16 may use the times in the recognition information stored in the recognition information storage unit 22 to extract the corresponding prescribed work information, and determine whether a motion corresponding to the prescribed motion meaning has been performed at the position defined in the extracted prescribed work information.
When there is no recognition information similar to the prescribed work information, the motion recognition unit 16 judges that the prescribed work has not been performed and outputs an alert to the display device 13 or the like indicating as much.
This lets the supervisor confirm easily and quickly whether the workers have performed the prescribed work.
As a result, the supervisor can check the workers' work motions efficiently.
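A minimal sketch of this prescribed-work check, assuming simple dictionary records whose field names are invented (the publication specifies which items the records hold, but not their encoding):

```python
def missed_prescribed_work(prescribed_list, recognition_records):
    """Return the prescribed-work entries (dicts with "place", "start",
    "end", and "meaning") for which no similar recognition record
    (dicts with "time", "place", and "meaning") exists; in the system,
    each returned entry would trigger an alert on the display device 13.
    """
    def satisfied(p):
        return any(r["place"] == p["place"]
                   and r["meaning"] == p["meaning"]
                   and p["start"] <= r["time"] <= p["end"]
                   for r in recognition_records)

    return [p for p in prescribed_list if not satisfied(p)]
```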
<Another configuration example of the behavior detection system>
In the present embodiment the motion model storage unit 21 is part of the storage unit 17, but the motion model storage unit 21 may instead be connected via, for example, the Internet.
FIG. 6 is an explanatory diagram showing another configuration example of the behavior detection system 10 of FIG. 1.
The behavior detection system 10 of FIG. 6 differs from that of FIG. 1 in that the motion model storage unit 21 consists of storage on the cloud, so-called cloud storage. In this case, the motion model storage unit 21 is connected to the recognition processing unit 12 through a communication line 30 such as the Internet.
If, for example, the work content and workplace layout are nearly the same across sites, it suffices to upload the motion models to the motion model storage unit 21 through the communication line 30, eliminating the need to enter motion models individually at each workplace in advance. This reduces the effort required to deploy the behavior detection system 10.
The invention made by the present inventors has been described concretely above based on an embodiment, but the present invention is not limited to that embodiment and can of course be modified in various ways without departing from its gist.
The present invention is not limited to the embodiment described above and includes various modifications. For example, the embodiment above has been described in detail for ease of understanding, and the invention is not necessarily limited to one having all of the configurations described.
Part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment. For part of the configuration of each embodiment, other configurations can be added, deleted, or substituted.
DESCRIPTION OF SYMBOLS
10 Behavior detection system
11 Video photographing device
12 Recognition processing unit
13 Display device
15 Position recognition unit
16 Motion recognition unit
17 Storage unit
20 Place motion table
21 Motion model storage unit
22 Recognition information storage unit

Claims (7)

  1.  A behavior detection system that recognizes an element motion of a recognition target and analyzes a motion meaning indicating the meaning of the recognized element motion, the system comprising:
     a motion model storage unit that stores motion models serving as models of the element motions to be detected;
     a photographing unit that photographs a work motion of the recognition target;
     a position recognition unit that recognizes the position of the recognition target from photographing information captured by the photographing unit; and
     a motion recognition unit that recognizes the element motion of the recognition target,
     wherein the motion recognition unit searches the motion models stored in the motion model storage unit, acquires a motion model similar to the recognized element motion of the recognition target, and acquires the motion meaning corresponding to the acquired motion model.
  2.  The behavior detection system according to claim 1, wherein the motion recognition unit generates recognition information that associates the motion meaning corresponding to the acquired motion model, the position detected by the position recognition unit, and the time at which the element motion of the recognition target was recognized.
  3.  The behavior detection system according to claim 2, further comprising a recognition information storage unit that stores the recognition information generated by the motion recognition unit.
  4.  The behavior detection system according to claim 1, further comprising a work position motion table that associates the position of the recognition target, the element motions performed at that position, and the motion meanings of those element motions,
     wherein the motion recognition unit refers to the work position motion table and extracts the motion meaning of the element motion corresponding to the position detected by the position recognition unit.
  5.  The behavior detection system according to claim 1, wherein the motion models stored in the motion model storage unit are numerical representations of the element motions of the recognition target.
  6.  The behavior detection system according to claim 3, further comprising a prescribed work information storage unit that stores prescribed work information consisting of preset work conditions of prescribed work,
     wherein the motion recognition unit searches the work information stored in the prescribed work information storage unit, determines whether there is work information similar to the recognition information stored in the recognition information storage unit, and outputs an alert when it determines that there is no work information similar to the recognition information.
  7.  The behavior detection system according to claim 1, wherein the motion model storage unit is cloud storage.
PCT/JP2017/024714 2016-07-07 2017-07-05 Behavior detection system WO2018008702A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-134827 2016-07-07
JP2016134827A JP6841608B2 (en) 2016-07-07 2016-07-07 Behavior detection system

Publications (1)

Publication Number Publication Date
WO2018008702A1 true WO2018008702A1 (en) 2018-01-11

Family

ID=60912874

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/024714 WO2018008702A1 (en) 2016-07-07 2017-07-05 Behavior detection system

Country Status (2)

Country Link
JP (1) JP6841608B2 (en)
WO (1) WO2018008702A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7254546B2 (en) * 2018-02-27 2023-04-10 キヤノン株式会社 Information processing device, information processing method and program
WO2019167775A1 (en) * 2018-02-27 2019-09-06 キヤノン株式会社 Information processing device, information processing method, and program
JP7332465B2 (en) * 2019-12-26 2023-08-23 株式会社日立製作所 Motion recognition system, motion recognition device, and region setting method
CN115803790A (en) * 2020-07-10 2023-03-14 松下电器(美国)知识产权公司 Action recognition device, action recognition method, and program
EP4198996A1 (en) * 2020-08-13 2023-06-21 Kim, Hyungsook Movement code-based emotional behavior analysis system
WO2024048741A1 (en) * 2022-09-01 2024-03-07 味の素株式会社 Cooking motion estimation device, cooking motion estimation method, and cooking motion estimation program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003167613A (en) * 2001-11-30 2003-06-13 Sharp Corp Operation management system and method and recording medium with its program for realizing the same method stored
JP2005202653A (en) * 2004-01-15 2005-07-28 Canon Inc Behavior recognition device and method, animal object recognition device and method, equipment control device and method, and program
WO2013145631A1 (en) * 2012-03-30 2013-10-03 日本電気株式会社 Flow line data analysis device, system, program and method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015043141A (en) * 2013-08-26 2015-03-05 キヤノン株式会社 Gesture recognition device and control program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003167613A (en) * 2001-11-30 2003-06-13 Sharp Corp Operation management system and method and recording medium with its program for realizing the same method stored
JP2005202653A (en) * 2004-01-15 2005-07-28 Canon Inc Behavior recognition device and method, animal object recognition device and method, equipment control device and method, and program
WO2013145631A1 (en) * 2012-03-30 2013-10-03 日本電気株式会社 Flow line data analysis device, system, program and method

Also Published As

Publication number Publication date
JP2018005752A (en) 2018-01-11
JP6841608B2 (en) 2021-03-10

Similar Documents

Publication Publication Date Title
WO2018008702A1 (en) Behavior detection system
JP6286474B2 (en) Image processing apparatus and area tracking program
RU2607774C2 (en) Control method in image capture system, control apparatus and computer-readable storage medium
US11188788B2 (en) System and method to determine a timing update for an image recognition model
US9566004B1 (en) Apparatus, method and system for measuring repetitive motion activity
JP6112616B2 (en) Information processing apparatus, information processing system, information processing method, and program
JP6847254B2 (en) Pedestrian tracking methods and electronic devices
JP6024658B2 (en) Object detection apparatus, object detection method, and program
CN104919794A (en) Method and system for metadata extraction from master-slave cameras tracking system
US20150016671A1 (en) Setting apparatus, output method, and non-transitory computer-readable storage medium
CN104811660A (en) Control apparatus and control method
US10970551B2 (en) Control apparatus and control method for determining relation of persons included in an image, and storage medium storing a program therefor
US8284292B2 (en) Probability distribution constructing method, probability distribution constructing apparatus, storage medium of probability distribution constructing program, subject detecting method, subject detecting apparatus, and storage medium of subject detecting program
JP2017076288A (en) Information processor, information processing method and program
JP2019152802A (en) Work operation analysis system and work operation analysis method
CN113283408A (en) Monitoring video-based social distance monitoring method, device, equipment and medium
JP2020087312A (en) Behavior recognition device, behavior recognition method, and program
JP2017054493A (en) Information processor and control method and program thereof
JP6618349B2 (en) Video search system
JP7446060B2 (en) Information processing device, program and information processing method
JP7028729B2 (en) Object tracking device, object tracking system, and object tracking method
JP2007048232A (en) Information processing device, information processing method, and computer program
JP4449483B2 (en) Image analysis apparatus, image analysis method, and computer program
JP2020064684A (en) Control device, control method, and program
WO2023095329A1 (en) Movement evaluation system, movement evaluation method, and non-transitory computer-readable medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17824305

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17824305

Country of ref document: EP

Kind code of ref document: A1