WO2017150211A1 - Action recognition apparatus, action learning apparatus, action recognition program, and action learning program - Google Patents

Action recognition apparatus, action learning apparatus, action recognition program, and action learning program

Info

Publication number
WO2017150211A1
Authority
WO
WIPO (PCT)
Prior art keywords
behavior
recognition
action
time
time series
Prior art date
Application number
PCT/JP2017/005850
Other languages
English (en)
Japanese (ja)
Inventor
岳彦 指田
義満 青木
雄太 工藤
Original Assignee
コニカミノルタ株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by コニカミノルタ株式会社
Priority to JP2018503027A
Publication of WO2017150211A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion

Definitions

  • The present invention relates to machine learning, and in particular to the field of learning and recognizing the actions of a target.
  • Supervised learning, in which the target value to be predicted is included in the training data, includes identification (classification) problems that predict a class. Improving its reliability, processing speed, and the like remain open issues.
  • In one application, monitoring video of a person or the like is used as input data to recognize that person's actions. In this case, continuous image frames are analyzed. When an action is recognized from a certain frame sequence, the actions preceding the action at the current recognition time can be taken into consideration when recognizing the action in the subsequent frame sequence (the action at the current recognition time).
  • Non-Patent Document 1 describes a learning technique usable with truncated BPTT, LSTM, and the like, in which features before a predetermined frame are not referenced during learning; the amount of data used for action recognition is thus essentially fixed to a certain time (number of frames). In the invention described in Patent Document 1, instead of explicitly giving the start point of a gesture in gesture recognition, an observation signal of fixed length ending at the current frame is generated and input to an HMM model database to obtain the likelihood of each gesture. That invention, too, essentially fixes the amount of data used for action recognition to a certain time (number of frames).
  • The present invention has been made in view of the above problems in the prior art. Its object is to determine the amount of data used for action recognition, during both learning and recognition, by stably and appropriately taking into account the actions preceding the action at the current recognition time rather than a length of time (number of frames), and thereby to improve the accuracy and efficiency of action recognition.
  • The behavior recognition apparatus of the present invention for solving the above problems includes a recognition unit that recognizes a behavior based on time-series data of feature amounts of the target's behavior, extracted from data in which the target's behavior is recorded in chronological order.
  • The recognition unit recognizes the behavior at each time point by following the time-series feature data in order. A continuous run of the same behavior that is not further distinguished is counted as one behavior, and after a predetermined number of behaviors arranged in time series has been recognized, the behavior at the current recognition time point is recognized based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • The behavior learning device of the present invention includes a recognition unit that recognizes and learns the behavior based on time-series feature data of the target's behavior extracted from training data in which the target's behavior is recorded in time series.
  • The recognition unit recognizes the behavior at each time point by following the time-series feature data in order, counting a continuous run of the same undistinguished behavior as one behavior.
  • After a predetermined number of behaviors arranged in time series has been recognized, the behavior at the current recognition time point is recognized based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • The behavior recognition program of the present invention causes a computer to function as a recognition unit that recognizes a behavior based on time-series feature data of the target's behavior extracted from data in which the target's behavior is recorded in time series.
  • The recognition unit recognizes the behavior at each time point by following the time-series feature data in order, counting a continuous run of the same undistinguished behavior as one behavior.
  • After a predetermined number of behaviors arranged in time series has been recognized, the behavior at the current recognition time point is recognized based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • The behavior learning program of the present invention causes a computer to function as a recognition unit that recognizes and learns the behavior based on time-series feature data of the target's behavior extracted from training data in which the target's behavior is recorded in time series.
  • The recognition unit recognizes the behavior at each time point by following the time-series feature data in order, counting a continuous run of the same undistinguished behavior as one behavior; after a predetermined number of behaviors arranged in time series has been recognized, the behavior at the current recognition time point is recognized based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • According to the present invention, during both learning and recognition, the amount of data used for action recognition is determined so that a predetermined number of actions before the action at the current recognition time are included and earlier actions are excluded, rather than by a length of time (number of frames). This makes it possible to improve the accuracy and efficiency of action recognition regardless of variations such as how quickly or slowly a person acts.
  • In the present embodiment, the targets to be recognized are the actions of elderly people and their caregivers.
  • Specific actions to be recognized for an elderly person include basic actions of daily life such as "sleeping", "waking up", "getting up", "sitting", "squatting", "walking", "eating a meal", "using the toilet", "going out", and "picking things up", as well as actions that occur at the time of accidents, such as falling over or falling from a height.
  • Assistance actions by the caregiver, such as "supporting", "holding", and "feeding", are also included.
  • An action performed by a plurality of people, such as "conversation", is also conceivable.
  • FIG. 1 shows a conceptual diagram of a system including the action recognition (learning) device of the present embodiment.
  • The action recognition (learning) apparatus is configured by installing, on a computer, an action recognition (learning) program that causes the computer to function as the units described below.
  • The target is a human.
  • The "data in which the behavior of the target is recorded in time series" is moving image data.
  • At the time of recognition, the moving image data 12 is input to the preprocessing unit 11.
  • At the time of learning, moving image data 12 serving as training data is input to the preprocessing unit 11.
  • The preprocessing unit 11 extracts the feature amount 13 of the action from each frame of the moving image data 12 and generates time-series data of the feature amounts (hereinafter, the "feature amount sequence") 14.
  • The feature amount sequence 14 is input to the recognition unit 15.
  • The recognition unit 15 receives the feature amount sequence 14 as input and, based on it, outputs the recognized target behavior (recognition result 16) and its likelihood 17.
  • Based on the likelihood 17, the feature amounts, and the like, the action boundary determination unit 18 determines a boundary point at which the action switches to a different action.
  • The recognition unit 15 recognizes the behavior at each time point by following the feature amount sequence 14 in time-series order.
  • The time point currently being recognized is called the current recognition time point.
  • The recognition unit 15 obtains the action shown in the moving image data at the current recognition time point, together with its likelihood.
  • An action and its likelihood are output for each frame.
  • As the feature amount, the simplest case is to use the image itself of each frame of the moving image.
  • Alternatively, optical flow extracted from the image, the person's position and posture, time information, or the like may be used.
  • A human posture can be represented, for example, by joint point coordinates.
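As a concrete illustration of this preprocessing step, below is a minimal sketch of per-frame feature extraction, assuming OpenCV is available; the particular choice of a downsampled grayscale image plus a mean optical-flow magnitude, and all function names, are illustrative assumptions rather than the disclosed design.

```python
import cv2
import numpy as np

def frame_features(prev_gray, gray, size=(32, 32)):
    """One feature vector per frame pair: a downsampled image plus the
    mean optical-flow magnitude (a hypothetical choice of features)."""
    small = cv2.resize(gray, size).astype(np.float32).ravel() / 255.0
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2).mean()
    return np.concatenate([small, [mag]])

def feature_sequence(video_path):
    """Preprocessing unit 11: turn moving image data 12 into a feature
    amount sequence 14 (one row per frame transition)."""
    cap = cv2.VideoCapture(video_path)
    seq, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            seq.append(frame_features(prev, gray))
        prev = gray
    cap.release()
    return np.stack(seq)
```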
  • Conventionally, the feature amounts are given over a fixed length, for example the feature amounts for the past 10 frames counted back from the frame at the current recognition time point, or all past frames are given without a break.
  • In the present invention, by contrast, the starting point is set a predetermined number N of actions back from the action of the frame being learned or recognized.
  • Here, a continuous run of action A is counted as one action and a continuous run of action B as one action; if action A is followed by action B, for example, this counts as two actions. A continuous run of the same action that is not further distinguished likewise counts once, so if "walking", "sitting", and "walking" occur in succession, this is counted as three actions.
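Stated in code, the counting rule is simply the number of runs of identical labels; a small sketch (the string labels are assumptions for illustration):

```python
from itertools import groupby

def count_actions(labels):
    """A continuous run of the same undistinguished label is one action,
    so the action count is the number of runs."""
    return sum(1 for _ in groupby(labels))

# Action A followed by action B counts as two actions:
assert count_actions(["A", "A", "A", "B", "B"]) == 2
# "walking", "sitting", "walking" in succession counts as three actions:
assert count_actions(["walk", "walk", "sit", "walk", "walk"]) == 3
```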
  • FIG. 2 is a conceptual diagram showing the lengths of the feature amount sequences used by the recognition unit for action recognition.
  • FIG. 2A shows a comparative example in which all frames are used.
  • FIG. 3 is a conceptual diagram showing, in units of frames, the lengths of the feature amount sequences used by the recognition unit for action recognition.
  • FIG. 3A shows a comparative example in which the length of the frame 301 is fixed at a constant number of frames.
  • Using the boundary points 19 output by the action boundary determination unit 18 as a reference, the recognition unit 15 recognizes the action at the current recognition time point based on the feature amount sequence 20 corresponding to the plurality of actions arranged in time series from the time point a predetermined number of actions back (two in the examples of FIGS. 2B and 3B) up to the current recognition time point.
  • The speed of an action differs from person to person, so with a fixed number of frames, the information on previous actions that is effective for identifying the action at the current recognition time point may not be included. By making the number of frames of the feature amount sequence used by the recognition unit 15 variable based on the number of actions, as in the example of the present invention in FIG. 3B, sufficient information on the past actions leading up to the action at the current recognition time point can be obtained.
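A sketch of this action-count-based windowing under assumed data layouts: `features` is the feature amount sequence (one row per frame), `boundaries` holds the frame indices of the boundary points 19 reported so far, and the fallback before enough actions have been recognized mirrors the bullet that follows. The function name and signature are illustrative.

```python
def action_window(features, boundaries, t, n_actions=2):
    """Feature sub-sequence from the boundary `n_actions` actions back
    up to the current recognition time point `t` (frame index)."""
    past = [b for b in boundaries if b <= t]
    if len(past) < n_actions:
        start = 0                 # not enough actions yet: use all frames so far
    else:
        start = past[-n_actions]  # go back a fixed number of actions, not frames
    return features[start:t + 1]
```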
  • The feature amount sequence covering three actions becomes available after the recognition of the predetermined number of actions arranged in time series (two in the examples of FIGS. 2B and 3B) has been completed.
  • Before the recognition of the predetermined number of actions arranged in time series (two in the examples of FIGS. 2B and 3B) has been completed, the recognition unit 15 recognizes the behavior at the current recognition time point based on the entire feature amount sequence up to the current recognition time point.
  • As the recognition method, an RNN (Recurrent Neural Network) can be used in combination with an LSTM (Long Short-Term Memory).
  • An LSTM is a technique that can hold past information over a longer period, and by combining the two, long-term past data can be utilized for learning and for recognition of the current input.
  • RNN + LSTM can reset the internal state with a flag. When not reset, the information of all the frames up to that point is retained internally, but when reset, the internal state is initialized, so it is handled that there is no past input. Therefore, in this embodiment, based on the determination of the action boundary determination unit 18, the process of resetting the internal state and inputting the feature amount again is used as the process of resetting the action used for learning recognition.
  • The recognition unit 15 needs to be trained before recognition. During learning, moving image data whose correct behavior is known is input, and the unit learns which feature amounts are effective for distinguishing each behavior. At recognition time, it performs recognition based on what was learned.
  • At learning time, the boundaries between actions are known, so the internal state can simply be reset according to the number of actions. At recognition time, however, the actions are not known in advance and the same approach cannot be used, which is why the action boundary determination unit 18 is required. Note that the method used for recognition is not limited to LSTM.
  • The recognition unit 15 outputs the likelihood of each target action as the recognition result 16. For example, when 10 kinds of actions are to be recognized, a likelihood is calculated for each of the 10 actions, and the action with the highest likelihood is output as the recognition result 16.
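Selecting the recognition result then reduces to taking the action with the maximum likelihood; a trivial sketch with a hypothetical ten-action label list:

```python
import numpy as np

ACTIONS = ["sleeping", "waking up", "getting up", "sitting", "squatting",
           "walking", "eating", "using the toilet", "going out", "picking up"]

def recognition_result(likelihoods):
    """Output the action with the highest of the 10 likelihoods."""
    likelihoods = np.asarray(likelihoods)
    i = int(likelihoods.argmax())
    return ACTIONS[i], float(likelihoods[i])
```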
  • The action boundary determination unit 18 determines a boundary point that serves as a break between the actions being recognized, and inputs it to the recognition unit 15.
  • The simplest approach is to regard the time point at which the recognition result 16 changes to a different action (when the first-ranked action changes) as a boundary point.
  • In that case, however, the boundary point is determined only after the recognition result 16 has already changed to a different action, so the determination is delayed.
  • Depending on the behavior, this delay in determination can be expected to be even greater.
  • Therefore, a method that uses the likelihood information of each action produced by the action recognition can be considered.
  • The action boundary determination unit 18 determines as a boundary point the time point at which the difference 601 between the likelihoods of the first- and second-ranked actions becomes equal to or less than a predetermined threshold. In FIG. 6, for example, action 0 ranks first in frames 1-6; rather than waiting for the seventh frame or later, where action 0 is overtaken by action 2, the boundary is determined early, at the sixth frame, where the difference 601 between the first and second ranks falls to or below the threshold.
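A direct sketch of this early-determination rule; the threshold value is an assumed parameter:

```python
import numpy as np

def is_boundary(likelihoods, threshold=0.1):
    """Boundary once the gap (difference 601) between the 1st- and
    2nd-ranked likelihoods shrinks to the threshold or below, without
    waiting for the top-ranked action to actually change."""
    top_two = np.sort(np.asarray(likelihoods))[-2:]
    return (top_two[1] - top_two[0]) <= threshold
```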
  • Each time a new boundary point is obtained, the recognition unit 15 updates the feature amount sequence used for action recognition to the range covering the set number of actions counted back from the new boundary point, thereby improving the accuracy of the action recognition.
  • Alternatively, the action boundary determination unit 18 may use a statistic such as the mean or median of each action's likelihood values over a predetermined range of frames counted back from the frame at the current recognition time point, and determine that the action has switched at the stage where the top-ranked statistic switches. Another method is to determine the end of an action when, after the action showing the maximum likelihood changes, it does not change again within a predetermined time (number of frames). In this case, the mode can be used as the statistic.
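Both statistic-based variants just described might look like the following sketch; the window and hold lengths are assumed values:

```python
import numpy as np

def switched_by_mean(likelihood_history, window=15):
    """Rank actions by mean likelihood over the trailing window; a change
    in the top-ranked statistic versus the previous window signals a switch."""
    hist = np.asarray(likelihood_history)      # shape (T, num_actions)
    if len(hist) < 2 * window:
        return False
    prev_top = hist[-2 * window:-window].mean(axis=0).argmax()
    curr_top = hist[-window:].mean(axis=0).argmax()
    return prev_top != curr_top

def end_confirmed(top_actions, hold=10):
    """After the max-likelihood action changes, confirm the end of the old
    action once the new one persists unchanged for `hold` frames."""
    if len(top_actions) <= hold:
        return False
    recent = top_actions[-hold:]
    return len(set(recent)) == 1 and recent[0] != top_actions[-hold - 1]
```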
  • A boundary point at which the action switches to a different action may also be determined based on position information, for example at the moment the person leaves the bed.
  • Likewise, a boundary point may be determined based on position information indicating that the person has entered or left a specific area, such as a bathroom.
  • This position information may be the target's position obtained by analyzing the moving image data 12 shown in FIG. 1, or it may be input separately from the position detection unit 21.
  • The position detection unit 21 does not rely on the moving image data 12 but cooperates with a sensing system that detects the target's position. This can improve recognition accuracy when an action, such as bathing, can only be performed in a limited place.
  • In addition, a method of placing a boundary point when the same action has continued for a predetermined number of frames or more can be considered; if the same action continues for too long, its relationship to the preceding action is considered to have weakened. Both cues are sketched below.
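The position-based and duration-based cues above might be sketched as follows; the rectangular region representation and the frame cap are assumptions for illustration:

```python
def crossed_region(prev_pos, pos, region):
    """Boundary cue when the target enters or leaves a specific range
    (e.g. the bed or the bathroom) between consecutive frames.
    `pos` is (x, y); `region` is (x_min, y_min, x_max, y_max)."""
    def inside(p):
        x, y = p
        x0, y0, x1, y1 = region
        return x0 <= x <= x1 and y0 <= y <= y1
    return inside(prev_pos) != inside(pos)

def duration_boundary(frames_in_action, max_frames=900):
    """Force a boundary when the same action has continued for the
    predetermined number of frames or more, since its link to the
    preceding action is considered weakened."""
    return frames_in_action >= max_frames
```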
  • In FIG. 8, the lengths of the feature amount sequences used by the recognition unit 15 for action recognition are indicated by frames 801 and 803, and the current recognition time points are indicated by pointers 802 and 804.
  • When the recognition unit 15 performs the next action recognition (recognition at the current recognition time point 804 shown in FIG. 8B), the range of past actions used for recognition is shifted forward by one action, as indicated by frame 803.
  • In summary, the behavior recognition apparatus of the present invention has a recognition unit that recognizes a behavior based on time-series feature data of the target's behavior extracted from data in which the target's behavior is recorded in time series.
  • The recognition unit recognizes the behavior at each time point by following the time-series feature data in order, counting a continuous run of the same undistinguished behavior as one behavior.
  • After a predetermined number of behaviors arranged in time series has been recognized, the behavior at the current recognition time point is recognized based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • Before the recognition of the predetermined number of behaviors arranged in time series has been completed, the recognition unit recognizes the behavior at the current recognition time point based on the time-series data of all the feature amounts up to the current recognition time point.
  • The behavior recognition apparatus further includes a behavior boundary determination unit that determines a boundary point at which the behavior switches to a different behavior. Using the boundary points output by the behavior boundary determination unit as a reference, the recognition unit recognizes the behavior at the current recognition time point based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • The behavior boundary determination unit determines a boundary point at which the behavior switches to a different behavior based on the likelihood information of the behaviors output by the recognition unit.
  • The behavior boundary determination unit determines as the boundary point the time point at which the difference between the likelihoods of the first- and second-ranked behaviors becomes equal to or less than a predetermined threshold.
  • The behavior boundary determination unit determines a boundary point at which the behavior switches to a different behavior based on a statistic of the likelihood information output multiple times by the recognition unit within a predetermined length of time.
  • The behavior boundary determination unit determines a boundary point at which the behavior switches to a different behavior based on the target's position information.
  • The behavior learning device of the present invention includes a recognition unit that recognizes and learns the behavior based on time-series feature data of the target's behavior extracted from training data in which the target's behavior is recorded in time series.
  • The recognition unit recognizes the behavior at each time point by following the time-series feature data in order, counting a continuous run of the same undistinguished behavior as one behavior.
  • After a predetermined number of behaviors arranged in time series has been recognized, the behavior at the current recognition time point is recognized based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • The behavior recognition program of the present invention causes a computer to function as a recognition unit that recognizes a behavior based on time-series feature data of the target's behavior extracted from data in which the target's behavior is recorded in time series.
  • The recognition unit recognizes the behavior at each time point by following the time-series feature data in order, counting a continuous run of the same undistinguished behavior as one behavior.
  • After a predetermined number of behaviors arranged in time series has been recognized, the behavior at the current recognition time point is recognized based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • Before the recognition of the predetermined number of behaviors arranged in time series has been completed, the recognition unit recognizes the behavior at the current recognition time point based on the time-series data of all the feature amounts up to the current recognition time point.
  • The program further causes the computer to function as a behavior boundary determination unit that determines a boundary point at which the behavior switches to a different behavior. Using the boundary points output by the behavior boundary determination unit as a reference, the recognition unit recognizes the behavior at the current recognition time point based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • The behavior boundary determination unit determines a boundary point at which the behavior switches to a different behavior based on the likelihood information of the behaviors output by the recognition unit.
  • The behavior boundary determination unit determines as the boundary point the time point at which the difference between the likelihoods of the first- and second-ranked behaviors becomes equal to or less than a predetermined threshold.
  • The behavior boundary determination unit determines a boundary point at which the behavior switches to a different behavior based on a statistic of the likelihood information output multiple times by the recognition unit within a predetermined length of time.
  • The behavior boundary determination unit determines a boundary point at which the behavior switches to a different behavior based on the target's position information.
  • The behavior learning program of the present invention causes a computer to function as a recognition unit that recognizes and learns the behavior based on time-series feature data of the target's behavior extracted from training data in which the target's behavior is recorded in time series.
  • The recognition unit recognizes the behavior at each time point by following the time-series feature data in order, counting a continuous run of the same undistinguished behavior as one behavior; after a predetermined number of behaviors arranged in time series has been recognized, the behavior at the current recognition time point is recognized based on the time-series feature data corresponding to the plurality of behaviors arranged in time series from the time point the predetermined number of behaviors back up to the current recognition time point.
  • The present invention can be used for computer-based action recognition of persons and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides action recognition learning and action recognition in which the amount of data used for action recognition is determined by stable and appropriate consideration of the actions performed before the action at the current recognition time, rather than by a length of time (number of frames), thereby improving the accuracy and efficiency of action recognition. The action recognition apparatus of the present invention comprises a recognition unit (12) that recognizes the actions of a target such as a person based on time-series feature data of the target's actions, extracted from data (moving image data) in which the actions are recorded chronologically. The recognition unit recognizes the action at each time point by sequentially following the time-series feature data, treats a continuous run of the same undistinguished action as one action and, after a prescribed number of chronologically aligned actions has been recognized, recognizes the action at the current recognition time based on the time-series feature data corresponding to a plurality (three in FIG. 8) of actions aligned chronologically from the time point the prescribed number of actions back up to the current recognition time (802, 804).
PCT/JP2017/005850 2016-03-03 2017-02-17 Action recognition apparatus, action learning apparatus, action recognition program, and action learning program WO2017150211A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2018503027A JPWO2017150211A1 (ja) 2016-03-03 2017-02-17 Action recognition device and action learning device, and action recognition program and action learning program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-040656 2016-03-03
JP2016040656 2016-03-03

Publications (1)

Publication Number Publication Date
WO2017150211A1 true WO2017150211A1 (fr) 2017-09-08

Family

ID=59742827

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/005850 WO2017150211A1 (fr) 2016-03-03 2017-02-17 Action recognition apparatus, action learning apparatus, action recognition program, and action learning program

Country Status (2)

Country Link
JP (1) JPWO2017150211A1 (fr)
WO (1) WO2017150211A1 (fr)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005215927A * 2004-01-29 2005-08-11 Mitsubishi Heavy Ind Ltd Action recognition system
JP2005258830A * 2004-03-11 2005-09-22 Yamaguchi Univ Human behavior understanding system
JP2011215951A * 2010-03-31 2011-10-27 Toshiba Corp Behavior determination apparatus, method, and program

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102440385B1 * 2017-11-28 2022-09-05 영남대학교 산학협력단 Method and apparatus for recognizing action patterns by combining multiple recognition models
KR20190061538A * 2017-11-28 2019-06-05 영남대학교 산학협력단 Method and apparatus for recognizing action patterns by combining multiple recognition models
JP2019096252A * 2017-11-28 2019-06-20 Kddi株式会社 Program, device, and method for estimating context representing human action from captured video
CN108804995B * 2018-03-23 2019-04-16 新昌县夙凡软件科技有限公司 Washing equipment distribution image recognition platform
CN108804995A * 2018-03-23 2018-11-13 李春莲 Equipment distribution image recognition platform
JP2020009141A * 2018-07-06 2020-01-16 株式会社 日立産業制御ソリューションズ Machine learning apparatus and method
JP2020087437A * 2018-11-27 2020-06-04 富士ゼロックス株式会社 Method, program, and system for evaluating completion of a task performed by a user's body part using a camera system
JP7392348B2 2018-11-27 2023-12-06 富士フイルムビジネスイノベーション株式会社 Method, program, and system for evaluating completion of a task performed by a user's body part using a camera system
JP2021022323A * 2019-07-30 2021-02-18 Necソリューションイノベータ株式会社 Behavior estimation device, behavior estimation method, and program
JP7368045B2 2019-07-30 2023-10-24 Necソリューションイノベータ株式会社 Behavior estimation device, behavior estimation method, and program
JP2021071773A * 2019-10-29 2021-05-06 株式会社エクサウィザーズ Motion evaluation device, motion evaluation method, and motion evaluation system
KR20210076659A * 2019-12-16 2021-06-24 연세대학교 산학협력단 Action recognition method using sequential feature data and apparatus therefor
KR102334388B1 * 2019-12-16 2021-12-01 연세대학교 산학협력단 Action recognition method using sequential feature data and apparatus therefor
WO2021125521A1 * 2019-12-16 2021-06-24 연세대학교 산학협력단 Action recognition method using sequential feature data and apparatus therefor
WO2024062882A1 * 2022-09-20 2024-03-28 株式会社Ollo Program, information processing method, and information processing device

Also Published As

Publication number Publication date
JPWO2017150211A1 (ja) 2018-12-27

Similar Documents

Publication Publication Date Title
WO2017150211A1 (fr) Action recognition apparatus, action learning apparatus, action recognition program, and action learning program
JP6658331B2 (ja) Action recognition device and action recognition program
Aminikhanghahi et al. Enhancing activity recognition using CPD-based activity segmentation
US11551103B2 (en) Data-driven activity prediction
Aminikhanghahi et al. Using change point detection to automate daily activity segmentation
Junker et al. Gesture spotting with body-worn inertial sensors to detect user activities
JP5382436B2 (ja) Data processing device, data processing method, and program
Ko et al. Using dynamic time warping for online temporal fusion in multisensor systems
JP5359414B2 (ja) Action recognition method, device, and program
Kyritsis et al. Food intake detection from inertial sensors using LSTM networks
WO2009090584A2 (fr) Procédé et système de reconnaissance d'activité et leurs applications en détection de chute
CN112801000B (zh) 一种基于多特征融合的居家老人摔倒检测方法及系统
CN102707806A (zh) 一种基于加速度传感器的运动识别方法
Zheng A novel attention-based convolution neural network for human activity recognition
Chen et al. Activity recognition based on streaming sensor data for assisted living in smart homes
Azorin-López et al. A predictive model for recognizing human behaviour based on trajectory representation
Devanne et al. Recognition of activities of daily living via hierarchical long-short term memory networks
EP4098182A1 (fr) Reconnaissance de gestes basée sur l'apprentissage machine dotée d'un cadre pour l'ajout de gestes personnalisés par l'utilisateur
JP6274114B2 (ja) Control method, control program, and control device
Al Machot et al. A windowing approach for activity recognition in sensor data streams
Neili et al. Human posture recognition approach based on ConvNets and SVM classifier
Kheratkar et al. Gesture controlled home automation using CNN
Diete et al. Vision and acceleration modalities: Partners for recognizing complex activities
JP2018124801A (ja) Gesture recognition device and gesture recognition program
Yahaya et al. Towards the development of an adaptive system for detecting anomaly in human activities

Legal Events

Date Code Title Description
ENP Entry into the national phase: Ref document number: 2018503027; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase: Ref country code: DE
121 Ep: the epo has been informed by wipo that ep was designated in this application: Ref document number: 17759680; Country of ref document: EP; Kind code of ref document: A1
122 Ep: pct application non-entry in european phase: Ref document number: 17759680; Country of ref document: EP; Kind code of ref document: A1