JP2018139100A - Techniques for assessing group level cognitive states - Google Patents
Techniques for assessing group level cognitive states
- Publication number
- JP2018139100A
- Authority
- JP
- Japan
- Prior art keywords
- behavior
- camera
- agent
- symbol sequence
- symbols
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Concepts
- cognitive (title, description ×6)
- engineering process (title, description ×4)
- method (claims, abstract, description ×34)
- action (claims, abstract, description ×28)
- behavior (claims, description ×50)
- particle (claims, description ×46)
- facial expression (claims, description ×20)
- artificial neural network (claims, description ×19)
- recurrent (claims, description ×18)
- locomotion (claims, description ×14)
- emotion (claims, description ×12)
- filtering (claims, description ×12)
- sampling (claims, description ×10)
- facial (claims, description ×7)
- transition (claims, description ×5)
- temporal (claims ×1)
- monitoring process (abstract ×2)
- social behavior (description ×21)
- analysis method (description ×17)
- diagram (description ×10)
- detection method (description ×7)
- interaction (description ×7)
- process (description ×6)
- simulation (description ×6)
- training (description ×6)
- behavioral (description ×5)
- cognitive state (description ×5)
- communication (description ×5)
- memory (description ×5)
- design (description ×4)
- distribution (description ×4)
- machine learning (description ×4)
- organ (description ×4)
- processing (description ×4)
- nervousness (description ×3)
- effects (description ×3)
- eye (description ×3)
- expression (description ×3)
- manufacturing process (description ×3)
- hostility (description ×2)
- benefit (description ×2)
- depressive (description ×2)
- development (description ×2)
- emotional (description ×2)
- function (description ×2)
- interaction analysis (description ×2)
- iterative process (description ×2)
- mechanism (description ×2)
- postures (description ×2)
- social interaction (description ×2)
- testing method (description ×2)
- visual (description ×2)
- approach (description ×1)
- beneficial (description ×1)
- transmission (description ×1)
- eyeball (description ×1)
- calculation algorithm (description ×1)
- calculation method (description ×1)
- construction (description ×1)
- data analysis (description ×1)
- eye movement (description ×1)
- faces (description ×1)
- incorporation (description ×1)
- integration (description ×1)
- measurement (description ×1)
- optimization (description ×1)
- periodic (description ×1)
- reduction (description ×1)
- research (description ×1)
- retained (description ×1)
- review (description ×1)
- storage (description ×1)
- vocal (description ×1)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/041—Abduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0002—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
- A61B5/0004—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by the type of physiological signal transmitted
- A61B5/0013—Medical image data
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0002—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network
- A61B5/0015—Remote monitoring of patients using telemetry, e.g. transmission of vital signals via a communication network characterised by features of the telemetry system
- A61B5/0022—Monitoring a patient using a global network, e.g. telephone networks, internet
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient ; user input means
- A61B5/742—Details of notification to user or communication with user or patient ; user input means using visual displays
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient ; user input means
- A61B5/746—Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/74—Details of notification to user or communication with user or patient ; user input means
- A61B5/7465—Arrangements for interactive communication between patient and care services, e.g. by using a telephone network
- A61B5/747—Arrangements for interactive communication between patient and care services, e.g. by using a telephone network in case of emergency, i.e. alerting emergency services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/046—Forward inferencing; Production systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
- G06F2218/16—Classification; Matching by matching signal segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
- G06V40/193—Preprocessing; Feature extraction
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Multimedia (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- Pathology (AREA)
- Animal Behavior & Ethology (AREA)
- Heart & Thoracic Surgery (AREA)
- Public Health (AREA)
- Surgery (AREA)
- Veterinary Medicine (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Software Systems (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Psychiatry (AREA)
- Computer Networks & Wireless Communication (AREA)
- Physiology (AREA)
- Social Psychology (AREA)
- Databases & Information Systems (AREA)
- Educational Technology (AREA)
- Child & Adolescent Psychology (AREA)
- Hospice & Palliative Care (AREA)
- Developmental Disabilities (AREA)
- Business, Economics & Management (AREA)
- Critical Care (AREA)
Abstract
Description
12 camera
14 people
16 environment
18 cloud-based computing system
20 computing device
22 server
24 memory
26 database
28 processor
30 communication component
42 agent
44 internal experience
46 physical state
72 symbol
74 observed facial expression
92 most likely particle
Claims (14)
- A system comprising: one or more cameras (12) that capture data related to the behavior of one or more individuals (14) in an environment (16);
one or more computing devices (20) comprising one or more processors (28) that
receive, from the one or more cameras (12), the data related to the behavior of the one or more individuals in the environment (16),
execute one or more agent-based simulators that each model the behavior of a respective individual (14) and each output a symbol sequence representing an internal experience of the respective individual during simulation, and
predict a subsequent behavior for each of the respective individuals when the symbol sequence matches a query symbol sequence for a query behavior; and
a display coupled to the one or more computing devices and configured to display an indication representing the subsequent behavior. - The system of claim 1, wherein the one or more cameras (12) comprise a red, green, blue, plus depth (RGB+D) camera that captures estimates of location and articulated body motion, and a fixed camera and a pan-tilt-zoom (PTZ) camera that capture facial images.
- The system of claim 1, wherein the one or more computing devices (20) comprise a smartphone, a smartwatch, a tablet, a laptop computer, a desktop computer, a server (22) in a cloud-based computing system (18), or some combination thereof.
- The system of claim 3, wherein the one or more processors (28) perform an action when a certain subsequent behavior is predicted, the action comprising sounding an alarm, calling emergency services, triggering an alert, sending a message, displaying a warning, or some combination thereof.
- A method comprising: receiving data related to one or more individuals (14) from one or more cameras (12) in an environment (16);
executing one or more agent-based simulators that each operate to generate a model of the behavior of a respective individual (14), the output of each model being a symbol sequence representing an internal experience (44) of the respective individual (14) during simulation; and
predicting a subsequent behavior for each of the respective individuals when the symbol sequence matches a query symbol sequence for a query behavior. - The method of claim 5, wherein each model uses particle filtering, and each particle comprises a recurrent neural network that iteratively estimates a temporal evolution of the symbol sequence based on the data.
- The method of claim 6, wherein particles containing similar symbol sequences transition to a next iteration so as to predict a next set of internal symbols of the symbol sequence.
- The method of claim 6, wherein particles that do not contain similar symbol sequences are terminated.
- The method of claim 6, comprising using the recurrent neural network to predict the subsequent behavior based on the symbol sequence.
- The method of claim 9, wherein the recurrent neural network is initially seeded with random internal experience symbols.
- The method of claim 10, wherein the recurrent neural network predicts the subsequent behavior by sampling a next set of physical state symbols and comparing the next set of physical state symbols with physical state symbols of the query symbol sequence.
- The method of claim 5, wherein the one or more cameras comprise a red, green, blue, plus depth (RGB+D) camera that captures estimates of location and articulated body motion, and a fixed camera and a pan-tilt-zoom (PTZ) camera that capture facial images.
- The method of claim 5, wherein the symbol sequence comprises stored graphics for personality types, emotions, observed facial expressions, or some combination thereof.
- The method of claim 5, comprising performing an action when a certain behavior is predicted, the action comprising sounding an alarm, calling emergency services, triggering an alert, sending a message, displaying a warning, or some combination thereof.
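Claims 5 through 11 above outline the core prediction loop: agent-based simulators use particle filtering, each particle carries a recurrent neural network initially seeded with random internal experience symbols, particles whose symbol sequences stop matching are terminated, and a behavior is predicted by sampling symbols and comparing them against a query symbol sequence. A minimal Python sketch of that loop follows; it is illustrative only, with a fixed transition table standing in for the per-particle recurrent neural network and a hypothetical three-symbol vocabulary (`calm`, `nervous`, `hostile`):

```python
import random

# Hypothetical "internal experience" symbol vocabulary (a stand-in for the
# personality/emotion/expression symbols of claim 13).
SYMBOLS = ["calm", "nervous", "hostile"]

# A fixed first-order transition table stands in for the per-particle
# recurrent neural network of claim 6: given the last symbol, it defines
# a distribution over the next one.
TRANSITIONS = {
    "calm":    {"calm": 0.80, "nervous": 0.15, "hostile": 0.05},
    "nervous": {"calm": 0.30, "nervous": 0.50, "hostile": 0.20},
    "hostile": {"calm": 0.10, "nervous": 0.30, "hostile": 0.60},
}

def step(symbol, rng):
    """Sample the next internal symbol (one 'RNN' step of the toy model)."""
    choices, weights = zip(*TRANSITIONS[symbol].items())
    return rng.choices(choices, weights=weights, k=1)[0]

def run_particle_filter(query, n_particles=500, seed=0):
    """Estimate how likely the query behavior is, iterating particles and
    terminating those whose sequences diverge from the query (claims 7-8)."""
    rng = random.Random(seed)
    # Claim 10: particles are initially seeded with random internal symbols.
    particles = [[rng.choice(SYMBOLS)] for _ in range(n_particles)]
    for expected in query:
        survivors = []
        for particle in particles:
            particle.append(step(particle[-1], rng))
            # Claim 11: compare the sampled symbol against the query
            # sequence; claim 8: non-matching particles are terminated.
            if particle[-1] == expected:
                survivors.append(particle)
        if not survivors:
            return 0.0, []
        # Claim 7: matching particles transition to the next iteration.
        particles = survivors
    # The surviving fraction approximates the likelihood of the behavior.
    return len(particles) / n_particles, particles

likelihood, survivors = run_particle_filter(["nervous", "hostile", "hostile"])
print(f"match likelihood: {likelihood:.3f}")
print(f"surviving particles: {len(survivors)}")
```

In the claimed system, a certain predicted behavior would then trigger an action such as sounding an alarm or calling emergency services (claims 4 and 14); in this sketch, a threshold on `likelihood` would play that role.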
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/370,736 US10216983B2 (en) | 2016-12-06 | 2016-12-06 | Techniques for assessing group level cognitive states |
US15/370,736 | 2016-12-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
JP2018139100A true JP2018139100A (ja) | 2018-09-06 |
Family
ID=60781464
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2017224121A Pending JP2018139100A (ja) Techniques for assessing group level cognitive states
Country Status (5)
Country | Link |
---|---|
US (1) | US10216983B2 (ja) |
EP (1) | EP3333764A1 (ja) |
JP (1) | JP2018139100A (ja) |
CN (1) | CN108154236A (ja) |
CA (1) | CA2986406A1 (ja) |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10445565B2 (en) * | 2016-12-06 | 2019-10-15 | General Electric Company | Crowd analytics via one shot learning |
WO2018126275A1 (en) * | 2016-12-30 | 2018-07-05 | Dirk Schneemann, LLC | Modeling and learning character traits and medical condition based on 3d facial features |
- JP6615800B2 (ja) * | 2017-01-11 | 2019-12-04 | Toshiba Corp | Information processing apparatus, information processing method, and program |
US10963493B1 (en) * | 2017-04-06 | 2021-03-30 | AIBrain Corporation | Interactive game with robot system |
US10929759B2 (en) | 2017-04-06 | 2021-02-23 | AIBrain Corporation | Intelligent robot software platform |
US11151992B2 (en) | 2017-04-06 | 2021-10-19 | AIBrain Corporation | Context aware interactive robot |
US10475222B2 (en) | 2017-09-05 | 2019-11-12 | Adobe Inc. | Automatic creation of a group shot image from a short video clip using intelligent select and merge |
US10832393B2 (en) * | 2019-04-01 | 2020-11-10 | International Business Machines Corporation | Automated trend detection by self-learning models through image generation and recognition |
US11381651B2 (en) * | 2019-05-29 | 2022-07-05 | Adobe Inc. | Interpretable user modeling from unstructured user data |
AU2019100806A4 (en) * | 2019-07-24 | 2019-08-29 | Dynamic Crowd Measurement Pty Ltd | Real-Time Crowd Measurement And Management Systems And Methods Thereof |
- CN110472726B (zh) * | 2019-07-25 | 2022-08-02 | Nanjing University of Information Science and Technology | Sensitive long short-term memory method based on differential of output change |
US11514767B2 (en) * | 2019-09-18 | 2022-11-29 | Sensormatic Electronics, LLC | Systems and methods for averting crime with look-ahead analytics |
- CN110991375B (zh) * | 2019-12-10 | 2020-12-15 | Beihang University | Group behavior analysis method and device |
US11497418B2 (en) | 2020-02-05 | 2022-11-15 | General Electric Company | System and method for neuroactivity detection in infants |
US11587428B2 (en) * | 2020-03-11 | 2023-02-21 | Johnson Controls Tyco IP Holdings LLP | Incident response system |
US11373425B2 (en) * | 2020-06-02 | 2022-06-28 | The Nielsen Company (U.S.), Llc | Methods and apparatus for monitoring an audience of media based on thermal imaging |
US11763591B2 (en) | 2020-08-20 | 2023-09-19 | The Nielsen Company (Us), Llc | Methods and apparatus to determine an audience composition based on voice recognition, thermal imaging, and facial recognition |
US11553247B2 (en) | 2020-08-20 | 2023-01-10 | The Nielsen Company (Us), Llc | Methods and apparatus to determine an audience composition based on thermal imaging and facial recognition |
US11595723B2 (en) | 2020-08-20 | 2023-02-28 | The Nielsen Company (Us), Llc | Methods and apparatus to determine an audience composition based on voice recognition |
- US11887384B2 (en) * | 2021-02-02 | 2024-01-30 | Black Sesame Technologies Inc. | In-cabin occupant behavior description |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- WO2002021441A1 (fr) * | 2000-09-06 | 2002-03-14 | Hitachi, Ltd. | Abnormal behavior detector |
- US20090222388A1 (en) * | 2007-11-16 | 2009-09-03 | Wei Hua | Method of and system for hierarchical human/crowd behavior detection |
- JP2010231393A (ja) * | 2009-03-26 | 2010-10-14 | Advanced Telecommunication Research Institute International | Monitoring device |
- JP2011186521A (ja) * | 2010-03-04 | 2011-09-22 | Nec Corp | Emotion estimation device and emotion estimation method |
- JP2011248445A (ja) * | 2010-05-24 | 2011-12-08 | Toyota Central R&D Labs Inc | Movable object prediction device and program |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6778637B2 (en) | 2002-09-20 | 2004-08-17 | Koninklijke Philips Electronics, N.V. | Method and apparatus for alignment of anti-scatter grids for computed tomography detector arrays |
- DE10252662A1 (de) | 2002-11-11 | 2004-05-27 | Philips Intellectual Property & Standards Gmbh | Computed tomography method with coherent scattered radiation, and computed tomography apparatus |
US7418082B2 (en) | 2003-06-01 | 2008-08-26 | Koninklijke Philips Electronics N.V. | Anti-scattering X-ray collimator for CT scanners |
US8442839B2 (en) | 2004-07-16 | 2013-05-14 | The Penn State Research Foundation | Agent-based collaborative recognition-primed decision-making |
US8057235B2 (en) | 2004-08-12 | 2011-11-15 | Purdue Research Foundation | Agent based modeling of risk sensitivity and decision making on coalitions |
US7450683B2 (en) | 2006-09-07 | 2008-11-11 | General Electric Company | Tileable multi-layer detector |
AU2007327315B2 (en) | 2006-12-01 | 2013-07-04 | Rajiv Khosla | Method and system for monitoring emotional state changes |
US7486764B2 (en) | 2007-01-23 | 2009-02-03 | General Electric Company | Method and apparatus to reduce charge sharing in pixellated energy discriminating detectors |
US20110263946A1 (en) | 2010-04-22 | 2011-10-27 | Mit Media Lab | Method and system for real-time and offline analysis, inference, tagging of and responding to person(s) experiences |
- JP5645079B2 (ja) * | 2011-03-31 | 2014-12-24 | Sony Corp | Image processing apparatus and method, program, and recording medium |
US9062978B2 (en) * | 2011-05-31 | 2015-06-23 | Massachusetts Institute Of Technology | Tracking a body by nonlinear and non-Gaussian parametric filtering |
EP2718864A4 (en) | 2011-06-09 | 2016-06-29 | Univ Wake Forest Health Sciences | AGENT-BASED BRAIN MODEL AND RELATED METHODS |
US9384448B2 (en) * | 2011-12-28 | 2016-07-05 | General Electric Company | Action-based models to identify learned tasks |
US9076563B2 (en) | 2013-06-03 | 2015-07-07 | Zhengrong Ying | Anti-scatter collimators for detector systems of multi-slice X-ray computed tomography systems |
US9955124B2 (en) * | 2013-06-21 | 2018-04-24 | Hitachi, Ltd. | Sensor placement determination device and sensor placement determination method |
- CN103876767B (zh) | 2013-12-19 | 2017-04-12 | Shenyang Neusoft Medical Systems Co., Ltd. | CT machine and X-ray collimator thereof |
- CN103716324B (zh) | 2013-12-31 | 2017-04-12 | Chongqing University of Posts and Telecommunications | Multi-agent-based virtual mine risk behavior realization system and method |
US10335091B2 (en) * | 2014-03-19 | 2019-07-02 | Tactonic Technologies, Llc | Method and apparatus to infer object and agent properties, activity capacities, behaviors, and intents from contact and pressure images |
US9582496B2 (en) | 2014-11-03 | 2017-02-28 | International Business Machines Corporation | Facilitating a meeting using graphical text analysis |
- JP6671248B2 (ja) * | 2016-06-08 | 2020-03-25 | Hitachi, Ltd. | Abnormality candidate information analysis device |
2016
- 2016-12-06 US US15/370,736 patent/US10216983B2/en active Active
2017
- 2017-11-22 JP JP2017224121A patent/JP2018139100A/ja active Pending
- 2017-11-22 CA CA2986406A patent/CA2986406A1/en active Pending
- 2017-12-01 EP EP17204871.2A patent/EP3333764A1/en not_active Ceased
- 2017-12-06 CN CN201711281197.0A patent/CN108154236A/zh active Pending
Non-Patent Citations (1)
Title |
---|
KANG HOON LEE and 3 others: "Group Behavior from Video: A Data-Driven Approach to Crowd Simulation", SCA '07: PROCEEDINGS OF THE 2007 ACM SIGGRAPH/EUROGRAPHICS SYMPOSIUM ON COMPUTER ANIMATION, JPN7022000042, August 2007 (2007-08-01), pages 109 - 118, XP001510743, ISSN: 0004874716 *
Also Published As
Publication number | Publication date |
---|---|
US10216983B2 (en) | 2019-02-26 |
CN108154236A (zh) | 2018-06-12 |
CA2986406A1 (en) | 2018-06-06 |
US20180157902A1 (en) | 2018-06-07 |
EP3333764A1 (en) | 2018-06-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10216983B2 (en) | Techniques for assessing group level cognitive states | |
EP3333762A1 (en) | Crowd analytics via one shot learning | |
US10503978B2 (en) | Spatio-temporal interaction network for learning object interactions | |
Bai et al. | Predicting the Visual Focus of Attention in Multi-Person Discussion Videos. | |
EP3884426B1 (en) | Action classification in video clips using attention-based neural networks | |
CN110826453A (zh) | 一种通过提取人体关节点坐标的行为识别方法 | |
Zhuang et al. | Group activity recognition with differential recurrent convolutional neural networks | |
WO2021194372A1 (en) | Methods and systems for managing meeting notes | |
Li et al. | Facial expression-based analysis on emotion correlations, hotspots, and potential occurrence of urban crimes | |
Baig et al. | Crowd emotion detection using dynamic probabilistic models | |
Jain et al. | State-of-the-arts violence detection using ConvNets | |
Dotti et al. | Behavior and personality analysis in a nonsocial context dataset | |
Dhiraj et al. | Activity recognition for indoor fall detection in 360-degree videos using deep learning techniques | |
Baig et al. | Perception of emotions from crowd dynamics | |
KR102564300B1 (ko) | 체온 행동 패턴을 이용한 학교 폭력 예방 시스템 | |
Yang et al. | Automatic aggression detection inside trains | |
Baig et al. | Bio-inspired probabilistic model for crowd emotion detection | |
Usman | Anomalous crowd behavior detection in time varying motion sequences | |
Mocanu et al. | Human activity recognition with convolution neural network using tiago robot | |
Zhang et al. | Automatic construction and extraction of sports moment feature variables using artificial intelligence | |
Dichwalkar et al. | Activity recognition and fall detection in elderly people | |
Zachariah et al. | Review on vision based human motion detection using deep learning | |
Gorodnichy et al. | PROVE-IT (FRiV): framework and results | |
Alkittawi | A deep-learning-based fall-detection system to support aging-in-place | |
Baraka et al. | An Intelligent Criminal Detection System: A Case Study of Beni-town |
Legal Events
Date | Code | Title | Description
---|---|---|---
2019-08-06 | RD02 | Notification of acceptance of power of attorney | Free format text: JAPANESE INTERMEDIATE CODE: A7422
2020-11-17 | A621 | Written request for application examination | Free format text: JAPANESE INTERMEDIATE CODE: A621
2021-11-26 | A977 | Report on retrieval | Free format text: JAPANESE INTERMEDIATE CODE: A971007
2022-01-19 | A131 | Notification of reasons for refusal | Free format text: JAPANESE INTERMEDIATE CODE: A131
2022-04-12 | A521 | Request for written amendment filed | Free format text: JAPANESE INTERMEDIATE CODE: A523
2022-09-14 | A02 | Decision of refusal | Free format text: JAPANESE INTERMEDIATE CODE: A02