JP6357421B2 - Object recognition device, classification tree learning device, and operation method thereof - Google Patents
Object recognition device, classification tree learning device, and operation method thereof
- Publication number
- JP6357421B2 JP2014552125A
- Authority
- JP
- Japan
- Prior art keywords
- object part
- classification tree
- visible
- hidden
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims description 18
- 238000004458 analytical method Methods 0.000 claims description 44
- 238000012545 processing Methods 0.000 claims description 24
- 239000000284 extract Substances 0.000 claims 1
- 238000012549 training Methods 0.000 description 24
- 230000033001 locomotion Effects 0.000 description 16
- 238000004364 calculation method Methods 0.000 description 11
- 238000006243 chemical reaction Methods 0.000 description 9
- 238000010586 diagram Methods 0.000 description 8
- 230000006870 function Effects 0.000 description 8
- 230000008901 benefit Effects 0.000 description 7
- 238000005516 engineering process Methods 0.000 description 5
- 230000005540 biological transmission Effects 0.000 description 3
- 238000000354 decomposition reaction Methods 0.000 description 3
- 210000003811 finger Anatomy 0.000 description 3
- 238000005266 casting Methods 0.000 description 2
- 210000004247 hand Anatomy 0.000 description 2
- 230000002452 interceptive effect Effects 0.000 description 2
- 230000008569 process Effects 0.000 description 2
- 238000007637 random forest analysis Methods 0.000 description 2
- 230000000007 visual effect Effects 0.000 description 2
- 241001465754 Metazoa Species 0.000 description 1
- 238000013398 bayesian method Methods 0.000 description 1
- 230000037237 body shape Effects 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 230000002526 effect on cardiovascular system Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 230000036541 health Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 210000003205 muscle Anatomy 0.000 description 1
- 210000000653 nervous system Anatomy 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 210000002356 skeleton Anatomy 0.000 description 1
- 238000012360 testing method Methods 0.000 description 1
- 210000003813 thumb Anatomy 0.000 description 1
- 210000001835 viscera Anatomy 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2134—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on separation criteria, e.g. independent component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/24323—Tree-organised classifiers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/24765—Rule-based classification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/06—Ray-tracing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/08—Volume rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/40—Hidden part removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/005—Tree description, e.g. octree, quadtree
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/422—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation for representing the structure of the pattern or shape of an object therefor
- G06V10/426—Graphical representations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/24—Indexing scheme for image data processing or generation, in general involving graphical user interfaces [GUIs]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Computer Graphics (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Geometry (AREA)
- Computing Systems (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Human Computer Interaction (AREA)
- Medical Informatics (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Description
(for example, when Δ is 0.5)
ii. When the level of the classification tree is greater than or equal to a reference value
(for example, when the level of the classification tree is 25 or higher)
iii. When the amount of visible object part data and hidden object part data is less than or equal to a reference value
(for example, when the number of voxels belonging to the data is 10 or fewer)
When the current node satisfies a stopping criterion, the classification tree generation unit 746 may determine the current node to be a leaf node and end the learning operation for the corresponding data set. When hidden object parts are overlapped several times behind a visible object part, the classification tree generation unit 746 may generate a plurality of histograms for the hidden object parts within a single leaf node. In other words, when there are several overlapped parts, there are as many hidden object parts as there are overlaps; by generating a plurality of histograms for the hidden object parts within a single leaf node, the classification tree generation unit 746 may store information about each of the plurality of hidden object parts.
denotes the element-wise multiplication operator in two dimensions.
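The stopping criteria and the multi-histogram leaf behaviour described above can be sketched in Python roughly as follows. All names (TreeNode, should_stop, make_leaf, the voxel attributes visible_label and hidden_labels) and the use of Counter objects as histograms are illustrative assumptions, not the patented implementation; the reference values only mirror the examples given in the description.

```python
from collections import Counter

# Reference values taken from the examples in the description
# (design choices for the sketch, not values fixed by the patent text).
MAX_LEVEL = 25      # stop when the tree level reaches the reference value
MIN_VOXELS = 10     # stop when the data set holds too few voxels
DELTA_MIN = 0.5     # stop when the gain delta falls to the reference value

class TreeNode:
    """A node of the classification tree (hypothetical structure)."""
    def __init__(self):
        self.is_leaf = False
        self.visible_histogram = None   # histogram over visible object parts
        self.hidden_histograms = []     # one histogram per overlapped hidden layer

def should_stop(delta, level, voxels):
    """Return True when any stopping criterion from the description is met."""
    return delta <= DELTA_MIN or level >= MAX_LEVEL or len(voxels) <= MIN_VOXELS

def make_leaf(node, voxels):
    """Turn the current node into a leaf node.

    Each voxel is assumed to carry a visible part label and a list of hidden
    part labels ordered front-to-back, so that a leaf can keep one histogram
    per overlapped hidden layer (several histograms in a single leaf node).
    """
    node.is_leaf = True
    node.visible_histogram = Counter(v.visible_label for v in voxels)

    max_layers = max((len(v.hidden_labels) for v in voxels), default=0)
    for layer in range(max_layers):
        node.hidden_histograms.append(
            Counter(v.hidden_labels[layer]
                    for v in voxels if len(v.hidden_labels) > layer))
```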
Claims (12)
- An object recognition device comprising:
an input unit configured to receive a depth image of an analysis target;
a processing unit configured to recognize, from the depth image, a visible object part of the analysis target and a hidden object part of the analysis target using a classification tree; and
a volume construction unit configured to construct a volume of the analysis target in a single data space using the recognized visible object part and the recognized hidden object part,
wherein the volume construction unit constructs the volume based on relative depth values stored in leaf nodes of the classification tree, and the relative depth values include differences between depth values of the recognized visible object part and depth values of the recognized hidden object part.
- The object recognition device of claim 1, wherein the processing unit extracts additional information about the analysis target based on the volume.
- The object recognition device of claim 2, wherein the additional information includes information about at least one of a shape, a pose, key joints, and a structure of the analysis target.
- The object recognition device of claim 1, wherein the volume construction unit constructs the volume using relative depth values stored in the leaf nodes of the classification tree, and
the relative depth values indicate differences between depth values for the recognized visible object part and depth values for the recognized hidden object part.
- The object recognition device of any one of claims 1 to 4, wherein the processing unit:
inputs the depth image into the classification tree;
when a current node of the classification tree is a split node, reads a feature value and a threshold stored in the split node, inputs the feature value and the threshold into a split function to compute a result value, and searches one of a left child node and a right child node of the current node based on the computed result value; and
when the current node is a leaf node, reads a first histogram for the visible object part and a second histogram for the hidden object part stored in the leaf node, recognizes the visible object part from the depth image based on the first histogram, and recognizes the hidden object part from the depth image based on the second histogram.
- The object recognition device of claim 5, wherein the processing unit searches the left child node when the computed result value is smaller than the threshold, and searches the right child node when the computed result value is equal to or greater than the threshold.
- The object recognition device of any one of claims 1 to 6, further comprising a size adjustment unit configured to adjust at least one of a width and a height of an object model for the analysis target.
- The object recognition device of any one of claims 1 to 7, wherein the classification tree includes probability values of the visible object part and probability values of the hidden object part.
- The object recognition device of any one of claims 1 to 7, wherein the classification tree includes relative depth values of the visible object part and the hidden object part.
- The object recognition device of any one of claims 1 to 7, wherein the classification tree represents at least a portion of the hidden object part in a plurality of layers.
- An operation method of an object recognition device, the method comprising:
receiving a depth image of an analysis target;
recognizing, from the depth image, a visible object part of the analysis target and a hidden object part of the analysis target using a classification tree; and
constructing a volume of the analysis target in a single data space using the recognized visible object part and the recognized hidden object part,
wherein the constructing of the volume constructs the volume based on relative depth values stored in leaf nodes of the classification tree, and the relative depth values include differences between depth values of the recognized visible object part and depth values of the recognized hidden object part.
- A computer-readable recording medium having a program recorded thereon, wherein the program, when executed by a processor, performs the method of claim 11.
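As a rough illustration of the recognition-time behaviour recited in claims 5 and 6, and of the relative-depth reconstruction in claims 1 and 4, the following Python sketch walks a single depth-image pixel down a trained tree and derives hidden-part depths from the stored relative depth values. The node fields (split_function, feature, threshold, left, right, visible_histogram, hidden_histograms, relative_depths) and the pixel indexing are hypothetical placeholders, not the claimed implementation.

```python
def classify_pixel(root, pixel, depth_image):
    """Walk one depth-image pixel down the classification tree.

    At a split node, the stored feature value and threshold are fed into the
    split function, and the result decides whether the left or the right child
    node is searched next (claims 5 and 6). At a leaf node, the stored
    histograms give the visible and hidden object parts, and the stored
    relative depth values place each hidden layer behind the visible surface.
    """
    node = root
    while not node.is_leaf:
        result = node.split_function(pixel, depth_image,
                                     node.feature, node.threshold)
        # Left child if the result is smaller than the threshold,
        # right child if it is equal to or greater than the threshold.
        node = node.left if result < node.threshold else node.right

    visible_part = max(node.visible_histogram,
                       key=node.visible_histogram.get)
    hidden_parts = [max(h, key=h.get) for h in node.hidden_histograms]

    visible_depth = depth_image[pixel]
    # Relative depth value = difference between the hidden-part depth and the
    # visible-part depth, so the hidden depth is recovered by adding it back.
    hidden_depths = [visible_depth + d for d in node.relative_depths]
    return visible_part, hidden_parts, hidden_depths
```

In a random-forest setting, per-tree outputs of this kind would typically be aggregated over all trees before the final visible and hidden part labels are chosen.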
Applications Claiming Priority (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2012-0003585 | 2012-01-11 | ||
KR20120003585 | 2012-01-11 | ||
KR20120006181 | 2012-01-19 | ||
KR10-2012-0006181 | 2012-01-19 | ||
KR1020120106183A KR101919831B1 (ko) | 2012-01-11 | 2012-09-25 | 오브젝트 인식 장치, 분류 트리 학습 장치 및 그 동작 방법 |
KR10-2012-0106183 | 2012-09-25 | ||
PCT/KR2013/000174 WO2013105783A1 (ko) | 2012-01-11 | 2013-01-09 | 오브젝트 인식 장치, 분류 트리 학습 장치 및 그 동작 방법 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2017146234A Division JP6522060B2 (ja) | 2012-01-11 | 2017-07-28 | オブジェクト認識装置、分類ツリー学習装置及びその動作方法 |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2015505108A JP2015505108A (ja) | 2015-02-16 |
JP6357421B2 true JP6357421B2 (ja) | 2018-07-11 |
Family
ID=48993724
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2014552125A Active JP6357421B2 (ja) | 2012-01-11 | 2013-01-09 | オブジェクト認識装置、分類ツリー学習装置及びその動作方法 |
JP2017146234A Active JP6522060B2 (ja) | 2012-01-11 | 2017-07-28 | オブジェクト認識装置、分類ツリー学習装置及びその動作方法 |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2017146234A Active JP6522060B2 (ja) | 2012-01-11 | 2017-07-28 | オブジェクト認識装置、分類ツリー学習装置及びその動作方法 |
Country Status (6)
Country | Link |
---|---|
US (3) | US9508152B2 (ja) |
EP (1) | EP2804111B1 (ja) |
JP (2) | JP6357421B2 (ja) |
KR (1) | KR101919831B1 (ja) |
CN (1) | CN103890752B (ja) |
WO (1) | WO2013105783A1 (ja) |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10474793B2 (en) * | 2013-06-13 | 2019-11-12 | Northeastern University | Systems, apparatus and methods for delivery and augmentation of behavior modification therapy and teaching |
US9952042B2 (en) | 2013-07-12 | 2018-04-24 | Magic Leap, Inc. | Method and system for identifying a user location |
US9600897B2 (en) * | 2013-10-31 | 2017-03-21 | Nec Corporation | Trajectory features and distance metrics for hierarchical video segmentation |
KR102271853B1 (ko) | 2014-03-21 | 2021-07-01 | 삼성전자주식회사 | 전자 장치, 영상 처리 방법, 및 컴퓨터 판독가능 기록매체 |
US9842274B2 (en) * | 2014-03-28 | 2017-12-12 | Xerox Corporation | Extending data-driven detection to the prediction of object part locations |
KR101982258B1 (ko) * | 2014-09-19 | 2019-05-24 | 삼성전자주식회사 | 오브젝트 검출 방법 및 오브젝트 검출 장치 |
CN105760390B (zh) * | 2014-12-17 | 2021-09-28 | 富泰华工业(深圳)有限公司 | 图片检索系统及方法 |
US9471836B1 (en) * | 2016-04-01 | 2016-10-18 | Stradvision Korea, Inc. | Method for learning rejector by forming classification tree in use of training images and detecting object in test images, and rejector using the same |
US10373319B2 (en) | 2016-06-13 | 2019-08-06 | International Business Machines Corporation | Object tracking with a holographic projection |
WO2018022011A1 (en) * | 2016-07-26 | 2018-02-01 | Hewlett-Packard Development Company, L.P. | Indexing voxels for 3d printing |
WO2018128424A1 (ko) * | 2017-01-04 | 2018-07-12 | 가이아쓰리디 주식회사 | 3차원 지리 정보 시스템 웹 서비스를 제공하는 방법 |
CN110945537B (zh) * | 2017-07-28 | 2023-09-22 | 索尼互动娱乐股份有限公司 | 训练装置、识别装置、训练方法、识别方法和程序 |
KR102440385B1 (ko) * | 2017-11-28 | 2022-09-05 | 영남대학교 산학협력단 | 멀티 인식모델의 결합에 의한 행동패턴 인식방법 및 장치 |
CN108154104B (zh) * | 2017-12-21 | 2021-10-15 | 北京工业大学 | 一种基于深度图像超像素联合特征的人体姿态估计方法 |
KR101862677B1 (ko) * | 2018-03-06 | 2018-05-31 | (주)휴톰 | 3차원 탄성 모델 렌더링 방법, 장치 및 프로그램 |
US11127189B2 (en) | 2018-02-23 | 2021-09-21 | Canon Kabushiki Kaisha | 3D skeleton reconstruction from images using volumic probability data |
GB2571307B (en) * | 2018-02-23 | 2020-10-21 | Canon Kk | 3D skeleton reconstruction from images using volumic probability data |
US10650233B2 (en) * | 2018-04-25 | 2020-05-12 | International Business Machines Corporation | Identifying discrete elements of a composite object |
US11423615B1 (en) * | 2018-05-29 | 2022-08-23 | HL Acquisition, Inc. | Techniques for producing three-dimensional models from one or more two-dimensional images |
KR101949727B1 (ko) * | 2018-07-02 | 2019-02-19 | 한화시스템 주식회사 | 객체간 링크 생성 시스템 및 이의 동작 방법 |
US11335027B2 (en) | 2018-09-28 | 2022-05-17 | Hewlett-Packard Development Company, L.P. | Generating spatial gradient maps for a person in an image |
KR102280201B1 (ko) * | 2018-11-23 | 2021-07-21 | 주식회사 스칼라웍스 | 머신 러닝을 이용하여 은닉 이미지를 추론하는 방법 및 장치 |
WO2020242047A1 (en) * | 2019-05-30 | 2020-12-03 | Samsung Electronics Co., Ltd. | Method and apparatus for acquiring virtual object data in augmented reality |
US11651621B2 (en) * | 2019-10-23 | 2023-05-16 | Samsung Electronics Co., Ltd. | Electronic device and method for controlling the electronic device |
US11741670B2 (en) * | 2021-03-01 | 2023-08-29 | Samsung Electronics Co., Ltd. | Object mesh based on a depth image |
US11785196B2 (en) * | 2021-09-28 | 2023-10-10 | Johnson Controls Tyco IP Holdings LLP | Enhanced three dimensional visualization using artificial intelligence |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0928460B1 (en) * | 1997-07-29 | 2003-01-29 | Philips Electronics N.V. | Method of reconstruction of tridimensional scenes and corresponding reconstruction device and decoding system |
US7536044B2 (en) | 2003-11-19 | 2009-05-19 | Siemens Medical Solutions Usa, Inc. | System and method for detecting and matching anatomical structures using appearance and shape |
JP2005165791A (ja) | 2003-12-03 | 2005-06-23 | Fuji Xerox Co Ltd | 対象物の追跡方法及び追跡システム |
US20070053563A1 (en) | 2005-03-09 | 2007-03-08 | Zhuowen Tu | Probabilistic boosting tree framework for learning discriminative models |
JP4767718B2 (ja) | 2006-02-24 | 2011-09-07 | 富士フイルム株式会社 | 画像処理方法および装置ならびにプログラム |
US20090002489A1 (en) | 2007-06-29 | 2009-01-01 | Fuji Xerox Co., Ltd. | Efficient tracking multiple objects through occlusion |
US7925081B2 (en) | 2007-12-12 | 2011-04-12 | Fuji Xerox Co., Ltd. | Systems and methods for human body pose estimation |
US9165199B2 (en) * | 2007-12-21 | 2015-10-20 | Honda Motor Co., Ltd. | Controlled human pose estimation from depth image streams |
KR101335346B1 (ko) * | 2008-02-27 | 2013-12-05 | 소니 컴퓨터 엔터테인먼트 유럽 리미티드 | 장면의 심도 데이터를 포착하고, 컴퓨터 액션을 적용하기 위한 방법들 |
KR20090093119A (ko) | 2008-02-28 | 2009-09-02 | 홍익대학교 산학협력단 | 움직이는 객체 추적을 위한 다중 정보의 융합 방법 |
JP4889668B2 (ja) | 2008-03-05 | 2012-03-07 | 三菱電機株式会社 | 物体検出装置 |
JP5212007B2 (ja) * | 2008-10-10 | 2013-06-19 | 株式会社リコー | 画像分類学習装置、画像分類学習方法、および画像分類学習システム |
EP2249292A1 (en) | 2009-04-03 | 2010-11-10 | Siemens Aktiengesellschaft | Decision making mechanism, method, module, and robot configured to decide on at least one prospective action of the robot |
KR101109568B1 (ko) | 2009-04-13 | 2012-01-31 | 한양대학교 산학협력단 | 행동유발성 확률모델을 이용한 로봇의 행동 선택 방법 |
US9182814B2 (en) * | 2009-05-29 | 2015-11-10 | Microsoft Technology Licensing, Llc | Systems and methods for estimating a non-visible or occluded body part |
CN101989326B (zh) * | 2009-07-31 | 2015-04-01 | 三星电子株式会社 | 人体姿态识别方法和装置 |
JP2011059898A (ja) | 2009-09-08 | 2011-03-24 | Fujifilm Corp | 画像解析装置、画像解析方法およびプログラム |
US8665268B2 (en) | 2009-09-22 | 2014-03-04 | Siemens Aktiengesellschaft | Image data and annotation processing system |
KR101068465B1 (ko) | 2009-11-09 | 2011-09-28 | 한국과학기술원 | 삼차원 물체 인식 시스템 및 방법 |
US8446492B2 (en) | 2009-12-10 | 2013-05-21 | Honda Motor Co., Ltd. | Image capturing device, method of searching for occlusion region, and program |
KR101671488B1 (ko) | 2009-12-18 | 2016-11-01 | 에스케이텔레콤 주식회사 | 문맥상 사라진 특징점의 복원을 통한 물체 인식 방법 |
KR101077788B1 (ko) | 2010-01-18 | 2011-10-28 | 한국과학기술원 | 이미지 내의 물체 인식 방법 및 장치 |
EP2383696A1 (en) | 2010-04-30 | 2011-11-02 | LiberoVision AG | Method for estimating a pose of an articulated object model |
EP2386998B1 (en) | 2010-05-14 | 2018-07-11 | Honda Research Institute Europe GmbH | A Two-Stage Correlation Method for Correspondence Search |
US8625897B2 (en) * | 2010-05-28 | 2014-01-07 | Microsoft Corporation | Foreground and background image segmentation |
KR20110133677A (ko) * | 2010-06-07 | 2011-12-14 | 삼성전자주식회사 | 3d 영상 처리 장치 및 그 방법 |
- 2012
  - 2012-09-25 KR KR1020120106183A patent/KR101919831B1/ko active IP Right Grant
- 2013
  - 2013-01-09 JP JP2014552125A patent/JP6357421B2/ja active Active
  - 2013-01-09 CN CN201380003629.5A patent/CN103890752B/zh active Active
  - 2013-01-09 EP EP13735721.6A patent/EP2804111B1/en active Active
  - 2013-01-09 US US14/345,563 patent/US9508152B2/en active Active
  - 2013-01-09 WO PCT/KR2013/000174 patent/WO2013105783A1/ko active Application Filing
- 2016
  - 2016-10-18 US US15/296,624 patent/US10163215B2/en active Active
- 2017
  - 2017-07-28 JP JP2017146234A patent/JP6522060B2/ja active Active
- 2018
  - 2018-12-18 US US16/223,580 patent/US10867405B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
WO2013105783A1 (ko) | 2013-07-18 |
US9508152B2 (en) | 2016-11-29 |
US10867405B2 (en) | 2020-12-15 |
CN103890752A (zh) | 2014-06-25 |
KR20130082425A (ko) | 2013-07-19 |
JP6522060B2 (ja) | 2019-05-29 |
US20170039720A1 (en) | 2017-02-09 |
JP2017208126A (ja) | 2017-11-24 |
US20190122385A1 (en) | 2019-04-25 |
US10163215B2 (en) | 2018-12-25 |
US20150023557A1 (en) | 2015-01-22 |
EP2804111A1 (en) | 2014-11-19 |
EP2804111B1 (en) | 2020-06-24 |
CN103890752B (zh) | 2017-05-10 |
JP2015505108A (ja) | 2015-02-16 |
KR101919831B1 (ko) | 2018-11-19 |
EP2804111A4 (en) | 2016-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6357421B2 (ja) | Object recognition device, classification tree learning device, and operation method thereof | |
Kamel et al. | Deep convolutional neural networks for human action recognition using depth maps and postures | |
CN103718175B (zh) | 检测对象姿势的设备、方法和介质 | |
CN107155360B (zh) | 用于对象检测的多层聚合 | |
Ar et al. | A computerized recognition system for the home-based physiotherapy exercises using an RGBD camera | |
Packer et al. | A combined pose, object, and feature model for action understanding | |
US9058663B2 (en) | Modeling human-human interactions for monocular 3D pose estimation | |
CN110073369A (zh) | 时间差分模型的无监督学习技术 | |
Kumar et al. | Indian sign language recognition using graph matching on 3D motion captured signs | |
Vejdemo-Johansson et al. | Cohomological learning of periodic motion | |
Papadopoulos et al. | Human action recognition using 3d reconstruction data | |
Haggag et al. | Semantic body parts segmentation for quadrupedal animals | |
Kumar et al. | Early estimation model for 3D-discrete indian sign language recognition using graph matching | |
Muhamada et al. | Review on recent computer vision methods for human action recognition | |
WO2024183454A1 (zh) | Virtual object animation generation method and apparatus, electronic device, computer-readable storage medium, and computer program product | |
Yi et al. | Generating Human Interaction Motions in Scenes with Text Control | |
US11361467B2 (en) | Pose selection and animation of characters using video data and training techniques | |
Devanne | 3d human behavior understanding by shape analysis of human motion and pose | |
Neskorodieva et al. | Real-time Classification, Localization and Tracking System (Based on Rhythmic Gymnastics) | |
US20240307783A1 (en) | Plotting behind the scenes with learnable game engines | |
Tong | Cross-modal learning from visual information for activity recognition on inertial sensors | |
Ray et al. | PressureTransferNet: Human Attribute Guided Dynamic Ground Pressure Profile Transfer using 3D simulated Pressure Maps | |
Zhao | Video Understanding: A Predictive Analytics Perspective | |
Zong | Evaluation of Training Dataset and Neural Network Architectures for Hand Pose Estimation in Real Time | |
Okechukwu et al. | A Less Convoluted Approach to 3D Pose Estimation |
Legal Events
Date | Code | Title | Description
---|---|---|---
| A621 | Written request for application examination | Free format text: JAPANESE INTERMEDIATE CODE: A621; Effective date: 20160105
| A977 | Report on retrieval | Free format text: JAPANESE INTERMEDIATE CODE: A971007; Effective date: 20161130
| A131 | Notification of reasons for refusal | Free format text: JAPANESE INTERMEDIATE CODE: A131; Effective date: 20161213
| A521 | Request for written amendment filed | Free format text: JAPANESE INTERMEDIATE CODE: A523; Effective date: 20170309
| A02 | Decision of refusal | Free format text: JAPANESE INTERMEDIATE CODE: A02; Effective date: 20170328
| A521 | Request for written amendment filed | Free format text: JAPANESE INTERMEDIATE CODE: A523; Effective date: 20170728
| A911 | Transfer to examiner for re-examination before appeal (zenchi) | Free format text: JAPANESE INTERMEDIATE CODE: A911; Effective date: 20170807
| A912 | Re-examination (zenchi) completed and case transferred to appeal board | Free format text: JAPANESE INTERMEDIATE CODE: A912; Effective date: 20170901
| A61 | First payment of annual fees (during grant procedure) | Free format text: JAPANESE INTERMEDIATE CODE: A61; Effective date: 20180618
| R150 | Certificate of patent or registration of utility model | Ref document number: 6357421; Country of ref document: JP; Free format text: JAPANESE INTERMEDIATE CODE: R150
| R250 | Receipt of annual fees | Free format text: JAPANESE INTERMEDIATE CODE: R250
| R250 | Receipt of annual fees | Free format text: JAPANESE INTERMEDIATE CODE: R250
| R250 | Receipt of annual fees | Free format text: JAPANESE INTERMEDIATE CODE: R250
| R250 | Receipt of annual fees | Free format text: JAPANESE INTERMEDIATE CODE: R250