WO2020062760A1 - A Motion Capture System and Method - Google Patents

A Motion Capture System and Method

Info

Publication number
WO2020062760A1
Authority
WO
WIPO (PCT)
Prior art keywords
infrared
image
infrared energy
motion
motion capture
Prior art date
Application number
PCT/CN2019/075208
Other languages
English (en)
French (fr)
Inventor
覃旭
佟令文
蒋丽梅
Original Assignee
深圳市中视典数字科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市中视典数字科技有限公司
Publication of WO2020062760A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/10 Image acquisition
    • G06V 10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V 10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V 10/143 Sensing or illuminating at different wavelengths
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/60 Type of objects
    • G06V 20/64 Three-dimensional objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition

Definitions

  • The invention relates to the technical field of gesture recognition, and in particular to a motion capture system and method.
  • Motion capture, also called dynamic capture, refers to the technology of recording and processing the movements of people or other objects. A typical system includes capture cameras, special capture suits, and reflective capture balls, and is usually equipped with dedicated motion capture software for system setup, capture process control, editing of the captured data, output, and so on. After the system has been set up in a dedicated capture environment such as a studio or warehouse, reflective-ball capture points are attached to the actor's head, knees, and other joints, and capture can begin. The actor performs as directed, and the data from the reflective balls is captured by the cameras and stored in the control computer in real time. The actor usually performs several sets of actions; the system operator edits and repairs the raw data before outputting it to mainstream 3D software such as Maya, 3ds Max, Softimage, XSI, or MotionBuilder, where animators use the motion data to drive the corresponding skeletal nodes of a 3D model and so form the action details. The problem with this approach is that the reflective balls change their reflective behavior with the amplitude of the motion, so individual balls may fail to be recognized by the cameras and the overall action details cannot be obtained properly.
  • The present invention aims to solve, at least to some extent, one of the technical problems in the related art. To this end, an object of the present invention is to provide a motion capture system and method.
  • The technical solution adopted by the present invention is a motion capture system comprising an infrared sensor group, an acquisition area, and a processing unit, wherein the acquisition area contains the capture object and the surrounding environment; the infrared sensor group measures the infrared energy of the acquisition area, and an infrared image is generated according to the distribution of the infrared energy; the processing unit processes the infrared image with an artificial-intelligence deep learning model to obtain the motion information of the captured object.
  • Preferably, the infrared sensors measure the infrared energy released by the capture object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency lying in the range of 60 FPS to 1000 FPS.
  • Preferably, the infrared sensors are used to measure the infrared energy of infrared light within a certain wavelength range, the range being 12 μm ± an error value.
  • Preferably, the infrared sensor group includes at least two infrared sensors, which determine the infrared image of the captured object according to the difference between the infrared energy of the captured object and that of the surrounding environment.
  • Preferably, the artificial-intelligence deep learning model includes an image template, a binocular ranging algorithm, and an optimization algorithm, wherein the processing unit constructs a skeleton image from the infrared image and the image template and records the corresponding skeleton images over time; the processing unit processes the skeleton images with the binocular ranging algorithm to form a depth image; and the processing unit processes the depth image with the optimization algorithm to obtain three-dimensional skeletal-point motion parameters, which are marked as the motion information (a minimal code sketch of this chain is given after this list).
  • Another technical solution adopted by the present invention is a motion capture method suitable for the above system, comprising the steps of: measuring the infrared energy of the acquisition area and generating an infrared image according to the distribution of the infrared energy; and processing the infrared image with an artificial-intelligence deep learning model to obtain the motion information of the captured object.
  • Preferably, the infrared sensors measure the infrared energy released by the capture object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency lying in the range of 60 FPS to 1000 FPS.
  • Preferably, the infrared energy of infrared light within a certain wavelength range of the acquisition area is measured, the range being 12 μm ± an error value.
  • Preferably, at least two infrared sensors are provided, which determine the infrared image of the captured object according to the difference between the infrared energy of the captured object and that of the surrounding environment.
  • Preferably, the step of processing the infrared image with the artificial-intelligence deep learning model includes: constructing a skeleton image from the infrared image and a preset image template and recording the corresponding skeleton images over time; processing the skeleton images with a binocular ranging algorithm to form a depth image; and processing the depth image with a preset optimization algorithm to obtain three-dimensional skeletal-point motion parameters, which are marked as the motion information.
  • The invention measures the infrared energy of the acquisition area with an infrared sensor group and generates an infrared image according to the distribution of the infrared energy; the processing unit processes the infrared image with the artificial-intelligence deep learning model to obtain the motion information of the captured object, so that reasonable skeletal positions can be inferred from the infrared distribution and the motion can be recognized and recorded.
  • FIG. 1 is a schematic diagram of a motion capture method according to the present invention
  • FIG. 2 is a schematic diagram of a motion capture system according to the present invention.
  • The infrared image is processed with the artificial-intelligence deep learning model to obtain the motion information of the captured object.
  • The infrared energy of the acquisition area is measured by a thermal imaging camera (its principle is to convert the detected infrared energy into an electrical signal and then generate a thermal image and temperature values on a display, where the temperature values can be computed; for cost reasons, existing thermal imaging cameras do not and cannot acquire the infrared energy of only a single target exactly, so they acquire the infrared energy of a whole area). The background area (that is, the surrounding environment of the captured object) normally contains no organisms similar to the human body, so the infrared radiation released by the surroundings differs greatly from that released by the human body. Based on these differences it is easy to distinguish which regions of the infrared distribution belong to the person and which belong to the background, and the person's infrared-energy image is extracted separately to form the infrared image. Moreover, if the recognition capability of the thermal imaging camera is high enough to obtain a detailed distribution image of the infrared energy of different regions of the human body, an image very close to a human silhouette (i.e. the infrared image) can be formed.
  • The infrared image is processed with the artificial-intelligence deep learning model to obtain the motion information of the captured object. This model is a trained processing model for data processing and basically includes an image template, a binocular ranging algorithm, and an optimization algorithm; the combination of the infrared image and the model can output the motion information.
  • Image recognition and matching based on image templates is a common image-processing technique; its essence here is to process the image to obtain the contour of a human body and the distribution of the skeleton. The skeleton contains many joints, and at the connections of these joints (the junctions of two bones) various angles are formed; these angles and the lengths of the connecting segments (i.e. the bones) constitute the motion information.
  • The purpose of the optimization algorithm is to exclude adverse factors such as image noise, for example clothing and accessories (clothing also releases near-human infrared radiation because of body heat and therefore needs to be excluded); its essence is to judge, according to a threshold, the source of the infrared radiation (belonging to the body or to the clothing) and then exclude the infrared radiation that does not belong to the body.
  • The artificial-intelligence deep learning model mentioned in Embodiment 1 is essentially a feature set and decision procedure extracted after training on multiple data sets; therefore, the more parameters available for reference, the more accurate the result. For example, if the precise position of the target can be obtained, the data set corresponding to that position can be applied (data sets captured at different distances produce different effects, consistent with the principle of perspective), and the processing result obtained then matches reality best. Accordingly, at least two thermal imaging cameras (or other similar devices, such as commercially available binocular ranging sensors) are used to obtain images of the same target, and the binocular ranging method is then used to obtain the precise position of the target, i.e. a depth image covering different distances and different infrared-energy levels (for example, infrared sources at different distances show their infrared energy in different colors; likewise, different parts of the human body release different amounts of infrared energy, the head releasing very strong infrared radiation).
  • The infrared sensors measure the infrared energy released by the capture object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency lying in the range of 60 FPS to 1000 FPS.
  • The infrared energy of infrared light within a certain wavelength range of the acquisition area is measured, the range being 12 μm ± an error value.
  • These values performed well in actual testing and training. The acquisition frequency of the infrared sensors is able to meet the requirements for collecting and using human-motion data in video, imaging, and other respects.
  • 12 μm matches the range of wavelengths at which the human body releases infrared radiation; specifically, the range can be 12 μm ± an error value, where the error value is small (for example 1 μm). The purpose is to tolerate errors in the operation of the infrared sensors and improve the overall infrared-energy measurement.
  • The purpose of this embodiment is to provide a motion capture system as shown in FIG. 2, including an infrared sensor group 1, an acquisition area 2, and a processing unit 3, where the acquisition area contains the capture object 21 and the surrounding environment; the infrared energy of the acquisition area is measured by the infrared sensor group, and an infrared image is generated according to the distribution of the infrared energy; the processing unit processes the infrared image with the artificial-intelligence deep learning model to obtain the motion information of the captured object.
  • The infrared sensor group includes at least two infrared cameras (for binocular ranging), and the processing unit includes an ordinary PC equipped with several pieces of processing software for image processing.
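Read together, the items above describe one processing chain: extract the capture object from the infrared measurement, locate skeletal points with an image template, apply binocular ranging to obtain depth, and optimize the result into three-dimensional skeletal-point motion parameters. The sketch below is only a hypothetical, minimal illustration of that data flow; the function names, the NumPy-based representation, and the use of the hottest pixels as stand-in joint candidates are assumptions of this illustration and do not come from the patent.

```python
import numpy as np

def extract_subject(ir_frame, background_level):
    """Keep only pixels whose infrared energy clearly exceeds the background."""
    return np.where(ir_frame > background_level, ir_frame, 0.0)

def skeleton_points_2d(subject_ir, n_points=15):
    """Placeholder for the template-matching step: the n_points hottest
    pixels stand in for the joint candidates a real template would find."""
    flat = np.argsort(subject_ir, axis=None)[-n_points:]
    ys, xs = np.unravel_index(flat, subject_ir.shape)
    return np.stack([xs, ys], axis=1).astype(float)

def binocular_ranging(points_left, points_right, focal_px, baseline_m):
    """Sparse binocular ranging of matched joint points: z = f * B / disparity."""
    disparity = np.clip(points_left[:, 0] - points_right[:, 0], 1e-3, None)
    z = focal_px * baseline_m / disparity
    x = points_left[:, 0] * z / focal_px
    y = points_left[:, 1] * z / focal_px
    return np.stack([x, y, z], axis=1)  # three-dimensional skeletal points

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.uniform(20.0, 40.0, size=(120, 160))   # synthetic IR frames
    right = np.roll(left, -4, axis=1)                 # crude 4 px disparity
    pts_l = skeleton_points_2d(extract_subject(left, 35.0))
    pts_r = skeleton_points_2d(extract_subject(right, 35.0))
    skeleton_3d = binocular_ranging(pts_l, pts_r, focal_px=800.0, baseline_m=0.12)
    print(skeleton_3d.shape)                          # (15, 3) toy joint estimates
```

In a real system the template-matching and optimization stages would be replaced by the trained deep learning model that the description refers to.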

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Geophysics And Detection Of Objects (AREA)

Abstract

The invention discloses a motion capture system and method. The system includes an infrared sensor group, an acquisition area, and a processing unit, wherein the acquisition area contains the capture object and the surrounding environment; the infrared sensor group measures the infrared energy of the acquisition area, and an infrared image is generated according to the distribution of the infrared energy; the processing unit processes the infrared image with an artificial-intelligence deep learning model to obtain the motion information of the capture object. The method is suitable for the system. By measuring the infrared energy of the acquisition area with the infrared sensor group, generating an infrared image from its distribution, and processing the infrared image with the artificial-intelligence deep learning model to obtain the motion information of the capture object, the invention can infer reasonable skeletal positions from the infrared distribution and thereby recognize and record motion.

Description

A Motion Capture System and Method
Technical Field
The present invention relates to the technical field of gesture recognition, and in particular to a motion capture system and method.
Background Art
Motion capture, also called dynamic capture, refers to the technology of recording and processing the movements of people or other objects. A typical system includes capture cameras, special capture suits, and reflective capture balls, and is usually equipped with dedicated motion capture software for system setup, capture process control, editing of the captured data, output, and so on. After the system has been set up in a dedicated capture environment such as a studio or warehouse, reflective-ball capture points are attached to the actor's head, knees, and other joints, and capture can begin. The actor performs as directed by the director, and the data from the reflective balls is captured by the cameras and stored in the control computer in real time. The actor usually performs several sets of actions; the system operator edits and repairs the raw data and then outputs it to mainstream 3D software such as Maya, 3ds Max, Softimage, XSI, or MotionBuilder, where animators use the motion data to drive the corresponding skeletal nodes of a 3D model and so form the action details. The problem with this method is that, during actual movement, the reflective balls change their reflective behavior with the amplitude of the motion, so individual balls may fail to be recognized by the cameras and the overall action details cannot be obtained properly.
Summary of the Invention
The present invention aims to solve, at least to some extent, one of the technical problems in the related art. To this end, an object of the present invention is to provide a motion capture system and method.
The technical solution adopted by the present invention is a motion capture system, comprising an infrared sensor group, an acquisition area, and a processing unit, wherein the acquisition area contains the capture object and the surrounding environment; the infrared sensor group measures the infrared energy of the acquisition area, and an infrared image is generated according to the distribution of the infrared energy; the processing unit processes the infrared image with an artificial-intelligence deep learning model to obtain the motion information of the capture object.
Preferably, the infrared sensors measure the infrared energy released by the capture object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency lying in the range of 60 FPS to 1000 FPS.
Preferably, the infrared sensors are used to measure the infrared energy of infrared light within a certain wavelength range, the range being 12 μm ± an error value.
Preferably, the infrared sensor group includes at least two infrared sensors, which respectively determine the infrared image of the capture object according to the difference between the infrared energy of the capture object and that of the surrounding environment.
Preferably, the artificial-intelligence deep learning model includes an image template, a binocular ranging algorithm, and an optimization algorithm, wherein the processing unit constructs a skeleton image from the infrared image and the image template and records the corresponding skeleton images over time; the processing unit processes the skeleton images with the binocular ranging algorithm to form a depth image; and the processing unit processes the depth image with the optimization algorithm to obtain three-dimensional skeletal-point motion parameters, which are marked as the motion information.
Another technical solution adopted by the present invention is a motion capture method, suitable for the above system, comprising the steps of: measuring the infrared energy of the acquisition area and generating an infrared image according to the distribution of the infrared energy; and processing the infrared image with an artificial-intelligence deep learning model to obtain the motion information of the capture object.
Preferably, the infrared sensors measure the infrared energy released by the capture object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency lying in the range of 60 FPS to 1000 FPS.
Preferably, the infrared energy of infrared light within a certain wavelength range of the acquisition area is measured, the range being 12 μm ± an error value.
Preferably, at least two infrared sensors are provided, which respectively determine the infrared image of the capture object according to the difference between the infrared energy of the capture object and that of the surrounding environment.
Preferably, the step of processing the infrared image with the artificial-intelligence deep learning model includes: constructing a skeleton image from the infrared image and a preset image template and recording the corresponding skeleton images over time; processing the skeleton images with a binocular ranging algorithm to form a depth image; and processing the depth image with a preset optimization algorithm to obtain three-dimensional skeletal-point motion parameters, which are marked as the motion information.
The beneficial effects of the present invention are as follows:
The present invention measures the infrared energy of the acquisition area with an infrared sensor group and generates an infrared image according to the distribution of the infrared energy; the processing unit processes the infrared image with an artificial-intelligence deep learning model to obtain the motion information of the capture object, so that reasonable skeletal positions can be inferred from the infrared distribution and the motion can be recognized and recorded.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a motion capture method according to the present invention;
FIG. 2 is a schematic diagram of a motion capture system according to the present invention.
Detailed Description of the Embodiments
It should be noted that, provided there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.
Embodiment 1
The purpose of this embodiment is to explain the shortcomings of the prior art and the solution approach of the present invention.
In existing methods that place reflective balls or other markers on the human body, actual tests show that, because of the amplitude of human movement, reflective balls can end up in positions the cameras cannot see. The corresponding remedies, such as adding more cameras or providing reflective balls of several specifications to distinguish them from one another, place additional demands on both the hardware and the processing software and correspondingly increase the overall cost. This embodiment therefore provides a motion capture method, as shown in FIG. 1, comprising the steps of:
S1. measuring the infrared energy of the acquisition area and generating an infrared image according to the distribution of the infrared energy;
S2. processing the infrared image with an artificial-intelligence deep learning model to obtain the motion information of the capture object.
Here, the infrared energy of the acquisition area is measured by a thermal imaging camera (its principle is to convert the detected infrared energy into an electrical signal and then generate a thermal image and temperature values on a display, where the temperature values can be computed; for cost reasons, existing thermal imaging cameras do not and cannot acquire the infrared energy of only a single target exactly, so they all acquire the infrared energy of a whole area). The background area (that is, the surrounding environment of the capture object) is generally not arranged to contain organisms similar to the human body, so the infrared radiation released by the surroundings differs greatly from that released by the human body. Based on these differences it is easy to distinguish which regions of the infrared distribution belong to the person and which belong to the background; the person's infrared-energy image is then extracted separately to form the infrared image. Furthermore, if the recognition capability of the thermal imaging camera is high enough to obtain a detailed distribution image of the infrared energy of different regions of the human body, an image very close to a human silhouette (i.e. the infrared image) can be formed.
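As a hedged illustration of the extraction described in the paragraph above, the minimal sketch below separates the subject from the background using the difference between the measured energy and an empty-scene reference frame; the threshold, the temperature-like units, and the NumPy representation are assumptions made only for this example, not values taken from the patent.

```python
import numpy as np

def extract_infrared_image(frame, background, min_delta=3.0):
    """Return the subject's 'infrared image': pixels whose measured energy
    exceeds the empty-scene background by more than min_delta are kept,
    everything else is zeroed out."""
    mask = (frame - background) > min_delta
    return np.where(mask, frame, 0.0)

# Toy usage: a 24 C room with a warmer, roughly human-temperature region.
background = np.full((120, 160), 24.0)
frame = background.copy()
frame[40:100, 60:100] = 33.0                    # synthetic "person"
ir_image = extract_infrared_image(frame, background)
print(int((ir_image > 0).sum()), "subject pixels retained")
```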
The infrared image is processed with the artificial-intelligence deep learning model to obtain the motion information of the capture object. This model is a trained processing model for data processing and basically includes an image template, a binocular ranging algorithm, and an optimization algorithm; by combining the infrared image with the model, the motion information can be output. Image recognition and matching based on image templates is a common image-processing technique; its essence here is to process the image to obtain the contour of a human body and the distribution of the skeleton. The skeleton contains many joints, and at the connections of these joints (the junctions of two bones) various angles are formed; these angles and the lengths of the connecting segments (i.e. the bones) constitute the motion information, and this embodiment does not describe them further. The purpose of the optimization algorithm is to exclude adverse factors such as image noise, for example clothing and accessories (clothing also releases near-human infrared radiation because of body heat and therefore needs to be excluded); its essence is to judge, according to a threshold, the source of the infrared radiation (belonging to the body or to the clothing) and then exclude the infrared radiation that does not belong to the human body.
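To make the idea of "joint angles and bone lengths as motion information" concrete, the short sketch below computes the angle at a single joint from three skeletal points and the length of one bone; the joint names and coordinates are purely hypothetical.

```python
import numpy as np

def joint_angle(parent, joint, child):
    """Angle (degrees) at `joint` between the bones joint->parent and joint->child."""
    u, v = parent - joint, child - joint
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

hip   = np.array([0.0, 1.0, 0.0])
knee  = np.array([0.0, 0.5, 0.0])
ankle = np.array([0.3, 0.5, 0.0])
print(joint_angle(hip, knee, ankle))   # ~90.0 degrees at the knee
print(np.linalg.norm(hip - knee))      # bone length, 0.5
```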
Embodiment 2
The purpose of this embodiment is to describe a preferred solution.
The artificial-intelligence deep learning model mentioned in Embodiment 1 is essentially a feature set and decision procedure extracted after training on multiple data sets; therefore, the more parameters available for reference, the more accurate the result. For example, if the precise position of the target can be obtained, the data set corresponding to that position can be applied (data sets captured at different distances produce different effects, consistent with the principle of perspective), and the processing result obtained then matches reality best. Accordingly, at least two thermal imaging cameras (or other similar devices, such as commercially available binocular ranging sensors) are used to obtain images of the same target, and the binocular ranging method is then used to obtain the precise position of the target, i.e. a depth image covering different distances and different infrared-energy levels (for example, infrared sources at different distances show their infrared energy in different colors; likewise, different parts of the human body release different amounts of infrared energy, the head releasing very strong infrared radiation). The binocular ranging method is an existing, mature technique and is not described further in this embodiment.
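The dense form of binocular ranging mentioned here can be sketched with a standard block-matching stereo routine. The example below assumes OpenCV is available and that the two infrared views have already been rectified and converted to 8-bit images; the focal length, baseline, and matcher parameters are placeholders rather than values from the patent.

```python
import numpy as np
import cv2  # assumption: OpenCV provides the stereo block matcher

def depth_from_stereo(left_8u, right_8u, focal_px, baseline_m):
    """Binocular ranging: block-match two rectified 8-bit views, then
    convert the disparity map to metric depth with z = f * B / d."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_8u, right_8u).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan        # unmatched / invalid pixels
    return focal_px * baseline_m / disparity

# Toy usage on synthetic textured frames shifted by a known disparity.
rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(240, 320), dtype=np.uint8)
right = np.roll(left, -8, axis=1)
depth = depth_from_stereo(left, right, focal_px=700.0, baseline_m=0.10)
print(float(np.nanmedian(depth)))             # roughly f*B/8 = 8.75 m
```

A sparse variant that triangulates only the matched skeletal points would use the same z = f * B / d relation.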
Embodiment 3
The infrared sensors measure the infrared energy released by the capture object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency lying in the range of 60 FPS to 1000 FPS.
The infrared energy of infrared light within a certain wavelength range of the acquisition area is measured, the range being 12 μm ± an error value.
The above values all performed well in actual testing and training. The acquisition frequency of the infrared sensors is able to meet the requirements for collecting and using human-motion data in video, imaging, and other respects.
12 μm matches the range of wavelengths at which the human body releases infrared radiation; specifically, the range can be 12 μm ± an error value, where the error value is small (for example 1 μm). The purpose is to tolerate errors in the operation of the infrared sensors and improve the overall infrared-energy measurement.
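As a small, hypothetical illustration of acquisition at the predetermined frequency, the loop below paces a two-sensor read-out at a configurable rate within the 60 FPS to 1000 FPS range; the `read_pair` stub is an assumption, and a real system would call the sensor vendor's SDK instead.

```python
import time
import numpy as np

def read_pair(resolution=(120, 160)):
    """Stand-in for reading one synchronized frame from each infrared sensor."""
    rng = np.random.default_rng()
    return rng.uniform(20.0, 40.0, resolution), rng.uniform(20.0, 40.0, resolution)

def run_capture(fps=60.0, n_frames=10):
    """Pace acquisition at the predetermined frequency (60-1000 FPS)."""
    period = 1.0 / fps
    for _ in range(n_frames):
        start = time.perf_counter()
        left, right = read_pair()
        # ... template matching, binocular ranging and optimization go here ...
        time.sleep(max(0.0, period - (time.perf_counter() - start)))

run_capture()
```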
Embodiment 4
The purpose of this embodiment is to provide a motion capture system as shown in FIG. 2, including:
an infrared sensor group 1, an acquisition area 2, and a processing unit 3, where the acquisition area contains the capture object 21 and the surrounding environment; the infrared energy of the acquisition area is measured by the infrared sensor group, and an infrared image is generated according to the distribution of the infrared energy; the processing unit processes the infrared image with the artificial-intelligence deep learning model to obtain the motion information of the capture object;
the infrared sensor group includes at least two infrared cameras (for binocular ranging), and the processing unit includes an ordinary PC equipped with several pieces of processing software for image processing.
The preferred embodiments of the present invention have been described in detail above, but the invention is not limited to these embodiments. Those skilled in the art may make various equivalent modifications or substitutions without departing from the spirit of the present invention, and such equivalent modifications or substitutions are all included within the scope defined by the claims of the present application.

Claims (10)

  1. A motion capture system, characterized by comprising:
    an infrared sensor group, an acquisition area, and a processing unit, wherein the acquisition area contains a capture object and a surrounding environment, the infrared sensor group measures the infrared energy of the acquisition area, and an infrared image is generated according to the distribution of the infrared energy;
    the processing unit processes the infrared image with an artificial-intelligence deep learning model to obtain motion information of the capture object.
  2. The motion capture system according to claim 1, characterized in that the infrared sensors measure the infrared energy released by the capture object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency lying in the range of 60 FPS to 1000 FPS.
  3. The motion capture system according to claim 1, characterized in that the infrared sensors are used to measure the infrared energy of infrared light within a certain wavelength range, the range being 12 μm ± an error value.
  4. The motion capture system according to claim 1, characterized in that the infrared sensor group comprises at least two infrared sensors, which respectively determine the infrared image of the capture object according to the difference between the infrared energy of the capture object and that of the surrounding environment.
  5. The motion capture system according to claim 4, characterized in that the artificial-intelligence deep learning model comprises an image template, a binocular ranging algorithm, and an optimization algorithm, wherein
    the processing unit constructs a skeleton image from the infrared image and the image template and records the corresponding skeleton images over time;
    the processing unit processes the skeleton images with the binocular ranging algorithm to form a depth image;
    the processing unit processes the depth image with the optimization algorithm to obtain three-dimensional skeletal-point motion parameters and marks the skeletal-point motion parameters as the motion information.
  6. A motion capture method, suitable for the system according to claim 1, characterized by comprising the steps of:
    measuring the infrared energy of an acquisition area and generating an infrared image according to the distribution of the infrared energy;
    processing the infrared image with an artificial-intelligence deep learning model to obtain motion information of the capture object.
  7. The motion capture method according to claim 6, characterized in that the infrared sensors measure the infrared energy released by the capture object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency lying in the range of 60 FPS to 1000 FPS.
  8. The motion capture method according to claim 6, characterized in that the infrared energy of infrared light within a certain wavelength range of the acquisition area is measured, the range being 12 μm ± an error value.
  9. The motion capture method according to claim 6, characterized in that at least two infrared sensors are provided, which respectively determine the infrared image of the capture object according to the difference between the infrared energy of the capture object and that of the surrounding environment.
  10. The motion capture method according to claim 9, characterized in that the step of processing the infrared image with the artificial-intelligence deep learning model comprises:
    constructing a skeleton image from the infrared image and a preset image template and recording the corresponding skeleton images over time;
    processing the skeleton images with a binocular ranging algorithm to form a depth image;
    processing the depth image with a preset optimization algorithm to obtain three-dimensional skeletal-point motion parameters and marking the skeletal-point motion parameters as the motion information.
PCT/CN2019/075208 2018-09-26 2019-02-15 一种动作捕捉系统和方法 WO2020062760A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811122786.9A CN109446906A (zh) 2018-09-26 2018-09-26 一种动作捕捉系统和方法
CN201811122786.9 2018-09-26

Publications (1)

Publication Number Publication Date
WO2020062760A1 true WO2020062760A1 (zh) 2020-04-02

Family

ID=65544559

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075208 WO2020062760A1 (zh) 2018-09-26 2019-02-15 一种动作捕捉系统和方法

Country Status (2)

Country Link
CN (1) CN109446906A (zh)
WO (1) WO2020062760A1 (zh)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363140B (zh) * 2019-07-15 2022-11-11 成都理工大学 一种基于红外图像的人体动作实时识别方法


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105003301B (zh) * 2015-06-04 2017-11-28 中国矿业大学 一种综采工作面工作人员危险姿态检测装置及检测系统

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040080102A (ko) * 2003-03-10 2004-09-18 (주) 모비다임 인체 촬상센서를 이용한 모션캡춰장치
CN102725038A (zh) * 2009-09-15 2012-10-10 索尼公司 组合多传感输入以用于数字动画
CN202615315U (zh) * 2012-05-28 2012-12-19 深圳泰山在线科技有限公司 动作识别器
CN106096518A (zh) * 2016-06-02 2016-11-09 哈尔滨多智科技发展有限公司 基于深度学习的快速动态人体动作提取、识别方法
CN106778481A (zh) * 2016-11-15 2017-05-31 上海百芝龙网络科技有限公司 一种人体状况监测方法
CN107551525A (zh) * 2017-10-18 2018-01-09 京东方科技集团股份有限公司 辅助健身系统及方法、健身器材
CN109101935A (zh) * 2018-08-20 2018-12-28 深圳市中视典数字科技有限公司 基于热成像相机的人物动作捕捉系统和方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117058766A (zh) * 2023-09-11 2023-11-14 轻威科技(绍兴)有限公司 一种基于主动光频闪的动作捕捉系统和方法
CN117058766B (zh) * 2023-09-11 2023-12-19 轻威科技(绍兴)有限公司 一种基于主动光频闪的动作捕捉系统和方法

Also Published As

Publication number Publication date
CN109446906A (zh) 2019-03-08

Similar Documents

Publication Publication Date Title
CN101512599B (zh) 三维模型获取的方法和系统
TWI530909B (zh) 影像合成系統及方法
CN108240793A (zh) 物体尺寸测量方法、装置和系统
WO2020062760A1 (zh) 一种动作捕捉系统和方法
WO2018166150A1 (zh) 一种应用于大型多板波浪模拟系统的运动测量方法与装置
JP2015111101A (ja) 情報処理装置および方法
CN109101935A (zh) 基于热成像相机的人物动作捕捉系统和方法
KR20190142626A (ko) Uav 탑재용 하이브리드 이미지 스캐닝에 기반한 자동화 구조물 균열 평가 시스템 및 그 방법
Mobedi et al. 3-D active sensing in time-critical urban search and rescue missions
Wang et al. Phocal: A multi-modal dataset for category-level object pose estimation with photometrically challenging objects
CN108388341A (zh) 一种基于红外摄像机-可见光投影仪的人机交互系统及装置
JP7330970B2 (ja) コンピュータ生成された参照物体を有するマシンビジョンシステム
CN106767526A (zh) 一种基于激光mems振镜投影的彩色多线激光三维测量方法
CN113115008A (zh) 一种基于快速跟踪注册增强现实技术的管廊主从操作巡检系统及方法
Wang et al. Temporal matrices mapping-based calibration method for event-driven structured light systems
KR20210029258A (ko) 광학 측정 방법 및 광학 측정 장치
JP2008017386A (ja) キー画像生成装置
Rothbucher et al. Measuring anthropometric data for HRTF personalization
He et al. A novel and systematic signal extraction method for high-temperature object detection via structured light vision
Zhang et al. Motion-guided dual-camera tracker for low-cost skill evaluation of gastric endoscopy
CN108182727B (zh) 基于多视点几何一致性的相位展开方法
Bi et al. Multi-camera stereo vision based on weights
CN110210322A (zh) 一种通过3d原理进行人脸识别的方法
CN215305700U (zh) 一种自动定位机械臂口腔灯
CN113421286A (zh) 一种动作捕捉系统及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19865573

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19865573

Country of ref document: EP

Kind code of ref document: A1