CN109446906A - A kind of motion capture system and method - Google Patents
- Publication number
- CN109446906A (application number CN201811122786.9A)
- Authority
- CN
- China
- Prior art keywords
- infrared
- image
- motion capture
- infrared energy
- energy
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/143—Sensing or illuminating at different wavelengths
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The invention discloses a motion capture system and method. The system comprises an infrared sensor group, a capture area, and a processing unit, wherein the capture area contains a captured object and its surrounding environment; the infrared sensor group measures the infrared energy of the capture area and generates an infrared image from the distribution of infrared energy; and the processing unit processes the infrared image with an artificial-intelligence deep-learning model to obtain the motion information of the captured object. The method is applicable to the system. Because the infrared sensor group measures the infrared energy of the capture area and an infrared image is generated from the energy distribution, plausible bone positions can be inferred from the distribution of infrared rays, enabling the recognition and recording of motion.
Description
Technical field
The present invention relates to the technical field of gesture recognition, and in particular to a motion capture system and method.
Background technique
Motion capture, also known as mocap, refers to the technology of recording and processing the motion of people or other objects. A motion capture setup usually comprises capture cameras, dedicated capture suits, and reflective markers; the system is typically equipped with special motion-capture software for system setup, capture-process control, editing of captured data, and output. After staff set up the system in a capture environment such as a studio, warehouse, or film set, markers are attached to the performer's head, knees, and other joints as capture points. The performer acts according to the director's instructions, and the marker data captured by the cameras is stored in a control computer in real time. The performer usually performs several groups of actions; system operators edit and repair the raw data and then export it to mainstream three-dimensional software such as Maya, 3ds Max, Softimage|XSI, or MotionBuilder, where animators use the motion data to drive the corresponding bone nodes of a three-dimensional model to form motion details. The problem with this method is that, during the actual performance, the reflective behaviour of the markers changes with the range of motion, so some markers cannot be recognized normally by the cameras and the complete motion details cannot be obtained.
Summary of the invention
The present invention aims to solve at least one of the technical problems in the related art. To this end, an object of the invention is to provide a motion capture system and method.
The technical solution adopted by the invention is a motion capture system comprising an infrared sensor group, a capture area, and a processing unit, wherein the capture area contains a captured object and its surrounding environment; the infrared sensor group measures the infrared energy of the capture area and generates an infrared image from the distribution of infrared energy; and the processing unit processes the infrared image with an artificial-intelligence deep-learning model to obtain the motion information of the captured object.
Preferably, the infrared sensors measure the infrared energy released by the captured object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency being in the range 60 FPS to 1000 FPS.
Preferably, the infrared sensors are used to measure the infrared energy of infrared rays within a certain wavelength range, the range being 12 μm ± an error value.
Preferably, the infrared sensor group comprises at least two infrared sensors, each determining the infrared image of the captured object from the difference between the infrared energy of the captured object and that of the surrounding environment.
Preferably, the artificial-intelligence deep-learning model comprises an image template, a binocular-ranging processing algorithm, and an optimization algorithm, wherein the processing unit constructs a bone image from the infrared image and the image template and records the corresponding bone images over time; the processing unit processes the bone images with the binocular-ranging processing algorithm to form a depth image; and the processing unit processes the depth image with the optimization algorithm to obtain three-dimensional skeleton-point motion parameters, which are labelled as the motion information.
Another technical solution of the invention is a motion capture method applicable to the above system, comprising the steps of: measuring the infrared energy of the capture area and generating an infrared image from the distribution of infrared energy; and processing the infrared image with an artificial-intelligence deep-learning model to obtain the motion information of the captured object.
Preferably, the infrared sensors measure the infrared energy released by the captured object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency being in the range 60 FPS to 1000 FPS.
Preferably, the infrared energy of infrared rays within a certain wavelength range of the capture area is measured, the range being 12 μm ± an error value.
Preferably, at least two infrared sensors are provided, each determining the infrared image of the captured object from the difference between the infrared energy of the captured object and that of the surrounding environment.
Preferably, the step of processing the infrared image with the artificial-intelligence deep-learning model comprises: constructing a bone image from the infrared image and a preset image template, and recording the corresponding bone images over time; processing the bone images with a binocular-ranging processing algorithm to form a depth image; and processing the depth image with a preset optimization algorithm to obtain three-dimensional skeleton-point motion parameters, which are labelled as the motion information.
The beneficial effects of the invention are as follows: the infrared sensor group measures the infrared energy of the capture area, and an infrared image is generated from the distribution of infrared energy; the processing unit processes the infrared image with an artificial-intelligence deep-learning model to obtain the motion information of the captured object, and plausible bone positions can be inferred from the distribution of infrared rays, enabling the recognition and recording of motion.
Detailed description of the invention
Fig. 1 is a schematic diagram of a motion capture method of the invention;
Fig. 2 is a schematic diagram of a motion capture system of the invention.
Specific embodiment
It should be noted that, in the absence of conflict, the embodiments of the present application and the features therein may be combined with each other.
Embodiment 1
The purpose of this embodiment is to illustrate the defects of the prior art and the approach the invention takes to resolving them.
In existing methods that attach markers or other indicators to the human body, actual tests show that, because of the performer's range of motion, markers can end up in positions the camera cannot see. Corresponding solutions include installing multiple cameras or providing markers of several specifications so they can be distinguished from one another, but both approaches place additional demands on the hardware and processing software and accordingly increase the overall cost. This embodiment therefore provides a motion capture method as shown in Fig. 1, comprising the steps of:
S1: measuring the infrared energy of the capture area and generating an infrared image from the distribution of infrared energy;
S2: processing the infrared image with an artificial-intelligence deep-learning model to obtain the motion information of the captured object.
The infrared energy of the capture area is measured with a thermal imaging camera. (Its principle is to convert the detected infrared energy into an electrical signal, from which a thermal image is generated on a display and temperature values can be calculated. For cost reasons, existing thermal cameras cannot precisely acquire the infrared energy of a single target alone, so they acquire the infrared energy of a whole region.) The background region (the environment around the captured object) generally contains no creatures similar to the human body, so the infrared rays released by the environment differ markedly from those released by the body. From these differences it is easy to distinguish which regions of the infrared profile belong to the person and which belong to the background; the person's infrared energy image is then extracted separately to form the infrared image. If the resolving power of the thermal camera is high enough, a fine-grained distribution image of the infrared energy of the different body regions can be obtained, forming an image very similar to a human silhouette (i.e., the infrared image).
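The person-versus-background separation described above can be sketched as a simple temperature threshold on the thermal frame. This is an illustrative reconstruction, not code from the patent; the function name, the ambient temperature, and the margin are assumptions:

```python
import numpy as np

def segment_person(thermal_frame, background_temp=22.0, margin=5.0):
    """Boolean mask of pixels whose apparent temperature is well above
    ambient, i.e. candidate human regions (assumed threshold values)."""
    return thermal_frame > background_temp + margin

# Toy 6x6 "thermal frame": ambient ~22 C with a warm 2x2 blob at ~34 C.
frame = np.full((6, 6), 22.0)
frame[2:4, 2:4] = 34.0

mask = segment_person(frame)
# Extract only the person's infrared energy values -- the patent's
# "individually extracted" infrared image of the person.
person_pixels = frame[mask]
```

In a real pipeline the threshold would be learned or calibrated rather than fixed, and morphological cleanup would follow, but the principle of exploiting the energy difference between body and background is the same.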
The infrared image is processed with an artificial-intelligence deep-learning model to obtain the motion information of the captured object. The model is a trained processing model that essentially comprises an image template, a binocular-ranging processing algorithm, and an optimization algorithm; motion information is output by combining the infrared image with the model. Image recognition and matching based on an image template is a customary means of image processing; its main idea is to process the image to obtain a contour image of the body and the distribution of the bones. A skeleton contains many joints, and the parts connected at these joints (the junctions between two bones) can form various angles; these angles and the lengths of the connecting parts (i.e., the bones) constitute the motion information, which this embodiment does not describe further. The purpose of the optimization algorithm is to reject undesirable elements such as image noise and factors such as clothing material and ornaments (clothing, warmed by body temperature, also releases infrared rays approximating those of the body and must be excluded); its main idea is to judge the source of the infrared rays against a threshold (body or clothing) and then exclude the rays that do not belong to the body.
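The joint angles and bone lengths that the description identifies as motion information can be computed from skeleton points with elementary vector geometry. A minimal sketch with hypothetical joint coordinates; the patent does not specify this computation:

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by bones b->a and b->c.
    Points are tuples of 2 or 3 coordinates."""
    u = [ai - bi for ai, bi in zip(a, b)]
    v = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(ui * vi for ui, vi in zip(u, v))
    nu = math.sqrt(sum(ui * ui for ui in u))
    nv = math.sqrt(sum(vi * vi for vi in v))
    return math.degrees(math.acos(dot / (nu * nv)))

def bone_length(a, b):
    """Euclidean length of the bone between two skeleton points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

# Hypothetical hip-knee-ankle configuration at a right angle:
hip, knee, ankle = (0.0, 1.0), (0.0, 0.0), (1.0, 0.0)
angle = joint_angle(hip, knee, ankle)   # 90 degrees
femur = bone_length(hip, knee)          # 1.0
```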
Embodiment 2
The purpose of this embodiment is to illustrate preferred implementations.
The artificial-intelligence deep-learning model mentioned in Embodiment 1 is, in essence, a set of features and a decision process extracted by training on multiple and varied data sets; the more reference parameters it has, the more accurate its results. For example, if the exact position of the target can be obtained, a data set matched to that position can be used (data sets captured at different distances produce different effects, in accordance with the principle of perspective), and the processing result then best matches reality. To this end, at least two thermal imaging cameras (or similar devices, such as commercially available binocular ranging sensors) each acquire an image of the same target, and the exact position of the target is obtained by the binocular ranging method, yielding a depth image that contains different distances and different infrared energy levels (for example, the differing infrared energy of sources at different distances can be shown in colour; likewise, different parts of the body generate different infrared energy, the head region generating very strong infrared rays). Binocular ranging is an existing mature technology, which this embodiment does not describe further.
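In the standard rectified pinhole model, the binocular ranging the embodiment relies on reduces to depth = focal length × baseline / disparity. A minimal sketch under that standard model; the numbers are illustrative and not from the patent:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic rectified-stereo relation Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline in metres, and d the
    horizontal shift of the same target between the two views."""
    if disparity_px <= 0:
        raise ValueError("target must appear shifted between the views")
    return focal_px * baseline_m / disparity_px

# Two cameras 0.12 m apart, focal length 800 px, target shifted 40 px:
z = depth_from_disparity(800.0, 0.12, 40.0)  # ~2.4 m
```

Applying this per skeleton point turns the two bone images into the depth image the model's binocular-ranging stage produces.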
Embodiment 3
The infrared sensors measure the infrared energy released by the captured object and the infrared energy of the surrounding environment at a predetermined frequency in the range 60 FPS to 1000 FPS.
The infrared energy of infrared rays within a certain wavelength range of the capture area is measured; the range is 12 μm ± an error value.
The above values performed well in actual testing and training. The acquisition frequency of the infrared sensors can adapt to the demands of human-motion video in terms of both data acquisition and use; 12 μm lies within the range of wavelengths of the infrared rays released by the human body, specifically 12 μm ± an error value, where the error value is small (e.g., 1 μm) so as to accommodate the dynamic error of the infrared sensor and improve the overall measurement of infrared energy.
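As a rough physical cross-check (not part of the patent), Wien's displacement law puts the peak of thermal emission from skin at roughly 37 °C near 9.3 μm, and human thermal emission is significant across roughly 8–14 μm, so a band centred near 12 μm falls inside the long-wave infrared window that thermal cameras commonly sense:

```python
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in um*K

def peak_emission_wavelength_um(temp_kelvin):
    """Peak black-body emission wavelength via Wien's displacement law:
    lambda_max = b / T."""
    return WIEN_B_UM_K / temp_kelvin

skin = peak_emission_wavelength_um(310.0)  # ~9.35 um for ~37 C skin
```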
Embodiment 4
The purpose of this embodiment is to provide a motion capture system as shown in Fig. 2, comprising:
an infrared sensor group 1, a capture area 2, and a processing unit 3, wherein the capture area contains a captured object 21 and its surrounding environment; the infrared sensor group measures the infrared energy of the capture area and generates an infrared image from the distribution of infrared energy; and the processing unit processes the infrared image with an artificial-intelligence deep-learning model to obtain the motion information of the captured object.
The infrared sensor group comprises at least two thermal cameras (for realizing binocular ranging), and the processing unit comprises an ordinary PC provided with several processing programs for image processing.
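The processing chain of the claimed system (infrared frames from at least two cameras, a bone image via the image template, a depth image via binocular ranging, then optimized three-dimensional skeleton-point parameters) can be sketched as a pipeline. All class and method names here are hypothetical stand-ins; the patent defines no API, and the stub model only passes data through to show the shape of the chain:

```python
from dataclasses import dataclass

@dataclass
class MotionFrame:
    timestamp: float
    skeleton_points: list  # [(x, y, z), ...], one tuple per joint

class StubModel:
    """Stand-in for the deep-learning model; each stage is a no-op
    placeholder so the pipeline structure can be exercised."""
    def fit_template(self, frame):      # bone image from image template
        return frame
    def binocular_ranging(self, l, r):  # depth data from the two views
        return list(zip(l, r))
    def optimize(self, depth):          # 3-D skeleton-point parameters
        return [(x, y, 0.0) for x, y in depth]

def capture_pipeline(left, right, timestamp, model):
    bones_l = model.fit_template(left)
    bones_r = model.fit_template(right)
    depth = model.binocular_ranging(bones_l, bones_r)
    return MotionFrame(timestamp, model.optimize(depth))

frame = capture_pipeline([1.0, 2.0], [1.1, 2.1], 0.0, StubModel())
```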
The above describes preferred embodiments of the invention, but the invention is not limited to these embodiments. Those skilled in the art can make various equivalent variations or replacements without departing from the spirit of the invention, and all such equivalent variations or replacements are included within the scope defined by the claims of the present application.
Claims (10)
1. A motion capture system, characterized by comprising:
an infrared sensor group, a capture area, and a processing unit, wherein the capture area contains a captured object and its surrounding environment; the infrared sensor group measures the infrared energy of the capture area and generates an infrared image from the distribution of infrared energy;
and the processing unit processes the infrared image with an artificial-intelligence deep-learning model to obtain motion information of the captured object.
2. The motion capture system according to claim 1, characterized in that the infrared sensors measure the infrared energy released by the captured object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency being in the range 60 FPS to 1000 FPS.
3. The motion capture system according to claim 1, characterized in that the infrared sensors are used to measure the infrared energy of infrared rays within a certain wavelength range, the range being 12 μm ± an error value.
4. The motion capture system according to claim 1, characterized in that the infrared sensor group comprises at least two infrared sensors, each determining the infrared image of the captured object from the difference between the infrared energy of the captured object and that of the surrounding environment.
5. The motion capture system according to claim 4, characterized in that the artificial-intelligence deep-learning model comprises an image template, a binocular-ranging processing algorithm, and an optimization algorithm, wherein:
the processing unit constructs a bone image from the infrared image and the image template and records the corresponding bone images over time;
the processing unit processes the bone images with the binocular-ranging processing algorithm to form a depth image;
and the processing unit processes the depth image with the optimization algorithm to obtain three-dimensional skeleton-point motion parameters, which are labelled as the motion information.
6. A motion capture method applicable to the system of claim 1, characterized by comprising the steps of:
measuring the infrared energy of the capture area and generating an infrared image from the distribution of infrared energy;
and processing the infrared image with an artificial-intelligence deep-learning model to obtain the motion information of the captured object.
7. The motion capture method according to claim 6, characterized in that the infrared sensors measure the infrared energy released by the captured object and the infrared energy of the surrounding environment at a predetermined frequency, the predetermined frequency being in the range 60 FPS to 1000 FPS.
8. The motion capture method according to claim 6, characterized in that the infrared energy of infrared rays within a certain wavelength range of the capture area is measured, the range being 12 μm ± an error value.
9. The motion capture method according to claim 6, characterized in that at least two infrared sensors are provided, each determining the infrared image of the captured object from the difference between the infrared energy of the captured object and that of the surrounding environment.
10. The motion capture method according to claim 9, characterized in that the step of processing the infrared image with the artificial-intelligence deep-learning model comprises:
constructing a bone image from the infrared image and a preset image template, and recording the corresponding bone images over time;
processing the bone images with a binocular-ranging processing algorithm to form a depth image;
and processing the depth image with a preset optimization algorithm to obtain three-dimensional skeleton-point motion parameters, which are labelled as the motion information.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811122786.9A CN109446906A (en) | 2018-09-26 | 2018-09-26 | A kind of motion capture system and method |
PCT/CN2019/075208 WO2020062760A1 (en) | 2018-09-26 | 2019-02-15 | Motion capture system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811122786.9A CN109446906A (en) | 2018-09-26 | 2018-09-26 | A kind of motion capture system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109446906A true CN109446906A (en) | 2019-03-08 |
Family
ID=65544559
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811122786.9A Pending CN109446906A (en) | 2018-09-26 | 2018-09-26 | A kind of motion capture system and method |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109446906A (en) |
WO (1) | WO2020062760A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110363140A (en) * | 2019-07-15 | 2019-10-22 | 成都理工大学 | A kind of human action real-time identification method based on infrared image |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117058766B (en) * | 2023-09-11 | 2023-12-19 | 轻威科技(绍兴)有限公司 | Motion capture system and method based on active light stroboscopic effect |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202615315U (en) * | 2012-05-28 | 2012-12-19 | 深圳泰山在线科技有限公司 | Movement identifier |
CN105003301A (en) * | 2015-06-04 | 2015-10-28 | 中国矿业大学 | Apparatus and system for detecting dangerous postures of worker on fully mechanized coal face |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20040080102A (en) * | 2003-03-10 | 2004-09-18 | (주) 모비다임 | Motion Capture Apparatus using Human Body Imaging Sensor |
WO2011034963A2 (en) * | 2009-09-15 | 2011-03-24 | Sony Corporation | Combining multi-sensory inputs for digital animation |
CN106096518A (en) * | 2016-06-02 | 2016-11-09 | 哈尔滨多智科技发展有限公司 | Quick dynamic human body action extraction based on degree of depth study, recognition methods |
CN106778481A (en) * | 2016-11-15 | 2017-05-31 | 上海百芝龙网络科技有限公司 | A kind of body heath's monitoring method |
CN107551525B (en) * | 2017-10-18 | 2019-08-02 | 京东方科技集团股份有限公司 | Fitness-assisting system and method, fitness equipment |
CN109101935A (en) * | 2018-08-20 | 2018-12-28 | 深圳市中视典数字科技有限公司 | Figure action based on thermal imaging camera captures system and method |
- 2018-09-26: CN application CN201811122786.9A filed (CN109446906A, status Pending)
- 2019-02-15: WO application PCT/CN2019/075208 filed (WO2020062760A1, Application Filing)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202615315U (en) * | 2012-05-28 | 2012-12-19 | 深圳泰山在线科技有限公司 | Movement identifier |
CN105003301A (en) * | 2015-06-04 | 2015-10-28 | 中国矿业大学 | Apparatus and system for detecting dangerous postures of worker on fully mechanized coal face |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110363140A (en) * | 2019-07-15 | 2019-10-22 | 成都理工大学 | A kind of human action real-time identification method based on infrared image |
CN110363140B (en) * | 2019-07-15 | 2022-11-11 | 成都理工大学 | Human body action real-time identification method based on infrared image |
Also Published As
Publication number | Publication date |
---|---|
WO2020062760A1 (en) | 2020-04-02 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190308 |