CN105551059A - Human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion

Info

Publication number: CN105551059A
Application number: CN201510896907.5A
Authority: CN (China)
Prior art keywords: human body, human, data, joint point, body movement
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 霍宇平, 李兵, 张秀娥, 马霄龙
Current assignee (also the original assignee): Skill Training Center of State Grid Shanxi Electric Power Co.
Application CN201510896907.5A filed 2015-12-08 by Skill Training Center of State Grid Shanxi Electric Power Co.; published as CN105551059A on 2016-05-04.


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10016 — Video; image sequence

Abstract

The invention relates to transformer substation simulation training technology, and specifically to a human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion. The invention solves the problems that existing motion capture systems have complex data processing algorithms and a heavy computational load, cannot work when marker points are confused or occluded, and cannot perform human body positioning or accumulated-error control. The method comprises the following steps: 1) obtaining one group of human motion data each from a Kinect somatosensory device and from inertial sensors; 2) fusing the two groups of human motion data with a Kalman filtering algorithm to obtain a high-quality human skeleton model; and 3) extracting human motion features to recognize actions, and then driving the virtual human in the power transformation simulation virtual environment to complete the corresponding actions. The method is applicable to transformer substation simulation training.

Description

Human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion
Technical field
The present invention relates to substation simulation training technology, and specifically to a human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion.
Background art
A transformer substation is an important component of the electric power system, and as its degree of automation rises, higher demands are placed on operation and maintenance personnel. Owing to the particular nature of the power system, training cannot be carried out directly on power equipment in actual operation, so simulation training is essential. A substation training simulator is an effective means of substation training and an improvement on, and supplement to, classroom study and field practice. Current substation simulation training software mainly relies on conventional multimedia such as wiring diagrams, equipment photographs and live video as its means of expression, and its realism, sense of presence and expressiveness all leave much room for improvement. Because key substation equipment is complex and its operational safety requirements are high, it is very difficult for operation and maintenance personnel to master the operating specifications in a short time.
Virtual reality technology, as a means of transmitting and exchanging information, can intuitively and realistically present users with visual information such as equipment structure, operating specifications and state displays. The rapid development of virtual reality software and hardware provides the preconditions and a strong guarantee for its application in training. Using virtual reality and computer simulation to build a suitable visual virtual training environment, in place of a physical prototype, effectively overcomes the restrictions of time, place and safety that physical equipment imposes on training, and avoids drawbacks such as high training cost and fragile prototypes. With the development of virtual reality (VR) technology, building the virtual environment of a substation simulation system with VR will greatly improve the realism and immersion of the substation scene and bring a leap forward for substation simulation training. A lifelike three-dimensional substation scene and production environment is constructed with virtual reality, task training is carried out by a virtual human in the virtual environment, and the difficulty and danger of substation work is fully demonstrated by the immersive environment. This improves trainees' ability to operate correctly at a complex site, strengthens their operating skills, raises their professional level and psychological resilience, improves efficiency in handling complex and dangerous work, and shortens the on-duty familiarization period.
In an immersive virtual reality system, the recognition and tracking of human motion is the key problem of human–computer interaction. A motion capture system is high-tech equipment for accurately measuring the motion of a moving object in three-dimensional space. Motion capture records biological motion by tracking a number of key points over time and then converting them into a usable mathematical representation to synthesize a single 3D motion. In the 1980s, Professor Calvert of Simon Fraser University, Carol and colleagues at the Massachusetts Institute of Technology (MIT), and the scholars Robertson and Walters studied motion capture in depth and promoted the development of the technology. In the 1990s, scholars represented by Tardif advanced the technology further and brought it toward maturity. Motion capture has since developed by leaps and bounds; commercial motion capture devices have been brought to market in succession and are widely used in research fields such as games, animation, ergonomics, simulation training and virtual reality.
In recent years, motion capture technology has been widely applied in training and teaching. Feng Li et al. analyzed the role of motion capture in modern sports competition, proposing that the technology can be used for capturing human motion data, movement monitoring, skill diagnosis and analysis, assisting referees' decisions and athletic rehabilitation, but gave no concrete application examples. Chen Jian et al. introduced methods for quantitative analysis of athletic technique from key motion parameters obtained by motion capture, illustrated by Liu Xiang's hurdle training, a golf training system (GTRS-1) and weightlifting. Wright made a detailed survey of golf instruction using motion capture, providing a theoretical reference for coaches and researchers in the field. In other training and teaching applications, Covaci et al. developed a virtual basketball training system with motion capture, with which a user can practice free throws without a teacher; Xiaoli Zhang applied motion capture to firefighting simulation training, describing the method and giving an application case of a virtual firefighter in oil-tank fire-extinguishing rescue training based on motion capture.
Judging from existing research, optical motion capture devices are the most widely used, with products from Vicon, MotionAnalysis, Polhemus and 3DSuit in the majority. Because the equipment is precise, complex and highly specialized, an optical full-body motion capture system costs from hundreds of thousands up to millions, and the application of immersive virtual reality systems is often limited by the high price of motion capture equipment.
Microsoft launched the optical somatosensory game controller Kinect in November 2010. The advent of somatosensory technology has been called a revolution in human–computer interaction. Kinect is equipped with three lenses: the middle lens is an RGB VGA camera, while the lenses on the left and right form a 3D depth sensor consisting of an infrared emitter and an infrared CMOS camera. Kinect also carries a motor that adjusts the camera's tilt angle up and down to suit different application scenarios. A conventional camera obtains only the RGB information of an object, whereas the depth camera, while acquiring color information, emits infrared light and derives depth information from the round-trip phase difference. Kinect uses a variant of structured-light technology.
Dimitrios Alexiadis, Petros Daras et al. applied Kinect to a dance movement evaluation and editing system, comparing a performer's skeletal joint motion with that of a pre-recorded Olympic champion to guide the correction of the performer's dance movements. John Stowers, Michael Hayes et al. exploited Kinect's ability to obtain an object's three-dimensional position cheaply and accurately to control the altitude of a small quadrotor, with good results. K. Lothrop et al. used the obtained depth information to segment the human body from the background and translate sign language. M. Skubic et al. used Kinect to monitor the elderly, obtaining gait data such as step length, walking speed and walking time through skeleton tracking and depth-image segmentation. Chen Yongbo of the education, training and assessment center of Guangdong Power Grid Co., Ltd. studied scene-walkthrough technology based on Kinect gesture recognition, achieving roaming and steering control of a virtual character in a simulated scene. However, Kinect's best capture range is 2–3.5 m, within which the action recognition rate is high; outside this range the recognition rate drops sharply. Kinect somatosensory interaction also has shortcomings in recognizing gestures and limb rotation, and the precision and speed of recognition leave much room for improvement.
With the development of technology, microsensors have emerged on the foundations of machine dynamics. Built with MEMS (Micro-Electro-Mechanical Systems) technology, they integrate sensors, actuators and other devices, and are highly sensitive, low in error and low in price. Placed on the monitored parts of the human body and combined with wireless sensor network technology, they can reliably obtain comprehensive human motion data. A wireless microsensor motion acquisition system can well overcome the shortcomings of optical motion capture systems and provides a new solution for obtaining human motion information. With inertial human motion capture over a wireless sensor network, the captured forms of human motion are much richer and largely free of the constraints optical devices impose on the body, yielding more comprehensive human motion data.
In human motion capture based on inertial sensors: in 2011, Liu Bo of Beijing Institute of Technology developed and designed a motion capture system based on inertial sensors; in 2012, Xu Jingwei of Nanjing University studied human behavior recognition based on wireless sensors; in 2013, Li Lu of Shandong University studied human motion pattern recognition based on multiple sensors; also in 2013, Guo Zhihu of Shanghai Jiao Tong University studied a full-attitude human motion capture system based on micro-inertial measurement. The magnetometer and accelerometer in an inertial sensor are subject to high-frequency random noise while measuring the gravity and magnetic fields, and the signal additionally suffers impulse noise to varying degrees during measurement and transmission. The gyroscope is mainly used to measure the angular velocity of the carrier, and errors accumulate in the integration operation.
In summary, the optical tracking systems in wide use today, because they offer advantages such as non-contact measurement and the ability to measure fast-moving objects, are widely applied in entertainment industries such as animation. But optical tracking systems also have shortcomings: device calibration is rather cumbersome, the data processing algorithms are complex, the computational load is heavy, the system cannot work when optical marker points are occluded, and a full-body optical capture system is costly. Kinect, as an optically tracked somatosensory device, is cheap, but its measurement and tracking range is small, its precision is limited, and it cannot recognize fine movements. Motion capture systems based on inertial sensors cannot perform absolute positioning and cannot eliminate accumulated errors.
In 2014, Zhang Cha of Zhejiang University studied upper-limb motion measurement technology based on the fusion of optical and inertial tracking data and achieved certain results, but the work used the OptiTrack optical tracking system from NaturalPoint (USA), which requires setting up optical marker points and is expensive; moreover, only upper-limb motion was modeled, which cannot meet the requirement of capturing the virtual human's whole-body motion in a power transformation simulation virtual environment.
On this basis, it is necessary to invent a brand-new motion capture method to solve the problems that existing motion capture systems have complex data processing algorithms and a heavy computational load, cannot work when marker points are confused or occluded, and cannot perform human body positioning or accumulated-error control.
Summary of the invention
To solve the problems that existing motion capture systems have complex data processing algorithms and a heavy computational load, cannot work when marker points are confused or occluded, and cannot perform human body positioning or accumulated-error control, the present invention provides a human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion.
The present invention is realized by the following technical scheme: a human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion, comprising the following steps:
1) obtaining one group of human motion data each from a Kinect somatosensory device and from inertial sensors;
2) performing time alignment and spatial re-projection on the two groups of human motion data obtained, and fusing the two groups of human motion data with a Kalman filtering algorithm, thereby obtaining a high-quality human skeleton model;
3) based on the human skeleton model obtained, calculating relative joint angles, relative positions and motion velocities, extracting human motion features to recognize actions, and then driving the virtual human in the power transformation simulation virtual environment to complete the corresponding actions.
Compared with existing motion capture systems, the human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion of the present invention fuses the human motion data obtained from the Kinect somatosensory device and the inertial sensors, extracts joint motion features from the fused data, and thereby drives a virtual human to realize power transformation simulation training. It therefore possesses the following advantages. First, the invention achieves low-cost, high-precision human motion capture in a power transformation simulation virtual environment, combining the advantages of both types of motion capture system (optical tracking systems and inertial motion capture systems) in equipment cost, tracking range, positioning accuracy and error control. Second, the invention realizes, under a unified framework, an inertial human capture system based on inertial sensors together with a human motion data acquisition method based on the Kinect somatosensory device. Third, the invention devises a unified coordinate transformation method for the Kinect somatosensory device and the inertial sensors, fusing the data in a unified coordinate system to obtain a unified human skeleton model. Fourth, oriented toward typical substation operations, the invention achieves semantic mapping and fine-motion recognition of human actions, and can drive the virtual human model in real time for simulated operation. In summary, the present invention effectively solves the problems that existing motion capture systems have complex data processing algorithms and a heavy computational load, cannot work when marker points are confused or occluded, and cannot perform human body positioning and accumulated-error control.
The present invention effectively solves the problems that existing motion capture systems have complex data processing algorithms and a heavy computational load, cannot work when marker points are confused or occluded, and cannot perform human body positioning and accumulated-error control, and it is applicable to transformer substation simulation training.
Embodiment
A human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion comprises the following steps:
1) obtaining one group of human motion data each from a Kinect somatosensory device and from inertial sensors;
2) performing time alignment and spatial re-projection on the two groups of human motion data obtained, and fusing the two groups of human motion data with a Kalman filtering algorithm, thereby obtaining a high-quality human skeleton model;
3) based on the human skeleton model obtained, calculating relative joint angles, relative positions and motion velocities, extracting human motion features to recognize actions, and then driving the virtual human in the power transformation simulation virtual environment to complete the corresponding actions.
Step 1) comprises the following steps:
1.1) the Kinect somatosensory device identifies the human contour from the depth data stream and separates the contour from the depth image;
1.2) the Kinect somatosensory device uses a machine learning algorithm to recognize the various actions of the human body, records them as feature values, and uses those feature values to quickly classify and locate the parts of the human body;
1.3) the Kinect somatosensory device uses the skeleton tracking function in its own SDK to obtain position data for 20 joint points of the human body;
1.4) the Kinect somatosensory device scores the position data of each joint point using its built-in algorithm and judges the credibility of each joint point's tracking state, marking joint points whose credibility is low or that were not recognized;
1.5) inertial sensors are worn at 17 joint points of the human body, and the attitude data of those 17 joint points is obtained through the inertial sensors;
1.6) the inertial sensors send the acquired attitude data to a DSP processing board by wireless communication, and the DSP processing board then sends the attitude data to the computer through an RS232 serial port.
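As an illustration of step 1.6), the following minimal Python sketch reads joint attitude packets from the DSP board over the serial link. The packet layout (a 1-byte joint ID followed by four float32 quaternion components), the port name and the baud rate are assumptions made for illustration; the patent does not specify a wire format.

```python
# Sketch of step 1.6): reading joint attitude packets from the DSP board
# over RS232. The packet layout, port name and baud rate are assumed.
import struct

import serial  # pyserial

PACKET = struct.Struct("<B4f")  # joint_id, qw, qx, qy, qz

def read_attitudes(port="COM3", n_joints=17):
    """Collect one attitude sample for each of the 17 instrumented joints."""
    attitudes = {}
    with serial.Serial(port, baudrate=115200, timeout=1.0) as link:
        while len(attitudes) < n_joints:
            raw = link.read(PACKET.size)
            if len(raw) != PACKET.size:
                continue  # timeout or partial packet: skip this read
            joint_id, qw, qx, qy, qz = PACKET.unpack(raw)
            attitudes[joint_id] = (qw, qx, qy, qz)
    return attitudes
```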
Step 2) comprises the following steps:
2.1) a spatial transformation matrix is obtained from the spatial relationship between the Kinect somatosensory device and the inertial sensors, and the two groups of human motion data are spatially transformed with this matrix, thereby unifying both groups into the human-space coordinate system;
2.2) the physiological and kinematic constraint conditions of the human skeleton model are used to check the two groups of human motion data in the human-space coordinate system, excluding erroneous data that do not conform to the laws of human motion;
2.3) a data fusion algorithm based on variable joint-point weight calculation is used to fuse the two groups of human motion data, thereby obtaining the human skeleton model;
2.4) the human skeleton model is checked for missing joint points; if a joint point is missing, its three-dimensional coordinate position is predicted with an improved Kalman filter, thereby completing the skeleton model;
2.5) the physiological and kinematic constraint conditions of the human skeleton model are used to check the skeleton model, thereby verifying the reliability of the two groups of human motion data.
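As an illustration of the spatial re-projection in step 2.1), the following minimal sketch carries Kinect joint positions into the unified human-space frame with a 4×4 homogeneous transform; the matrix values below are placeholders standing in for the calibrated Kinect-to-inertial spatial relationship.

```python
# Sketch of step 2.1): re-projecting Kinect joint positions into the unified
# human-space coordinate system with a 4x4 homogeneous transformation matrix.
# The matrix values are placeholders; in practice the transform comes from
# calibrating the Kinect against the inertial-sensor frame.
import numpy as np

def to_human_space(joints_kinect, T):
    """joints_kinect: (N, 3) positions in the Kinect frame; T: (4, 4) matrix."""
    pts = np.asarray(joints_kinect, dtype=float)
    homogeneous = np.hstack([pts, np.ones((len(pts), 1))])
    return (homogeneous @ T.T)[:, :3]

# Example with a pure translation moving the Kinect origin to the body root.
T = np.eye(4)
T[:3, 3] = [0.0, -0.8, 2.5]  # placeholder offset in metres
unified = to_human_space([[0.1, 1.2, 2.0]], T)
```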
Step 3) comprises the following steps:
3.1) pre-processing of the human motion data based on temporal and spatial continuity;
3.2) feature extraction from the human motion data based on principal component analysis;
3.3) human motion behavior recognition based on a statistical learning method;
3.4) semantic mapping of human actions during typical substation operations;
3.5) simulated substation operation driven by real-time human motion capture data.
In step 1.5), the inertial sensors are MPU9150 motion sensors.
In step 2.1), the human-space coordinate system comprises 16 limb segments connected by 15 joints; it is also provided with a theoretical human root joint (Root joint) with 3 translational and 3 rotational degrees of freedom, used to determine the virtual human's spatial position and orientation.
In step 2.2), checking the two groups of human motion data comprises the following steps: Euler angles are calculated for the two groups of human motion data at each joint point, and it is judged whether these Euler angles exceed the corresponding value range; if they do, the two groups of human motion data for that joint point are considered erroneous. The specific value ranges are shown in Table 1:
Table 1: Joint point Euler angle value ranges

Joint point      Rotation axis   Minimum   Maximum
Right shoulder   X               -π/4      3π/4
Right shoulder   Z               -π/9      π
Left shoulder    X               -3π/4     π/4
Left shoulder    Z               -π        π/9
Right elbow      X               -π/10     3π/4
Right elbow      Y               -4π/9     4π/9
Left elbow       X               -3π/4     -π/10
Left elbow       Y               -4π/9     4π/9
Right hip        X               -π/4      7π/9
Right hip        Z               -π/6      π/4
Left hip         X               -7π/9     π/4
Left hip         Z               -π/4      π/6
Right knee       X               -π/18     3π/4
Right knee       Z               -2π/9     π/4
Left knee        X               -3π/4     π/18
Left knee        Z               -π/4      2π/9
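A minimal sketch of this plausibility check follows, with the Table 1 limits written as radian constants; only the shoulder and elbow rows are spelled out, and the left-shoulder Z minimum of -π is reconstructed by symmetry with the right shoulder.

```python
# Sketch of the step 2.2) check: a joint's data is rejected when its Euler
# angle falls outside the Table 1 range for the given rotation axis.
import math

EULER_LIMITS = {  # (joint, axis) -> (minimum, maximum), in radians
    ("right_shoulder", "X"): (-math.pi / 4, 3 * math.pi / 4),
    ("right_shoulder", "Z"): (-math.pi / 9, math.pi),
    ("left_shoulder", "X"): (-3 * math.pi / 4, math.pi / 4),
    ("left_shoulder", "Z"): (-math.pi, math.pi / 9),
    ("right_elbow", "X"): (-math.pi / 10, 3 * math.pi / 4),
    ("right_elbow", "Y"): (-4 * math.pi / 9, 4 * math.pi / 9),
    ("left_elbow", "X"): (-3 * math.pi / 4, -math.pi / 10),
    ("left_elbow", "Y"): (-4 * math.pi / 9, 4 * math.pi / 9),
    # ... remaining Table 1 rows (hips, knees) follow the same pattern
}

def violates_constraints(euler_angles):
    """euler_angles maps (joint, axis) -> measured angle in radians."""
    for key, angle in euler_angles.items():
        lo, hi = EULER_LIMITS.get(key, (-math.pi, math.pi))  # unknown: free
        if not lo <= angle <= hi:
            return True  # data violates the human kinematic limits
    return False
```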
In step 2.3), the data fusion algorithm based on variable joint-point weight calculation comprises the following steps: first, a sliding window of size 60 is set up for each joint point to store its tracking states over a period of time; when a new frame of human motion data arrives, the tracking state of each joint point is judged and the sliding window is updated, thereby obtaining the data availability of each joint point within the window, from which the weight of each joint point in the data fusion is calculated. The specific formula is:

k = exp(−m · (60 − r) / 60)

where m is an adjustment factor used to regulate how fast the weight changes, and r is the number of valid data frames of the joint point within the sliding window.
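A minimal sketch of this variable-weight rule follows; the value chosen for the adjustment factor m is an assumed example.

```python
# Sketch of the step 2.3) rule: each joint point keeps a 60-frame sliding
# window of tracking states, and its fusion weight follows
# k = exp(-m * (60 - r) / 60), decaying as valid frames become scarce.
import math
from collections import deque

WINDOW = 60

class JointWeight:
    def __init__(self, m=2.0):
        self.m = m                          # adjustment factor (assumed value)
        self.states = deque(maxlen=WINDOW)  # True = frame tracked reliably

    def update(self, tracked):
        """Record one frame's tracking state and return the current weight."""
        self.states.append(bool(tracked))
        r = sum(self.states)                # valid data frames in the window
        return math.exp(-self.m * (WINDOW - r) / WINDOW)
```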
In step 2.4), the improved Kalman filter comprises the following steps: first, the motion state parameters of a joint point are set to the position and velocity of the joint point at a given moment, the system state vector X_k is expressed as (S_kx, S_ky, S_kz, V_kx, V_ky, V_kz), and the observation vector Y_k is expressed as (a_kx, a_ky, a_kz); then the system state vector X_k and the observation vector Y_k are decoupled into three subsystems along the x, y and z axes, and filter tracking is performed on each separately, thereby predicting the three-dimensional coordinate position of a missing joint point.
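A minimal sketch of the decoupled filter follows: each axis runs an independent constant-velocity Kalman filter whose predict step supplies the coordinate of a missing joint point. Treating the fused joint position as the observation is an assumption; the patent names only the per-axis state vector (S, V) and observation vector (a).

```python
# Sketch of the step 2.4) missing-joint predictor: the 6-D state (position
# and velocity per axis) is decoupled into three independent 1-D filters.
import numpy as np

class AxisKalman:
    def __init__(self, dt=1 / 30, q=1e-3, r=1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.H = np.array([[1.0, 0.0]])             # observe position only
        self.Q = q * np.eye(2)                      # process noise covariance
        self.R = np.array([[r]])                    # measurement noise
        self.x = np.zeros(2)                        # state: [position, velocity]
        self.P = np.eye(2)                          # state covariance

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[0]  # predicted coordinate for a missing frame

    def update(self, z):
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
```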
Step 3.1) comprises the following steps: first, when multiple human motions must be captured, the head Z values of all persons are sorted by bubble sort, the person with the smallest head Z value is taken as the operator, and gesture recognition is then performed on that person. For gesture recognition, a sliding window is set up to hold 20 consecutive frames of human motion data, and the three-dimensional coordinates of the hand are extracted for analysis; the sliding window then slides backward, the next frame of human motion data is inserted at its tail and the frontmost frame is discarded, thereby achieving the spatio-temporal continuity of gesture recognition.
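A minimal sketch of this pre-processing follows, with each skeleton simplified to a dictionary of named joints; the patent's bubble sort over head Z values is replaced here by the equivalent min() selection.

```python
# Sketch of step 3.1): pick the person closest to the sensor (smallest head
# Z) as the operator and keep a 20-frame sliding window of hand coordinates.
from collections import deque

hand_window = deque(maxlen=20)  # the oldest frame drops off automatically

def analyze_gesture(track):
    """Placeholder for the downstream recognizer (PCA + SVM, steps 3.2-3.3)."""

def pick_operator(people):
    """people: list of skeletons such as {'head': (x, y, z), 'hand': (x, y, z)}."""
    return min(people, key=lambda person: person["head"][2])

def feed_frame(people):
    operator = pick_operator(people)
    hand_window.append(operator["hand"])    # insert at the window's tail
    if len(hand_window) == 20:
        analyze_gesture(list(hand_window))  # analyze the 20-frame hand track
```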
Step 3.2) comprises the following steps:
3.2.1) let one frame of human motion data have P dimensions, expressed as X = (x₁, x₂, …, x_P)′; n frames of data are collected in total, with frame i expressed as x_i = (x_i1, x_i2, …, x_iP)′, so the observation matrix of the human motion data is expressed as X = (x_ij)_{n×P};
3.2.2) the original human motion data in the sample are converted to indicators of the same direction and standardized, thereby obtaining the normalized matrix Z = (z_ij)_{n×P};
3.2.3) the sample correlation coefficient matrix of the normalized matrix is expressed as R = [r_ij]_{P×P} = Z′Z / (n − 1);
3.2.4) the characteristic equation of the sample correlation coefficient matrix is solved, thereby obtaining P eigenvalues, which satisfy λ₁ ≥ λ₂ ≥ … ≥ λ_P;
3.2.5) the principal components are expressed as Y = UX, where U is given by u_ij = z_i′ m_j and m_j is the unit eigenvector.
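A minimal sketch of steps 3.2.1) to 3.2.5) in matrix form follows; here the component scores are formed from the standardized matrix Z rather than the raw X.

```python
# Sketch of steps 3.2.1)-3.2.5): principal component extraction from the
# n x P observation matrix via the sample correlation coefficient matrix
# R = Z'Z / (n - 1) of the standardized data.
import numpy as np

def principal_components(X, k):
    """X: (n, P) motion data, one frame per row; returns first k components."""
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # normalized matrix
    R = (Z.T @ Z) / (len(X) - 1)                      # correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)              # R is symmetric
    order = np.argsort(eigvals)[::-1]                 # lambda_1 >= ... >= lambda_P
    U = eigvecs[:, order[:k]]                         # unit eigenvectors m_j
    return Z @ U, eigvals[order]                      # scores and spectrum
```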
In step 3.3), the statistical learning method is an SVM classifier.
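A minimal sketch of such an SVM recognizer over the extracted features follows; the synthetic stand-in data, the RBF kernel and the parameter values are illustrative assumptions only.

```python
# Sketch of step 3.3): an SVM classifier over PCA feature scores.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
train_features = rng.normal(size=(40, 5))  # stand-in PCA feature scores
train_labels = np.arange(40) % 3           # stand-in action class labels

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(train_features, train_labels)
predicted_actions = clf.predict(train_features[:2])
```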
In step 3.4), the semantic mapping of human actions is as shown in Table 2:
Table 2: Virtual human action semantics

Human action                            Semantics                Virtual human action   Duration (ms)
Left hand stretched out to the left     Turn left                Turn left              250
Right hand stretched out to the right   Turn right               Turn right             250
Looking up by more than 30 degrees      Raise head               Raise head             150
Looking down by more than 30 degrees    Bow head                 Bow head               150
Left hand raised above the head         Enter function menu      Stand in place         150
Right hand raised above the head        First-person view        Stand in place         150
Both hands raised level                 Exit first-person view   Stand in place         250
Left hand stretched out                 Select operating part    Finger select          150
Right hand stretched out                Select operating part    Finger select          150
Palm swung left and right               Shift/list               Component moves        250
Palm swung up and down                  Shift/list               Component moves        250
Left/right hand moved                   Select menu item         Mouse moves            50
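A minimal sketch of how the Table 2 mapping can drive the virtual human follows, expressed as a lookup table; the action identifiers are illustrative names, not from the patent, and only part of the table is encoded.

```python
# Sketch of step 3.4): part of the Table 2 mapping from recognized actions
# to virtual human commands, as a lookup table driving the simulation.
ACTION_SEMANTICS = {  # recognized action -> (semantics, avatar action, ms)
    "left_hand_out_left":   ("turn left", "turn left", 250),
    "right_hand_out_right": ("turn right", "turn right", 250),
    "look_up_over_30":      ("raise head", "raise head", 150),
    "look_down_over_30":    ("bow head", "bow head", 150),
    "left_hand_overhead":   ("enter function menu", "stand in place", 150),
    "right_hand_overhead":  ("first-person view", "stand in place", 150),
    "both_hands_raised":    ("exit first-person view", "stand in place", 250),
}

def drive_virtual_human(action):
    semantics, avatar_action, duration_ms = ACTION_SEMANTICS[action]
    print(f"{semantics}: play '{avatar_action}' for {duration_ms} ms")
```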

Claims (7)

1. A human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion, characterized in that the method comprises the following steps:
1) obtaining one group of human motion data each from a Kinect somatosensory device and from inertial sensors;
2) performing time alignment and spatial re-projection on the two groups of human motion data obtained, and fusing the two groups of human motion data with a Kalman filtering algorithm, thereby obtaining a high-quality human skeleton model;
3) based on the human skeleton model obtained, calculating relative joint angles, relative positions and motion velocities, extracting human motion features to recognize actions, and then driving the virtual human in the power transformation simulation virtual environment to complete the corresponding actions.
2. The human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion according to claim 1, characterized in that step 1) comprises the following steps:
1.1) the Kinect somatosensory device identifies the human contour from the depth data stream and separates the contour from the depth image;
1.2) the Kinect somatosensory device uses a machine learning algorithm to recognize the various actions of the human body, records them as feature values, and uses those feature values to quickly classify and locate the parts of the human body;
1.3) the Kinect somatosensory device uses the skeleton tracking function in its own SDK to obtain position data for 20 joint points of the human body;
1.4) the Kinect somatosensory device scores the position data of each joint point using its built-in algorithm and judges the credibility of each joint point's tracking state, marking joint points whose credibility is low or that were not recognized;
1.5) inertial sensors are worn at 17 joint points of the human body, and the attitude data of those 17 joint points is obtained through the inertial sensors;
1.6) the inertial sensors send the acquired attitude data to a DSP processing board by wireless communication, and the DSP processing board then sends the attitude data to the computer through an RS232 serial port.
3. The human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion according to claim 1, characterized in that step 2) comprises the following steps:
2.1) a spatial transformation matrix is obtained from the spatial relationship between the Kinect somatosensory device and the inertial sensors, and the two groups of human motion data are spatially transformed with this matrix, thereby unifying both groups into the human-space coordinate system;
2.2) the physiological and kinematic constraint conditions of the human skeleton model are used to check the two groups of human motion data in the human-space coordinate system, excluding erroneous data that do not conform to the laws of human motion;
2.3) a data fusion algorithm based on variable joint-point weight calculation is used to fuse the two groups of human motion data, thereby obtaining the human skeleton model;
2.4) the human skeleton model is checked for missing joint points; if a joint point is missing, its three-dimensional coordinate position is predicted with an improved Kalman filter, thereby completing the skeleton model;
2.5) the physiological and kinematic constraint conditions of the human skeleton model are used to check the skeleton model, thereby verifying the reliability of the two groups of human motion data.
4. The human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion according to claim 1, characterized in that step 3) comprises the following steps:
3.1) pre-processing of the human motion data based on temporal and spatial continuity;
3.2) feature extraction from the human motion data based on principal component analysis;
3.3) human motion behavior recognition based on a statistical learning method;
3.4) semantic mapping of human actions during typical substation operations;
3.5) simulated substation operation driven by real-time human motion capture data.
5. The human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion according to claim 2, characterized in that in step 1.5) the inertial sensors are MPU9150 motion sensors.
6. The human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion according to claim 3, characterized in that:
In step 2.1), the human-space coordinate system comprises 16 limb segments connected by 15 joints; it is also provided with a theoretical human root joint (Root joint) with 3 translational and 3 rotational degrees of freedom, used to determine the virtual human's spatial position and orientation.
In step 2.2), checking the two groups of human motion data comprises the following steps: Euler angles are calculated for the two groups of human motion data at each joint point, and it is judged whether these Euler angles exceed the corresponding value range; if they do, the two groups of human motion data for that joint point are considered erroneous. The specific value ranges are shown in Table 1:
Table 1: Joint point Euler angle value ranges
Joint point      Rotation axis   Minimum   Maximum
Right shoulder   X               -π/4      3π/4
Right shoulder   Z               -π/9      π
Left shoulder    X               -3π/4     π/4
Left shoulder    Z               -π        π/9
Right elbow      X               -π/10     3π/4
Right elbow      Y               -4π/9     4π/9
Left elbow       X               -3π/4     -π/10
Left elbow       Y               -4π/9     4π/9
Right hip        X               -π/4      7π/9
Right hip        Z               -π/6      π/4
Left hip         X               -7π/9     π/4
Left hip         Z               -π/4      π/6
Right knee       X               -π/18     3π/4
Right knee       Z               -2π/9     π/4
Left knee        X               -3π/4     π/18
Left knee        Z               -π/4      2π/9
In step 2.3), the data fusion algorithm based on variable joint-point weight calculation comprises the following steps: first, a sliding window of size 60 is set up for each joint point to store its tracking states over a period of time; when a new frame of human motion data arrives, the tracking state of each joint point is judged and the sliding window is updated, thereby obtaining the data availability of each joint point within the window, from which the weight of each joint point in the data fusion is calculated; the specific formula is:
k = exp(−m · (60 − r) / 60)
where m is an adjustment factor used to regulate how fast the weight changes, and r is the number of valid data frames of the joint point within the sliding window.
In step 2.4), the improved Kalman filter comprises the following steps: first, the motion state parameters of a joint point are set to the position and velocity of the joint point at a given moment, the system state vector X_k is expressed as (S_kx, S_ky, S_kz, V_kx, V_ky, V_kz), and the observation vector Y_k is expressed as (a_kx, a_ky, a_kz); then the system state vector X_k and the observation vector Y_k are decoupled into three subsystems along the x, y and z axes, and filter tracking is performed on each separately, thereby predicting the three-dimensional coordinate position of a missing joint point.
7. The human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion according to claim 4, characterized in that:
Step 3.1) comprises the following steps: first, when multiple human motions must be captured, the head Z values of all persons are sorted by bubble sort, the person with the smallest head Z value is taken as the operator, and gesture recognition is then performed on that person; for gesture recognition, a sliding window is set up to hold 20 consecutive frames of human motion data, and the three-dimensional coordinates of the hand are extracted for analysis; the sliding window then slides backward, the next frame of human motion data is inserted at its tail and the frontmost frame is discarded, thereby achieving the spatio-temporal continuity of gesture recognition.
Step 3.2) comprises the following steps:
3.2.1) let one frame of human motion data have P dimensions, expressed as X = (x₁, x₂, …, x_P)′; n frames of data are collected in total, with frame i expressed as x_i = (x_i1, x_i2, …, x_iP)′, so the observation matrix of the human motion data is expressed as X = (x_ij)_{n×P};
3.2.2) the original human motion data in the sample are converted to indicators of the same direction and standardized, thereby obtaining the normalized matrix Z = (z_ij)_{n×P};
3.2.3) the sample correlation coefficient matrix of the normalized matrix is expressed as R = [r_ij]_{P×P} = Z′Z / (n − 1);
3.2.4) the characteristic equation of the sample correlation coefficient matrix is solved, thereby obtaining P eigenvalues, which satisfy λ₁ ≥ λ₂ ≥ … ≥ λ_P;
3.2.5) the principal components are expressed as Y = UX, where U is given by u_ij = z_i′ m_j and m_j is the unit eigenvector.
In step 3.3), the statistical learning method is an SVM classifier.
In step 3.4), the semantic mapping of human actions is as shown in Table 2:
Table 2: Virtual human action semantics
Human action                            Semantics                Virtual human action   Duration (ms)
Left hand stretched out to the left     Turn left                Turn left              250
Right hand stretched out to the right   Turn right               Turn right             250
Looking up by more than 30 degrees      Raise head               Raise head             150
Looking down by more than 30 degrees    Bow head                 Bow head               150
Left hand raised above the head         Enter function menu      Stand in place         150
Right hand raised above the head        First-person view        Stand in place         150
Both hands raised level                 Exit first-person view   Stand in place         250
Left hand stretched out                 Select operating part    Finger select          150
Right hand stretched out                Select operating part    Finger select          150
Palm swung left and right               Shift/list               Component moves        250
Palm swung up and down                  Shift/list               Component moves        250
Left/right hand moved                   Select menu item         Mouse moves            50
CN201510896907.5A (filed 2015-12-08, priority 2015-12-08) — Human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion — Pending — CN105551059A (en)

Priority Applications (1)

Application Number: CN201510896907.5A; Priority Date: 2015-12-08; Filing Date: 2015-12-08; Title: Human body motion capture method for power transformation simulation based on optical and inertial somatosensory data fusion

Publications (1)

Publication Number: CN105551059A; Publication Date: 2016-05-04

Family ID: 55830235



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799191A (en) * 2012-08-07 2012-11-28 北京国铁华晨通信信息技术有限公司 Method and system for controlling pan/tilt/zoom based on motion recognition technology
US20150130841A1 (en) * 2012-12-19 2015-05-14 Medical Companion Llc Methods and computing devices to measure musculoskeletal movement deficiencies
JP2014215678A (en) * 2013-04-23 2014-11-17 Kddi株式会社 Motion data segment determination device, motion data segment determination method, and computer program
CN104460971A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Human motion rapid capturing method
CN103994765A (en) * 2014-02-27 2014-08-20 北京工业大学 Positioning method of inertial sensor

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
吴勇 (Wu Yong): "Human behavior recognition based on multiple somatosensory devices" (基于多体感设备的人体行为识别), China Master's Theses Full-text Database *
张姹 (Zhang Cha): "Upper-limb motion measurement technology based on fusion of optical and inertial tracking data and its application" (基于光学和惯性跟踪数据融合的上肢运动测量技术及应用研究), China Master's Theses Full-text Database *
沈世宏 (Shen Shihong) et al.: "Research on a Kinect-based somatosensory gesture recognition system" (基于Kinect的体感手势识别系统的研究), Harmonious Human-Machine Environment (和谐人机环境) *
陈永波 (Chen Yongbo) et al.: "Design and implementation of an immersive substation simulation training system" (沉浸式变电站仿真培训系统的设计与实现), Power System Technology (电网技术) *

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106123901A (en) * 2016-07-20 2016-11-16 上海乐相科技有限公司 A kind of localization method and device
CN106123901B (en) * 2016-07-20 2019-08-06 上海乐相科技有限公司 A kind of localization method and device
CN106153077A (en) * 2016-09-22 2016-11-23 苏州坦特拉自动化科技有限公司 A kind of initialization of calibration method for M IMU human motion capture system
CN106153077B (en) * 2016-09-22 2019-06-14 苏州坦特拉智能科技有限公司 A kind of initialization of calibration method for M-IMU human motion capture system
CN106600626B (en) * 2016-11-01 2020-07-31 中国科学院计算技术研究所 Three-dimensional human motion capture method and system
CN106600626A (en) * 2016-11-01 2017-04-26 中国科学院计算技术研究所 Three-dimensional human body movement capturing method and system
CN106557165A (en) * 2016-11-14 2017-04-05 北京智能管家科技有限公司 The action simulation exchange method of smart machine and device and smart machine
CN106557165B (en) * 2016-11-14 2019-06-21 北京儒博科技有限公司 The action simulation exchange method and device and smart machine of smart machine
CN106709464A (en) * 2016-12-29 2017-05-24 华中师范大学 Method for collecting and integrating body and hand movements of Tujia brocade technique
CN106709464B (en) * 2016-12-29 2019-12-10 华中师范大学 Tujia brocade skill limb and hand motion collection and integration method
CN106843507A (en) * 2017-03-24 2017-06-13 苏州创捷传媒展览股份有限公司 A kind of method and system of virtual reality multi-person interactive
CN106843507B (en) * 2017-03-24 2024-01-05 苏州创捷传媒展览股份有限公司 Virtual reality multi-person interaction method and system
CN107122048A (en) * 2017-04-21 2017-09-01 甘肃省歌舞剧院有限责任公司 One kind action assessment system
CN107330967A (en) * 2017-05-12 2017-11-07 武汉商学院 Knight's athletic posture based on inertia sensing technology is caught and three-dimensional reconstruction system
CN107330967B (en) * 2017-05-12 2020-07-24 武汉商学院 Rider motion posture capturing and three-dimensional reconstruction system based on inertial sensing technology
CN107260179A (en) * 2017-06-08 2017-10-20 朱翔 Human body motion tracking method based on inertia and body-sensing sensing data quality evaluation
CN107577451A (en) * 2017-08-03 2018-01-12 中国科学院自动化研究所 More Kinect human skeletons coordinate transformation methods and processing equipment, readable storage medium storing program for executing
CN108227931A (en) * 2018-01-23 2018-06-29 北京市商汤科技开发有限公司 For controlling the method for virtual portrait, equipment, system, program and storage medium
CN108376405A (en) * 2018-02-22 2018-08-07 国家体育总局体育科学研究所 Human movement capture system and method for catching based on binary sense tracing system
CN108376405B (en) * 2018-02-22 2020-11-17 国家体育总局体育科学研究所 Human motion capture system and method based on double-body sensation tracking system
CN108553105A (en) * 2018-05-08 2018-09-21 上海逸动医学科技有限公司 Combined sensor, joint motions detecting system and the method for joint motions detection
CN108621164A (en) * 2018-05-10 2018-10-09 山东大学 Taiji push hands machine people based on depth camera
CN109186594A (en) * 2018-09-20 2019-01-11 鎏玥(上海)科技有限公司 The method for obtaining exercise data using inertial sensor and depth camera sensor
CN111539364A (en) * 2020-04-29 2020-08-14 金陵科技学院 Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting
CN111539364B (en) * 2020-04-29 2021-07-23 金陵科技学院 Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting
CN111531537B (en) * 2020-05-07 2022-11-01 金陵科技学院 Mechanical arm control method based on multiple sensors
CN111531537A (en) * 2020-05-07 2020-08-14 金陵科技学院 Mechanical arm control method based on multiple sensors
CN112215928A (en) * 2020-09-28 2021-01-12 中国科学院计算技术研究所数字经济产业研究院 Motion capture method based on visual image and digital animation production method
CN112215928B (en) * 2020-09-28 2023-11-10 中国科学院计算技术研究所数字经济产业研究院 Motion capturing method based on visual image and digital animation production method
CN113155070A (en) * 2021-03-25 2021-07-23 深圳英飞拓科技股份有限公司 Scaffold climbing safety detection method and system
WO2022228056A1 (en) * 2021-04-30 2022-11-03 华为技术有限公司 Human-computer interaction method and device
CN113538514A (en) * 2021-07-14 2021-10-22 厦门大学 Ankle joint motion tracking method, system and storage medium
CN113538514B (en) * 2021-07-14 2023-08-08 厦门大学 Ankle joint movement tracking method, system and storage medium
WO2023030091A1 (en) * 2021-09-06 2023-03-09 北京字跳网络技术有限公司 Method and apparatus for controlling motion of moving object, device, and storage medium


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2016-05-04)